How Crowd-Sourced Testing Reduced Mobile App Crash Rate By 90%
Learn how we improved the customer experience for 10 million users in just 8 weeks with the help of crowdsourced testing.
The product head of a leading online business with over $1 billion in sales reached out to me for a consultation in Q4 2016.
The crash rate of their business-critical mobile app had climbed above 10% with every new deployment released to production. Customer reviews had become increasingly negative, and the product owner was unable to identify the root cause of the problem.
The mobile app had been live for more than a year.
The company was enhancing it continuously, shipping every two weeks. He told me the firm had engaged one of the leading quality assurance (QA) firms and was spending close to $12,000 per month on testing this app, but the team was still unable to control quality.
Over the last few years, I have come across many firms that are encountering a similar problem with their QA teams, and it is not very difficult to understand the reason.
The traditional QA models are becoming increasingly irrelevant in the context of a complex and fragmented mobile app landscape!
As is typical, the incumbent QA service provider had deployed five full-time employees (FTEs) with mobile app testing experience on their resumes. They had also implemented a conventional test automation tool and then waited for things to go right or wrong. After all, they had been using the same approach with hundreds of large enterprises and had made millions of dollars over the last decade.
But the traditional one-size-fits-all solution doesn’t work anymore.
QA has gone beyond simply following specifications given by the requirements engineer. It is about understanding the customer’s needs and evaluating a product from their perspective.
The future of software testing is no longer just about pointing out what is wrong, but also about confirming what is right.
The QA team has to act as a bridge between the developer and the user.
QA needs to combine technology with emotions to align with market needs and not just blindly follow the product owner’s specifications. It is high time that these traditional QA firms review their strategy to stay relevant in the future.
In my assessment of the company's testing model, I identified three key causes affecting the quality of testing for their app:
My product manager is my God and the specs are my Bible:
The testing team was operating in a silo and had no idea what was going wrong in production or even in the User Acceptance Testing (UAT) phase. They were simply writing test cases and scripts to the given specifications and following the same instructions release after release.
If it works on 5 devices, it works on all:
The testing team was using five mobile devices to test the app, running all enhancements on them before release to production. The team assumed that if the enhancements worked on these five devices, they would work on all other devices.
All features are working just fine in my lab:
The test team had written standard test cases covering end-to-end functionality. But mobile app testing is much more than verifying feature accuracy. Users access apps under dynamic conditions and in real-world scenarios: there are interruptions, network handovers, interfaces, and other complexities involved in real-world usage of the app.
To address these gaps, the testing model was rebuilt around two components.
Continuous Crowd Integration:
A carefully selected crowd of 200 test users was deployed. The group covered:
Over 100 different device models and system configurations
20 different city locations
Target user demographics
This crowd of test users performed guided exploratory testing, reporting crashes, compatibility problems, and usability issues on the live version of the app. These issues were validated and passed back to the product team for fixing in subsequent releases.
Benefits for product team:
The product team received feedback on real-world issues on a daily basis.
Live crash reports on multiple device models and network conditions across different locations helped in identifying issues.
Consolidated feedback, concerns, and improvement suggestions from real users helped prioritize features on the product enhancement roadmap.
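Daily reports from a 200-person crowd lend themselves to simple aggregation. A minimal sketch of how such reports might be grouped by device model to surface crash hot spots (the report fields and counts here are hypothetical examples, not Qualitrix's actual data format):

```python
from collections import Counter

# Hypothetical crowd-sourced issue reports; a real feed would come
# from the crowd-testing platform's export.
reports = [
    {"device": "Galaxy S7", "city": "Mumbai", "issue": "crash"},
    {"device": "Galaxy S7", "city": "Delhi", "issue": "crash"},
    {"device": "Redmi Note 3", "city": "Mumbai", "issue": "usability"},
    {"device": "Galaxy S7", "city": "Pune", "issue": "crash"},
]

# Count crashes per device model so the product team can see which
# hardware to prioritize in the next release.
crashes_by_device = Counter(
    r["device"] for r in reports if r["issue"] == "crash"
)
print(crashes_by_device.most_common(1))  # → [('Galaxy S7', 3)]
```

The same grouping works for city or network type, which is how location- and network-specific issues rise to the top of the fix list.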
Expert Testing and Automation:
A team of two mobile testing experts designed real-world test cases using the Qualitrix mobile test framework. They conducted both structured and context-driven testing to explore the app in depth for every release. Automated regression scripts covered critical business flows, supplemented by weekly monkey testing on 50 cloud devices. Fabric Crashlytics was integrated at the code level for faster root-cause analysis of issues.
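On Android, monkey testing is commonly driven by the platform's built-in `monkey` tool over `adb`. A sketch of the kind of command a weekly cloud-device run might issue (the package name, event count, and options are illustrative assumptions, not the team's actual script):

```shell
# Fire 10,000 pseudo-random UI events at the app under test.
# com.example.app is a placeholder package name.
# -s fixes the seed so a crashing event sequence can be replayed;
# --throttle inserts a 200 ms pause between events.
adb shell monkey -p com.example.app -s 42 --throttle 200 -v 10000
```

Fixing the seed matters: when a monkey run crashes the app, rerunning with the same seed reproduces the event sequence, which pairs well with the stack traces captured by Crashlytics.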
Benefits for product team:
Complete app assessment on both functional and non-functional aspects, including real-world scenarios for interruptions, interfaces, performance, etc.
Continuous regression to ensure enhancements did not break existing features.
Faster crash analysis and issue fixing.
Within 8 weeks of testing, we were able to bring down the crash rate from 10% to less than 1%.
With more than 150 issues and enhancement suggestions received, the app's user rating and feedback improved dramatically over subsequent releases.
Finally, the cost of the entire setup, including mobile test experts, automated regression, automated monkey testing, and continuous crowdsourced testing, was less than the cost of the five full-time testers (FTEs) deployed by the incumbent QA service provider!
Want to know how Qualitrix testing solutions can help improve your app quality?