How to decrease post-release risks. Interview with Parimala Hariprasad. Part I

Parimala Hariprasad spent her youth studying people and philosophy. By the time she got to work, she was able to put those learnings to help train skilled testers. She has worked as a tester for close to 12 years in domains like CRM, Security, e-Commerce and Healthcare. Her expertise lies in test coaching, test delivery excellence and creating great teams which ultimately fired her because the teams became self-sufficient. She has experienced the transition from Web to Mobile and emphasizes the need for Design Thinking in testing. She frequently rants on her blog, Curious Tester.

Genislab Technologies: Your most recent work has been extensively in mobile apps testing space. What according to you is important while devising a Test Strategy for Mobile Apps?

Parimala Hariprasad: Mobile apps testing groundwork begins by understanding the customer and the apps to be tested. The better we know these two, the better the test strategy will be. A high-level test strategy includes understanding business goals, release goals, mobile personas, the platforms to test on, and testing the competitiveness of the apps. Once the test strategy is ready, tests in each area can be planned and executed for good coverage. Testers must plan for surprise platforms that turn out to be problematic but weren’t tested thoroughly enough. Whenever we optimize and narrow our focus, we run the risk of missing something important, so mitigation plans are important to have for known risks.

Fishing net heuristic
Is your test strategy good enough? Every element in a test strategy is a fishing net. Ask: ‘What kind of fishing net do you use?’, ‘Does it catch small fish?’, ‘Does it deal with sharks?’, ‘Do you have just one kind of net or more?’. Remember, the type of sea creature you catch depends on the type of net you use! The fishing net is a powerful heuristic for assessing whether a test strategy is good enough.

I don’t have time to create a strategy
Abraham Lincoln once said, “Give me six hours to chop down a tree and I will spend the first four sharpening the axe”. So what if you don’t have the time to test? Even if you have an hour to test, you must spend time creating a strategy first, because the less time you have to test, the more effective your testing must be. These words from Jonathan Kohl keep coming back to me whenever my team feels time pressure to complete testing.

Genislab Technologies: Faster release of mobile apps is always a risk to any organization. What do you think app owners should do to decrease post-release risks and how testing can help?

Parimala Hariprasad: Post-release risks can be mitigated well if testing is backed by a powerful test strategy and is context-driven. There are several techniques for gathering information after apps are released.

Real world testing
Hire testers and users to test in real-world conditions with respect to location, network types, network speeds and so forth. Such feedback has a high probability of uncovering problems that occur only in real-world conditions and corner-case scenarios.

App store reviews
Studying users’ reviews and comments on the app store is a goldmine of information about how the app can become better in subsequent releases. App discoverability and user engagement are key metrics to measure when working to increase an app’s store rating. Testers can assimilate these inputs and come up with a ‘Recommendations’ report from the user’s perspective.
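As a minimal illustration of assimilating such inputs, a tester could tally recurring complaint keywords across reviews before drafting the ‘Recommendations’ report. This is only a sketch; the sample reviews and keyword list below are invented:

```python
import re
from collections import Counter

# Hypothetical sample of app store reviews (made up for illustration).
reviews = [
    "App crashes on login every time",
    "Great app but battery drain is terrible",
    "Login crashes after the update",
]

# Complaint themes the tester wants to track (an assumption, not a standard list).
keywords = ["crash", "battery", "login", "slow"]

# Count how many reviews mention each theme, case-insensitively.
counts = Counter()
for review in reviews:
    for kw in keywords:
        if re.search(kw, review, re.IGNORECASE):
            counts[kw] += 1

# Most frequent complaints first - candidates for the report's top recommendations.
print(counts.most_common())
```

Real review feeds would come from app store APIs or export tools, but even a rough tally like this helps prioritize which user complaints a ‘Recommendations’ report should lead with.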

Social media analytics
What people say about the released app on social media is a good way to assess how users feel about the app in general. There are several tools in the market that collect reports about the app from different social media platforms and provide that information to stakeholders. Analytics gives great visibility into the real-world user distribution. Based on this information, testing can focus on platforms and configurations that were not previously covered. Additionally, analytics data can be used to improve the test strategy for subsequent releases.

Competitor analysis
The released app can be compared against competitor apps to test its strength and stickiness. A better approach might be to take the app to users of competitor apps and gather their feedback at all levels.

Recently, there was an instance where skipping a ‘so-perceived’ trivial flow cost the organization refunds to many of its users. Until then, the testing team involved did not know the importance of that flow.

Once the apps are released, information about their quality, or the lack of it, keeps flowing in from all directions. It’s important for testers to work beyond the testing team, with tech support personnel, sales/marketing teams and product owners, to hear the feedback coming in. The underlying message is: ‘Testers need to keep listening in all directions.’

Genislab Technologies: Mobile Market has millions of devices today. How do you choose which devices to test on? Can you describe your approach?

Parimala Hariprasad: I like Jonathan Kohl’s approach to choosing mobile devices. According to him, there are three basic approaches to select from an ocean of devices:

  1. Singular approach: test on one device type, either because that is all our team plans to support, or because it is the most popular device in a device family, with one operating system, using one cellular carrier. A ‘problem child’ device that reveals lots of problems is the best bet in this approach.
  2. Proportional approach: deciding which devices, and how many, to test requires research based on web/mobile traffic, analytics data or user data. For example, if historical data shows 50% Android mobile traffic, 45% Apple iOS mobile traffic, and the remaining 5% other handset types, this data can be used to prioritize testing on Android and iOS devices.
  3. Shotgun approach: for a mass-market app, we may need to support all sorts of devices, with no self- or customer-imposed restrictions. This has the highest risk, because there are many, many platform combinations out there. Problem devices and research data, as in the proportional approach above, are good places to start.
  4. Outsourced approach: there are various services you can use to supplement your own test devices with basic testing on devices that other people own and have set up. Formally, this can be done using remote device access services, which let you install software and control a device remotely over the web to do basic functional tests. You can also use crowdsourcing services, which manage people with different device types in different locations and parts of the world to do testing on their phones.
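The proportional approach above is simple arithmetic once the traffic shares are known. As a sketch, using the percentages from the example and an assumed budget of 20 test devices:

```python
# Traffic shares from the proportional-approach example; the 20-device
# budget is an assumption for illustration.
traffic_share = {"android": 0.50, "ios": 0.45, "other": 0.05}
device_budget = 20

# Allocate devices to each platform in proportion to its traffic share.
allocation = {
    platform: round(share * device_budget)
    for platform, share in traffic_share.items()
}
print(allocation)  # {'android': 10, 'ios': 9, 'other': 1}
```

In practice the shares would come from analytics data, and rounding may need adjusting so the allocation sums exactly to the budget, but the prioritization logic stays the same.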

These approaches notwithstanding, organizations bear the burden of setting up mobile device labs, maintaining them and refreshing them with the latest devices over time. I handle this challenge with a collective approach:

  • In-house Mobile Device Lab with access to most popular devices based on device models, platforms, countries, mobile app types and user types
  • Online Mobile Device Lab with access to millions of devices that can be accessed from across the world through in-house or external remote access mobile device organizations
  • Simulators / Emulators for quick, basic tests [trust these at your own risk, but there are good tools in the market that come close to the real thing]
  • BYOD approach where millions of users across the globe can be invited to complement other mobile device labs using crowdtesting

Genislab Technologies: Success of any product depends on positive user experience. Usability testers apply various techniques to enhance user experience. One of these techniques is paper prototyping. How helpful is it and what are its weak sides?

Parimala Hariprasad: Paper prototyping is a technique adopted from the design thinking world. In this technique, a tester wears a designer’s hat and re-designs prototypes of screens or pages. Testers take existing applications (web or mobile), go through them page by page or screen by screen, understand the design, and perform basic tests on the design, UI and business logic of each.

Designers create prototypes anyway, so why re-invent the wheel? Because testers gather vast experience over time by testing multiple products and applications in a variety of domains. For example, a tester might say, ‘This button must be in this position’, ‘This UI element must be this color’, or ‘Remove this UI element; in my experience it is redundant’. This feedback is driven by testers’ knowledge of different applications, domains and industries. A step forward from here would be to incorporate these decisions and create fresh prototypes of the applications, which can then be reviewed by designers, developers and product owners for further discussion.

An advanced approach to paper prototyping is to design two different prototypes, show them to a group of users and gather feedback on which was the bigger hit. Going to stakeholders with such information helps testers build credibility.

What are the weak sides of Paper Prototyping?
Paper Prototyping has its weaknesses:

1. Ideas are tester-dependent and may not represent an ideal user at all times
2. Users involved in giving feedback may not represent a holistic sample of users

In the next post Parimala will touch upon the topic of crowdtesting.