There are a number of mobile architectures that support effective A/B testing within mobile apps. They range from rapid-prototyping approaches based on HTML5 components to feature-flag-based approaches that trigger different versions of native components. The trade-offs are between in-app performance, testing iteration time and the native look and feel within the app. The main concern for effective A/B testing is to produce as many valid experiments as possible in the shortest amount of time: the longer each experiment takes, the longer it takes to discover which version(s) of the app perform best for various user segments. Whichever strategy is used, A/B tests are most effective when they do not depend on infrequent App Store releases.
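As a rough illustration, here is a minimal Kotlin sketch of the feature-flag approach. The `FlagStore` interface, the `checkout_flow_variant` flag key and the screen classes are all hypothetical; a real implementation would fetch flag values from a remote config service at launch so that variants can change without an App Store release.

```kotlin
// Minimal sketch of a feature-flag gate for native A/B variants.
// `FlagStore` and the "checkout_flow_variant" key are hypothetical; in
// practice flag values come from a remote config service fetched at launch.
interface FlagStore {
    fun variantFor(flagKey: String, userId: String): String
}

interface CheckoutScreen { fun render() }
class MultiStepCheckout : CheckoutScreen { override fun render() { /* variant A (control) UI */ } }
class SinglePageCheckout : CheckoutScreen { override fun render() { /* variant B UI */ } }

class CheckoutScreenFactory(private val flags: FlagStore) {
    // The same flag lookup gates which native component the user sees.
    fun create(userId: String): CheckoutScreen =
        when (flags.variantFor("checkout_flow_variant", userId)) {
            "single_page" -> SinglePageCheckout()
            else -> MultiStepCheckout() // default / control experience
        }
}
```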
After setting up a new A/B testing framework, it's important to run an A/A test to determine whether the framework is calibrated correctly. An A/A test should also be repeated periodically to make sure the framework still works as expected and produces correct statistical results.
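One way to sanity-check calibration is to simulate many A/A runs and confirm that a 95% significance test declares a "winner" in only about 5% of them. The Kotlin sketch below does this with a standard two-proportion z-test; all rates and sample sizes are illustrative.

```kotlin
import kotlin.math.abs
import kotlin.math.sqrt
import kotlin.random.Random

// Two-proportion z-test: pooled standard error of the difference in rates.
fun zScore(convA: Int, nA: Int, convB: Int, nB: Int): Double {
    val pA = convA.toDouble() / nA
    val pB = convB.toDouble() / nB
    val pooled = (convA + convB).toDouble() / (nA + nB)
    val se = sqrt(pooled * (1 - pooled) * (1.0 / nA + 1.0 / nB))
    return (pA - pB) / se
}

fun main() {
    val runs = 1000
    val usersPerArm = 5000
    val trueRate = 0.05          // identical conversion rate in both arms (A/A)
    var falsePositives = 0
    repeat(runs) {
        val convA = (1..usersPerArm).count { Random.nextDouble() < trueRate }
        val convB = (1..usersPerArm).count { Random.nextDouble() < trueRate }
        if (abs(zScore(convA, usersPerArm, convB, usersPerArm)) > 1.96) falsePositives++
    }
    // A calibrated framework should report roughly 5% false positives here.
    println("False positive rate: ${falsePositives.toDouble() / runs}")
}
```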
Once a basic A/B testing framework is set up, here are the steps to run an effective A/B test:
- Define a goal that can be accurately measured. The effort spent in this step will pay dividends later by reducing the number of failed or ineffective tests.
- Brainstorm ideas for how to satisfy the goal. These can come from a variety of places such as qualitative customer feedback, employee suggestions, behavioural economic theories, gut feelings about product improvements, etc.
- Prioritize the list of ideas above based on ease of implementation, estimated improvement potential and relative position in the funnel.
- Set up the necessary event-based analytics tracking for an individual user's flow through the entire app. These events should be wired together to produce a funnel so that the conversion rate at each step is clear (see the funnel sketch after this list). Depending on what is being tested, the user's flow should begin from their entry point into the app (direct launch, push notification or website launch) and continue through to the point of purchase and/or post-purchase follow-up. Another important strategy is to measure not only the success of the step being tested, but also the overall engagement of the user.
- Capture a baseline set of metrics for how the app currently performs for various user segments before any testing is run.
- Build the minimum viable test (MVT) and make sure to test it with a small set of beta users prior to releasing it in order to validate the initial metrics.
- Decide on the proportion of users that will be exposed to the A/B test (e.g. new users, returning users, users who haven't purchased yet, 10% of all users, etc.). A deterministic bucketing sketch follows this list.
- Run the A/B test until the results become statistically significant for the required confidence level (usually 95%); the sample-size sketch after this list shows how to estimate how many users that requires. Also ensure that the A/B test occurs during a time period of "usual" activity (e.g. don't A/B test on a Sunday if users don't often purchase on a Sunday).
- Calculate which version of the test performs better. If the newly tested version is superior, make it the default version of the mobile app and release it into production for all users.
- If the newly tested version either performs poorly or no conclusion can be reached, record the details and possibly re-assess later.
- Observe any other tangential effects that the A/B test may have caused such as increased support calls/emails, decreased retention, engineering complexity, etc. It may also be helpful to present some users with a brief survey asking them about their new experience in the mobile app. The results from this survey will add valuable qualitative feedback to the A/B test’s quantitative results.
- Repeat the process by running another A/B test.
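To make the funnel-tracking step above concrete, here is a minimal Kotlin sketch of wiring events into a funnel. The step names and the `FunnelTracker` class are hypothetical, standing in for whatever analytics SDK is actually used; the point is that each step's conversion rate is the share of users from the previous step who reached it.

```kotlin
// Hypothetical event-based funnel: each step fires an event per user, and
// conversion rate per step = users reaching it / users reaching the prior step.
enum class FunnelStep { APP_LAUNCH, PRODUCT_VIEW, ADD_TO_CART, PURCHASE }

class FunnelTracker {
    private val reached = mutableMapOf<FunnelStep, MutableSet<String>>()

    fun track(userId: String, step: FunnelStep) {
        reached.getOrPut(step) { mutableSetOf() }.add(userId)
    }

    fun conversionRates(): Map<FunnelStep, Double> {
        val steps = FunnelStep.values()
        val rates = mutableMapOf<FunnelStep, Double>()
        for (i in 1 until steps.size) {
            val prev = reached[steps[i - 1]]?.size ?: 0
            val curr = reached[steps[i]]?.size ?: 0
            rates[steps[i]] = if (prev == 0) 0.0 else curr.toDouble() / prev
        }
        return rates
    }
}

fun main() {
    val tracker = FunnelTracker()
    tracker.track("u1", FunnelStep.APP_LAUNCH)
    tracker.track("u1", FunnelStep.PRODUCT_VIEW)
    tracker.track("u2", FunnelStep.APP_LAUNCH)
    println(tracker.conversionRates()) // e.g. PRODUCT_VIEW=0.5, ADD_TO_CART=0.0, ...
}
```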
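For the exposure-proportion step, a common approach (sketched below with hypothetical names) is deterministic bucketing: hashing the user ID together with the experiment name gives every user a stable bucket, so the same user always sees the same variant and the exposed share can be tuned precisely.

```kotlin
// Deterministic bucketing sketch. A production system would typically use a
// stronger hash than String.hashCode(), but the structure is the same.
fun assignVariant(userId: String, experiment: String, exposure: Double): String? {
    val raw = (userId + experiment).hashCode()
    // Map the hash into a stable bucket in [0, 1), avoiding negative values.
    val bucket = ((raw % 1000 + 1000) % 1000) / 1000.0
    if (bucket >= exposure) return null        // user is not in this experiment
    // Within the exposed slice, split 50/50 between control and treatment.
    return if (bucket < exposure / 2) "control" else "treatment"
}

fun main() {
    // Expose 10% of all users; the rest keep the default experience.
    val variant = assignVariant("user-42", "checkout_flow_variant", exposure = 0.10)
    println(variant ?: "not in experiment")
}
```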
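For the statistical-significance step, it helps to estimate up front how many users each variant needs before a significant result is even possible. The sketch below uses a standard two-proportion sample-size approximation; the baseline and minimum detectable rates are illustrative.

```kotlin
import kotlin.math.ceil
import kotlin.math.pow

// Rough per-arm sample size for a two-proportion test. This is a standard
// approximation, not tied to any particular A/B testing framework.
fun sampleSizePerArm(
    baselineRate: Double,          // current conversion rate, e.g. 0.05
    minDetectableRate: Double,     // smallest rate worth detecting, e.g. 0.06
    zAlpha: Double = 1.96,         // 95% confidence (two-sided)
    zBeta: Double = 0.84           // 80% power
): Int {
    val variance = baselineRate * (1 - baselineRate) +
        minDetectableRate * (1 - minDetectableRate)
    return ceil((zAlpha + zBeta).pow(2) * variance /
        (baselineRate - minDetectableRate).pow(2)).toInt()
}

fun main() {
    // Roughly 8,100 users per arm to detect a 5% -> 6% lift
    // at 95% confidence and 80% power.
    println(sampleSizePerArm(0.05, 0.06))
}
```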
Ultimately, executing A/B tests is about simplicity and speed. The faster the tests can be run and statistically significant winners declared, the more growth a product will see over time.
The steps given above for running A/B tests relate to users who have already downloaded the mobile app. A/B testing can also be performed on users coming from specific growth channels. Due to mobile's inherently closed ecosystem, attribution is more complicated in mobile apps. However, once attribution is set up correctly, it is possible to track users from specific growth channels so that each channel's revenue potential can be calculated and optimized.
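As a hypothetical illustration of what per-channel optimization looks like once attribution is in place, the Kotlin sketch below rolls attributed revenue up by acquisition channel. The `AttributedUser` record and channel names are invented; real data would come from an attribution provider or install-referrer tracking.

```kotlin
// Hypothetical attributed-user record: id, acquisition channel, revenue to date.
data class AttributedUser(val id: String, val channel: String, val revenue: Double)

// Average revenue per user, broken out by acquisition channel.
fun revenuePerChannel(users: List<AttributedUser>): Map<String, Double> =
    users.groupBy { it.channel }
        .mapValues { (_, group) -> group.sumOf { it.revenue } / group.size }

fun main() {
    val users = listOf(
        AttributedUser("u1", "paid_search", 4.99),
        AttributedUser("u2", "paid_search", 0.0),
        AttributedUser("u3", "referral", 9.99)
    )
    println(revenuePerChannel(users)) // e.g. {paid_search=2.495, referral=9.99}
}
```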