What Is Mobile A/B Testing?

There’s a big difference between A/B testing on a website and A/B testing on a mobile app. In this guide, we’ll explain why they’re different, how mobile A/B testing works, and the best practices that come with it. This is a one-stop resource for both seasoned and aspiring app teams. Want to take your mobile A/B testing a step further? Leanplum can help you A/B test anything in your app.

Chapter 1: What Is a Mobile A/B Test?

An A/B test is an experiment that compares two different versions of the same experience, traditionally a web page, at the same time while measuring key metrics. Mobile A/B testing is the process of using these experiments to optimize a mobile app: two different versions of the same app are compared simultaneously while key metrics are measured. Once a test achieves statistical significance, app developers can make data-driven decisions about which changes to implement.

Usually A/B tests change only one variable at a time, so the result can be attributed to that variable. A test that changes multiple variables simultaneously is called a multivariate test, or MVT. These tests are useful for measuring how different variables on the interface interact, but they must be designed carefully in order to find meaningful correlations.

For example, a developer of a retail app may want to test the placement of an “add to cart” button to see if it would increase conversion rates. With the results, the developer can further optimize their app and boost revenue.
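To make the mechanics concrete, here is a minimal sketch of how users might be split between the two button placements. The hashing approach, function name, and variant labels are illustrative assumptions, not a description of any particular vendor’s implementation.

    import hashlib

    def assign_variant(user_id: str, experiment_id: str,
                       variants=("cart_button_control", "cart_button_moved")) -> str:
        """Deterministically bucket a user into one of the test variants.

        Hashing the user and experiment IDs together keeps each user's bucket
        stable across sessions, while different experiments bucket independently.
        """
        digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    # The same user always lands in the same bucket for a given experiment.
    print(assign_variant("user_42", "add_to_cart_placement_test"))

Once users are bucketed, each group sees its own version of the screen and the same conversion metric is recorded for both.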

Mobile apps are always evolving and require frequent testing and optimization. App developers shouldn’t be the only ones conducting A/B tests; mobile marketers should, too. Mobile A/B testing isn’t limited to the in-app user interface; it also covers external elements like push notifications. App testing is less about comparing buttons and more about optimizing the user experience as a whole.


Chapter 2: Websites vs. Mobile Apps

Although website and mobile app optimization are two very different processes, their end goal is the same: improving user experience in order to drive ROI. In both website and mobile optimization, elements of the UI and UX are tested and measured.

A critical difference between the two is easiest to see when testing user flows. On the web, it’s all about testing and optimizing the conversion funnel. A user discovers the website through an organic search or advertising. When the user gets to your site, they either comb through a couple of pages or bounce. Eventually, the user may take action by downloading a piece of content or purchasing an item. In website optimization, metrics like bounce rate, session length, and goal completions are highly significant.

On mobile, users download the app through the App Store or Play Store. Convincing a user to download the app in the first place is a different type of marketing known as App Store Optimization (ASO). User flows aren’t as linear on a mobile app; a user’s path resembles a bumblebee’s, jumping from onboarding to conversion to re-engagement, often out of order. Rather than bounce rate, mobile teams care about app abandonment. Metrics like user engagement, daily or monthly active users, and user experience are watched closely. Keep in mind that mobile apps are continually rated by users, mostly, if not entirely, on the basis of experience. How do you keep those ratings up? Through testing and optimization, of course.

Chapter 3: Why A/B Testing Is Crucial for Mobile Apps

There are no rules for making a “good” app. App developers can waste time and resources guessing what users like in their app … or they can let the data do all the talking. We’ve analyzed hundreds of apps and found that by day one, apps retain only 21 percent of their users. By day 30, retention drops to 7.5 percent, and by day 90 it falls to just 1.89 percent. This is why A/B testing your app experience is critical.

Apps with high ratings turn up higher in search results. And (this should go without saying) users are more likely to download an app with higher ratings. According to Marketing Land, 96 percent of users say they would install a four-star app over an app with a lower rating. Every extra star an app garners in the App Store increases its chances of being downloaded. From Marketing Land …

“Jumping from a 2-star to a 3-star rating effectively increases your app store conversion by 340 percent … [and] jumping from 3 to 4 stars increases your conversion by an additional 89 percent.”

The point is, when an app has a four- or five-star rating, it’s obvious at first glance that users will have a great experience. They wind up trusting the app before they even install it.

Chapter 4: How to Run Your First Mobile A/B Test

If you’ve never run a mobile A/B test before, don’t worry: it’s not as complicated as it sounds. The A/B testing process can be broken down into four steps:

Step 1: Form Your Hypothesis

Every experiment should begin with a hypothesis. What are you trying to determine from this experiment?

For example, an app publisher might want to test the effectiveness of a new onboarding flow compared to an old one. The hypothesis would be that users find a simpler, visually appealing flow easier to understand, resulting in more onboarding completions and more daily active users.

Step 2: Select Your Audiences

You don’t need to target all users when A/B testing. Following the example above, it might not make sense to test a segment of users that already has a high onboarding completion rate. It’s the user segment with low onboarding completions that should be tested.

Based on this segment, decide exactly what you want to test, whether it’s the amount of text in the flow or the order of the flow itself. The hypothesis would be that a visual-based onboarding flow will yield more completions.
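As a rough illustration, audience selection often boils down to a simple filter over user attributes. The field names below (onboarding_completed, sessions) are hypothetical stand-ins for whatever your analytics data actually exposes.

    # Hypothetical user records pulled from an analytics export.
    users = [
        {"id": "u1", "onboarding_completed": False, "sessions": 1},
        {"id": "u2", "onboarding_completed": True,  "sessions": 12},
        {"id": "u3", "onboarding_completed": False, "sessions": 2},
    ]

    # Target the users the hypothesis is actually about: those who never
    # finished onboarding.
    test_audience = [u["id"] for u in users if not u["onboarding_completed"]]
    print(test_audience)  # ['u1', 'u3']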

Step 3: Conduct the Experiment and Collect Data

How you run the actual test depends on your A/B testing platform of choice, but it usually involves setting up parameters, segmenting your audience, and setting goals.

Effective mobile A/B testing software should estimate how long it will take to see a statistically significant change between the control and variant.
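Most platforms surface that estimate for you, but the arithmetic behind it is worth understanding: work out how many users each variant needs, then divide by the eligible traffic you see per day. The sketch below uses a standard two-proportion sample-size approximation with made-up numbers and assumes SciPy is available.

    import math
    from scipy.stats import norm

    def sample_size_per_variant(p_base, min_lift, alpha=0.05, power=0.8):
        """Approximate users needed per variant to reliably detect min_lift."""
        p_var = p_base + min_lift
        z_alpha = norm.ppf(1 - alpha / 2)   # two-tailed significance threshold
        z_beta = norm.ppf(power)            # desired statistical power
        pooled = (p_base + p_var) / 2
        numerator = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
                     + z_beta * math.sqrt(p_base * (1 - p_base)
                                          + p_var * (1 - p_var))) ** 2
        return math.ceil(numerator / min_lift ** 2)

    # Assume a 30% baseline completion rate and a 3-point lift worth detecting.
    n = sample_size_per_variant(p_base=0.30, min_lift=0.03)
    daily_eligible_users = 2_000   # hypothetical traffic routed into the test
    print(n, "users per variant, about",
          math.ceil(2 * n / daily_eligible_users), "days for control plus variant")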

Step 4: Analyze the Results

After the test has run and enough samples have been collected, the A/B testing software will calculate whether the difference between the two variants is likely due to random chance. A common threshold for statistical significance is a p-value of .05, meaning there is only a 5 percent probability that a difference this large would appear by chance alone (an acceptable level of risk for most teams).
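As a sketch of what the software is doing behind the scenes, here is one common way to compute that p-value, a two-proportion z-test; the conversion counts are invented for illustration.

    import math
    from scipy.stats import norm

    def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
        """Two-tailed p-value for the difference between two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        return 2 * norm.sf(abs(z))

    # Hypothetical results: control converted 300/5,000 users, variant 360/5,000.
    p = two_proportion_p_value(300, 5000, 360, 5000)
    print(round(p, 4), "significant" if p < 0.05 else "not significant")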

If the control group didn’t reach the goals you set but the variant did, your hypothesis was supported. If your hypothesis didn’t hold up, rerun the test with different variants and a different hypothesis. A/B testing is an iterative process of continual optimization.

And if the test fails … well, “failed” A/B tests aren’t useless. They may confirm that something about your app already works, and they can still point the way to further optimizing the experience. If the current order of an onboarding flow works, then the copy and visuals could be tested next, encouraging more users to come back and re-engage in the subsequent days.

In the end, it’s about the user experience. Frequent tweaking is often required for optimization.


Chapter 5: A Complete List of App Variants You Need to A/B Test

Messaging

Test your push notifications and in-app messages to drive higher engagement. There’s no limit to what you can A/B test within messages. This includes:

  • Tone
  • Copy length
  • Subject line

In our recent data science report, we found that implementing push notifications can increase app retention by at least 20 percent. A/B test those messages and increase your ROI even further.

Onboarding Flows

Apps lose 30 percent of users after the first use. Consider A/B testing your onboarding flows so you don’t turn users off the first time they open the app. Many apps force first-time users to register with an email and password, a drop-off point that an A/B test can help you identify and fix. Furthermore, apps should include tutorials showing users how to use the app. Test the order of your onboarding flow by measuring how many users make it through to the end.
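Measuring how many users make it through to the end is a simple funnel-completion calculation. A minimal sketch with made-up step and user names:

    # Hypothetical onboarding progress, one list of completed steps per user.
    final_step = "tutorial_done"
    user_progress = {
        "u1": ["welcome", "permissions"],
        "u2": ["welcome", "permissions", "signup", "tutorial_done"],
        "u3": ["welcome"],
    }

    completed = sum(1 for steps in user_progress.values() if final_step in steps)
    print(f"Onboarding completion rate: {completed / len(user_progress):.0%}")  # 33%

Compare this rate between the control flow and the reordered flow to decide which order wins.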

Conversion Pages

As with any other conversion page, web or mobile, optimization is essential. A/B test the size and color of buttons, different call-to-action text, and the images within the page. With our App Editor, mobile marketers can test UI on the fly without coding or an app developer’s help. No need for App Store resubmissions, either.

Chapter 6: How to Analyze a Completed Mobile A/B Test

Finishing an A/B test is a big step, but it’s not the last step. It can take a lot of time and effort to analyze experiment results and discover actionable insights. Consider using this checklist if it’s your first time analyzing an A/B test.

The first step is the most obvious: look for statistically significant changes. In any given A/B test, it’s recommended to check the impact of your changes on several different metrics.

Sometimes, it’s easy to discover which metrics differ the most in your experiment. You might discover that a new CTA button increased clickthrough rates, or that a push notification with personalized copy led to more opens than one with generic copy. Analytics vendors often make these changes easier to discover by highlighting every metric that experienced a statistically significant change.

However, sometimes the true impact of your A/B test is harder to discern. Don’t give up hope just because your aggregate test results are flat or negative; you can still filter your results by user segment. It’s possible that an overall flat result masks a change that performed well with one group of users and poorly with another.
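In practice, that breakdown can be as simple as re-running the comparison per segment. The segment names and counts below are invented to show how an overall flat result can hide opposite effects.

    # Hypothetical per-segment results: (conversions, users) for each group.
    results = {
        "new_users":       {"control": (120, 2500), "variant": (190, 2500)},
        "returning_users": {"control": (410, 2500), "variant": (360, 2500)},
    }

    for segment, groups in results.items():
        rates = {name: conv / n for name, (conv, n) in groups.items()}
        lift = rates["variant"] - rates["control"]
        print(f"{segment}: control {rates['control']:.1%}, "
              f"variant {rates['variant']:.1%}, lift {lift:+.1%}")
    # Aggregated, the two groups look nearly identical; split by segment,
    # the variant helps new users and hurts returning ones.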

There’s one more factor to keep in mind: always look at both negative and positive results. It’s possible that a superficially positive result, like a higher percentage of push notification opens or more add-to-cart completions, won’t actually help your top line. If your main goal (like revenue or DAU) stays the same — or, worse yet, declines — it’s worth considering why the sub-goal went up. This would suggest that the results are more ambiguous than they first appear.

Two-tailed testing helps with this process. A one-tailed test only flags a change as significant if the metric moves in the expected direction, such as an increase; a two-tailed test also flags significant movement in the opposite direction. This makes it easier to spot unintended consequences in your A/B test.
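The difference shows up directly in how the p-value is computed from the same test statistic. A minimal sketch with invented counts, where the variant actually performs worse:

    import math
    from scipy.stats import norm

    conv_a, n_a = 400, 5000   # control
    conv_b, n_b = 330, 5000   # variant (hypothetically worse)

    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    z = (p_b - p_a) / math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))

    one_tailed = norm.sf(z)            # only asks "did the variant improve things?"
    two_tailed = 2 * norm.sf(abs(z))   # asks "did anything change, either way?"
    print(round(one_tailed, 3), round(two_tailed, 3))
    # The one-tailed test shrugs at the drop; the two-tailed test flags it.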


Chapter 7: Other Types of Mobile App Testing

A/B testing is relatively simple to learn, but difficult to master. Below are a couple of best practices that we recommend for anyone looking to improve their mobile A/B testing process.

Start With an A/A Test

For the most reliable A/B testing results, it’s recommended to run an A/A test first in order to calibrate your tools and ensure that your future test results will be accurate.

In brief, an A/A test is like an A/B test except the control and the variant are the same. Since there’s no difference between the two versions of your app, the results of the “change” should be completely flat.

It takes time for any split test to collect enough samples to reach statistical significance, so A/A testing is not an overnight process. The downside is that it keeps you from effectively A/B testing for as long as it’s running. But the pros generally outweigh the cons, because it only takes a single A/A test to confirm that your implementation is working.
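Conceptually, the pass condition for an A/A test is simply “no significant difference.” A minimal sketch with invented counts from two identical builds:

    import math
    from scipy.stats import norm

    # Hypothetical A/A results: both groups ran the identical app version.
    conv_a, n_a = 510, 10_000
    conv_b, n_b = 495, 10_000

    pooled = (conv_a + conv_b) / (n_a + n_b)
    z = (conv_b / n_b - conv_a / n_a) / math.sqrt(
        pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    p = 2 * norm.sf(abs(z))

    # A healthy setup should usually land here; a "significant" A/A result hints
    # at bucketing or instrumentation problems (or the ~5% false-positive rate).
    print("calibration looks fine" if p >= 0.05 else "investigate your setup")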

Experiment With Multivariate Tests

Once you’ve gotten the hang of A/B testing, consider experimenting with multivariate tests (MVTs). An MVT is an A/B test that compares the control group to several different variants, allowing you to test proposed changes against each other.

MVTs sound better than A/B tests in theory, but the downside is that they take longer to complete. It’s impractical to run an MVT on a user segment that’s too small. Even if you run the test on your whole app audience, adding too many variants will slow down the test results because of the number of samples needed.
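The slowdown is mostly arithmetic: each additional variant needs roughly the same per-variant sample, so the total traffic requirement, and therefore the runtime, grows with the number of arms. A rough sketch under assumed numbers:

    import math

    sample_per_variant = 12_000    # assume a prior power calculation gave this figure
    daily_eligible_users = 4_000   # hypothetical traffic routed into the experiment

    for extra_variants in (1, 3, 5):           # variants beyond the control group
        total = sample_per_variant * (extra_variants + 1)
        days = math.ceil(total / daily_eligible_users)
        print(f"control + {extra_variants} variant(s): {total:,} users, ~{days} days")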

MVTs aren’t intended to cheat the system and provide faster or more efficient results. However, they’re useful when you want to pit several changes against one another before committing to one. It may be beneficial to analyze the relative advantages of one variant over another instead of only comparing each one to the control group.

Chapter 8: Other Mobile A/B Testing Resources

When in doubt, you can consult our four commandments of mobile A/B testing for further tips and best practices.

Leanplum was founded by Momchil Kyurkchiev, CEO, and Andrew First, CTO, former Google engineers with years of experience in optimization. After building YouTube’s A/B testing product, Momchil and Andrew realized the mobile industry lacked similar tools. Together, they joined TechStars and turned their passion for optimization into a successful SaaS company.

Today, we’re experts in mobile A/B testing, and have expanded our platform to include more than just optimization. Get Messaging, Automation, Personalization, App Editing, A/B Testing, and Analytics in one integrated platform.

Get a demo today to learn how Leanplum can help you drive more app ROI.
