
Why A/B Testing Fails for SMBs — And What to Do Instead

TL;DR: If you're a small or medium-sized business struggling to optimize product conversion through A/B testing, you're not alone. A/B testing tends to work best for large companies, while most other businesses lack the traffic, expertise, and scale needed to achieve a positive ROI. This article explores the main barriers SMBs face with A/B testing, the alternative strategies available to them, and how AI is changing the game.

Key obstacles SMBs face with A/B testing

We have interviewed dozens of SMBs to understand their challenges with A/B testing and have run over 100 A/B tests in our own careers at companies of all sizes, from startups to big tech. From this, we've identified four main obstacles that SMBs face when running A/B tests with in-house teams:

[Figure: Barriers to A/B testing for SMBs]

Problem #1: Not enough traffic

The single most important barrier to A/B testing is a lack of traffic or sample size. A/B testing requires testing a sample to determine a statistically significant difference in performance between two or more variants. The required sample size for a significance test is determined by:

- The baseline conversion rate
- The minimum detectable effect (the smallest uplift worth detecting)
- The significance level (commonly 95%)
- The statistical power (commonly 80%)

Now, let's take the example of an SMB eCommerce store with 400k monthly website visitors and a 1.5% website visit-to-purchase conversion rate. They want to test two versions of their website. The smallest effect size they want to detect is a 5% conversion rate uplift. This test requires a minimum sample size of 830k website visitors (use a free calculator to try different variables).
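The example above can be reproduced with a standard two-proportion power calculation. This is a minimal sketch using only Python's standard library, assuming 95% confidence and 80% power (the exact result varies slightly between calculators depending on these assumptions):

```python
from statistics import NormalDist

def sample_size_per_variant(baseline_rate: float,
                            relative_uplift: float,
                            alpha: float = 0.05,
                            power: float = 0.80) -> int:
    """Approximate sample size per variant for a two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    effect = p2 - p1
    return int((z_alpha + z_beta) ** 2 * variance / effect ** 2) + 1

# 1.5% baseline conversion, 5% relative uplift as MDE
per_variant = sample_size_per_variant(0.015, 0.05)
total = 2 * per_variant
months = total / 400_000  # 400k monthly visitors
print(per_variant, total, round(months, 1))  # roughly 845k total, ~2 months
```

At 400k visitors per month, the total sample works out to roughly two months of traffic, matching the figure in the text.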

This test would need to run for about two months to collect a sufficient sample size. Now the question is: Does it make sense to let a test run for that long? The clear answer is no. That's because:

- External factors such as seasonality, marketing campaigns, or pricing changes shift user behavior over a long test window and confound the results.
- Visitors delete cookies or switch devices over time, polluting the sample.
- The opportunity cost is high: while one test occupies a page for months, nothing else can be tested or shipped there.

In sum, SMBs typically don't have the required traffic to run A/B tests within a reasonable timeframe. This is especially true for incremental conversion rate optimization, which aims to find many small uplifts (read more in our blog post on testing algorithms).

Problem #2: Lack of expertise

A second key barrier to A/B testing is a lack of experience and expertise. That's because the success of each test ultimately depends on the quality of the hypotheses being tested.

A/B testing involves some technical complexities. However, these can be overcome relatively easily, as the required knowledge is available to anyone interested in learning about it. Hence, this is not the type of expertise that is difficult to build or acquire. The real limitation lies in experience with formulating hypotheses specific to the conversion problem at hand—e.g., creating a UI that minimizes drop-off on the sign-up step, nudging users to add items to a shopping cart, or designing pricing tables that help users choose a subscription plan.

Typically, defining the hypothesis is done by a product manager (PM) whose team “owns” the codebase for the particular part of the user journey. In big tech companies this ownership is highly departmentalized. For example, one PM owns the registration and login flow, while another owns the onboarding experience. These PMs are experts in their specific domains, having built a deep understanding of the respective problem space through data analysis, user interviews, and extensive A/B testing. Each test further expands this knowledge.

However, building this kind of knowledge internally is much harder for smaller organizations. First, they can't compete with big tech companies when it comes to hiring and retaining the best PMs with expertise in growth optimization. Second, they can't afford to assign one PM (and a tech team) solely to growth optimization. Often, it's the business owner or a technical marketer who creates A/B tests.

Lacking specialization and facing competing priorities, such teams fail to develop the expertise needed to formulate high-quality hypotheses that yield statistically significant outcomes. Instead, testing neither delivers the expected impact on conversion nor builds incremental knowledge.

Problem #3: Lack of time

A/B testing is a manual and time-consuming process, considering all the steps involved:

- Analyzing data and formulating a hypothesis
- Designing and implementing the variants
- QA-testing and launching the experiment
- Monitoring the test while it collects data
- Analyzing the results and rolling out the winner

This is particularly challenging for small companies without a dedicated team for A/B testing. For example, consider a very successful Shopify store generating a few million in annual revenue with a D2C product and a small team (five people or fewer). For such organizations, it's extremely hard to carve out time for A/B testing, as there are always more pressing core business activities to take care of, such as procurement, shipping, handling customer complaints, and developing the brand or product they sell.

Problem #4: Negative ROI

Considering all the points mentioned above, for most small and medium-sized businesses, the expected rewards of A/B testing often do not outweigh the cost. In other words, they'd be better off not running any A/B tests with internal resources.

But what are the costs of A/B testing? Let's take the example of an SMB SaaS company with €10M ARR and a dedicated team for running experiments. In Western Europe, such a team costs at least €500k per year (PM, 2–3 developers, designer, analyst). An A/B testing tool costs this company about €100k per year.

Now, let's assume this team is not entirely new to the task and has already acquired some domain expertise. On average, they deliver a 5% incremental uplift in conversion through A/B testing each year. Even then, the team fails to cover its own cost (€500k in incremental revenue vs. €600k in team and tooling costs). Considering that newly formed teams rarely deliver any incremental uplift in the first 12–18 months, it's an investment only high-growth companies with deep pockets should make.
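The back-of-the-envelope math in this example can be checked in a few lines (all figures taken from the text above):

```python
arr = 10_000_000          # annual recurring revenue (€)
team_cost = 500_000       # PM, 2-3 developers, designer, analyst (€/year)
tooling_cost = 100_000    # A/B testing tool (€/year)
uplift = 0.05             # incremental conversion uplift per year

incremental_revenue = arr * uplift     # 5% of €10M ARR = €500k
total_cost = team_cost + tooling_cost  # €600k
roi = (incremental_revenue - total_cost) / total_cost
print(f"Revenue: €{incremental_revenue:,.0f}, "
      f"Cost: €{total_cost:,.0f}, ROI: {roi:.0%}")
```

The ROI comes out negative, which is the point of the example: even an experienced, fully staffed team operates at a loss at this scale.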

Which options do SMBs have?

Given all these constraints, what options do SMBs have for data-driven product and conversion rate optimization? Simply put, there are three paths forward:

- Skip A/B testing altogether and rely on proven best practices and qualitative user feedback.
- Outsource experimentation to a specialized agency or consultancy.
- Adopt AI-powered optimization tools that work with less traffic and require far less hands-on effort.

The future of optimization: How AI is changing the game

While classic A/B testing has remained largely unchanged for decades, machine learning and AI are now transforming product optimization at an unprecedented pace. This is especially good news for small and medium-sized businesses, as it makes continuous product growth optimization accessible to companies that can't (or shouldn't) rely on traditional A/B tests for the reasons mentioned above.

The main advantages of AI-powered optimization over classic A/B testing lie in its data efficiency and the high degree of automation that can be achieved:

- Data efficiency: adaptive approaches such as multi-armed bandits shift traffic toward better-performing variants while the experiment is still running, so meaningful results require far smaller samples than a fixed-horizon A/B test.
- Automation: AI can generate hypotheses and variants, launch and monitor experiments, and roll out winners with minimal human effort, removing the need for a dedicated in-house team.

As a result, in the coming years, we can expect a wave of AI-powered product optimization tools that will make a large part of A/B testing obsolete. These innovations will empower SMBs to improve their products, lower customer acquisition costs, and reinvest those savings into building products their customers love.