It is common to see a sharp improvement in performance during the first few months of a new CRO programme. You launch your initial tests, the numbers head in the right direction, and the return on investment looks strong.
This early progress is not unusual; it often comes from fixing obvious issues that had already been identified but never prioritised. In other words, you are not discovering new opportunities yet; you are simply acting on what was already known.
But let’s be honest, those early wins are not always driven by a long-term strategy. They happen because you have finally cleared your internal backlog. Then comes the inevitable slowdown. The list of obvious fixes runs dry and the low-hanging fruit is gone.
And then the weekly update meeting starts to feel very different. Your stakeholders have got used to seeing green arrows and double-digit growth. Now the results are flat. They want to know why the numbers are not as positive as they were before.
The pressure builds to just get a win on the board. The team stops taking risks. They stop running bold experiments that might fail and start testing safe changes to avoid difficult conversations.
This is why, in our experience, most CRO programmes lose momentum within the first 90 days.
The energy of weeks one to six fades, and by week ten the results become contradictory or confusing. This is the plateau, and it is where programmes start to stall.
Why does this happen? It isn't a lack of ideas that holds back a testing programme. It is an obsession with the “average” user, a user who doesn't actually exist.
The problem with aggregated insights
The primary reason for the flatline is an obsession with top-level, aggregated data.
Teams assume their analytics setup is ready to rely on, when often it is held together with duct tape. They look at the overall funnel and see the same insights repeatedly.

Let’s look at a concrete example
Imagine your team is staring at a Product Listing Page. They see that only 50% of visitors actually click through to view a product. They decide this is the bottleneck. The goal is set: get it to 60%. At your current traffic levels, that lift is worth £50k a month.
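To see where a figure like that comes from, here is a back-of-envelope sketch. The traffic, downstream conversion, and order-value inputs are assumptions chosen purely to illustrate the maths, not figures from the example:

```python
# Back-of-envelope value of lifting PLP progression from 50% to 60%.
# Every input here is an assumed figure for illustration; swap in your own.
monthly_plp_visitors = 250_000       # assumed visitors reaching the listing page
progression_now, progression_target = 0.50, 0.60
pdp_to_order_rate = 0.04             # assumed product-page-to-order conversion
average_order_value = 50.00          # assumed AOV in GBP

extra_pdp_visits = monthly_plp_visitors * (progression_target - progression_now)
incremental_revenue = extra_pdp_visits * pdp_to_order_rate * average_order_value
print(f"Extra product-page visits: {extra_pdp_visits:,.0f}")      # 25,000
print(f"Incremental revenue: £{incremental_revenue:,.0f}/month")  # £50,000
```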
For the next two months, they run multiple A/B tests on the grid layout, the filter logic, and the size of the product cards. The rate creeps up, but never to the level expected.
That is because the headline 50% figure is masking what is really going on.
The power of deep segmentation
If that team stopped looking at the aggregate and started breaking down the data, the picture would change immediately.
Look closer, and that 50% average falls apart. Returning users are flying through at 80% because they know what they want, whilst new users are barely reaching 20%; an even split of the two blends straight back to the 50% you have been staring at.
Or perhaps the device split is the culprit. Desktop progression sits at a healthy 70%, while mobile traffic drags the average down with a 30% progression rate. Suddenly, you don't have a "listing page problem." You have a "mobile usability problem" or a "new user trust problem."
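This is a quick check to run once sessions are tagged. Below is a minimal pandas sketch; the column names are assumptions about your analytics export, and the toy data is constructed so the figures echo the example above:

```python
import pandas as pd

# Toy session-level data, built so the numbers match the worked example:
# aggregate 50%, returning 80% vs new 20%, desktop 70% vs mobile 30%.
# Column names (user_type, device, progressed) are assumed export fields.
sessions = pd.DataFrame({
    "user_type": ["returning"] * 100 + ["new"] * 100,
    "device":    (["desktop"] * 50 + ["mobile"] * 50) * 2,
    "progressed": [1] * 80 + [0] * 20      # returning: 80 of 100 progress
                + [1] * 20 + [0] * 80,     # new: 20 of 100 progress
})

print("Aggregate:", sessions["progressed"].mean())         # 0.50
print(sessions.groupby("user_type")["progressed"].mean())  # new 0.20, returning 0.80
print(sessions.groupby("device")["progressed"].mean())     # desktop 0.70, mobile 0.30
```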
But why do teams miss this in the first place? Relying on aggregate data is rarely just a mistake. It is usually a symptom of a deeper capability gap caused by poor stakeholder alignment.
When a testing programme stalls, the pressure for quick wins intensifies. To maintain experimentation velocity and keep the business happy, teams are forced to cut corners. The proper insight-generation process is often abandoned. They take aggregate numbers at face value just to get tests live.
Segmentation is often the single biggest factor in getting a stagnant testing programme back on track. By isolating these groups, you can stop testing generic layouts and start solving specific challenges.
Moving from generic to specific
To keep your programme moving beyond the 90-day mark, you must adjust your strategy.
Stop writing hypotheses that look like this:
"By changing the layout of the listing page, we will increase progression to product pages."
That is too broad. It assumes the problem is visual, and it assumes the problem applies to everyone. Instead, based on the segmented data above, the hypothesis becomes:
"New users on mobile are abandoning the listing page because they cannot easily compare product features. By implementing a 'key feature' badge to the mobile card view, we will increase the progression rate by 10%."
This is actionable. It addresses a specific friction point for a specific group of users.
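It is also a hypothesis you can size before committing to it. As a rough sanity check, here is a sketch using statsmodels (our choice of tool, not a requirement); taking the example's 20% new-user rate as the baseline for new users on mobile, and reading the 10% as a relative lift, are both assumptions, as are the alpha and power settings:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Rough sizing for the segmented hypothesis above. The 20% baseline is a
# stand-in taken from the worked example; "10%" is treated as a relative
# lift (0.20 -> 0.22), which is an assumption.
baseline, target = 0.20, 0.22
effect = proportion_effectsize(target, baseline)
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, ratio=1.0
)
print(f"New mobile sessions needed per arm: {n_per_arm:,.0f}")
```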
Next steps
The honeymoon phase is great, but it is not sustainable. Real growth comes from understanding that your traffic is a collection of distinct groups.
Deep segmentation is not just a data exercise; it is a strategic pivot that changes your entire operating model:
- It changes how you structure your roadmap. You stop building queues based on URLs and start building them around specific user friction points.
- It changes the scale of your experiments. You break free from the pressure of launching endless minor tweaks, giving your team the mandate to build larger, more impactful tests that actually solve complex user problems.
- It changes how you measure success. You stop chasing aggregate conversion lifts and start optimising for specific user groups.
Do not fall into the trap of thinking optimisation is a light switch you can just flip on to please stakeholders. Go back to your analytics, find the specific segments dragging your averages down, and rebuild your entire experimentation strategy around solving their problems.
If your testing programme has hit a plateau, you do not have to figure out the pivot alone. Our CRO experts help teams move beyond aggregate data to build targeted, cohort-driven testing roadmaps. If you are ready to start running bold experiments that actually impact revenue, get in touch.
