Recently, Amazon agreed to pay $2.5 billion to settle a case with the U.S. Federal Trade Commission. The charge was that the company had misled customers into enrolling in its Prime membership and made cancellation unnecessarily difficult.
For years, Prime sign-ups and renewals looked like a runaway success. Subscriptions were climbing, retention was strong, and revenue from recurring payments kept growing. By the numbers, it seemed like an ideal outcome. But customers were telling a very different story. Many felt tricked into signing up or trapped in a subscription they no longer wanted. What appeared to be success on the dashboard turned out to be frustration and loss of trust in practice.
This is the danger of relying too heavily on isolated metrics.
The Seductive Power of Strong Numbers
Metrics are attractive because they are concrete. They give teams something to celebrate and present to leadership. But they rarely show the full picture.
A high completion rate can signal either an efficient flow or a design that pushes people through a process they don’t actually want. Time spent on a screen might mean users are engaged, or it might mean they are stuck and confused. Even a metric like retention, often seen as a gold standard, can hide situations where users stay because leaving is too difficult, not because the product is genuinely valuable.
This is why metrics, while useful, can also create blind spots.
What the Numbers Miss
Quantitative data can tell us what happened and how often, but it cannot explain why. It doesn’t show intent, emotion, or context.
The Amazon case makes this clear. Subscription numbers kept rising, but those figures said nothing about how customers felt during the sign-up or cancellation experience. Only when the gap became undeniable through complaints and legal action did the full story come to light.
Without the human side of the equation, numbers can look like progress while satisfaction quietly erodes.
How UX Research Complements Data
This is where UX research changes the game. It adds the missing context that turns metrics into real understanding.
Analytics can flag anomalies and patterns. Research explains the motivations and obstacles behind them. The best approach combines the two:
- Use analytics to spot where users are dropping off or behaving in unexpected ways.
- Follow up with interviews, usability tests, or observation to uncover what they were trying to do and how the experience made them feel.
This mixed approach not only prevents wrong assumptions but also saves teams from pouring time and resources into solving the wrong problem.
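To make the first step concrete, here is a minimal sketch of how a team might locate the biggest drop-off in a sign-up funnel from raw event data. The step names, sample events, and structure are illustrative assumptions rather than any particular analytics product's API; the point is only that the numbers locate the leak, while follow-up research explains it.

```python
# A minimal sketch, assuming event data arrives as (user_id, step) pairs and the
# funnel steps below; the names and sample data are illustrative, not a real API.
from collections import defaultdict

FUNNEL_STEPS = ["viewed_offer", "started_signup", "entered_payment", "confirmed"]

events = [
    ("u1", "viewed_offer"), ("u1", "started_signup"),
    ("u1", "entered_payment"), ("u1", "confirmed"),
    ("u2", "viewed_offer"), ("u2", "started_signup"),
    ("u3", "viewed_offer"),
]

# Count the unique users who reached each step.
users_at_step = defaultdict(set)
for user_id, step in events:
    users_at_step[step].add(user_id)

# Report step-to-step drop-off; the largest drops mark where follow-up
# interviews or usability tests are most likely to pay off.
previous = None
for step in FUNNEL_STEPS:
    count = len(users_at_step[step])
    if previous:
        print(f"{step}: {count} users ({1 - count / previous:.0%} drop-off)")
    else:
        print(f"{step}: {count} users")
    previous = count
```

Output like this only says where users leave; the interviews and observation that follow are what reveal why.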
The Human Side of Metrics
At its core, good research is about storytelling. It connects data points into a narrative about real people’s needs, frustrations, and goals.
An education company learned this when exploring how learners valued practice with native speakers. Analytics suggested the demand was strong, but field research revealed a deeper truth: in some classrooms, students waited nearly an hour just to say one sentence. That lived experience reframed the challenge, inspiring the company to design immediate speaking opportunities. The numbers pointed in a direction, but research uncovered the story that made the numbers meaningful.