Monday, 10 a.m., product meeting. Someone says: "Our users want the feature to be simpler." Everyone nods. No one asks: How do you know? Because the question would be rude. Because everyone pretends they know. The feature gets prioritized, the sprint gets planned, the team gets to work. Three months later, the numbers show: barely any usage.
You don't know your users
That sounds harsh. Let me be more precise: you have data about your users, not knowledge from them.
Personas are useful fictions. They help make target groups tangible. But fictions remain fictions. "Marketing Maria, 34, two kids, uses the product while commuting in the morning" – that's a story, not a person. Maria doesn't age, she doesn't change her mind, she doesn't experience crises. Real people do.
Analytics show behavior in aggregates. Thousands of clicks, conversion funnels, heatmaps. You see what happens; you don't see why someone drops off on page three, even though everything has been "optimized." You don't see the second of hesitation before someone decides not to buy after all.
Sales anecdotes are filtered. The field team tells you what customers say – colored by the sales situation, by what they want to hear, by what they consider relevant. Three rounds of the telephone game before it reaches the product team.
The difference between knowledge about users and knowledge from users is fundamental: one is derived, the other is the source.
Your data conceals more than it reveals
Quantitative data has a blind spot: it captures what's measurable, not what matters.
Your NPS is 42. What do the sevens really think – the ones who aren't enthusiastic but aren't leaving either? Are they loyal or indifferent? Waiting for an alternative, or do they simply not care? The number won't tell you.
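The arithmetic behind that blindness is easy to see. A minimal sketch with hypothetical survey data: two score distributions produce the identical NPS of 42, even though one has four in ten respondents sitting in the indifferent middle and the other nearly six in ten.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) are counted in the denominator but otherwise vanish."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical run A: 52% promoters, 38% passives, 10% detractors
run_a = [10] * 52 + [7] * 38 + [3] * 10
# Hypothetical run B: 42% promoters, 58% passives, 0% detractors
run_b = [9] * 42 + [8] * 58

print(nps(run_a))  # 42
print(nps(run_b))  # 42 – same score, very different audience
```

Both dashboards show 42; only a conversation reveals whether the sevens are loyal or just haven't found an alternative yet.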
Conversion rate is up 12%. The dashboard shows green. But you don't know whether you just made the purchase decision easier for people who would have bought anyway – and lost the ones who were still considering. The story behind the number stays invisible.
Qualitative data – real conversations, open questions, listening without an agenda – delivers something different: context. The sentence that explains why the number is what it is. The frustration that no metric reflects. The workaround users have found because your feature doesn't work the way they need it to.
But qualitative data is expensive, slow, and uncomfortable. That's why it remains the exception.
Research as an event is structural blindness
Most companies do user research like dentist visits: rarely, laboriously, unpleasantly. A project every six months. Apply for budget, brief an agency, recruit participants, conduct interviews, analyze, present. Eight to twelve weeks until insights arrive. By then, the sprint is long over.
In between, assumptions become facts because no one challenges them. "We know that..." – no, you assume. But the assumption has been repeated often enough that it feels like knowledge.
This isn't a criticism of teams or people, but of the infrastructure. When research is an event, the time in between is spent flying blind – not out of negligence, but by design.
Teresa Torres calls what's missing Continuous Discovery – an approach built on the premise that product development requires continuous learning, not occasional projects. The concept isn't new; her book came out in 2021. Most teams still don't implement it, because the tooling was missing.
Five user voices a day
Five real user voices per day. Not data points, but stories, quotes, contradictions. Someone explaining in their own words why they use your product – or why they stopped.
Assumptions would die faster – not in quarterly meetings, but the same day. "Our users want X," and that evening you read three interviews saying the opposite. Uncomfortable, but cheaper than three months of development in the wrong direction.
Decisions would become more concrete. Not "the user," but "Thomas, electrician, who uses the feature on the construction site and gets frustrated that he needs three clicks instead of one." Abstraction makes decisions easier; concreteness makes them better.
Then there's the compound effect. A single interview changes little. But after four weeks you have twenty conversations, after three months a hundred voices – and you start seeing patterns no dashboard shows. That's no longer a sample; it's a continuous stream of information. The map in your head begins to match the terrain.
Why this wasn't possible before
The problem was rarely willingness; it was infrastructure.
Recruiting alone eats days or weeks: finding the right people who have time and are willing to talk. For every study, all over again. Then an hour per conversation, plus preparation, plus follow-up – and who on the product team has that time when they also need to deliver?
At the end, there are transcripts to read, patterns to recognize, insights to formulate. Work that doesn't scale. By the time results arrive, the question is yesterday's; the feature was built, the meeting happened, the decision was made.
The system was designed for events, not continuity – because events were the only thing that could be organized.
Continuous user contact is no longer wishful thinking
At QUALLEE, we automate exactly this step: AI-powered interviews that collect user voices daily, without anyone on the team investing an hour per conversation. The AI leads the conversation; your team gets the insights.
This doesn't replace deep-dive studies. But it closes the gap between them – the months when product teams currently rely on assumptions. Torres' Continuous Discovery becomes operationally feasible, even for teams without a dedicated research budget.
The more interesting question isn't whether continuous user contact is possible. It's what you build when your team starts each morning with five fresh user voices – instead of a dashboard.
Try it yourself
QUALLEE runs AI-powered user interviews and delivers daily insights, without your team handling recruiting, interview facilitation, or analysis.