I built an AI tool that automates qualitative interviews. At the same time, I have 25 years of UX research behind me. So I stand with one foot in the world being replaced, and the other in the world doing the replacing.
That gives me a perspective that's uncomfortable. For both sides.
The Numbers, Briefly
Early 2022: over 3,000 open UX research positions in Germany on Indeed. Early 2026: about 350. Down roughly 89 percent.
In the US, Challenger, Gray & Christmas counted 55,000 job cuts in 2025 that companies explicitly attributed to AI. Entry-level positions have dropped by 15 percent. Gen Z is being locked out of the job market.
So much for the statistics. Now for the question nobody asks.
The Wrong Conversation
The discussion revolves around "Will AI replace researchers?" That's the wrong question. It implies that there's a fixed state – researcher – that either persists or disappears.
The right question: What about research is actually valuable?
Because if we're honest: much of what researchers do was already questionable before AI. Endless transcripts that nobody reads. Reports that disappear into drawers. Insights that never lead to decisions.
AI isn't replacing "the researcher." AI is replacing the parts of the work that were never particularly valuable – and thereby exposing how thin the value creation often was.
That's the uncomfortable truth.
Where Researchers Really Fail
In 25 years, I've seen many researchers fail – myself included, more than once. Not from lack of empathy. Not from poor methodology. But from three things:
They ask the wrong questions. They research what's interesting, not what's decision-relevant. They produce knowledge nobody needs.
They can't translate. They speak research language to people who speak business. The findings die on the way to the decision.
They don't understand their own business. They know everything about users but nothing about margins, roadmaps, technical debt. They can't explain why their work is worth money or creates value.
These were always the problems. AI just makes them visible because the routine work falls away and only the strategic work remains – which many never learned.
What AI Can Really Do (And What It Can't)
Here's my assessment after hundreds of hours of work with my own and external AI tools:
AI can:
- Transcribe, summarize, tag – faster and cheaper than humans
- Find patterns in large data sets that humans miss, even needles in haystacks
- Deliver good first drafts for discussion guides, screeners, reports
- Serve as a sparring partner for hypotheses
AI cannot:
- Decide which questions should even be asked
- Recognize when someone is lying, evading, concealing something important
- Navigate the political landscape of an organization
- Convince a stakeholder to change their mind
- Know when a study would be a waste of time
The first list is what eats time. The second is what counts.
Why I'm Building an AI Tool Anyway
QUALLEE doesn't automate researchers away. It makes research possible that didn't happen before.
The reality: Most product decisions are made without user research. Not because teams don't want it, but because a dozen in-depth interviews cost tens of thousands of dollars and take eight weeks. So they decide based on gut feeling or a focus group with five participants.
QUALLEE changes that. Not by replacing human researchers, but by bringing research where there was none before – into the 90 percent of decisions that are made blind today.
The New Researcher
If I were hiring someone today, three things would matter to me:
Strategic framing. Not: "How do I conduct a good study?" But: "Do we even need a study? What's the right question? What do we do with the answer?"
Business fluency. The ability to explain in a meeting with the CFO why this research saves or generates money. In their language, not ours.
AI judgment. The ability to recognize when AI output is garbage. When it's gold. And when it's dangerously close to the truth but pointing in the wrong direction.
Empathy? Yes, of course. But empathy without these three skills is a hobby, not a profession.
The Fear Is Real – And Useful
46 percent of researchers in the State of User Research 2025 Report find AI "scary." 43 percent know someone who lost their job because of AI. Phew.
I take this fear seriously. It's a signal. Fear says: something fundamental is changing here. Fear says: your previous strategy is no longer enough. Fear says: move.
But the problem isn't the fear – it's what people do with the fear. Some freeze, some deny. Some run in the wrong direction.
The right response to fear is: understand what's changing. Then act.
What You Should Do Now
I don't believe in advice by career stage. The situation is individual. But here are three questions every researcher should ask themselves:
1. How much of your work is routine?
Transcription, tagging, scheduling, screener distribution – that's being automated. Not maybe. Definitely. If that's 60 percent of your time, you have a problem. Not because you're bad, but because your role is defined that way.
2. Can you explain why your last study was worth the money?
Not in research terms. In dollars. In avoided bad decisions. In time-to-market. If you can't do that, you're replaceable – not by AI, but by anyone who can.
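To make "in dollars" concrete, here is a hypothetical back-of-envelope calculation. Every number is invented for illustration; the point is the shape of the argument, not the figures.

```python
# Hypothetical back-of-envelope: the dollar value of one study.
# All figures are invented for illustration, not real benchmarks.
study_cost = 30_000           # a dozen interviews: recruiting, fieldwork, analysis
feature_build_cost = 400_000  # engineering cost of the feature in question
p_wrong_without = 0.30        # assumed chance of building the wrong thing, no research
p_wrong_with = 0.10           # assumed chance of building the wrong thing, with research

# Expected savings = reduction in failure risk times the cost of failure.
expected_savings = (p_wrong_without - p_wrong_with) * feature_build_cost
roi = (expected_savings - study_cost) / study_cost

print(f"Expected savings: ${expected_savings:,.0f}")  # $80,000
print(f"ROI: {roi:.0%}")                              # 167%
```

That's the sentence for the CFO: "This study cost $30,000 and, under these assumptions, saved an expected $80,000 in avoided rework." Argue over the probabilities if you like; having to argue over probabilities is exactly the business conversation a researcher should be able to hold.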
3. When did you last prevent a study?
The most valuable skill of a researcher is knowing when research is a waste of time. If you always say yes, you're a service provider. If you sometimes say no and can explain why, you're a strategist.
My Bet
I'm betting that research becomes more valuable – but researchers become rarer. The demand for user understanding isn't disappearing; it's growing. Every company wants to know what its customers want. AI makes this knowledge more accessible, not obsolete.
That's no comfort for everyone. Some will profit, some will struggle, some will leave the industry. I don't know how to make that fair. I only know that looking away doesn't help – neither for those who stay nor for those who go.