EU AI Act and GDPR: What User Researchers Need to Know in 2026

Emotion recognition banned, transparency required for AI interviews – a practical overview without legal jargon

The first provisions of the EU AI Act have been binding since February 2025. GDPR fines crossed the two billion euro mark last year. Anyone conducting user research with AI support is now operating on a playing field with new rules.

Most articles about AI regulation read like they were written by lawyers for lawyers. This one explains what has actually changed, what matters for research practice, and where the typical pitfalls lie.

The Three Waves of the EU AI Act

The EU AI Act didn't arrive all at once. It rolled out in waves.

In February 2025, the prohibited AI practices came into force. Anyone using emotion recognition software in workplace research or educational settings had that door slammed shut. The ban is categorical: no inferring emotional states from faces, voices, or other biometric signals in these contexts.

In August 2025, the rules for General Purpose AI followed. Providers of general-purpose models such as Claude or GPT now carry transparency obligations: they must maintain technical documentation and publish a summary of the content used to train their models.

In August 2026, the full weight of high-risk requirements arrives. That's seven months away. Anyone conducting research that touches hiring decisions, credit scoring, or educational assessments should start preparing now.

What Remains Allowed and What Doesn't

Emotion recognition is banned in workplaces and schools. If you planned to use facial analysis software to measure user reactions during employee research or tests of educational products, that option is gone. It doesn't matter whether participants consent. The practice itself is prohibited in these contexts. Medical and safety reasons are the only exceptions.

Text-based sentiment analysis remains legal. Analyzing what people write or say, extracting themes, identifying positive or negative sentiments from language: all of this is still permitted. The crucial difference is that you're analyzing content, not biometric signals. You still need proper GDPR consent, of course.
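
To make the distinction concrete, here is a minimal sketch of content-based sentiment analysis on interview transcripts. It assumes the open-source Hugging Face transformers library, and the two transcript snippets are invented; the point is that it scores only what participants said, never a video, audio, or biometric signal.

# Minimal sketch: sentiment from interview text, not from biometric signals.
# Assumes the open-source `transformers` library; the snippets are invented.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # loads a default English model

snippets = [
    "Honestly, the onboarding felt confusing at first.",
    "Once I found the dashboard, everything clicked.",
]

for text in snippets:
    result = sentiment(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    print(f"{result['label']:>8} ({result['score']:.2f})  {text}")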

AI-conducted interviews require disclosure. Article 50 of the AI Act mandates that users must be informed when they're interacting with an AI system. Anyone deploying an AI moderator for interviews cannot pretend it's a human. This isn't just ethically required; it's now law.
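
In practice, the disclosure can be a non-skippable step before the first question is asked. Below is a hypothetical sketch of such a gate; the wording and function names are mine, not prescribed by the Act.

# Hypothetical disclosure gate before an AI-moderated interview starts.
DISCLOSURE = (
    "You are about to be interviewed by an AI moderator, not a human. "
    "The conversation will be recorded and transcribed. Do you want to continue?"
)

def start_interview(confirm) -> bool:
    """Show the disclosure and proceed only if the participant agrees.

    `confirm` is any callable that presents text and returns True or False:
    a UI dialog in a real tool, or `input` in a quick terminal test.
    """
    return confirm(DISCLOSURE)

if __name__ == "__main__":
    agreed = start_interview(lambda text: input(text + " [y/n] ").lower() == "y")
    print("Interview may begin." if agreed else "Interview cancelled.")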

The research exemption is narrower than many think. Article 2 does contain exemptions for scientific research, but these primarily apply to developing and testing AI systems themselves. Conducting user research with AI tools doesn't automatically qualify. Check this carefully before relying on it.

The Three Most Common Pitfalls

After more than two decades in this field, I see teams falling into the same traps repeatedly. Regulation has only raised the stakes.

Sloppy consent. GDPR requires informed consent in clear language, freely given, with an understandable explanation of what data is being collected and why. Most consent forms are either too vague or buried in legal boilerplate that nobody reads. If your consent process doesn't pass the "Would my mother understand this?" test, it's probably not compliant.
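
Consent also has to be demonstrable: GDPR Article 7(1) requires that you can prove it was given. One way is to store a structured record per participant, along the lines of the sketch below. The field names are my assumptions for illustration, not a prescribed schema.

# Illustrative consent record; field names are assumptions, not a standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    participant_id: str        # pseudonymous ID, never a real name
    purposes: list[str]        # what the data is used for, in plain language
    ai_disclosure_shown: bool  # participant was told an AI is involved
    consent_text_version: str  # the exact wording the participant saw
    given_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = ConsentRecord(
    participant_id="p-1042",
    purposes=["interview recording", "transcription", "thematic analysis"],
    ai_disclosure_shown=True,
    consent_text_version="2026-01-v3",
)

Recording the exact consent text version matters: if the wording changes later, you can still show what each participant actually agreed to.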

Keeping data too long. The study ended six months ago. The recordings are still sitting on a shared drive. Transcripts exist in three different tools. Nobody knows who has access. These are exactly the situations that become GDPR violations. Retention periods should be defined before data collection begins.
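
A retention period only helps if something enforces it. Here is a minimal sketch of a scheduled cleanup that deletes study files past their retention window; the directory path and the 180-day period are assumptions for illustration, not recommendations.

# Minimal retention sweep; the path and the 180-day window are illustrative.
from datetime import datetime, timedelta, timezone
from pathlib import Path

RETENTION = timedelta(days=180)
STUDY_DIR = Path("/data/studies/onboarding-2025")  # hypothetical location

cutoff = datetime.now(timezone.utc) - RETENTION
for item in STUDY_DIR.rglob("*"):
    if item.is_file():
        modified = datetime.fromtimestamp(item.stat().st_mtime, tz=timezone.utc)
        if modified < cutoff:
            item.unlink()  # actually delete, don't just archive

Run something like this on a schedule (a simple cron job is enough) and the "recordings still sitting on a shared drive" scenario disappears.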

Hiding AI usage. You're using an AI tool for transcription, initial coding, or the interviews themselves. Participants should know. Not because it's scary, but because transparency is now legally required and has always been ethically appropriate.

How QUALLEE Implements This

When I started building QUALLEE, it was clear that compliance couldn't be an afterthought. Too many tools treat privacy as a feature to add later.

Our servers are located in Germany, hosted by Hetzner. Research data doesn't leave the EU. We don't route it through US cloud providers or store it in countries with weaker data protection.

We don't use biometric analysis. No facial recognition, no emotion detection from video or audio signals. Our AI analyzes what participants say, not what their faces look like. This isn't just a legal decision; it's also a methodological one. Text reveals motivation. Facial expressions are notoriously unreliable for inferring internal states.

Every QUALLEE interview makes clear that participants are speaking with an AI. No deception, no ambiguity. This transparency actually improves data quality. Participants often share more openly with AI interviewers because they don't worry about social judgment.

Data retention is built into the system. Retention periods are set when creating a project. When they expire, data is actually deleted, not just hidden.

Regulation as a Baseline

The EU AI Act and GDPR are forcing the industry to do things we should have been doing anyway. Obtaining genuine consent. Being transparent about methods. Not keeping data longer than necessary. Respecting the people who participate in our research.

These aren't obstacles to good research. They're prerequisites for it.

Teams that treat compliance as a minimum standard to reluctantly meet will always lag behind. Teams that build ethical practices into their workflow from the start will find that regulation barely slows them down.

User research has always been about understanding people. Treating them with respect isn't just legally required; it's the foundation of trust that makes good research possible in the first place.

Try It Yourself

QUALLEE conducts AI-powered interviews that are compliant from the start. Servers in Germany, no biometrics, full transparency. An interview takes about 20–30 minutes.

Join now →

Marcus Völkel