There's no ignoring it anymore. AI has moved from the periphery of our field squarely into the center of how many teams plan, conduct, and analyse user research.
I started this process afraid of AI and what it could mean for the industry. I worried most about ethics and about sharing bad data.
I've spent the last year experimenting with AI tools, and I've developed my own view of where these tools genuinely help and where they quietly undermine the work we do.
I recently read the paper "We Reject the Use of Generative Artificial Intelligence for Reflexive Qualitative Research" and was surprised by the closed view it expresses. We should be looking at AI as augmenting humans in the research process, much as you would collaborate with a colleague on a shared project.
This is my simple and honest personal look at what's working, what isn't, and what every user researcher should be thinking about before weaving AI into their practice.
The Good
Scaling what was previously unscalable
Traditionally, qualitative research has been bounded by our capacity. You can only run so many interviews in a sprint. AI opens the door to hybrid approaches: conduct a smaller set of deep interviews while using AI-moderated conversational surveys to gather qualitative-feeling data from hundreds or thousands of participants. The depth-versus-breadth tradeoff doesn't disappear, but it softens considerably.
Reducing time-to-insight without reducing rigor
Speed has always been the pressure point for embedded user research teams. Stakeholders want answers yesterday. AI tools that auto-generate highlight reels from usability sessions or draft preliminary findings reports give researchers a faster path to sharing early signals without forcing them to cut corners on methodology. When used well, AI becomes an accelerant, not a shortcut.
Democratising research operations
AI can help non-researchers in an organisation conduct lighter-touch research activities such as screening participants, running unmoderated tests, and even doing initial passes at data coding. This doesn't replace dedicated researchers, but it extends research thinking into parts of the organisation that previously operated on pure assumption. When supported by proper frameworks and oversight, this is a net positive for user research culture.
The Bad
The empathy gap
AI can't sit across from a participant*, notice the slight hesitation before they answer, pick up on the tension between what they say and how they say it, or probe gently into the emotional undercurrent of an experience. The richest insights in user research often come from these subtle, deeply human moments. AI can process language. It cannot truly listen. When teams lean too heavily on AI-moderated interviews or automated analysis, they risk producing findings that are technically accurate but emotionally hollow, capturing what users do without ever understanding what it feels like to be them.
Bias doesn't disappear
There's a dangerous assumption that AI analysis is more objective than human analysis. It isn't. AI models carry the biases embedded in their training data, and they introduce new biases through the way they categorise, weight, and summarise information. A skilled human researcher would recognise that a rare but powerful insight deserves amplification. An algorithm optimises for patterns, not for significance.
The "good enough" trap
AI makes it remarkably easy to produce research artefacts such as reports, journey maps, and persona documents that look polished and complete. This creates a dangerous temptation for organisations to accept surface-level analysis as sufficient. When a stakeholder sees a neatly formatted insight report generated in an afternoon, they rarely ask whether the underlying analysis had the depth it deserved. AI can make mediocre research look professional, and that's a problem the field needs to talk about more honestly.
Participant trust
We ask participants to share vulnerable, sometimes deeply personal experiences with us. The introduction of AI into that exchange, whether it's an AI conducting the interview, an algorithm analysing their words, or their data being fed into a model, raises legitimate questions about consent and trust. Many participants don't fully understand what it means for their responses to be processed by AI, and our informed consent practices haven't caught up with the technology. Researchers have an ethical obligation to be transparent about AI's role, and right now, the industry standard for that transparency is inconsistent at best.
Flattening of context
AI excels at finding patterns across data. It struggles with context: the organisational politics that shaped a product decision, the cultural norms that influence how a user population talks about their needs, the historical baggage a brand carries into every interaction. Research that strips away context in favour of pattern-matching produces insights that are legible but not necessarily actionable. The most valuable research connects findings to the messy, specific reality of a product team's situation, and that connection still requires a human mind.
A framework for thoughtful adoption
After a year of experimentation, I've landed on a simple principle: use AI for processing, not for judgment.
Let AI handle transcription, initial planning and scoping, data organisation, and first-pass pattern identification. Keep researchers in charge of study design, rapport-building, interpretive analysis, ethical oversight, and storytelling. The goal isn't to remove humans from the loop; it's to give humans better tools so they can focus on the parts of research that demand empathy and critical thinking. A rough sketch of what that first pass might look like follows below.
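To make the processing-versus-judgment split concrete, here is a minimal sketch of a first-pass pattern identification step. It's one possible approach, not a prescription: it assumes transcripts have already been split into short snippets, and the snippets, cluster count, and library choice (scikit-learn) below are all illustrative. The clustering only groups similar language; deciding what a group means stays with the researcher.

```python
# First-pass pattern identification: cluster interview snippets by
# lexical similarity so a researcher reviews groups, not raw piles.
# This is processing only -- naming and interpreting themes stays human.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Illustrative snippets; in practice these would come from transcripts.
snippets = [
    "I gave up because the export button was impossible to find",
    "Honestly I just wanted to get my data out and couldn't",
    "The onboarding emails felt personal, like someone actually cared",
    "Setup was fine but exporting reports took me three tries",
    "I liked that the welcome tour spoke my language",
    "Support replied fast, which made me trust the product more",
]

# Turn each snippet into a TF-IDF vector (pure text statistics, no judgment).
vectors = TfidfVectorizer(stop_words="english").fit_transform(snippets)

# Group similar snippets; k is a guess the researcher should challenge.
k = 3
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vectors)

# Print clusters for human review.
for cluster in range(k):
    print(f"\nCandidate group {cluster}:")
    for snippet, label in zip(snippets, labels):
        if label == cluster:
            print(f"  - {snippet}")
```

The point of the design is the hand-off: the script surfaces candidate groupings quickly, but deciding which clusters are real themes, which are noise, and which rare outliers deserve amplification remains the researcher's judgment call.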
A few practical guardrails:
- Audit AI outputs ruthlessly. Never publish AI-generated analysis without a thorough human review. Check for flattened nuance and conclusions that feel too tidy.
- Be transparent with participants. If AI will touch their data in any way, say so clearly in your consent process. Give them the option to opt out.
- Resist the speed trap. The fact that you can deliver findings faster doesn't mean you should. Protect time for reflection, discussion, and the slow thinking that produces breakthrough insights.
- Invest in AI literacy for your team. Every researcher should understand, at a basic level, how the AI tools they use work, what they optimise for, where they're likely to fail, and what assumptions they encode.
The future
I don't believe AI is going to disappear from user research. The tools will get better, more integrated, and more persuasive. That makes it all the more important for researchers to engage with them critically rather than reactively, neither dismissing them out of professional anxiety nor adopting them out of organisational pressure.
* Unless we start building AI robots, that is.
