Usability Research at a Crossroads: Will AI Replace UX Researchers?
Source: TechTimes

By mid-2025, AI has moved from being a sidekick in usability research to being part of the everyday toolkit. The Future of User Research Report 2025 by Maze shows that nearly six in ten product professionals now rely on AI, up more than 30% from last year. Instead of spending hours transcribing interviews, crunching data, or drafting research plans, teams are handing those tasks to machines, leaving researchers more time for what matters most: interpreting results and shaping strategy.

Yet this shift raises a tougher question: can AI really capture the hesitation, emotion, and contradictions that often reveal what users think and feel?

Maksim Kozlov

That's the problem space Maksim Kozlov has worked in for years. He brings a rare mix of technical depth and product vision. A product leader and UX research expert, he built and scaled Senso (formerly Fabuza), a B2B UX‑research platform serving 100+ enterprise customers—including L'Oréal, Burger King, Unilever, AliExpress, and Societe Generale Bank. He headed both product and UX research, kept churn under 5%, sustained an NPS above 50, and delivered consistent double‑digit annual revenue growth. Beyond the metrics, his work spans banking, insurance, cybersecurity, e‑commerce, telecom, and retail—helping teams lift conversion, extend customer lifetime value, and raise satisfaction.

We spoke with Maksim about where AI already helps, where it falls short, and what's next for usability research.

Maksim, AI in usability research is surrounded by hype and fear—some see it as a breakthrough, others predict it will replace researchers altogether. Where do you stand?

The truth lies somewhere in between. On the one hand, AI is already useful: it processes huge volumes of feedback, clusters responses, generates hypotheses, and can even draft research plans or reports. Tasks that once took weeks can now be done in hours. But the risk is in assuming it can do more than it actually can. These systems still "hallucinate," invent sources, or miscategorize data. More importantly, they don't understand context or empathy. They don't see hesitation, emotion, or contradictions in respondents—and often those subtle cues are what reveal the real pain points. That's why I don't think AI will replace researchers. What it will do is change the role. Instead of spending time on transcription or categorization, we'll spend more time on strategy, interpretation, and applying insights to product decisions.

You called your recent conference talk "Do AI-Moderators Dream of Electric Respondents?" What's behind this metaphor?

It's a playful nod to Philip K. Dick's novel "Do Androids Dream of Electric Sheep?" but also a serious question. Today, we already have AI systems that can take a research script, ask questions, generate clarifying follow-ups, and cluster the answers into categories. They can even build simple summaries, which gives you more insights than a static survey ever could. But they still operate at the surface level—they don't build trust, they don't adapt deeply to context. In practice, they're closer to advanced survey engines than true human moderators.

In fact, we're already seeing the emergence of synthetic respondents to match these synthetic moderators—closing the loop, so to speak. But their quality is still far from real people: they can simulate conversation flow but fail to generate deep, emotionally grounded insights.

While building Senso's UX research SaaS platform for 100+ enterprise clients such as L'Oréal and Burger King, you saw how automation accelerated processes. From that perspective, in which parts of the usability research process is AI delivering the most value today?

Its biggest strength right now is in scale and speed. During the analysis stage, AI can process thousands of responses, cluster them, and highlight recurring themes in hours instead of weeks. In the planning phase, it generates useful drafts—study scripts, interview plans, even research agreements—which saves teams a lot of time. And in early validation, it helps provide a quick "first layer" of insights to check whether a direction makes sense before investing deeper resources. The quality isn't perfect—you still need to review and correct—but as a starting point, it's a massive time saver.
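To make that analysis stage more concrete, here is a minimal sketch of automated theme clustering, assuming a generic scikit-learn pipeline rather than Senso or any specific tool Maksim mentions; the responses and cluster count are invented for illustration.

```python
# Illustrative sketch of the "analysis stage": grouping open-ended feedback into
# recurring themes. Generic scikit-learn example, not any product's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "Checkout kept timing out on my phone",
    "I couldn't find the delivery cost before paying",
    "The payment page froze twice",
    "Shipping fees only appeared at the last step",
    "Search results didn't match what I typed",
]

# Turn free-text feedback into vectors, then group similar responses.
vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Print each cluster so a researcher can name the theme and review outliers.
for cluster in sorted(set(labels)):
    print(f"Theme {cluster}:")
    for text, label in zip(responses, labels):
        if label == cluster:
            print("  -", text)
```

This kind of grouping is the "first layer" he describes: fast enough to run over thousands of responses, but still needing a researcher to name the themes and notice what the clusters miss.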

You've run projects across banking, insurance, and e-commerce, where user behavior often shapes design decisions. From that experience, how well can AI capture those nuances?

One of the biggest issues is that AI often produces responses that look neat but lack authenticity. In interviews, people contradict themselves, hesitate, or reveal emotions that point to the real problem. AI-generated outputs don't show that nuance—they often sound like polished résumés. Another limitation is consistency. Ask the system the same question twice, and you might get two different answers, because the neural model builds connections slightly differently each time.

So while AI helps reduce workload, you can't rely on it to uncover the deeper insights that real user interaction provides. Still, the hype around AI sometimes creates a dangerous illusion of completeness. In some companies, teams skip human research altogether, trusting AI tools to handle the process end to end—even in niche domains where models perform poorly. This saves costs in the short term but increases the risk of wrong product decisions based on synthetic or incomplete data.

You've noted that many AI errors stem not from algorithms but from inputs. To what extent is research quality today shaped by the researcher's craft versus the model's capability?

In most cases, the errors don't come from the model itself but from what we feed into it. If the recording is poor quality or on a single audio channel, transcription mistakes multiply. If the prompt is vague, the answers will be vague too. But when you provide clean, structured data—good audio, clear categories, well-prepared guides—the system performs much better.

So, the researcher's craft is critical: the way you set up the process, the clarity of your inputs, the discipline of checking outputs. The model adds speed and scale, but it can't replace context, empathy, or the ability to catch contradictions in human responses. In that sense, the quality of research today is still driven far more by the human side than the algorithm.
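As an editorial illustration of the "clean inputs" point, the sketch below contrasts a vague request with a structured analysis prompt built from explicit categories and a cleaned transcript. The category list, transcript snippet, and helper function are invented for the example; the pattern, not any particular tool, is the point.

```python
# The same model behaves very differently depending on how the researcher
# structures the input. All names and content here are hypothetical.

VAGUE_PROMPT = "Summarize what users think about the app."

def build_structured_prompt(transcript: str, categories: list[str]) -> str:
    """Assemble an analysis prompt with explicit categories and output rules."""
    category_lines = "\n".join(f"- {c}" for c in categories)
    return (
        "You are assisting a UX researcher.\n"
        "Classify each quote from the interview transcript below into exactly one "
        "of these categories, and flag quotes that fit none of them:\n"
        f"{category_lines}\n\n"
        "Transcript (cleaned, single speaker per line):\n"
        f"{transcript}\n\n"
        "Return one line per quote: <category> | <verbatim quote>."
    )

transcript = (
    "P1: I gave up because the password rules were unclear.\n"
    "P1: Honestly the onboarding video was nice, but too long."
)
print(build_structured_prompt(transcript, ["Onboarding", "Authentication", "Content"]))
```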

When working with global brands like Unilever, scaling research across markets is always a challenge. Could AI-driven hybrids that combine qualitative depth with quantitative scale help solve that problem?

Absolutely. Hybrid models are where things get really interesting. The real potential lies in scale: running large numbers of interviews with real respondents, moderated by AI systems that can ask clarifying questions in real time. Then researchers can use AI to aggregate and analyze those responses across markets. In this setup, humans provide authentic insights, while automation reduces transaction costs and speeds up synthesis. It's not a substitute for real respondents, but it's an effective way to connect UX insights with CX and analytics data faster and at greater depth.
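A minimal sketch of that hybrid loop, under the assumption that an automated moderator follows a fixed script and asks at most one clarifying follow-up per question. The generate_follow_up() stub stands in for whatever language model a real platform would call; the script, heuristic, and canned answers are hypothetical.

```python
# Sketch of an AI-moderated interview: scripted questions plus one optional
# clarifying follow-up when an answer looks thin. Everything here is illustrative.

SCRIPT = [
    "How do you usually decide what to order for delivery?",
    "Tell me about the last time an order went wrong.",
]

def generate_follow_up(question: str, answer: str) -> str | None:
    """Placeholder for an LLM call: ask for detail when the answer is thin."""
    if len(answer.split()) < 8:  # crude proxy for a shallow answer
        return f"Could you say a bit more about that? ({question})"
    return None

def run_interview(get_answer):
    transcript = []
    for question in SCRIPT:
        answer = get_answer(question)
        transcript.append((question, answer))
        follow_up = generate_follow_up(question, answer)
        if follow_up:
            transcript.append((follow_up, get_answer(follow_up)))
    return transcript

# Demo with canned answers standing in for a live respondent.
canned = iter([
    "Price, mostly.",
    "I compare a few apps and pick the cheapest option that still has decent reviews.",
    "The courier left it at the wrong building and support never replied.",
])
for q, a in run_interview(lambda _q: next(canned)):
    print(f"Q: {q}\nA: {a}\n")
```

Aggregating the resulting transcripts across markets is then the same clustering problem shown earlier, just at a much larger scale.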

Looking ahead, where do you see AI taking usability research in the next five years?

I expect significant progress. Five years ago, language models were primitive compared to what we have now. The pace of change is enormous. The next step will be integration: bringing together UX research data, CX analytics, and product metrics into one ecosystem. That will let companies calculate ROI more precisely—for example, linking a usability issue directly to lost revenue. At that point, research won't just be about user experience in the abstract, but about measurable business outcomes.
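To show what "linking a usability issue directly to lost revenue" could look like once those data sources are joined, here is a back-of-envelope calculation with entirely invented numbers; real figures would come from a company's own analytics.

```python
# Back-of-envelope illustration of the ROI framing above, with invented numbers:
# estimating revenue lost to a single usability issue.
monthly_checkout_sessions = 120_000
error_rate = 0.04                 # share of sessions hitting the usability issue
abandon_rate_given_error = 0.60   # of those, how many give up entirely
average_order_value = 35.0        # in the store's currency

lost_orders = monthly_checkout_sessions * error_rate * abandon_rate_given_error
lost_revenue = lost_orders * average_order_value
print(f"Estimated monthly revenue lost to this issue: {lost_revenue:,.0f}")
# -> Estimated monthly revenue lost to this issue: 100,800
```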

Even then, we'll still need human oversight. AI can highlight correlations, but deciding which ones matter in context is still a very human task. And as that happens, the UX researcher's role will evolve, becoming something closer to a shepherd of neural systems, guiding them across multiple contexts, aligning their findings with business goals, and bringing empathy back into interpretation.

When you're not running research or building products, do you ever catch yourself analyzing everyday experiences, like a restaurant menu, a travel app, or even a family routine, the way you'd analyze a user journey?

All the time. It's almost impossible to turn off that mindset once you've spent years in UX. I'll open a travel app and notice how the filters are arranged, or walk into a café and immediately think about whether the menu flow makes sense. Even at home, when I watch how my kids interact with technology, I can't help but see it as a kind of live usability test. Sometimes it's funny, sometimes frustrating—but it's also a reminder that design isn't just about interfaces or products. It's about how people move through everyday life. And once you see the world that way, you never really stop analyzing it.