
The researcher's paradox

In debates about AI in user research, the question asked most often is whether tools will start replacing researchers. In practice, the more important problem lies elsewhere: the operational cost of work that begins after an interview, test, or survey is over. Transcription, organizing material, synthesis, and translating observations into decisions have long slowed teams down more than the research itself. This is where AI delivers its most tangible effect today - not as a substitute for human judgment, but as a way to shorten the path from raw material to a meaningful design question.

A one-hour interview does not end when the conversation ends. It expands into transcription, coding, synthesis, building shared understanding across the team, and the effort required to turn raw observations into something a team can actually use.

Maze's 2026 report finds that 66% of respondents say demand for user research is higher than a year earlier, up from 55% the year before - an 11‑point, about 20% year‑over‑year increase. At the same time, 69% already use AI in at least some research projects, a 19‑point jump over the prior year. User Interviews' 2025 State of User Research shows a parallel pattern: AI use reaching 80%, though results and sentiment remain mixed.

Automation has rapidly permeated research workflows not because human interpretation stopped mattering, but because the operational workload around research stopped keeping pace.

Note on sources: this article draws primarily on surveys conducted by tool vendors - Maze's 2026 Future of User Research Report (nearly 500 respondents, December 2025–January 2026), User Interviews' 2025 State of User Research (485 respondents), and User Interviews' State of Research Operations (21 ReOps specialists) - alongside Nielsen Norman Group practitioner guidance. These are market snapshots rather than longitudinal studies. Claims should be read as directional and practical observations, not universal laws.

Where AI actually helps in the research cycle

In practice, meaningful gains appear at three stages: planning, execution, and analysis.

Planning: better drafts, not better questions

In planning, AI works well as a partner for exploring variations: sketching out research plans, generating possible questions, organizing areas of exploration, and producing supporting materials. It does not replace the researcher in defining the problem, choosing a method, or evaluating which questions are genuinely on target. Nielsen Norman Group notes that AI can assist with research plans, questions, methods, and collateral, provided it receives enough context and its suggestions are edited rather than accepted wholesale. They explicitly warn against asking for "a complete research plan" in a single prompt: generic prompts tend to produce generic plans.

At Mindsailors, we use large language models to widen the field of possible questions, not to set the agenda. On one industrial project addressing high scrap rates during the final assembly of a medical device, we fed anonymized failure reports into an LLM to generate alternative ways of framing the problem and approaching root‑cause exploration. The model surfaced additional perspectives - but the final interview guide only took shape after those suggestions were tested against the realities of the project and its technical constraints.

Execution: throughput, not nuance

In execution, the clearest gains come from AI handling repetitive work. Maze reports that teams most often use AI for transcription, synthesis, and generating research questions, which is consistent with Nielsen Norman Group's view that AI is most helpful in the planning and analysis stages. What changes first is not how teams interpret data, but how much raw material they can move through in the same amount of time.

A practical distinction is critical here. Attitudinal research - interviews, surveys, open‑ended feedback - generates text‑based data that AI can transcribe, tag, and cluster with reasonable reliability. Behavioral research - watching how users actually interact with an interface or device - remains much harder to delegate. Nielsen Norman Group is explicit that current tools cannot properly observe usability testing, because unlike an experienced researcher they cannot understand what users are doing in context.

Analysis and always‑on signals

For analysis, Nielsen Norman Group recommends using AI as a first‑pass processor: transcribing, summarizing, clustering, and doing preliminary coding that a human then reviews. A faster first pass through raw material significantly narrows the gap between rough notes and something a researcher can interrogate in depth.
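To make the "first pass, then human review" loop concrete, here is a minimal sketch in Python. The codebook, keywords, and snippets are all hypothetical; real tools typically use language models rather than keyword matching, but the structure is the same: the machine proposes preliminary codes, and anything it cannot place goes to a human queue instead of being forced into a cluster.

```python
from collections import defaultdict

# Hypothetical codebook: code name -> trigger keywords.
# In practice an LLM proposes codes; the review loop is what matters.
CODEBOOK = {
    "navigation": ["menu", "find", "lost", "search"],
    "performance": ["slow", "lag", "wait", "loading"],
}

def first_pass_code(snippets):
    """Assign preliminary codes by keyword match; flag the rest for review."""
    coded = defaultdict(list)
    needs_review = []
    for snippet in snippets:
        text = snippet.lower()
        hits = [code for code, keywords in CODEBOOK.items()
                if any(kw in text for kw in keywords)]
        if hits:
            for code in hits:
                coded[code].append(snippet)
        else:
            # No confident code: route to a human, do not guess.
            needs_review.append(snippet)
    return dict(coded), needs_review

snippets = [
    "I got lost in the menu trying to find settings",
    "The page took forever loading",
    "It just feels wrong somehow",
]
coded, review = first_pass_code(snippets)
# The vague "feels wrong" snippet lands in the review queue - exactly
# the kind of signal that should stay with a human.
```

The design choice worth copying is the explicit `needs_review` path: a first-pass processor that silently buckets everything hides precisely the material a researcher most needs to interrogate.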

For more mature teams, the same underlying infrastructure can run continuously. Natural language processing (NLP) systems can work over app reviews, support tickets, social media commentary, and product analytics to create a persistent feedback signal that does not stop between formal studies. Theme detection does not have to wait for the next sprint. When product analytics surface an unusual pattern or a spike in a particular support category, AI‑assisted tools can help teams quickly identify what to ask in follow‑up studies with real users.
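A continuous signal like this can start much simpler than full NLP. The sketch below, using only the Python standard library and invented category names, flags support categories whose weekly ticket count jumps well above a rolling baseline; the thresholds are illustrative, not recommendations.

```python
from collections import Counter

def flag_spikes(this_week, baseline_weeks, ratio=2.0, min_count=5):
    """Flag categories whose weekly ticket count is at least `ratio`
    times the baseline average (and above a noise floor)."""
    this_counts = Counter(this_week)
    base_counts = Counter()
    for week in baseline_weeks:
        base_counts.update(week)
    n_weeks = max(len(baseline_weeks), 1)
    flagged = {}
    for category, count in this_counts.items():
        avg = base_counts[category] / n_weeks
        # max(avg, 1) keeps brand-new categories from dividing by zero.
        if count >= min_count and count >= ratio * max(avg, 1):
            flagged[category] = (count, avg)
    return flagged

# Hypothetical ticket categories over two baseline weeks and the current one.
baseline = [
    ["billing"] * 3 + ["login"] * 4,
    ["billing"] * 2 + ["login"] * 5,
]
current = ["billing"] * 2 + ["login"] * 4 + ["export"] * 6
spikes = flag_spikes(current, baseline)
# "export" (6 tickets, no baseline) is flagged; a follow-up study can
# then ask real users what actually broke.
```

The output of a detector like this is not an insight; it is a prompt for the next round of research questions, which is exactly the division of labor the article argues for.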

In that model, research stops being a series of isolated projects and becomes a continuous signal that can actually steer product decisions - not just comment on them after the fact.


Democratization and the limits of automation

As AI lowers the cost of processing research inputs, more people in the organization can get closer to real user data. Maze reports that in addition to researchers, 39% of organizations have product managers conducting user research, 35% involve market researchers, and 23% have marketers running studies. This democratization can improve adoption of findings, because the people making trade‑offs see the evidence directly rather than a finished slide deck.

But the same data show a clear support gap. While 61% of organizations provide tools and templates for non‑researchers, fewer than half provide dedicated support from specialized researchers, structured training, or research libraries - and 13% offer no support at all. User Interviews' Research Operations study adds that specialists working on democratization see governance and quality‑control demands rising as AI‑assisted tools become easier to use and easier to misuse.

In parallel, Maze respondents identify interpreting nuance and emotion (82%), ethical decision‑making (80%), and framing the right research questions (76%) as areas where human involvement remains essential.

What is not in the transcript

The point is not that AI cannot help with analysis. It is that it cannot notice what is not in the transcript: the workaround a user never verbalizes, the contradiction that matters more than the pattern, or the complaint that is really about values rather than features. Those are still human jobs.

Risks: overtrust, underinvestment, and governance

With AI now standard in research workflows, the main risks are less about the tools themselves and more about how teams use them.

Overtrusting outputs. Nielsen Norman Group repeatedly warns that AI analysis can include inaccurate information, weak methodological suggestions, shallow clusters, omissions, and hallucinated details. Their guidance is simple: do not outsource analysis to AI. Treat it as an initial pass that always requires human review of codes, clusters, and conclusions.

Using efficiency to justify underinvestment. User Interviews' 2025 report and its joint "4 key takeaways" summary with UserTesting show a "high adoption, high anxiety" picture: 80% use AI, but 41% view it negatively; 91% worry about accuracy and hallucinations; 63% worry AI could devalue human insight. At the same time, researchers report budget pressure and shrinking opportunities. If an organization treats AI purely as a cost‑cutting tool, it is very easy to convert better throughput into faster production of weaker insights.

Democratization without standards. ReOps specialists describe growing governance and quality‑control demands as more non‑researchers run AI‑assisted studies. Without shared standards, clear principles, and training, democratization can spread weak evidence as efficiently as it spreads access.

Privacy and data governance. ReOps teams are increasingly tasked with ensuring AI tools handle personal data appropriately, do not use sensitive inputs to train models without approval, and meet internal storage and security requirements. Nielsen Norman Group similarly advises researchers to be deliberate about what they share with AI systems during planning and analysis. As AI becomes standard infrastructure, privacy becomes an ongoing operational responsibility, not a one‑time checklist item.

For researchers themselves, this tension is visible in career sentiment. User Interviews reports that AI adoption is rising, but many respondents worry about how it affects quality, their perceived value, and the shape of research roles. That is not resistance to automation; it is a profession renegotiating what expertise has to look like in automated workflows.

What this means for product design and R&D

In work on physical products, the same pattern is particularly clear. AI does not change what a good solution ultimately means, but it can significantly accelerate variant exploration, the organization of early signals, and the filtering of directions worth developing further. This matters most where the cost of a wrong decision is not a lost sprint but months of misdirected engineering, prototyping, or tooling.

For us, this shows up most clearly in concept screening. On recent projects, we have used rough SolidWorks volumes combined with generative image tools to produce sketch‑like variations faster and test a wider range of directions. This makes it possible to eliminate options that clearly clash with user needs, manufacturing constraints, or brand logic before a team invests in developing polished concepts.

The same pattern appears at larger scale. Nike's A.I.R. initiative combines athlete input, computational design, rapid 3D prototyping, and AI to compress footwear prototyping loops that once took weeks or months into much shorter cycles, enabling far more iterations before committing to production. Mondelez reports that using AI to explore ingredient combinations and constraints virtually helped cut snack development timelines by a factor of five and contributed to dozens of product launches, including Gluten‑Free Golden Oreo.

The details differ from footwear or snacks, but the logic is the same one we apply in our own projects: use AI to widen and filter the early option space, then let user research, engineering constraints, and domain expertise decide which directions deserve real investment.

This is where we see AI's real value in physical product development: at the early stages of design research and R&D, widening the funnel of directions a team can consider without asking AI to decide on users' behalf. Getting the research and framing right early is where the whole project either earns its budget - or irreversibly burns it.

The question that now matters

At this point, the question is no longer whether to use AI in user research. For many teams it is already part of daily practice in planning, analysis, and reporting. What matters far more is what the organization does with the time and capacity it reclaims.

At Mindsailors, AI has its place where it does the most good and the least harm: early in the project planning process, sorting and organizing data from multiple sources gathered during conversations with our clients, and supporting faster exploration of new directions and unconventional perspectives.

Organizations that reclaim that time can use it for better, more frequent, more precisely targeted research - or merely for faster production of shallow results. The advantage will go to those who can tell the difference.
