
What is a user interview?
A user interview is a one-on-one conversation between a researcher and a participant, used to understand how people think, what they need, and how they experience a product or a broader area of their life that a product is trying to serve. The researcher asks questions, listens carefully, and follows the conversation where it's most revealing.
The defining quality of an interview compared to other research methods is depth. Surveys can tell you what percentage of users found something confusing, but they rarely tell you why. Analytics show where users drop off in a flow, but not what they were thinking or what they were hoping to accomplish. An interview creates the conditions for that richer understanding: a participant can describe their reasoning, share a story about a recent experience, or express a frustration in terms specific enough to actually guide a design decision.
User interviews are used throughout the product development cycle, from early discovery work to concept testing to post-launch evaluation. They're one of the most widely used methods in UX research precisely because they're flexible, relatively quick to conduct, and capable of generating insights that change the direction of a product.
How are user interviews structured?
The format of a user interview sits on a spectrum from structured to unstructured, and the choice depends on what the research needs to accomplish.
- Structured interviews use a fixed set of questions asked in the same order with every participant. They're useful when consistency across interviews is important, when the research questions are narrow and well-defined, or when multiple researchers are conducting interviews and comparability matters. The trade-off is that they leave little room to follow unexpected but valuable threads.
- Semi-structured interviews are the most commonly used format in UX research. The researcher prepares a guide of topics and questions but treats it as a framework rather than a script. When a participant says something interesting or unexpected, the researcher can follow that thread with follow-up questions before returning to the guide. This balance between structure and flexibility tends to produce richer and more actionable findings than a purely structured approach.
- Unstructured interviews are conversational and exploratory, with minimal predetermined questions. They're useful in very early discovery phases when the goal is to understand a domain broadly rather than to answer specific research questions. They require more skilled facilitation to keep productive.
How many interviews are needed?
A commonly cited benchmark is five to eight interviews per distinct user segment. The reasoning is practical: qualitative research operates on pattern recognition rather than statistical significance. After a certain number of interviews, the same themes begin to repeat, and additional interviews yield diminishing returns. The point where patterns become clear enough to draw conclusions typically arrives much earlier than researchers expect.
This doesn't mean five interviews is always sufficient. If a product serves multiple distinct user types with different needs and contexts, each segment warrants its own set of interviews. If early interviews surface significant disagreement or unexpected complexity, more interviews are needed to understand the variation. The goal is saturation, the point where additional interviews stop revealing new themes, not a specific number.
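Saturation can be thought of as a simple stopping rule: after each interview, count how many themes appeared that no earlier interview surfaced, and stop when that count stays at zero. A minimal sketch, with invented theme labels standing in for codes a researcher would assign during transcript review:

```python
# Track how many previously unseen themes each interview contributes.
# Theme labels here are hypothetical; in practice they come from
# coding transcripts after each session.

def new_themes_per_interview(interviews):
    """Return, per interview, the count of themes not seen before."""
    seen = set()
    counts = []
    for themes in interviews:
        fresh = set(themes) - seen  # themes this interview adds
        counts.append(len(fresh))
        seen |= set(themes)
    return counts

interviews = [
    {"pricing confusion", "slow onboarding"},
    {"slow onboarding", "missing export"},
    {"pricing confusion", "missing export"},
    {"slow onboarding"},  # nothing new: a sign of saturation
]
print(new_themes_per_interview(interviews))  # [2, 1, 0, 0]
```

A run of zeros at the tail is the signal that additional interviews within that segment are unlikely to change the findings, which is the practical meaning of saturation.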
What makes a user interview question effective?
Interview questions that generate useful insights share several characteristics. They're open-ended rather than yes/no. They're grounded in specific past experiences rather than hypothetical preferences. They don't embed assumptions that steer the participant toward a particular answer.
"Tell me about the last time you tried to do X" tends to produce richer responses than "Do you find X easy?" The first question invites a story. The second invites a one-word answer and a polite follow-up. "How did you feel when that happened?" opens an emotional dimension that quantitative methods rarely capture. "What would your ideal version of X look like?" can surface latent needs, though it should be used carefully since users are often better at describing problems than designing solutions.
Questions to avoid include leading questions ("Did you find the checkout confusing?"), double-barreled questions ("Was it easy and fast?"), and questions about future hypothetical behavior ("Would you use X if we built it?") since users are notoriously poor at predicting their own behavior in hypothetical scenarios.
How are interview findings synthesized and shared?
Collecting interview data is only part of the work. How findings are synthesized and communicated determines whether they actually influence product decisions.
After conducting interviews, researchers typically review recordings or notes and identify recurring themes, notable quotes, and specific observations that answer the research questions. Affinity mapping (clustering related observations together to surface patterns) is a common synthesis method. The output might be a research report, a set of insight statements, an updated set of personas, or a revised user journey map, depending on what the team needs.
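At its simplest, affinity mapping is a grouping operation: observations tagged during transcript review are clustered by shared tag so patterns become visible. A toy sketch of that idea, with invented tags and quotes:

```python
from collections import defaultdict

# Toy illustration of affinity-mapping-style clustering: each
# observation carries a tag assigned during transcript review,
# and observations sharing a tag are grouped into one cluster.
# Tags and notes are invented for illustration.

def cluster_by_tag(observations):
    """Group (tag, note) pairs into {tag: [notes]} clusters."""
    clusters = defaultdict(list)
    for tag, note in observations:
        clusters[tag].append(note)
    return dict(clusters)

observations = [
    ("onboarding", "Didn't know where to start after signup"),
    ("pricing", "Couldn't tell which plan fit their team"),
    ("onboarding", "Skipped the tutorial and got lost"),
]
clusters = cluster_by_tag(observations)
# clusters["onboarding"] now holds both onboarding notes
```

In practice the tagging itself is the hard, judgment-heavy step; the grouping is mechanical, which is exactly the part that synthesis tools automate.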
Sharing findings with stakeholders is as important as the synthesis itself. Direct quotes from participants tend to land with more impact than summarized conclusions, because they make user perspectives concrete and specific. A stakeholder who hears a user describe a frustration in their own words responds differently than one who reads that "users found the onboarding confusing." The goal is to create enough shared understanding of user perspectives that design decisions are grounded in real evidence rather than internal assumptions.
How has user interviewing changed with AI?
AI tools have meaningfully changed the mechanics of synthesis without changing the fundamentals of what makes interviewing valuable.
Platforms like Dovetail now automate significant parts of transcript analysis. Features that previously required hours of manual tagging and clustering, like grouping similar observations across interviews, surfacing recurring themes, and flagging notable moments in recordings, can now be completed in a fraction of the time. Teams that previously spent three or four days on synthesis after a round of interviews are completing the same work in under a day, with more consistent pattern identification.
Remote interviewing has become the default for most teams. Tools like Zoom, Lookback, and Maze allow researchers to conduct interviews with participants anywhere, which has expanded the practical reach of qualitative research. It's now feasible to interview users in different geographies or demographics that would previously have been logistically difficult to recruit and bring into a physical research setting.
What hasn't changed is the core value of the method: sitting with a user, listening carefully, and following the conversation toward understanding. AI tools can synthesize what was said, but they can't replace the judgment of a skilled researcher in the moment, knowing when to follow a thread, when to step back, and when a participant is hinting at something more important than what they're saying directly.
