There is a common misconception that research, in general, is all about numbers. While numbers are certainly an integral part of research, they are not the whole story. In UX research, for instance, many questions cannot be answered by numerical data alone.

For example, why do your users behave the way that they do? How do they perceive your product? What are their motivations and pain points? What are the core thoughts, fears, and attitudes that shape their decisions and actions? Why do they love some parts of your product and neglect others? These questions can be answered by qualitative research methods that collect quotes, anecdotes, observations, or narrative descriptions from users.

Competitor analysis

A competitor analysis helps you understand where your product stands relative to others in the market, and where you have room to differentiate. Rather than a broad audit of everything your competitors do, it works best when it's scoped to a specific goal.

For a fast-casual pizzeria, that might mean studying how Domino's handles online ordering, how Mod Pizza communicates its "build your own" model, or how a popular local competitor prices its menu. Each of these reveals something different about the market landscape. In practice, you'd typically look at user demographics, product features, content tone and language, and the visual design of their digital touchpoints.

What a competitor analysis can reveal depends on what you're looking for. It commonly surfaces the landscape of the market, including various user types and potential users not yet reached. It can expose gaps in the market, show your product's unique selling proposition, and highlight strengths and weaknesses in your branding and UX strategies. It also helps track the latest trends and innovations shaping user expectations in your industry.[1]
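
One lightweight way to structure the comparison is a feature matrix: record which competitors offer which capabilities, then look for features that few of them cover. A minimal sketch in Python — the competitor names echo the pizzeria example above, but the feature sets are invented for illustration, not research findings:

```python
# Feature matrix sketch: which competitors offer which capabilities.
# The feature sets below are illustrative assumptions, not real data.
matrix = {
    "Domino's":   {"online ordering", "order tracking", "loyalty program"},
    "Mod Pizza":  {"online ordering", "build your own"},
    "Local shop": {"loyalty program"},
}

def gaps(matrix):
    """Features offered by at most one competitor: differentiation candidates."""
    all_features = set().union(*matrix.values())
    return {f for f in all_features
            if sum(f in feats for feats in matrix.values()) <= 1}

print(sorted(gaps(matrix)))  # ['build your own', 'order tracking']
```

The same table can be extended with columns for tone, pricing, or visual style; the point is to make gaps visible at a glance rather than buried in notes.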

Pro Tip! Make sure you include both direct and indirect competitors. Brands in adjacent categories, like fast-casual burger or bowl chains, often reveal experience patterns your users have come to expect, even from a pizza brand.

Content audits

A content audit is an in-depth process for evaluating the content of your product. Rather than a high-level impression, it gives you concrete evidence of what is working and what needs to change.

The process starts with a content inventory. You compile a list of the product pages you want to audit, their URLs, page types, and any relevant notes. Once you have that foundation, you evaluate your content against your objectives. Those might include whether your content is readable, findable, accessible, or easy to understand.
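
The inventory itself can be as simple as a structured list that pairs each page with the criteria you plan to score it against. A minimal sketch — the field names, pages, and the readability threshold are illustrative assumptions, not a standard:

```python
# Minimal content inventory sketch: each entry records a page and
# the evaluation criteria it will be scored against.
# URLs, scores, and the threshold below are invented for illustration.
inventory = [
    {"url": "/menu", "page_type": "product", "readability": 72, "findable": True},
    {"url": "/about", "page_type": "marketing", "readability": 55, "findable": True},
    {"url": "/faq/toppings", "page_type": "support", "readability": 48, "findable": False},
]

def flag_for_review(pages, min_readability=60):
    """Return pages that miss the readability bar or are hard to find."""
    return [p for p in pages
            if p["readability"] < min_readability or not p["findable"]]

for page in flag_for_review(inventory):
    print(page["url"])  # /about, /faq/toppings
```

A spreadsheet works just as well; what matters is that every page gets the same columns, so shortfalls surface as filters rather than impressions.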

When content falls short of your standards, the audit becomes most valuable. Real usage data points you toward actionable solutions. For example, if users are uninstalling your product in large numbers after an update, a content audit can trace where the drop-offs happen and surface the specific language or structure causing friction. For more targeted insights, you can pair the audit with qualitative research methods to understand not just where users leave, but why.[2]

Pro Tip! Don't wait for a crisis to run a content audit. Treating it as a routine checkpoint, rather than a reactive fix, makes improvements easier to scope and prioritize.

Card sorting

Card sorting helps you understand how users naturally group information, which is the foundation of a well-structured product. When users can find what they're looking for without thinking too hard, it's usually because the information architecture reflects their mental model, not the team's assumptions.

The process is straightforward: participants receive labeled cards and sort them into groups that make sense to them. For a fashion retailer, this might reveal whether users expect shorts under Clothing or Sportswear, two valid options that would each lead to a different navigation structure. Studies work best with 30-60 cards and 15-20 participants.

There are 3 variations to choose from, depending on your goals:

  • Open. Participants create their own category names and sort freely. Use this when you're building something new and want to understand how users think from scratch.
  • Closed. You provide the categories, and participants sort into them. Use this when evaluating or refining an existing structure.
  • Hybrid. Combines both approaches, giving participants predefined categories while letting them create new ones when nothing fits.

Card sorting can be run online or in person, with or without a moderator. Whichever format you choose, briefing participants clearly on the purpose of the study makes a significant difference in the quality of what you get back.[3]
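
A common way to analyze the results is a similarity matrix: for each pair of cards, count how many participants placed them in the same group. Pairs with high counts belong together in your navigation. A minimal sketch — the card labels and sort data are invented for illustration:

```python
from itertools import combinations
from collections import Counter

# Each participant's open sort: a list of groups, each group a set of cards.
# The data below is invented for illustration.
sorts = [
    [{"shorts", "t-shirts"}, {"running shoes", "leggings"}],
    [{"shorts", "leggings", "running shoes"}, {"t-shirts"}],
    [{"shorts", "t-shirts", "leggings"}, {"running shoes"}],
]

def similarity(sorts):
    """Count, per card pair, how many participants grouped them together."""
    counts = Counter()
    for groups in sorts:
        for group in groups:
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] += 1
    return counts

sim = similarity(sorts)
print(sim[("leggings", "shorts")])  # 2: two of three participants agree
```

Card-sorting tools typically produce this matrix for you, but the underlying idea is just this pairwise count.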

Ethnography

Ethnographic research is a qualitative method rooted in anthropology. In UX, it means immersing yourself in users' natural environments to observe how they actually behave, rather than how they think they do. Unlike a contextual inquiry session, ethnographic studies typically span days or weeks. That extended time is what allows observed behavior to return to normal, reducing the distortion that comes from users knowing they're being watched.

The value of this depth shows up in what you uncover. A researcher shadowing restaurant servers during a dinner shift might notice that they frequently input orders while carrying plates, a behavior no interview would surface because servers have simply accepted it as part of the job.

Data collection can take several forms: field notes, photography, video recording, and artifact analysis. The researcher's role can be passive, observing without interfering, or active, participating alongside users to build rapport and access more candid behavior.

Pro Tip! Ethnographic research is expensive and time-consuming. Use it when your team's assumptions about users are weak or untested and when getting the design direction wrong carries real consequences.

Contextual inquiry

Contextual inquiry is a qualitative research method that combines observation with in-context interviewing. Instead of bringing users into a lab, you go to them, watching how they work in the environment where they actually use your product. This matters because what users say they do and what they actually do are often very different things.

The method is guided by 4 principles:

  • Context means conducting the session where the user naturally works, whether that's their home, office, or elsewhere.
  • Partnership means treating the user as the expert and the researcher as the learner, letting both parties steer the conversation.
  • Interpretation means checking your understanding in real time by sharing observations with the user and asking them to confirm or correct.
  • Focus means keeping the session anchored to your research goals, even as the conversation flows naturally.

A session typically runs around 2 hours and follows a loose structure.

Contextual inquiry works especially well for understanding complex workflows and uncovering habitual behaviors users can't easily describe in a standard interview. It can be conducted in person or remotely, with screen sharing standing in for physical presence.[4]

Pro Tip! The most valuable moments are often the workarounds. When users do something unexpected to get a task done, that's a signal your product isn't supporting them the way it should.

Cultural probes & diary studies

A diary study is a qualitative research method where participants self-report their behaviors, thoughts, and feelings over an extended period. Instead of observing users directly, researchers ask them to log entries at regular intervals, capturing experiences as they happen rather than relying on memory after the fact.

For example, a design agency studying why users make repeat purchases from brands sent participants a diary kit with questions touching on relationships, routines, and expectations. Over time, those entries revealed patterns that a single interview never could.

Like ethnographic research, diary studies run over a longer period, which makes them well-suited for understanding behaviors that unfold gradually or vary day to day. They can surface user needs, goals, and personas, and are particularly useful for understanding how users carry out specific tasks in their own time and context. This makes them most valuable at the beginning of the design process, during discovery.

After the logging period, researchers typically follow up with participants in interviews to fill gaps, clarify ambiguous entries, and probe deeper into patterns that emerged.[5]

Pro Tip! Diary studies work best when logging feels easy for participants. The lower the friction of recording an entry, the more honest and consistent the data.

Heuristic evaluation

A heuristic evaluation is a usability inspection method where expert evaluators assess an interface against a set of established usability principles, known as heuristics. The most widely used are Jakob Nielsen and Rolf Molich's 10 usability heuristics, though teams can adapt or supplement these depending on the product type.

Unlike usability testing, which involves real users, heuristic evaluation relies on UX professionals. This makes it faster and cheaper to run and particularly useful early in the design process before user testing begins. Nielsen recommends using 3-5 evaluators. A single evaluator finds only about 35% of usability problems on average, while five evaluators together can surface up to 75%.

The process follows these steps:

  1. Define the scope and choose the heuristics you'll evaluate against
  2. Select and brief your evaluators independently, so they don't influence each other
  3. Have each evaluator examine the interface individually and document usability problems
  4. Bring evaluators together to compare findings and rate each problem by severity
  5. Prioritize issues and work with your team to implement solutions[6]
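
Steps 3-5 amount to a simple merge: evaluators file problems independently, then findings are combined, deduplicated, and ranked by severity. A minimal sketch of that aggregation, using Nielsen's 0-4 severity scale — the problem labels and scores are invented for illustration:

```python
from collections import defaultdict

# Each evaluator's independent findings: (problem id, severity 0-4).
# Problem ids and severities below are invented for illustration.
findings = {
    "evaluator_1": [("unclear-error-msg", 3), ("no-undo", 4)],
    "evaluator_2": [("no-undo", 3), ("inconsistent-icons", 2)],
    "evaluator_3": [("unclear-error-msg", 2), ("no-undo", 4)],
}

def prioritize(findings):
    """Merge findings across evaluators and rank by mean severity."""
    by_problem = defaultdict(list)
    for problems in findings.values():
        for problem, severity in problems:
            by_problem[problem].append(severity)
    ranked = [(p, sum(s) / len(s)) for p, s in by_problem.items()]
    return sorted(ranked, key=lambda x: x[1], reverse=True)

for problem, severity in prioritize(findings):
    print(f"{problem}: {severity:.1f}")  # no-undo first, at 3.7
```

Problems flagged by several evaluators and rated severe rise to the top, which is exactly the prioritized list step 5 asks for.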

Pro Tip! Heuristic evaluation gets easier with practice. Over time, evaluators develop instincts for spotting common usability problems without needing to refer to the heuristics as frequently.

Participatory design

Participatory design involves users directly in the design process through simple, hands-on exercises. The goal is to better understand their needs and goals.

Your choice of exercise will depend on the exact nature of the information you are looking for from your users. Some examples of participatory design exercises include:

  • Asking users to make visual empathy collages to map out their perceived connection and interaction with your product
  • Asking users to sketch out the hierarchy of their goals and needs
  • Getting users to role-play and act out their problems and potential solutions
  • Brainstorming and improving on ideas and solutions together in groups[7]

You can shape these exercises however you like. Just remember that the purpose is to elicit solutions and answers from users naturally, so you can create more user-centric designs.

In-depth interviews

A user interview is a qualitative research method where you engage one-on-one with users to understand their experiences, motivations, and attitudes toward your product. Unlike a survey, which captures what users think at a surface level, an interview gives you space to ask follow-up questions, explore unexpected directions, and dig into the reasoning behind what users say. Sessions can be conducted in person or remotely.

Depending on your research goals, interviews can take 3 forms:

  • Structured: follows a fixed set of questions in a set order, useful when you need consistent, comparable responses across participants
  • Unstructured: has minimal predetermined questions and lets the user guide the conversation freely, making it most useful in early discovery when you don't yet know what you're looking for
  • Semi-structured: the most common format in UX research, where you prepare a guide with key questions but stay flexible enough to follow up on responses that open up interesting directions

Use interviews when you want to gather feedback on a product or feature launch, build user personas, understand user needs and goals, identify opportunities to improve an existing product, or explore attitudes toward your visual design and overall experience.[8]

Pro Tip! Prioritize open-ended questions over closed ones. Questions that invite users to describe, explain, or walk you through an experience will almost always yield richer insights than questions that can be answered with yes or no.

Usability testing

Usability testing is a research method used to measure how easy it is to use a product. A researcher asks participants to complete a set of realistic tasks while observing their behavior and listening to their feedback. The goal is to uncover usability problems, discover opportunities, and learn about users in the context of real interactions rather than self-reported opinions.

Tasks in a usability test mirror what users would actually do with the product, like making a purchase or placing an order.

Usability testing can be run at any stage of the design process. Early sessions with low-fidelity prototypes or wireframes help catch structural problems before they become costly to fix. Later sessions on a live product validate whether those problems have been resolved and surface new ones.

Sessions can be run in several formats:

  • Moderated tests involve a facilitator guiding the participant through tasks in real time, either in a lab or remotely.
  • Unmoderated tests have participants complete tasks on their own using a testing platform, with no facilitator present.
  • Guerrilla testing is a lightweight, in-person variation where participants are approached in public spaces for quick, informal sessions.[1]
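
Whatever the format, sessions usually yield simple quantitative signals alongside the observations, such as task completion rate and time on task. A minimal sketch of those two metrics — the session data is invented for illustration:

```python
# Per-participant results for one task: completed or not, and seconds
# to finish (None on failure). All data below is invented.
results = [
    {"participant": "P1", "completed": True, "seconds": 48},
    {"participant": "P2", "completed": False, "seconds": None},
    {"participant": "P3", "completed": True, "seconds": 95},
    {"participant": "P4", "completed": True, "seconds": 61},
]

def completion_rate(results):
    """Share of participants who finished the task."""
    return sum(r["completed"] for r in results) / len(results)

def mean_time_on_task(results):
    """Average seconds to finish, among successful participants only."""
    times = [r["seconds"] for r in results if r["completed"]]
    return sum(times) / len(times)

print(f"completion rate: {completion_rate(results):.0%}")      # 75%
print(f"mean time on task: {mean_time_on_task(results):.0f}s")  # 68s
```

These numbers never explain *why* users struggled; they tell you where to look, and the observations from the session tell you the rest.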

Pro Tip! Avoid giving participants too much guidance during a session. Watching where users struggle without stepping in is often where the most valuable insights come from.