The relevance and usefulness of your UX research are heavily influenced by your participant recruitment process. The quality of data you collect will depend on:

  • The number of participants you recruit into your study
  • Their representativeness of your target audience
  • The screening protocols you have in place

While recruiting participants, it’s important to brief them about the goals of your study and how their information will be used. Also, ensure you gain their full consent and inform them of any incentives they’ll receive for participating.

How many participants do you need?

Participant count is not a universal number in UX research. It depends on what you are trying to learn and which method you are using to learn it. Getting this wrong means either collecting data you cannot act on or spending budget on participants who would not have changed your findings.

For most qualitative usability studies, 5 participants is a well-established starting point. At that number, you tend to surface most of the critical issues with an interface. Adding more participants raises costs while delivering fewer new insights, because qualitative research looks for patterns, not statistical significance.

Other methods follow different rules entirely:

  • Quantitative studies need at least 20 participants to produce statistically reliable numbers
  • Card sorting studies require at least 15 participants per group to reveal meaningful groupings
  • Eye tracking studies require 39 participants to generate stable heatmaps

The logic in each case is the same: the method determines what counts as "enough." Qualitative work reaches a point of saturation where new sessions stop revealing new themes. Quantitative work needs enough data points to calculate confidence intervals and generalize to a broader population.[1]
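The method-to-number guidelines above can be captured as a simple lookup. The numbers come from this section; treat them as starting points rather than hard rules, and the function names here are purely illustrative:

```python
# Illustrative minimum participant counts per UX research method,
# based on the guideline numbers in this section (starting points, not hard rules).
MIN_PARTICIPANTS = {
    "qualitative_usability": 5,   # surfaces most critical issues
    "quantitative_study": 20,     # statistically reliable numbers
    "card_sorting": 15,           # per group, for meaningful groupings
    "eye_tracking": 39,           # stable heatmaps
}

def enough_participants(method: str, recruited: int) -> bool:
    """Check whether a recruited count meets the guideline minimum."""
    return recruited >= MIN_PARTICIPANTS[method]

print(enough_participants("qualitative_usability", 5))  # True
print(enough_participants("quantitative_study", 12))    # False
```

A lookup like this is useful in study-planning spreadsheets or tooling, but the real decision still depends on what you are trying to learn.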

Using internal staff

Recruiting participants from within your own organization can look like a budget-friendly shortcut, but it tends to produce unreliable data. People who worked on the product already know how it is supposed to work. That familiarity shapes how they navigate it, what they notice, and what they miss, all of which pulls your findings away from how real users would actually behave.

There are two cases where involving internal team members is appropriate.

  • Pilot testing is one of them. Before running your main study, you can use colleagues to check that tasks are clearly worded, timing is realistic, and the session flows correctly. The key condition: none of those results feed into your final findings.
  • The other is when internal team members match your target audience. If you are designing a tool for software engineers and your colleagues are software engineers who were not involved in building the product, they can be legitimate participants. Their domain expertise is the point, not a liability.[2]

Pro Tip! Internal participants work for pilot tests only if the results stay out of final findings. For real study data, recruit from outside the team.

Recruitment criteria

Your participants should reflect your primary target audience, not just in who they are, but in how they behave. Behavioral characteristics tend to matter more than demographic ones. Two people of different ages who book travel at the same frequency will behave more similarly in a study than two people of the same age with opposite habits.

Relevant demographics still count. Age, geography, and domain experience can shape how people use a product. It is also worth excluding certain professionals: UX designers, marketers, and IT specialists tend to analyze interfaces rather than use them naturally, producing expert feedback rather than realistic user behavior.

Including people at the extremes of your user spectrum adds further value. Power users and novices tend to surface issues that average users overlook. Designing for the edges tends to improve the experience for everyone in between.

The more criteria you add, the harder it becomes to recruit. Screen only for characteristics that are likely to affect your research questions.[3]

Pro Tip! Screen for behavior first, demographics second. A screener that focuses on what participants do, not just who they are, produces more relevant findings.

Screening participants

A screener is a set of questions used to qualify or disqualify candidates before they enter your study. Getting it right protects your data from the start.

The most important rules to follow are:

  • Avoid revealing the study purpose in your questions. When candidates can guess what you are looking for, they tend to adjust their answers to qualify, which skews your participant pool before the study begins.
  • Structure questions using the funnel technique: start broad and only get specific later.
  • Ask about general digital habits before narrowing in on the behavior your study depends on.
  • Pilot your screener before launching it. A small test run catches unclear wording, broken skip logic, and questions that unintentionally hint at the study goals.
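The funnel technique above can be sketched as an ordered list of questions, each with answers that disqualify a candidate: broad digital habits first, the study-specific behavior last. The questions, answers, and thresholds below are hypothetical, invented for illustration:

```python
# Hypothetical screener using the funnel technique: broad questions first,
# the study-specific behavior last. A disqualifying answer ends screening
# early, without revealing what the study is actually about.
SCREENER = [
    # (question, set of disqualifying answers)
    ("How often do you use a smartphone?", {"never"}),
    ("Which activities do you do online at least monthly?", set()),
    ("How often do you book travel online?", {"never", "less than once a year"}),
]

def screen(answers: list[str]) -> bool:
    """Return True if the candidate qualifies, False at the first disqualifier."""
    for (question, disqualifiers), answer in zip(SCREENER, answers):
        if answer in disqualifiers:
            return False
    return True

print(screen(["daily", "shopping", "a few times a year"]))  # True
print(screen(["daily", "shopping", "never"]))               # False
```

Ordering matters: because the travel question comes last, an early exit never hints that travel booking is the behavior the study depends on.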

Written online surveys are the most common format and work well for filtering large groups quickly. For sensitive or revealing questions, a follow-up phone call adds a layer of protection, since participants find it harder to game a live conversation than a visible list of answer choices.

The screener is also the first point of contact candidates have with your research process. Keep it short, use plain language, and be clear about next steps.[4]

Screener surveys

A screening survey is a list of qualifying questions administered online or offline. People being screened often look for ways to answer favorably so they can join the study.

To avoid influencing participants' answers:

  • Ensure your survey contains open-ended questions. This encourages people to think on their own and avoid guessing what’s “right.”
  • When using multiple-choice questions in your screener survey, include distractors in the options: answers that closely resemble the correct one but are, in fact, wrong. This helps you identify people who guess or pick favorable answers.
  • Avoid sharing too many details about your study during the screening stage as this too can influence people to answer a certain way.[5]
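One way to apply the distractor idea is to flag candidates who select plausible-sounding options that describe something that does not exist. The question, product names, and flagging logic below are invented for illustration:

```python
# Hypothetical multiple-choice question with a distractor option: an answer
# that sounds plausible but names a product that doesn't exist. Selecting
# it flags the candidate as likely guessing to qualify.
QUESTION = "Which of these travel apps have you used in the past month?"
OPTIONS = {"AppA", "AppB", "TripMapper Pro"}   # "TripMapper Pro" is invented
DISTRACTORS = {"TripMapper Pro"}

def flag_guesser(selected: set[str]) -> bool:
    """Return True if the candidate picked any distractor option."""
    return bool(selected & DISTRACTORS)

print(flag_guesser({"AppA"}))                    # False
print(flag_guesser({"AppA", "TripMapper Pro"}))  # True
```

Flagged candidates are usually excluded silently rather than told why, so the distractor stays useful for future screeners.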

Working with a recruiter

When direct access to your target audience isn't available, a professional recruiting agency can fill the gap. Agencies maintain large participant databases, handle scheduling and communication, and pre-vet candidates before sessions start. That pre-vetting matters: it reduces no-shows and saves you from filtering unqualified applicants yourself.

Agencies are especially valuable for hard-to-reach populations, such as participants with specific disabilities, niche professional backgrounds, or life circumstances that are difficult to find through general channels.

The tradeoff is cost. Specialized profiles cost considerably more to recruit than general consumers, and some agencies operate on a recruit-more-than-you-need model. They source a larger pool than you will actually test, as a buffer against no-shows and mismatches, and you are often billed for the full pool.

To get useful results from an agency, give them a detailed brief:

  • How many participants you need
  • Where and when sessions take place
  • How long each session runs
  • What incentives you plan to offer, and any inclusion or exclusion criteria

The tighter your brief, the less room there is for mismatch.

Pro Tip! Brief agencies on who to exclude, not just who to include. Missing exclusion criteria is one of the most common reasons for participant mismatches.

Participants with accessibility needs

Recruiting participants with disabilities requires more planning than general recruitment. People with disabilities are underrepresented in standard panels, so your starting point is often disability advocacy organizations and community groups. Many connect researchers with participants, and building those relationships over time makes future recruitment faster.

Be specific about what experience you need. Someone who is blind and uses a screen reader interacts with your product very differently from someone with low vision who relies on browser zoom. Including one person per category does not mean you have covered the full range of disability-related perspectives.

Ask participants to bring their own assistive technology rather than using lab equipment. Researchers see more realistic behavior when people work with tools they have configured for their own needs. Schedule extra time between sessions for setup and compatibility checks, and factor in transportation costs when setting incentives. People who use assistive technology have developed specialized skills, and your compensation should reflect that.

Pro Tip! Run an accessibility audit before scheduling participants. Testing compliance gaps wastes participants' time and yours.

Gaining consent

Before a research session begins, every participant deserves to know what they are agreeing to. A consent form makes that exchange formal: it documents that you explained the study and that the participant voluntarily agreed to take part. Without it, you risk using someone's data in ways they never agreed to, which violates research ethics regardless of intent.

A well-written consent form covers the study's purpose, what participants will be asked to do, what data will be collected and how it will be stored and shared, the right to ask questions before and during the session, and the option to withdraw at any point without penalty. Send it ahead of the session where possible so participants have time to read it without pressure.

When your study includes minors or adults who cannot fully understand or independently agree to the terms, guardian consent is required before any research begins. For minors who are old enough to understand what is being asked of them, guardian consent alone is not enough. Seek assent from the child directly, using plain language appropriate to their age. Both are needed before you proceed.[6]

Incentivizing participants

Incentives are rewards offered to research participants in exchange for their time and feedback. They serve three purposes:

  • To thank participants for their time
  • To help you reach a broader pool of candidates
  • To signal that you take the research seriously

The type and amount you choose affects not just who applies, but who shows up and how engaged they are:

  • Cash and gift cards are the most common options because they are easy to distribute and simple to understand.
  • Non-monetary incentives like product credits, early feature access, or merchandise can work well if they are genuinely useful to your audience.

The key test is whether the incentive works for everyone you recruit. A gift card for a retailer not available in all regions can quietly exclude part of your participant pool.

To set the right amount, factor in session length, task complexity, and how specialized the required profile is. A 30-minute survey with general consumers needs a smaller incentive than a 90-minute session with senior healthcare professionals. Many researchers use a per-minute rate as a starting point, scaling up for niche or high-expertise participants.
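The per-minute starting point mentioned above can be expressed as a quick calculation. The base rate and multiplier here are placeholders, not recommendations:

```python
# Illustrative incentive calculation: a per-minute base rate scaled up for
# specialized or high-expertise participants. All rates are placeholders.
def incentive(minutes: int, base_rate_per_minute: float = 1.0,
              expertise_multiplier: float = 1.0) -> float:
    """Estimate an incentive amount for one session."""
    return round(minutes * base_rate_per_minute * expertise_multiplier, 2)

print(incentive(30))                            # 30.0  (general consumers)
print(incentive(90, expertise_multiplier=2.5))  # 225.0 (senior specialists)
```

Whatever numbers you settle on, apply them consistently across a study so participants doing the same work receive the same compensation.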

Tell participants when and how they will receive their incentive before the session. Pay people who withdraw early. Withholding compensation from someone who exercises their right to leave undermines the voluntary nature of consent.