
What is a KPI?
A Key Performance Indicator is a metric that measures progress toward a specific strategic goal. The word "key" is doing important work in that definition: not every metric is a KPI. A KPI must be tied to an outcome that matters, not just an activity that can be counted.
The distinction is practical. A product team might track hundreds of metrics: daily active users, session length, feature click rates, error frequency, page load times, support ticket volume. Most of these are informational. A KPI is the subset of those metrics that directly answers the question "are we achieving what we set out to achieve?" If the goal is to increase user engagement, daily active users and session depth might be KPIs. Page load times might be a supporting metric that helps explain changes in the KPIs, but it's not itself measuring the goal.
KPIs exist at multiple levels. Company-level KPIs track overall business performance: revenue growth, customer retention, net promoter score. Product-level KPIs track how a specific product or feature is performing: activation rate, feature adoption, churn. Team-level KPIs track whether a specific team is delivering on its commitments. The same underlying data can appear at multiple levels, measured at different granularity.
How do KPIs differ from metrics?
Every KPI is a metric, but not every metric is a KPI. The difference is strategic intent.
A metric is any quantified measurement: the number of sign-ups today, the percentage of users who opened a specific modal, the average time to complete a task. Metrics describe what is happening.
A KPI is a metric that has been designated as a measure of progress toward a specific goal. It's the metric the team has decided to optimize for, allocate resources toward, and report against. Designating a metric as a KPI is itself a strategic decision: it says "this is what success looks like for this goal."
Vanity metrics are a useful concept here. A vanity metric is one that looks impressive and is easy to improve but doesn't indicate progress toward anything meaningful. Total downloads might grow consistently while daily active users stagnate. Page views might be high while conversion rate is low. Vanity metrics are seductive because they tend upward, but they don't tell you whether the product is actually working for users and the business. Good KPI selection avoids vanity metrics and focuses on measures that reflect genuine outcomes.
What makes a KPI effective?
The SMART framework is the most widely used guide for evaluating KPI quality: Specific, Measurable, Achievable, Relevant, and Time-bound.
- Specific means the KPI is defined precisely enough that everyone agrees on what's being measured. "Improve engagement" is not a KPI. "Increase 7-day user retention from 38% to 45% by Q3" is.
- Measurable means the data needed to track the KPI can actually be collected. A KPI that requires instrumentation that doesn't exist yet isn't actionable until that instrumentation is built.
- Achievable means the target is ambitious but realistic. A KPI with an unreachable target creates demoralization rather than motivation. One set too low doesn't drive meaningful improvement.
- Relevant means the KPI is actually tied to the goal it's meant to measure. The most common failure here is tracking what's easy to measure rather than what matters. Session length is easy to measure; whether users accomplished what they came to do is harder, but more relevant to many product goals.
- Time-bound means the KPI has a specific measurement window. Without a timeframe, a KPI provides no urgency and no natural point for evaluation and revision.
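The retention KPI used as the example of a Specific target can be made concrete in code. This is a minimal sketch under one assumed definition of 7-day retention (active at any point during days 7 through 13 after sign-up); real teams vary in how they define the window, and the data shapes here are illustrative.

```python
from datetime import date, timedelta

def seven_day_retention(signups, activity):
    """Share of users active during days 7-13 after their sign-up date.

    signups:  dict of user_id -> sign-up date
    activity: dict of user_id -> set of dates the user was active
    """
    if not signups:
        return 0.0
    retained = 0
    for user, start in signups.items():
        window = {start + timedelta(days=d) for d in range(7, 14)}
        if activity.get(user, set()) & window:
            retained += 1
    return retained / len(signups)

signups = {
    "u1": date(2024, 1, 1),
    "u2": date(2024, 1, 1),
    "u3": date(2024, 1, 2),
    "u4": date(2024, 1, 2),
}
activity = {
    "u1": {date(2024, 1, 9)},                       # inside the window: retained
    "u2": {date(2024, 1, 3)},                       # only early activity: not retained
    "u3": {date(2024, 1, 10), date(2024, 1, 12)},   # retained
}
rate = seven_day_retention(signups, activity)
print(f"7-day retention: {rate:.0%}")  # 2 of 4 users -> 50%
```

With a precise definition like this, the SMART target "increase 7-day retention from 38% to 45% by Q3" becomes a number anyone on the team can recompute and agree on.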
How many KPIs should a team track?
Most practitioners recommend three to five KPIs per goal, with fewer being better when possible.
The argument for fewer KPIs is attention. A team tracking fifteen KPIs has divided its optimization effort fifteen ways. If every metric is a priority, nothing is. Fewer, better-chosen KPIs create clearer direction and easier trade-off decisions.
The argument for more KPIs is completeness. A product optimized for a single engagement KPI might improve that metric while inadvertently harming other dimensions of the experience. This is the Goodhart's Law problem: when a measure becomes a target, it ceases to be a good measure. Counter-metrics or guardrail metrics (KPIs that measure something the team doesn't want to sacrifice in pursuit of the primary KPI) are a common way to manage this. An engagement KPI paired with a quality or support volume guardrail metric reduces the risk of gaming.
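The guardrail pattern can be sketched as a simple launch rule: ship only if the primary KPI improved and no guardrail regressed beyond a tolerance. The function name, threshold, and sign convention (negative change means the metric got worse) are illustrative assumptions, not a standard API.

```python
def ship_decision(primary_lift, guardrails, max_regression=0.02):
    """Naive launch rule: the primary KPI must improve, and no guardrail
    metric may regress by more than max_regression (relative change).

    primary_lift: relative change in the primary KPI (e.g. 0.04 = +4%)
    guardrails:   dict of metric name -> relative change, where a
                  negative value means that metric got worse
    """
    if primary_lift <= 0:
        return "no-ship: primary KPI did not improve"
    breached = [name for name, delta in guardrails.items()
                if delta < -max_regression]
    if breached:
        return f"no-ship: guardrail breached ({', '.join(breached)})"
    return "ship"

# Engagement up 4%, but task success regressed 5%: the guardrail blocks the launch.
print(ship_decision(0.04, {"task_success": -0.05, "error_rate": -0.01}))
```

Real guardrail decisions also involve statistical significance and judgment calls, but even this toy rule captures the core idea: the primary KPI alone is never allowed to decide.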
The practical answer depends on context. A focused sprint toward a single objective might track one or two primary KPIs. An ongoing product health dashboard might track more. What matters most is that the team can explain why each KPI is on the list and what decision it would change.
How do KPIs connect to design and UX work?
KPIs are relevant to design work in ways that aren't always obvious. Design decisions are not just aesthetic choices; they affect the behaviors and outcomes that product KPIs measure.
A redesigned onboarding flow's impact can be measured against activation rate. A new navigation structure's success can be measured against task completion rate or feature discoverability. An accessibility improvement's impact might show up in support ticket volume or user satisfaction scores. Connecting design work to product KPIs helps design teams make the case for their contributions in the language the broader product organization uses for decision-making.
User experience KPIs specifically include measures like task completion rate, time-on-task, error rate, user satisfaction (often measured through CSAT or NPS surveys), and System Usability Scale (SUS) scores. These are distinct from product-level engagement KPIs but can serve as leading indicators: a deteriorating task completion rate often precedes a decline in retention, since users who can't accomplish their goals with a product eventually stop trying.
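Of these, SUS has a fixed published scoring formula, which a short sketch makes concrete: ten items rated 1 to 5, with odd-numbered (positively worded) items contributing their rating minus 1 and even-numbered (negatively worded) items contributing 5 minus their rating, the sum then scaled by 2.5 to a 0-100 range.

```python
def sus_score(responses):
    """Score one respondent's System Usability Scale questionnaire.

    responses: list of exactly ten ratings, each 1-5, in item order.
    Odd-numbered items contribute (rating - 1); even-numbered items
    contribute (5 - rating). The 0-40 sum is scaled by 2.5 to 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item ratings")
    total = sum(r - 1 if i % 2 == 1 else 5 - r
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# A respondent answering 4 on every positive item and 2 on every negative item:
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```

Note that a SUS score is not a percentage: 75 means "above average usability" against published benchmarks, not "75% of users were satisfied."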
