Design by Metrics
A practical guide to pre-release and post-release product design metrics and how top teams use them to drive growth.
What are product design metrics?
Quantifiable signals of user understanding, ease, and value that guide design decisions. Use them to prioritize work, validate changes, and prove impact.
Pre-release (before you ship)
Use these in concept tests, prototype studies, and usability sessions.
Task Success Rate
% of participants completing a critical task unaided.
Use for: flow clarity.
Time on Task / Time to First Value (prototype)
Median time to reach the “aha.”
Use for: onboarding speed.
Error Rate & Critical Issue Count
Observed errors and blockers per test.
Use for: removing friction.
First-Click Success
% who click the correct element first.
Use for: IA/labeling.
Perceived Ease (SEQ) & SUS
7-point Single Ease Question and/or System Usability Scale.
Use for: benchmarking over iterations.
Copy Comprehension (5-second test)
% who correctly describe value after a brief view.
Use for: messaging clarity.
Feature Value Intent
% who say they’d use the feature weekly + why.
Use for: prioritizing bets.
Goal: exit pre-release with evidence that users can succeed and want the value.
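As a worked example, here is a minimal Python sketch of how these numbers fall out of a small prototype study. The participant fields (completed_unaided, first_click_correct, seq, seconds_to_value) are illustrative assumptions, not a standard schema.

from statistics import mean, median

# Illustrative records, one per participant in a prototype study.
participants = [
    {"completed_unaided": True,  "first_click_correct": True,  "seq": 6, "seconds_to_value": 95},
    {"completed_unaided": True,  "first_click_correct": False, "seq": 5, "seconds_to_value": 140},
    {"completed_unaided": False, "first_click_correct": False, "seq": 3, "seconds_to_value": None},
]

n = len(participants)
task_success_rate = sum(p["completed_unaided"] for p in participants) / n
first_click_success = sum(p["first_click_correct"] for p in participants) / n
perceived_ease = mean(p["seq"] for p in participants)  # SEQ: 1–7, higher = easier
times = [p["seconds_to_value"] for p in participants if p["seconds_to_value"] is not None]
time_to_first_value = median(times)  # median, so one slow session doesn't dominate

print(f"Task success: {task_success_rate:.0%}")
print(f"First-click success: {first_click_success:.0%}")
print(f"Perceived ease (mean SEQ): {perceived_ease:.1f}/7")
print(f"Median time to first value: {time_to_first_value:.0f}s")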
Post-release (after you ship)
Instrument events and read behavior at scale.
Activation Rate
New users who reach first value ÷ signups.
Time to Value (TTV)
Median time from signup → first value event.
Task Success (in logs)
Successes ÷ (successes + errors) for key workflows.
Funnel Drop-off by Step
1 − (completions at step ÷ entrants to step).
Feature Adoption
Users who used feature ÷ MAU (or per-segment).
Retention (W1/W4/W12)
% of cohort returning in weeks 1/4/12.
NPS / CSAT / CES
Sentiment + themes to pair with behavior.
Support Contact Rate
Tickets per 1k active users; a great friction proxy.
Performance as UX
P95 latency for core interactions.
Goal: find the leaks, confirm the loops, and double down on what builds habit.
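A minimal sketch of the first two metrics computed straight from an event log, assuming plainly named events (Sign_Up_Completed, First_Value_Completed) as recommended below. The log rows are made up and timestamps are Unix seconds.

from statistics import median

events = [  # illustrative, time-ordered event log
    {"user": "u1", "name": "Sign_Up_Completed",     "ts": 1_700_000_000},
    {"user": "u1", "name": "First_Value_Completed", "ts": 1_700_000_600},
    {"user": "u2", "name": "Sign_Up_Completed",     "ts": 1_700_000_100},
    {"user": "u3", "name": "Sign_Up_Completed",     "ts": 1_700_000_200},
    {"user": "u3", "name": "First_Value_Completed", "ts": 1_700_004_200},
]

signups, first_value = {}, {}
for e in events:
    if e["name"] == "Sign_Up_Completed":
        signups.setdefault(e["user"], e["ts"])      # keep the earliest signup per user
    elif e["name"] == "First_Value_Completed":
        first_value.setdefault(e["user"], e["ts"])  # keep the earliest first-value event

activated = [u for u in signups if u in first_value]
activation_rate = len(activated) / len(signups)                       # First_Value_Users / Signups
ttv_seconds = median(first_value[u] - signups[u] for u in activated)  # median(First_Value_ts − Signup_ts)

print(f"Activation rate: {activation_rate:.0%}")
print(f"Median time to value: {ttv_seconds / 60:.0f} min")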
How great companies do this
Pick a North Star that reflects user value (e.g., “weekly active workspaces that complete ≥1 shared task”).
Build a metric tree: break the North Star into input metrics design can move (activation, success, drop-off, adoption).
Set guardrails: quality, accessibility, privacy—so changes don’t game the numbers.
Instrument before launch: name events plainly (Sign_Up_Completed, First_Value_Completed, Task_Success, Feature_X_Used); see the instrumentation sketch after this list.
Write metric goals in the PRD: “Reduce step-2 drop-off from 34% → 24%.”
Ship with hypotheses and run A/B tests with clear stop rules (power, MDE, ramp); a sample-size sketch also follows this list.
Operate on a weekly cadence: review one page (activation, TTV, worst drop-off, adoption, retention), decide one design move.
Pair quant + qual: logs + session replays + survey themes.
Segment deliberately: new vs returning, team size, plan, country—don’t average away the truth.
Annotate every chart with releases; keep a decision log.
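A minimal instrumentation sketch, under assumptions: track() is a hypothetical helper, not any specific analytics SDK's API. The point is the plain, readable event names and the release tag carried on every event, which makes the chart annotations above possible.

import json
import time

RELEASE = "2024.06.1"  # illustrative release tag, used later to annotate charts

def track(user_id: str, event: str, **props) -> None:
    # Hypothetical helper: in production this would hand the payload to your analytics client.
    payload = {"user_id": user_id, "event": event, "ts": time.time(),
               "release": RELEASE, **props}
    print(json.dumps(payload))

track("u42", "Sign_Up_Completed", plan="free", country="DE")
track("u42", "First_Value_Completed", seconds_since_signup=412)
track("u42", "Feature_X_Used", entry_point="empty_state")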
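And a sketch of the stop-rule arithmetic: the standard two-proportion sample size per arm for a chosen baseline, minimum detectable effect (MDE), significance, and power. The example numbers map the step-2 goal above (drop-off 34% → 24%) onto completion 66% → 76%; they are illustrative, not a recommendation.

from math import sqrt
from statistics import NormalDist

def sample_size_per_arm(baseline: float, mde: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    # Two-sided test on a conversion rate: users per arm needed to detect
    # an absolute lift of `mde` over `baseline` at the given alpha and power.
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / mde ** 2
    return int(n) + 1

# Step-2 completion today: 66% (34% drop-off). Target: 76% (24% drop-off).
print(sample_size_per_arm(baseline=0.66, mde=0.10))  # ≈ 323 users per arm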
30-day rollout plan
Week 1 – Instrument: events + one dashboard; define first-value.
Week 2 – Fix the leakiest step: address the highest drop-off; re-measure in 72h.
Week 3 – Accelerate first value: remove one field, add defaults/templates; track TTV.
Week 4 – Prove a feature belongs: improve entry points/empty states; track adoption + W1 retention.
Quick formulas (copy/paste)
Activation: First_Value_Users / Signups
TTV (median): median(First_Value_ts − Signup_ts)
Task Success: Success / (Success + Error)
Step Drop-off: 1 − (Completions_at_step / Entrances_to_step)
Feature Adoption: Feature_Users / MAU
Retention (Wn): % of cohort active in week n
NPS: % Promoters (9–10) − % Detractors (0–6)
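If it helps to sanity-check the arithmetic, here are several of the same formulas as a short Python snippet; every input count and survey score below is a placeholder.

first_value_users, signups = 620, 1_480
feature_users, mau = 910, 5_200
completions_at_step, entrants_to_step = 3_300, 5_000
nps_scores = [10, 9, 9, 8, 7, 6, 10, 3, 9, 10]  # placeholder survey responses

activation = first_value_users / signups
feature_adoption = feature_users / mau
step_drop_off = 1 - (completions_at_step / entrants_to_step)
promoters = sum(s >= 9 for s in nps_scores) / len(nps_scores)
detractors = sum(s <= 6 for s in nps_scores) / len(nps_scores)
nps = (promoters - detractors) * 100  # reported on a −100 to +100 scale

print(f"Activation {activation:.0%} | Adoption {feature_adoption:.0%} | "
      f"Step drop-off {step_drop_off:.0%} | NPS {nps:.0f}")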
Use this to decide faster
Pre-release metrics answer: can they succeed?
Post-release metrics answer: do they return?
Tie both to a North Star, run one metric-moving change each week, and let behavior—not opinions—steer the roadmap.

