
That conversation stuck with me. Not because I felt stupid, but because I realised: if you can’t measure impact, you can’t improve it systematically. And if you can’t show value clearly, you can’t influence what gets prioritised.
Why metrics actually matter
I used to think metrics killed creativity. They felt reductive. Like judging a painting by counting brushstrokes. Turns out, metrics don’t limit creativity. They focus it. Without numbers, design conversations stay subjective. You end up debating aesthetics instead of solving problems. With metrics, you’re discussing people, time, friction, cost. Things the business already understands. “Make it better” becomes “reduce task time by 30%.” Decisions happen faster. No more endless Slack threads debating button colours.
Test it, learn, move on.
You speak everyone’s language. Product cares about conversion, engineering cares about velocity, and business cares about cost. Metrics bridge all three.
Most designers measure the wrong things
There’s a difference between tracking activity and tracking impact. Measuring activities shows effort; it doesn’t show value. (It’s the professional equivalent of posting your gym check-in on Instagram. Cool, but did anything actually improve?)
Impact metrics show outcomes.
Activities:
- Screens designed
- Research hours
- Interviews conducted

Impact:
- Task time down 40%
- Support tickets down 55%
- Activation up 23%
The metrics that matter
I recommend picking one primary metric and tracking 2-3 secondary metrics to catch unintended consequences. (A rough sketch of how to pull a few of these from raw event data follows the lists below.)
For product teams:
- Activation rate (% completing core action)
- Feature adoption (% using new feature)
- Task completion rate
- Time to value (how fast users get outcome)
For business:
- Support ticket volume (cost savings)
- Time saved per user (efficiency gains)
- Conversion rate (revenue impact)
- Churn reduction (retention)
For users:
- Task completion time
- Error rate
- Success rate on first attempt
- Cognitive load (steps to complete task)
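Most of these numbers can be derived from plain event logs. Here’s a minimal sketch in Python, assuming a hypothetical event stream with user_id, event, and ts fields and made-up event names (“signed_up”, “completed_core_action”); swap in whatever your analytics tool actually records.

```python
from datetime import datetime

# Rough sketch, not a real analytics pipeline. Event names and fields are
# hypothetical; adapt them to what your product actually logs.
events = [
    {"user_id": 1, "event": "signed_up",             "ts": datetime(2024, 3, 1, 9, 0)},
    {"user_id": 1, "event": "completed_core_action", "ts": datetime(2024, 3, 1, 9, 4)},
    {"user_id": 2, "event": "signed_up",             "ts": datetime(2024, 3, 1, 10, 0)},
]

signed_up = {e["user_id"] for e in events if e["event"] == "signed_up"}
activated = {e["user_id"] for e in events if e["event"] == "completed_core_action"}

# Activation rate: % of new users who complete the core action
activation_rate = len(signed_up & activated) / len(signed_up)

# Time to value: how long each activated user took to reach that outcome
first_seen = {u: min(e["ts"] for e in events if e["user_id"] == u) for u in signed_up}
reached_value = {u: min(e["ts"] for e in events
                        if e["user_id"] == u and e["event"] == "completed_core_action")
                 for u in activated}
time_to_value = {u: (reached_value[u] - first_seen[u]).total_seconds() for u in activated}

print(f"Activation rate: {activation_rate:.0%}")   # 50%
print(f"Time to value (s): {time_to_value}")       # {1: 240.0}
```

The same pattern covers task completion rate or error rate: define the events that mark a start, a success, and a failure, then count.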
It’s not only the metrics, it’s the storytelling
The numbers matter, but so does how you frame them. Most design improvements don’t create dramatic step-changes overnight: they create small, measurable gains that compound over time. The narrative you choose determines whether stakeholders see them as trivial or transformative.
For example:
You reduced task completion time by 12 seconds.
You could say: “We shortened the flow by 12 seconds.”
Or you could say: “We saved users over two months of cumulative working time this year.”
Same improvement, completely different impact.
500 users/day × 12 seconds → 100 minutes saved daily
100 minutes × 250 work days → 25,000 minutes/year
25,000 minutes = 417 hours → 52 work days
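If you want to reuse that framing, the conversion is easy to script. A minimal sketch, assuming the same figures as above (500 tasks a day, 250 work days a year, an 8-hour work day); these defaults are assumptions, not benchmarks, so plug in your own.

```python
# Minimal sketch: turn a per-task saving into an annual, stakeholder-friendly figure.
# Defaults mirror the example above (500 users/day, 250 work days, 8-hour days).
def annual_time_saved(seconds_saved_per_task: float,
                      tasks_per_day: int = 500,
                      work_days_per_year: int = 250,
                      hours_per_work_day: float = 8.0) -> dict:
    minutes_per_day = seconds_saved_per_task * tasks_per_day / 60
    minutes_per_year = minutes_per_day * work_days_per_year
    hours_per_year = minutes_per_year / 60
    return {
        "minutes_per_day": round(minutes_per_day),                       # 100
        "hours_per_year": round(hours_per_year),                         # 417
        "work_days_saved": round(hours_per_year / hours_per_work_day),   # 52
    }

print(annual_time_saved(12))
```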
In your current project, what’s the ONE metric that would prove this is working?
Measure the baseline today → Translate it into language stakeholders understand → Track what changes.
That’s how you shift from execution to strategy.
Choosing the right metric is simple when it’s obvious (task completion time, anyone?). But what about when it’s not? Let’s explore real examples of creative metric selection.
How to prove design success when data isn’t that obvious