
Now, we’ve established that metrics focus creativity and that a strong narrative can reveal the true scale of design’s impact. But how do you actually measure a design project? Here’s the process any senior designer would recommend:
- Define the problem in one sentence
- Choose one metric
- Measure baseline before you design anything
- Design to improve that number
- Measure after release
- Translate it into a story the stakeholders understand
Let’s say you’re redesigning a registration flow. It’s pretty straightforward: you measure task completion time before, you address the pain points, and you measure again after the changes. If the time dropped, you’re done. Easy peasy.
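The before/after comparison is just a relative-change calculation. A minimal sketch, with made-up completion times:

```python
def relative_change(before: float, after: float) -> float:
    """Percentage change relative to the baseline measurement."""
    return (after - before) / before * 100

# Hypothetical median task completion times, in seconds
baseline = 210  # measured before the redesign
release = 140   # measured after release

print(f"Task time changed by {relative_change(baseline, release):.0f}%")
# A negative number means the flow got faster
```

The same helper works for any single metric you pick in step two: completion rate, error rate, or time on task.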
But what happens when metrics aren’t that easy to identify? That’s where the real creativity comes in.
When the feature works but users are still confused
The challenge
Imagine this scenario: a team launches a new feature that technically works fine. Users eventually complete their tasks, but they also open support tickets with CS asking how to use it. The team could measure error rates or task completion; both are valid metrics. But neither connects to what leadership cares about.
The solution
Support tickets equal direct cost. Let’s say that each ticket costs roughly £6 in support time (you can calculate this using benchmark salaries for the CS teams and the estimated time for them to work on the ticket). I guarantee that Finance will understand that immediately! 💸
Before: 98 tickets/week about this feature
After: 44 tickets/week
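Back-of-the-envelope, the saving is just tickets avoided × cost per ticket × 52 weeks, using the £6 estimate above:

```python
# Rough annualised saving from reduced support tickets
TICKET_COST = 6         # £ per ticket, estimated from CS salaries + handling time
WEEKS_PER_YEAR = 52

before, after = 98, 44  # tickets per week, before and after the redesign

weekly_saving = (before - after) * TICKET_COST
annual_saving = weekly_saving * WEEKS_PER_YEAR
reduction_pct = (before - after) / before * 100

print(f"Reduction: {reduction_pct:.0f}%, saving £{annual_saving:,} per year")
```

Swap in your own ticket cost and volumes; the point is that the model is simple enough for Finance to check in their heads.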
The translation
“We reduced support tickets by 55%, saving roughly £16,848 annually. That’s essentially a junior designer’s salary funded through better UX.”
Finance hears “cost savings.” Product hears “happier users.” Engineering hears “fewer interruptions.”
Same metric, three stakeholder languages.
The learning: Don’t just pick a metric you can measure. Pick one that translates directly into language stakeholders already care about. Support tickets weren’t the obvious choice, but they connected straight to cost.
How do you measure a successful rebrand?
The challenge
A team is evolving their brand. New visual identity, new tone of voice, the works. Leadership keeps asking: “How will we know if it’s successful?” Good question. What does “successful rebrand” even mean? Better brand perception? A more consistent identity? Faster creation of on-brand work? All of the above?
The solution
Let’s assume we’re working on a B2B brand evolution. The company wants its brand identity to be perceived as “modern”, “trustworthy” and “sleek”. We can then pick three metrics that make that perception measurable.
1. Consistency score
Audit 50 customer touchpoints quarterly (website, emails, social posts, presentations, etc.) and check each against your brand guidelines (right colours, typography, tone of voice). Score each identity value on a scale of 0-10 and track improvement over time. 📊 (e.g. start at 4/10, end at 7.8/10)
2. Perception data
Add a question to your customer survey: “Does our brand feel [modern]?” or send a survey asking customers to write three adjectives to describe the brand. Track the percentage of responses matching the identity values. (e.g. Modern: 34% → 67%)
3. Time to create
Measure how long it takes designers to create on-brand work. Blog post graphics go from 45 minutes to 25 minutes. Presentation decks from 3 hours to 1.5 hours. Why? Because clearer guidelines equal less guessing.
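The quarterly consistency audit (metric 1) could be tallied like this. A hypothetical sketch, assuming each touchpoint gets a pass/fail check per guideline item:

```python
# Hypothetical quarterly audit: each touchpoint is checked against
# brand guideline items (colours, typography, tone of voice, ...)
audits = [
    {"touchpoint": "website",       "checks_passed": 9, "checks_total": 10},
    {"touchpoint": "emails",        "checks_passed": 7, "checks_total": 10},
    {"touchpoint": "social posts",  "checks_passed": 8, "checks_total": 10},
]

def consistency_score(audits) -> float:
    """Average 0-10 consistency score across all audited touchpoints."""
    scores = [10 * a["checks_passed"] / a["checks_total"] for a in audits]
    return sum(scores) / len(scores)

print(f"Consistency score this quarter: {consistency_score(audits):.1f}/10")
```

Run it each quarter and plot the trend; the single number is what you report upward, the per-touchpoint scores tell you where to focus next.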
The translation
“Our brand consistency improved by 95% across touchpoints. Customer perception of ‘modern’ doubled. And our team creates on-brand work 40% faster.”
Three different metrics, all proving the same thing: the rebrand worked.
The learning: Subjective things can be measured if you get creative. Don’t accept “we’ll know it when we see it.” Define what success looks like before you start, then measure it.
45 new components built. But did it matter?
The challenge
A team builds a design system. 45 components, comprehensive documentation, the whole thing. Leadership’s question: “Was it worth the investment?”
Easy to measure what they built (45 components). Hard to prove it mattered.
The solution
They stop measuring what they built and start measuring what others could build because of it. Smart move! 🎯
- Component adoption rate: Track percentage of new features using design system components vs. custom one-offs. You can do this by auditing shipped features monthly.
- Other team velocity: Measure how fast product teams ship features before and after adopting the system (track time from design handoff to development complete).
- Engineering review time: Survey the engineering team about sprint time allocation.
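The adoption-rate metric above could come out of the monthly feature audit. A sketch with hypothetical data:

```python
# Hypothetical monthly audit of shipped features:
# does each one use design system components or custom one-offs?
shipped_features = [
    {"name": "checkout v2",  "uses_design_system": True},
    {"name": "profile page", "uses_design_system": True},
    {"name": "admin export", "uses_design_system": False},
    {"name": "onboarding",   "uses_design_system": True},
]

adopted = sum(f["uses_design_system"] for f in shipped_features)
adoption_rate = adopted / len(shipped_features) * 100

print(f"Component adoption rate: {adoption_rate:.0f}%")
```

The same audit list doubles as a to-do list: every `False` is a team worth talking to about why they built a one-off.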
The translation
“Component adoption hit 75%. Product teams ship 40% faster. Engineering spends 20% less time on design review. The system didn’t just make our work faster, it made everyone’s work faster.”
The learning: Sometimes your impact isn’t direct. The design system’s value wasn’t the 45 components built. It was the velocity improvement across four product teams and the marketing team. Measure the ripple effect, not just the stone you threw.
What this gets you
Here’s how consistent measurement impacted my career:
- After 1 project: “Interesting.”
- After 3 projects: “What did we improve this time?”
- After 5 projects: Design has a seat at strategy meetings.
You earn influence by proving design drives business results.
Metrics aren’t the whole story. They won’t tell you if something feels delightful or if your solution is ethical. But designers who combine craft with measurement have way more influence than those who only do one.
Beautiful work without metrics might not matter. Measurable improvements without craft feel soulless. Real strategy comes with both.
Now you can measure impact and translate it into stakeholder language. In the next article, I’ll address the uncomfortable truth about moving into leadership: you’ll miss making things.