If you are a project team, design thinking coach, or project sponsor who needs to measure design thinking at the project or intervention level, this flight level might be the most relevant one for you. One word of caution, though: design thinking projects and interventions are very diverse in nature, e.g., product, service, or business design; process optimization; improvement of marketing communication; organizational development workshops; large-scale systems change; and so on. The table below gives an overview of activity metrics that apply to almost all of these project types, especially those that aim to improve a product or service.
Let’s take the widespread design thinking process representation by the D-School Potsdam and map the metrics to each of its stages. The table does not claim to be complete; it simply lists the metrics that we at co:dify paid attention to in past projects (most of the time unconsciously and unofficially, and always without reporting them to the project sponsors, but discussing them with the teams themselves).
Team diversity and multidisciplinarity
Ratio of people with design and user research skills to other team members
# and quality of assumptions made clear
# and quality of hypotheses developed
Willingness of users and partners to participate in research and co-creation
# and depth of interactions with users/customers (e.g., via observations or interviews)
# and depth of self-immersions
# of intentional learnings
# of unintentional learnings
# of sponsor users convinced to support the team in the synthesis phase
Project sponsor/decision-maker participates in synthesis
# and depth of derived insights
# of sponsor users and/or third parties convinced to support the team in ideation
# of ideas generated
# of ideas submitted
# of ideas chosen
Quality of ideas generated » judged by (sponsor) users
# of prototypes, experiments, or experimental interventions built
# of tests run
The specific ‘metrics that matter’ / behaviors we measure our experiments with
# of intentional learnings/hypotheses tested
% of tests resulting in intentional learnings
# of unintentional learnings
% of tests resulting in unintentional learnings
# and quality of partnerships and collaborations for co-creation
# of higher-fidelity concept directions and prototypes explored
# of sprint cycles gone through
Average amount of € spent per stage in the process
Average cost of learning
Ratio of sprint demos to project length
% of demos that elicit meaningful feedback
# and quality of externalized knowledge artifacts and visualizations
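Some of the quantitative metrics in the table can be derived directly from a simple sprint log. The following sketch (all field names and figures are hypothetical, purely for illustration) shows how the average spend per stage and the average cost of learning could be computed:

```python
from collections import defaultdict

# Hypothetical sprint log: process stage, money spent (EUR), learnings captured.
sprint_log = [
    {"stage": "observe", "spend_eur": 1200, "learnings": 3},
    {"stage": "observe", "spend_eur": 800, "learnings": 2},
    {"stage": "test", "spend_eur": 2000, "learnings": 5},
]

def avg_spend_per_stage(log):
    """Average amount of EUR spent per stage in the process."""
    totals, counts = defaultdict(float), defaultdict(int)
    for entry in log:
        totals[entry["stage"]] += entry["spend_eur"]
        counts[entry["stage"]] += 1
    return {stage: totals[stage] / counts[stage] for stage in totals}

def avg_cost_of_learning(log):
    """Total spend divided by the total number of learnings captured."""
    total_spend = sum(e["spend_eur"] for e in log)
    total_learnings = sum(e["learnings"] for e in log)
    return total_spend / total_learnings if total_learnings else float("inf")

print(avg_spend_per_stage(sprint_log))   # {'observe': 1000.0, 'test': 2000.0}
print(avg_cost_of_learning(sprint_log))  # 400.0
```

The point of a sketch like this is less the arithmetic than the habit: if teams log spend and learnings per sprint anyway, the cost of learning falls out for free.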
Design thinking metrics mapped to the HPI Potsdam re-entry process model
The most important activity metrics for a typical design thinking project with a focus on product/service/business model innovation
However, the three most important activity metrics, which accompany every stage of the design thinking process in the table above (and apply to Lean Startup teams too), are:
Insight Velocity: How fast are we learning about our users and their problems in each cycle (e.g., a sprint week)? » Others might also call this ‘learning velocity’.
Validation Velocity: How fast do we (in)validate our assumptions about the project at hand? » Others might also call this ‘experiment velocity’.
Assumption-to-Knowledge Ratio: Related to the aforementioned metrics, this one asks: how many assumptions have we verified or falsified already, and how many blind spots are still open?
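These three coaching metrics can also be computed from per-sprint records. A minimal sketch, assuming a hypothetical log format (the record fields and numbers are invented for illustration):

```python
# Hypothetical per-sprint records of insights gained and assumptions tested.
sprints = [
    {"days": 5, "new_insights": 4, "assumptions_tested": 3},
    {"days": 5, "new_insights": 6, "assumptions_tested": 5},
]
total_assumptions = 20  # assumptions surfaced so far (hypothetical)

total_days = sum(s["days"] for s in sprints)

# Insight velocity: new learnings about users per sprint day.
insight_velocity = sum(s["new_insights"] for s in sprints) / total_days

# Validation velocity: assumptions (in)validated per sprint day.
validation_velocity = sum(s["assumptions_tested"] for s in sprints) / total_days

# Assumption-to-knowledge ratio: share of surfaced assumptions already tested;
# the remainder (1 - ratio) is the team's open blind spot.
tested = sum(s["assumptions_tested"] for s in sprints)
assumption_to_knowledge = tested / total_assumptions

print(insight_velocity)         # 1.0
print(validation_velocity)      # 0.8
print(assumption_to_knowledge)  # 0.4
```

Tracking these per sprint rather than per project makes trends visible: a falling validation velocity mid-project is often the earliest signal a coach can act on.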
If you are an Agile, Innovation, Lean Startup, or Design Thinking coach, you will pay close attention to these metrics and reflect on them with the team over and over. Well, sometimes you might also use them to give teams a kick in the butt … This is why they are also called »coaching metrics«.
When it comes to generic impact metrics on a project level, the most important ones in our opinion are:
Your Team Confidence Level: How confident are we that this project is going to be a success? This can be reported with confidence and investment readiness meters like the Gilad Confidence Meter or Steve Blank’s TRL/IRL/ARL, which help you show your project sponsors the critical levels of fit you’ve already reached.
Alignment With Innovation Thesis and Portfolio: How well does the market or improvement opportunity we discovered match our search fields for innovation?
Your Own Metrics That Matter: In design thinking work, you will often discover through experimentation what really matters to users and how they actually measure progress from their perspective. Their metrics and value evaluation criteria will therefore influence yours. This is why you can’t define your most important outcome/impact metrics upfront; more than that, you need to be prepared for them to change along the way (see Intuit’s ‘Small Business Big Game’ example). So, you will have to discover them. If you are really good, you might even discover what product managers call your ‘North Star Metric’: the one metric that drives all the others.
The next two figures show example projects that are typically run in a design thinking mode. These projects can use all the metrics mentioned earlier, plus some additional ones, depending on what design thinking is applied to. In our cases: service (re)design and early-stage business design.
Service Design Project: Redesign of a Service Experience
Jan is an educator, innovation strategist, and founding partner at co:dify. He loves to understand complex systems and make them manageable, so that the new can enter the world without politics and with productive friction only.