Measuring Design Thinking in Projects

How do we know that our project or intervention is making progress and creating its desired outcomes?

If you are a project team, design thinking coach, or project sponsor who needs to measure design thinking at the level of a single project or intervention, this flight level might be the most relevant one for you. One word of caution, though: design thinking projects and interventions can be very diverse in nature, e.g. product, service, or business design; process optimization; improvement of marketing communication; organizational development workshops; large-scale systems change, and so on. The table below gives an overview of activity metrics that apply to almost all of these project types, especially those that aim to improve a product or service.

Activity Metrics

Let’s take the widespread design thinking process representation by the D-School Potsdam and map the metrics to each of its stages. The table does not claim to be complete; we simply listed the metrics that we at co:dify have paid attention to in past projects (most of the time unconsciously and unofficially, never reporting them to the project sponsors but discussing them with the teams themselves).

Design thinking re-entry process model after HPI D-School Potsdam

Project Scoping

  • Team diversity and multidisciplinarity
  • Ratio of people with design and user research skills to other team members
  • etc.

Understand

  • # and quality of assumptions made explicit
  • # and quality of hypotheses developed
  • Willingness of users and partners to participate in research and co-creation
  • etc.

Observe

  • # and depth of interactions with users/customers (e.g. via observations or interviews)
  • # and depth of self-immersions
  • # of intentional learnings
  • # of unintentional learnings
  • etc.

Define

  • Sponsor users convinced to support the team in the synthesis phase
  • Project sponsor/decision-maker participates in synthesis
  • # and depth of derived insights
  • etc.

Ideate

  • # of sponsor users and/or third parties convinced to support the team in ideation
  • # of ideas generated
  • # of ideas submitted
  • # of ideas chosen
  • Quality of ideas generated » judged by (sponsor) users
  • etc.

Prototype & Test

  • # of prototypes, experiments, or experimental interventions built
  • # of tests run
  • The specific ‘metrics that matter’ / behaviors we measure our experiments with
  • # of intentional learnings/hypotheses tested
  • % of tests resulting in intentional learnings
  • # of unintentional learnings
  • % of tests resulting in unintentional learnings
  • Prototyping efficacy
  • # and quality of partnerships and collaborations for co-creation
  • # of higher-fidelity idea concept directions and prototypes explored
  • etc.

Sprint and Project Reviews

  • # of sprint cycles gone through
  • Average amount of € spent per stage in the process
  • Average cost of learning (see the sketch after the table)
  • Ratio of sprint demos to project length
  • % of demos that elicit meaningful feedback
  • # and quality of externalized knowledge artifacts and visualizations
  • etc.
Design thinking metrics mapped to the HPI Potsdam re-entry process model: the most important activity metrics for a typical design thinking project with a focus on product/service/business model innovation
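To make two of the sprint-review metrics above more tangible, here is a minimal sketch in Python of how a coach might compute the average cost of learning and the share of demos that elicit meaningful feedback from a simple sprint log. The data structure, field names, and numbers are our own illustrative assumptions, not a standard implementation.

```python
# Illustrative sketch only: a tiny sprint log and two of the
# sprint-review metrics from the table above. All names and numbers
# are made-up assumptions.

from dataclasses import dataclass

@dataclass
class Sprint:
    spend_eur: float          # budget consumed in this sprint
    learnings: int            # intentional + unintentional learnings logged
    demo_held: bool           # did the team demo at the end of the sprint?
    demo_feedback_items: int  # pieces of actionable feedback from the demo

def average_cost_of_learning(sprints: list[Sprint]) -> float:
    """Total euros spent divided by total learnings across all sprints."""
    total_spend = sum(s.spend_eur for s in sprints)
    total_learnings = sum(s.learnings for s in sprints)
    return total_spend / total_learnings if total_learnings else float("inf")

def demo_feedback_rate(sprints: list[Sprint]) -> float:
    """Share of demos that produced at least one piece of meaningful feedback."""
    demos = [s for s in sprints if s.demo_held]
    if not demos:
        return 0.0
    return sum(1 for s in demos if s.demo_feedback_items > 0) / len(demos)

sprints = [
    Sprint(spend_eur=4_000, learnings=6, demo_held=True, demo_feedback_items=3),
    Sprint(spend_eur=3_500, learnings=2, demo_held=True, demo_feedback_items=0),
    Sprint(spend_eur=5_200, learnings=9, demo_held=True, demo_feedback_items=5),
]
print(f"Average cost of learning: EUR {average_cost_of_learning(sprints):.0f}")
print(f"Demos with meaningful feedback: {demo_feedback_rate(sprints):.0%}")
```

The point is less the code than the discipline behind it: if spend, learnings, and demo feedback are logged per sprint, these ratios can be tracked at all.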

However, the three most important activity metrics, which accompany every stage of the design thinking process in the table above (and apply to Lean Startup teams just as well), are:

  • Insight Velocity:
    How fast are we learning about our users and their problems in each cycle (e.g. a sprint week)? » Others might also call this ‘learning velocity’.
  • Validation Velocity:
    How fast do we (in)validate our assumptions about the project at hand? » Others might also call this ‘experiment velocity’.
  • Assumption-to-Knowledge Ratio:
    Related to the previous metric, this asks: how many assumptions have we already verified or falsified, and how many blind spots are still open?

If you are an Agile, Innovation, Lean Startup, or Design Thinking coach, you will pay close attention to these metrics and reflect on them with the team over and over. Well, sometimes you might also use them to give teams a kick in the butt … This is why they are also called »coaching metrics«.
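For illustration, here is a rough sketch of how these three coaching metrics could be computed from a simple assumption-and-insight log. The log format, status labels, and figures are hypothetical assumptions of ours; in practice teams track this with anything from sticky notes to spreadsheets.

```python
# Hypothetical sketch: deriving the three coaching metrics from a
# simple per-sprint log. Status labels and numbers are illustrative.

from collections import Counter

# Each entry: (sprint_number, status), where status is one of
# "open", "validated", "invalidated".
assumption_log = [
    (1, "open"), (1, "validated"), (1, "invalidated"),
    (2, "validated"), (2, "validated"), (2, "open"),
    (3, "invalidated"), (3, "open"),
]

# User insights logged per sprint (illustrative counts).
insights_per_sprint = {1: 4, 2: 2, 3: 5}

sprints = {sprint for sprint, _ in assumption_log}
statuses = Counter(status for _, status in assumption_log)

resolved = statuses["validated"] + statuses["invalidated"]
total = sum(statuses.values())

# Insight velocity: how fast we learn about users per cycle.
insight_velocity = sum(insights_per_sprint.values()) / len(insights_per_sprint)

# Validation velocity: assumptions (in)validated per sprint.
validation_velocity = resolved / len(sprints)

# Assumption-to-knowledge ratio: share of assumptions turned into
# knowledge; the remainder are still blind spots.
assumption_to_knowledge = resolved / total

print(f"Insight velocity: {insight_velocity:.1f} insights/sprint")
print(f"Validation velocity: {validation_velocity:.1f} assumptions/sprint")
print(f"Assumption-to-knowledge ratio: {assumption_to_knowledge:.0%}")
```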


Impact Metrics

When it comes to generic impact metrics on a project level, the most important ones in our opinion are:

  • Your Team Confidence Level:
    How confident are we that this project is going to be a success? This can be reported with confidence and investment readiness meters such as Itamar Gilad's Confidence Meter or Steve Blank's TRL/IRL/ARL, which help you show your project sponsors which critical levels of fit you have already reached (see the sketch after this list).
  • Alignment With Innovation Thesis and Portfolio:
    How well does the market or improvement opportunity we discovered match our search fields for innovation?
  • Your Own Metrics That Matter:
    In design thinking work, you often discover through experimentation what really matters to users and how they actually measure progress from their perspective. Their metrics and value evaluation criteria will therefore influence yours. This is why you cannot define your most important outcome/impact metrics upfront; even more, you must be prepared for them to change along the way (see the Intuit example ‘Small Business Big Game’). So you will have to discover them. If you are really good, you might even discover what product managers call your ‘North Star Metric’: the one metric that drives all the others.
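To illustrate the confidence-level idea from the first bullet, here is a deliberately simplified sketch. The evidence categories and scores below are placeholders we made up for illustration; they are not the actual scales, so consult Itamar Gilad's Confidence Meter or Steve Blank's IRL material for the real thing.

```python
# Toy sketch of a confidence-meter-style self-assessment.
# Categories and scores are ILLUSTRATIVE PLACEHOLDERS, not the
# real Gilad or Blank scales.

EVIDENCE_SCORES = {
    "team_opinion": 0.5,             # weakest: we just believe it
    "anecdotal_user_feedback": 2.0,  # a few supportive conversations
    "market_data": 3.0,              # secondary research supports it
    "user_tests_passed": 6.0,        # our own experiments confirm it
    "live_product_data": 9.0,        # strongest: real usage behavior
}

def confidence_level(evidence: list[str]) -> float:
    """Report the strongest class of evidence gathered so far (0-10)."""
    return max((EVIDENCE_SCORES[e] for e in evidence), default=0.0)

# Early in a project, confidence is honestly low:
print(confidence_level(["team_opinion", "anecdotal_user_feedback"]))  # 2.0
```

The design choice worth noting: confidence is driven by the strongest *class* of evidence, not by the sheer amount of weak evidence, which is what keeps teams from mistaking many opinions for validation.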

The next two figures show example projects that are typically run in design thinking mode. These projects can use all the metrics mentioned earlier, plus some additional ones, depending on what design thinking is applied to. In our cases: service (re)design and early-stage business design.

Service Design Project: Redesign of a Service Experience
Activity Metrics

All the metrics mentioned above, plus …

The Good Services Scale:

A service quality assessment tool based on the 15 principles of good service design by Lou Downe (2020), which you can apply with or without users on your assessment jury/panel. Some of her assessment criteria can also be read as output metrics.

Output Metrics

All the metrics mentioned above, plus …

  • $ of savings in cost of service
  • Reduction in task completion time
  • Identification and elimination of waste
  • Service performance
  • Service uptime and availability
  • Upward trend in classic KPIs for UI, UX, CX, and brand perception
  • User/customer satisfaction, e.g. via NPS or a Customer Centricity Score (see the NPS sketch below)
  • etc.
Example I: Service (re)design project. This can be customer-facing external or internal, e.g. process improvement.
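Of the output metrics above, NPS is the one with a fixed, well-known formula: the percentage of promoters (ratings 9-10) minus the percentage of detractors (ratings 0-6). A minimal sketch follows; the survey responses are made up.

```python
# Standard NPS calculation: % promoters (9-10) minus % detractors (0-6),
# on a scale from -100 to +100. Survey data below is illustrative.

def net_promoter_score(ratings: list[int]) -> float:
    """Compute NPS from 0-10 survey ratings."""
    if not ratings:
        raise ValueError("no survey responses")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 8, 7, 6, 10, 3, 9]  # hypothetical responses
print(f"NPS: {net_promoter_score(ratings):.0f}")  # -> NPS: 25
```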
Business Design Project: Creating a Value Proposition
Activity Metrics

All the metrics mentioned above, plus …

  • # and depth of customer interviews and tests per cycle
  • etc.
Output Metrics

All the metrics mentioned above, plus …

  • Reduced time to Problem-Solution Fit
  • Reduced Time-to-Market
  • Improved Assumption-to-Knowledge Ratio
  • etc.
Example II: Customer discovery project, e.g. value proposition and business design.
References

Downe, L. (2020). Good Services: How to Design Services That Work (1st ed.). BIS Publishers.

Schmiedgen, J., Rhinow, H., Köppen, E., & Meinel, C. (2015). Parts Without a Whole? The Current State of Design Thinking Practice in Organizations (Technische Berichte No. 97). Hasso-Plattner-Institut für Softwaresystemtechnik, Universität Potsdam. http://thisisdesignthinking.net/why-this-site/the-study/
