Overview
Method category: Evaluative product experiment
How to Use This in GLIDR
Paper Prototyping is exactly what it sounds like: creating a simulated version of your software product out of pieces of paper that you move around to mimic real interactions during a user test.
In GLIDR, Paper Prototyping is an Experiment focused on your product. First, create an Experiment and connect ideas about the usability and features of your product. Then, in the Run phase, create an Evidence - Interview for each user who tests your paper prototype and record your observations. Finally, in Analyze, decide how to update your project based on what you learned from the test.
---
Paper Prototyping
Article excerpted from The Real Startup Book
In Brief
Quickly test a solution idea by simulating user-product interactions using paper drawings and cutouts. Paper prototyping is often a form of usability testing, although it can also be used to help generate and evaluate design alternatives during ideation.
One person plays the role of product user while others simulate the behavior of the product in response to the user’s actions using paper mock-ups of screen images or other user interface elements.
Helps Answer
What basic form(s) might our solution take?
Where are there usability issues in our solution concept?
Is our solution intuitive for our customer to use/navigate?
Are there other use cases or error conditions we didn’t think of?
What pieces of information do we need to provide to our customers?
Tags
B2C
Qualitative
Description
Time Commitment
Anywhere from a few minutes to several days. As a formal usability testing approach, more time may be needed to schedule sessions with customers in advance and to prepare more complex mockups and scenarios. Alternatively, it can be used in the moment as part of generative design work with very little prep, in the time it takes to produce a few rough sketches that quickly visualize a concept and probe it for weaknesses.
How To
For each screen/interface that is part of the interaction you are testing, create a simple mockup on paper illustrating what will appear on the screen for the various scenarios you are exploring.
One person (or a pair) plays the role of the customer/end user (the "customer").
Another person (or several people) plays the role of the software (the "computer").
The customer interacts with the paper prototype as if it is a “real” application, physically interacting with the paper interface just as they would the real thing. They are encouraged to explain their thinking out loud as much as possible.
For each action the customer takes, the computer then moves/updates the paper prototype to reflect the new state in response to the customer’s action. As a general rule, the computer should not talk.
In more formal tests, observers typically watch for additional insights and capture notes. A facilitator may also be helpful, as they can encourage the customer to ask questions and think aloud.
If changes are desired, alternate interaction flows, layouts, or modifications to the proposed interactions can be quickly mocked up and retested.
Interpreting Results
You are looking for places where the customer got stuck, was not able to find what she was looking for, or accidentally went down the wrong path. Anything that is misleading, confusing, or hard to find is noteworthy. For items like this, dig into what information the customer was missing that led to the confusion, how best that information might be provided, or how the need for it might be eliminated. Also look for situations where the computer's response was not defined, or where an action was possible that you did not intend to allow. Finally, consider revisiting assumptions about your customers' motivations or the knowledge and experience they bring: the test may reveal incorrect assumptions and provide insight for improving your solution.
Potential Biases
Confirmation bias is possible, especially if a facilitator or other participant knowledgeable about the software provides too many “hints” to the customer.
Can miss more fine-grained usability issues such as those relating to placement, size, scrolling, mouse/keyboard interactions, etc. Digital tools may be more appropriate for this level of detailed usability testing.
Customers may hide feedback if they are concerned about appearing incompetent or offending others.
If the user performing the test is more experienced, you may miss issues that a less savvy user would have revealed.
Field Tips
Got a tip? Add a tweetable quote by emailing us: realbook@kromatic.com
Case Studies
Got a case study? Add a link by emailing us: realbook@kromatic.com
References
Got a reference? Add a link by emailing us: realbook@kromatic.com