The primary difference between (generative) research and an (evaluative) experiment is the existence of a clear, testable hypothesis. This article outlines the elements of creating strong hypotheses.
In GLIDR, there are two types of Discovery Activities: Research and Experiments. This article will help you decide when to conduct each one and how to formulate better hypotheses.
Once you've figured out a good hypothesis, visit the Index of Methods page to figure out the best testing methodology.
Generative vs. Evaluative
Article excerpted from The Real Startup Book
Do we have a clear hypothesis to evaluate or do we need to generate a clear idea? This distinction depends on our ability to understand what makes a clear hypothesis.
Consider this example: “Our customers really want our product.”
This hypothesis is clearly bad for a number of reasons. The most obvious is that it’s tautologically correct and not worth investigating. If they are our customers, then technically they have already purchased our product and that is a good sign they actually want it. It is roughly equivalent to, “If the piece of paper is flammable, it will burn when ignited.” Yet these types of flawed hypotheses are common.
Here is a slightly more subtle example: “If 250 Los Angeles teachers were asked to treat minority students with more respect, then at least 50 teachers would do so.”
While not as flawed as the first example, it has fundamental problems that would prevent us from designing a good experiment. If we force an experiment, we will most likely have ambiguous data or be unable to interpret it correctly.
In this case, several things are unclear:
- Which teachers? Teachers at schools with a number of minority students? How many minority students are sufficient for this test?
- How should we ask the teachers? Will we ask each teacher differently? Will we let the principals ask them?
- What is respect in this context? What behaviors would indicate “more respect”?
Without defining the hypothesis very clearly, we might let the principals of schools ask the teachers on our behalf and they might ask them with varying degrees of persuasiveness.
We might also argue about the results. Is calling a student “Mr.” instead of their first name a sign of respect or a sign of sarcasm?
When we do not have a clear, well defined, and falsifiable hypothesis we are best served by doing generative research instead of an experiment. In this case, our learning goal could be “What teacher behaviors indicate teacher respect to minority students?”
Given this goal, we are better off doing customer discovery interviews (a.k.a. speaking to the students) rather than testing our vague hypothesis. The outcome of the generative research should be a clear, well defined, and falsifiable hypothesis that we can then go and test with an Evaluative Experiment.
Defining good hypotheses can be a challenge, so here are some things to look for and a short checklist:
Simple and Unambiguous
The hypothesis should be clear and unambiguous so that anyone reading it will understand the context and be able to clearly interpret the results.
“If 250 Los Angeles teachers were asked to treat minority students with more respect, then at least 50 teachers would do so.”
In this case, we may have different opinions as to what “respect” means. In order for us to agree that someone is being treated with “more respect,” we must agree on what behaviors would indicate respect.
“If 250 Los Angeles teachers were asked to treat minority students with more respect, then at least 50 teachers would refer to their students using an honorific.”
While this is more specific, not everyone knows what an honorific is, so we should avoid using any specialized vocabulary or jargon.
“If 250 Los Angeles teachers were asked to treat minority students with more respect, then at least 50 teachers would call their students using ‘Mr./Ms.’ and their last name instead of their first name.”
Observable
“Our customers have a strong desire to donate to charitable causes.”
This hypothesis may be true, but it is not observable. At least not until we invent telepathy.
“Our customers donate to charitable causes twice per year.”
This new hypothesis has some other issues, but it is at least something observable.
Describes a Relationship
“50% of students at Dalton High School get a C or lower in at least one class per year.”
This again may be true and it is observable, but it doesn’t tell us anything about the cause of the low grades. A good hypothesis should allow us to change one thing and observe the effect in another.
“Students at Dalton High School that study less than four hours a week get a C or lower in at least one class per year.”
There are still other issues, but at a minimum a good hypothesis must relate two or more variables to each other.
Cause and Effect
“During the summer, ice cream consumption increases and more people drown per day.”
This is a true statement, but does not tell us how those two variables relate to one another. Are people drowning because they ate too much ice cream? Or are they eating more ice cream because they are sad about all the drownings?
“During the summer, people who eat ice cream will drown at a higher rate than people who do not eat ice cream.”
This specifies a clear relationship and the causal direction of that relationship. Simply using an IF _______, THEN _______ sentence structure can help make sure cause and effect are apparent.
“If we feed ice cream to people, then the average # of drownings per day will increase.”
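The IF/THEN structure above can be made concrete as a minimal, purely illustrative sketch: the IF clause selects which observations are relevant to the test, and the THEN clause is the prediction we check against each of them. All names and the sample data here are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class IfThenHypothesis:
    condition: Callable[[dict], bool]   # the IF clause
    prediction: Callable[[dict], bool]  # the THEN clause

    def outcomes(self, observations: list[dict]) -> list[bool]:
        # Only observations where the IF clause holds can confirm
        # or refute the THEN clause.
        relevant = [o for o in observations if self.condition(o)]
        return [self.prediction(o) for o in relevant]

# "If teachers are asked to treat students with more respect, then
# they will call students 'Mr./Ms.' and their last name."
hypothesis = IfThenHypothesis(
    condition=lambda o: o["asked"],
    prediction=lambda o: o["uses_honorific"],
)

observations = [
    {"asked": True, "uses_honorific": True},
    {"asked": True, "uses_honorific": False},
    {"asked": False, "uses_honorific": True},  # ignored: IF clause is false
]
results = hypothesis.outcomes(observations)
print(sum(results), "of", len(results), "asked teachers changed behavior")
```

Writing the hypothesis this way forces both clauses to be observable behaviors rather than intentions, which is exactly what the checklist above asks for.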
Testable
“If an astronaut in a stable orbit around a black hole extends one foot past the event horizon, then they will be pulled in entirely.”
There are many theoretical physicists who create a number of hypotheses which are not testable now, but may be testable at some point in the future. While this black hole/astronaut hypothesis is theoretically testable, it is not testable today.
Unfortunately, as entrepreneurs, we should restrict our hypotheses to ones that can be tested within the immediate future or within our current resources.
Warning: Many things seem untestable today, but clever application of lean thinking can simplify the hypothesis into a testable first step.
Falsifiable
All of these conditions add up to a hypothesis being falsifiable. If a hypothesis cannot be proven incorrect, then there is no point in running a test on it.
“There is an invisible, intangible tea cup floating between the Earth and Mars.”
When in doubt, we can ask ourselves, “What evidence would prove this hypothesis incorrect?”
If there is no amount of evidence that would prove our hypothesis is invalid, then either the hypothesis is flawed or we are very stubborn.
There are a number of frameworks and checklists for forming hypotheses, one of which is popular enough to comment on to avoid confusion:
We believe <this capability> will result in <this outcome> and we will know we have succeeded when <we see a measurable signal>
The entire sentence is not the hypothesis. Let’s break this into its parts:
"We believe <this capability> ..."
That section just confirms we think the hypothesis is correct. It is not part of the hypothesis, and there are many situations where we may test a hypothesis that we believe is incorrect.
"... <this capability> will result in <this outcome> ..."
That is the hypothesis.
"...we will know we have succeeded when <we see a measurable signal>"
That is the data we will collect, including any information about sample size, margin of error, success conditions, or failure criteria.
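The separation described above can be sketched as a simple record: the hypothesis proper is the capability-to-outcome relationship, while the measurable signal and its success threshold live alongside it as the evaluation criterion. The field names, example values, and threshold here are hypothetical, chosen only to illustrate the split.

```python
from dataclasses import dataclass

@dataclass
class HypothesisCard:
    capability: str           # "<this capability>"
    outcome: str              # "<this outcome>" -- together, the hypothesis
    signal_metric: str        # the measurable signal we will collect
    success_threshold: float  # pass/fail criterion, decided up front

    def hypothesis(self) -> str:
        # Only this relationship is the hypothesis itself.
        return f"{self.capability} will result in {self.outcome}"

    def evaluate(self, measured: float) -> bool:
        # Fixing the threshold before collecting data keeps the
        # test falsifiable.
        return measured >= self.success_threshold

card = HypothesisCard(
    capability="a one-click checkout",
    outcome="more completed purchases",
    signal_metric="checkout completion rate",
    success_threshold=0.40,
)
print(card.hypothesis())
print(card.evaluate(0.35))  # below threshold, so the hypothesis fails
```

Keeping the success criterion out of the hypothesis sentence makes it easy to spot when a team is testing a belief rather than a relationship between variables.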
Ready to start adding Research and Experiments to GLIDR?