Experience Sampling

The Experience Sampling research method helps you learn about user behavior in real time and in the context of users' everyday lives.

Written by Jordan Duff


Method category: Generative market research

How to Use This in GLIDR


In GLIDR, you can run Research for the target behavior you're looking to understand with this test. Then, for a cohort of users sampled across a particular time period, you can add a piece of Evidence (type: Other), attach your raw data, and write up a summary of the trends and takeaways. Once you've added all the pieces of Evidence gathered in this test, you can analyze the results and discuss the impact on your project.


Experience Sampling

Article excerpted from The Real Startup Book

In Brief

This method is used to get information about a participant's daily behaviors, thoughts, and feelings in real time, or as close to it as possible. Participants are asked to stop at certain times in their natural settings and make note of their experiences. It is also known as the daily diary method or the experience sampling method (ESM).

Helps Answer

  • Who is our customer?

  • What are their pains?

  • What are the jobs to be done?

  • How do we find them?


Tags

  • B2C

  • B2B

  • Qualitative

  • Customer segment

  • Channels

  • Value proposition


The key to experience sampling is asking the right questions. Be especially careful with phrasing, since you will be asking the same question over and over again. This method makes the most sense when you want to solve a frequently recurring problem. You get the most useful and reliable input when asking about repeated behavior and, more specifically, the last time it occurred.

Time Commitment and Resources

Your participants' time commitment will depend on the amount of data you want to collect: the more data you get, the more confident you can be in your interpretations. Depending on your goals and customer segment, you should aim for at least 100 data points. There are three dimensions along which you can expand your data pool: how many times a day you ask the question, on how many days you ask it, and how many participants you ask. Keep in mind that usually only about two-thirds of the answers will be useful, and adjust your planning accordingly.
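As a quick sanity check, those three dimensions multiply. A minimal Python sketch with purely illustrative numbers (not recommendations) shows how to estimate whether a plan clears the 100-data-point target after the two-thirds rule of thumb:

```python
# Rough planning estimate for an experience-sampling study.
# All numbers below are illustrative assumptions, not recommendations.
prompts_per_day = 3   # how many times a day you ask the question
study_days = 7        # on how many days you ask it
participants = 8      # how many people you ask

raw_data_points = prompts_per_day * study_days * participants
usable = raw_data_points * 2 / 3  # rule of thumb: ~two-thirds of answers are useful

print(raw_data_points)   # 168 prompts sent
print(round(usable))     # 112 usable data points, above the 100-point target
```

If the estimate falls short, any of the three dimensions can be increased, though adding participants is usually cheaper than adding prompts per day, which risks nagging.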

The recruiting process will often take a lot of time, since only participants who are part of your target group can provide valuable data. That's why you will need to create a screener to make sure participants qualify for your target group. How long recruiting takes depends on how many participants you want, and can range from a day to several days. One fast and cheap way to find participants is to look in social media communities related to the theme of your study.

If you plan to collect a large amount of data, you should have a team ready to analyze it. Aim for a couple of analysis sessions, each after a certain amount of data has been collected. The first session will take the longest and, depending on the amount of data, could range from two hours to a full day.

You should offer your participants some kind of incentive. The amount depends on the number of questions answered, ranging from $5 to $50, a coupon, or similar.

How To


  • Carefully phrase the question.

  • Make sure the answering process takes no more than a minute.

  • Plan how often you want to send alerts - how many times a day, and how they are distributed across the days. Be careful that the frequency doesn't make participants feel nagged. If the user hasn't completed the behavior yet, another alert may create an undesirable effect.

  • Choose your medium of contact - SMS, phone, email, app, etc.

  • Plan how to collect the data - a spreadsheet is common.

  • Decide how many participants you want and start recruiting as soon as possible.

  • Plan the analysis depending on the expected amount of data - team size, process, etc.
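The alert-planning step above can be sketched in code. This is a hypothetical scheduler, not part of any GLIDR feature: it splits a waking-hours window into equal slots and picks one random time per slot, so prompts are spread across the day instead of clustering:

```python
import random
from datetime import date, datetime, time, timedelta

def plan_alerts(start_date, days, per_day,
                window=(time(9, 0), time(21, 0)), seed=42):
    """Pick `per_day` random alert times per day inside a waking-hours window.

    The window is split into `per_day` equal slots and one random time is
    drawn per slot, so alerts are spread out rather than clustered.
    """
    rng = random.Random(seed)                     # seeded for reproducibility
    day_start = datetime.combine(start_date, window[0])
    span = datetime.combine(start_date, window[1]) - day_start
    slot = span / per_day                         # length of one slot
    schedule = []
    for d in range(days):
        base = day_start + timedelta(days=d)
        for i in range(per_day):
            schedule.append(base + i * slot + slot * rng.random())
    return schedule

# Illustrative run: 3 alerts a day for 7 days, 21 alerts in total,
# all between 9:00 and 21:00.
alerts = plan_alerts(date(2024, 5, 6), days=7, per_day=3)
```

The same schedule can then feed whatever contact medium you chose (SMS gateway, email, push notification) via a cron job or task queue.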

Finding Participants

  • Use a screener to select relevant participants.

  • Identify participant criteria and formulate questions accordingly - if possible, use quantifying questions (e.g., how often the participant does something).

  • Additionally, think of exclusion criteria that your questions might not cover yet.

  • Check willingness to participate by collecting contact information.

  • Check availability.

  • Select your participants.

  • Set their expectations on how often they will be asked to give answers.
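A screener built on quantifying questions and exclusion criteria boils down to a simple predicate over each candidate's answers. A hypothetical sketch (the field names and thresholds are invented for illustration):

```python
def qualifies(answers):
    """Return True if a candidate's screener answers fit the target group.

    `answers` maps screener fields to responses; all fields and thresholds
    here are illustrative, not a real screener.
    """
    return (
        answers.get("commutes_per_week", 0) >= 5          # quantifying question: behavior is frequent
        and answers.get("owns_smartphone", False)          # needed to receive prompts
        and not answers.get("works_in_market_research", False)  # exclusion criterion
    )

candidates = [
    {"commutes_per_week": 10, "owns_smartphone": True,
     "works_in_market_research": False},
    {"commutes_per_week": 2, "owns_smartphone": True,
     "works_in_market_research": False},
]
selected = [c for c in candidates if qualifies(c)]  # keeps only the first candidate
```

Keeping the predicate in one place makes it easy to tighten or loosen criteria if recruiting runs slow.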

Start Collecting Data

Remember to thank participants each time they respond.


  • Check the first answers to see if they are sufficient for your research - if necessary expand your question or explain to participants the level of detail you need.

  • Check if the questions are correctly understood - if necessary adjust your question or correct individual participants.

  • Begin the analysis as soon as possible - do not wait until you have collected all the data.

  • Eyeball the data to get a general impression.

  • Decide on categories to organize the data.

  • Adjust categories during the process if necessary - split if too big, combine if too small.

  • Clean the data of answers that are not useful as you run across them.

  • If you analyze in a team, work on the first 50-100 data points together, deciding on categories and classifying the answers.

  • Distribute the following data among the team for classification - answers may match multiple categories.

  • Switch the data within the team for a second blind classification and discuss possible discrepancies.

  • Create frequency charts.
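The classification and frequency-chart steps above can be sketched with the standard library. The category labels and answers here are invented for illustration; since answers may match multiple categories, every label is counted:

```python
from collections import Counter

# Hypothetical output of the team classification step: each answer was
# assigned one or more category labels.
classified = [
    ["scheduling"], ["price"], ["scheduling", "price"],
    ["usability"], ["scheduling"], ["price"], ["scheduling"],
]

# Count every label, since answers may match multiple categories.
freq = Counter(label for answer in classified for label in answer)

# Text-mode frequency chart, most common category first.
for category, count in freq.most_common():
    print(f"{category:<12} {'#' * count}  ({count})")
```

A spreadsheet pivot table gives the same result; the point is simply to make the dominant categories visible before interpreting them.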

Interpreting Results

First, look at the frequency distribution and identify common themes to gain insight into participants' pain points and delights. Then pinpoint what you have and have not been doing well in solving your target group's problems, as well as opportunities for improvement. You may find that the problem is slightly different than expected, or what you thought was a problem is not one at all. You may get ideas for additional product features. In any case, you end up with data on different experience categories and therefore many opportunities.

Potential Biases

  • Prediction bias: Do not ask about people’s opinions on potential products, situations, or what they think they need. People are bad at predicting the future! Ask about recent behavior and problems.

  • Confirmation bias: Be careful not to use leading questions or give examples for what kind of answers you expect.

Field Tips

  • “Run a comprehension test before a landing page test or you won’t understand why it doesn’t work.” - @TriKro

  • “Don’t ask for opinions, observe behavior.” - @tsharon

  • “Often, what customers say they want and what they actually need differ significantly.” - @macadamianlabs

  • “Trying to understand users without actually observing them is the same as learning to ride a bike by reading about it.” - @MarkusWeber


Adding your Experience Sampling data to GLIDR is a great way of validating your Ideas before going to market or pushing a new feature. Sign up here.
