Survey Design

Surveys are not dead.

You can find plenty of articles, points of view, and CX pundits on social media preaching that the survey is dead. Admittedly, we here at InMoment tell our current and prospective clients that they may be focusing too much on surveys and that less than 10% of their customer feedback is likely to come from surveys. An IDG stat says unstructured feedback is growing at 85% year over year, which also threatens the value of traditional score-based surveys.

All this being said, the survey is not dead. As a matter of fact, it isn’t going away any time soon. And, I hope it never does! Surveys still present a unique opportunity to have a 1:1 conversation with your customer. And, to illustrate our support for this concept, we’ve developed some ‘Survey Bumpers’—much like the rails in bowling—to help guide you toward crafting a survey that achieves a ‘strike.’ These tips are designed to ensure that your survey stays on track to hit all the right points and maximize its effectiveness in a world where the reality is that surveys may no longer represent the lion’s share of feedback. However, they are still a critical part of what we refer to as an integrated customer experience.  

Survey 101

Before we dive into the survey bumpers, let’s recap surveys as a whole. Surveys generally fall into two categories: Transactional and Relationship. To be honest, I still talk to prospects (not as many clients) who don’t always understand this difference.

Transactional surveys are typically conducted following a specific transaction or interaction between a customer and a company. The primary goal of transactional surveys is to gather feedback on the customer’s experience during that specific interaction – or as we like to say (tongue in cheek) in the moment. They are often used to assess satisfaction levels, understand the ease of doing business, identify areas for improvement, and address any issues or concerns in real-time.

Relationship surveys, on the other hand, focus on measuring customers’ overall satisfaction and loyalty (to the brand and the products) over a longer period. Rather than targeting a single transaction, relationship surveys aim to understand the broader relationship between the customer and the company. These surveys typically cover various touchpoints and interactions across the customer journey, providing insights into overall brand perception, loyalty, and advocacy.

For many companies, relationship surveys rely on the Net Promoter Score (NPS) as the primary metric. This can help them understand several factors including the customer’s likelihood to recommend or repurchase, and overall satisfaction with the brand.
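
For readers who want to see the arithmetic behind NPS, here is a minimal sketch of the standard calculation in Python: promoters score 9-10, detractors score 0-6, and NPS is the percentage of promoters minus the percentage of detractors. The sample responses are invented for illustration.

```python
def net_promoter_score(scores):
    """Compute NPS from 0-10 likelihood-to-recommend responses.

    Promoters score 9-10, passives 7-8, detractors 0-6.
    NPS = % promoters - % detractors, on a -100 to +100 scale.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical responses from a relationship survey
responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
print(f"NPS: {net_promoter_score(responses):.0f}")  # NPS: 30
```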

Understanding the basics of surveys is important before moving on to survey design. While the design may vary from one survey to the next, the fundamentals will always remain the same.

What Is Survey Design?

Survey design is the detailed process of crafting a questionnaire so that it collects the most useful results possible. Good design takes into account the types of questions, the quality of questions, the flow and organization of the survey, and the possible biases or conflicts of both questions and participants.

Though creating a questionnaire may seem simple at first, it can be a complicated and tedious process. Questions can be asked in different ways, both in form and language. How much context or detail is provided can sway a participant’s opinion. The questions presented first will likely influence how participants answer the questions posed later in the survey, which can impact results.

How to Design A Survey

Outside of the types of surveys, we believe that every survey should have a “North Star Metric” to anchor on. This metric does not have to be the same for every touch point, but it should directly correlate with a business goal. Referencing my bowling metaphor from earlier, a survey with no goal is like bowling into a lane with no pins: pointless. 

How Long Should a Survey Be? 

When it comes to survey design, shorter is better. Your customers don’t want to take long surveys. Nobody does. Research shows that surveys that take just a few minutes to complete (4-7 questions max) have the highest completion rates. Not only should your survey be short, it should be targeted. All surveys, regardless of objective or format, should share the same basic structure: concise language, an open-ended question, and confirmation text.

Concise, Inviting Language

Surveys should open with a brief introduction that is on brand and invites the users to complete the survey. For example, some common intros include:

  • We want to hear from you
  • Tell us how we did
  • Your feedback is important to us

Regardless of the approach you choose, the user should immediately feel like their feedback is valuable and will be used to direct business decisions, not just improve a score. 

Open-Ended Question

One of my biggest survey design peeves is the “conditional” open end that is based on a good score (“Great – tell us what was awesome”) or a bad score (“Sorry we failed you”). We want our clients to get both sides every time they survey. To do that, you need to pose a question that allows the user to explain the good and the bad from their recent experience. An example of this would be: 

  • “Please tell us why you gave that score including what wowed you and where we need to improve.”

Confirmation Text

Whenever a survey has been submitted, make sure you add a step in your workflows that thanks the user for their time. In this step, being short, sweet, and on-brand is key. Just extend a small gesture that shows the user they have completed the survey process. An example might look like this: 

  • “Thank you for taking the time to share your feedback. We use this feedback to improve our products, service, and experience.”

Survey Design Best Practices

Now that we have the basics covered, let’s dive into a few survey bumpers that will lead you toward gathering insights – not just completion rates. These survey bumpers are aimed at outlining a strategy centered around business improvement. Rather than sending a survey for the sake of sending it, this strategy will help you achieve data that can be used, not just analyzed and archived. We want to pick up the spare – not leave the 7-10 split.

Design with the End in Mind

Before you start this process, you need to establish your objectives, goals, and desired outcomes. This foundational step lays the groundwork for a strategic approach to survey design, ensuring that every question and element serves a purpose in driving toward a measurable business outcome. By clearly identifying measurable outcomes, your survey will have a much better probability of capturing insights that you can turn into actions. By answering these questions, you will have a clear understanding of the goal of your survey: 

1. What business problem(s) are you trying to solve?

Understanding the specific business problem(s) or challenge that the survey aims to address is arguably the most important part of this process. It helps define the scope of the survey, frame relevant questions, and ensure that the collected customer feedback directly contributes to solving the problem. Without a clear understanding of the problem, the survey is useless. And, for anyone who works with me or has read my POVs, your business problem must have a financial lens. CX programs sustain and grow if they drive a financial return to the business.

2. Who will be the internal champions of the data?

As part of a program design discussion, the target customer personas will evolve based on what you are measuring and on who you can contact, given the availability and accessibility of data. But, to me, the more important question to answer is who in the company will be accountable for taking action based on the insights captured by the survey. Another rule I try to follow is that every question needs to have an owner – someone who will use the customer’s voice to take measurable action toward a business goal. No owner or no goal? Don’t ask the question.

For example, if we offer a closed-loop system, is there a resource aligned to close the loop? Or, if our goal is to understand the ease of completing a purchase on our website, is there an e-commerce team leveraging the customer feedback?

3. What are you doing today? How are you measuring success? 

Assessing the current state of the union within your organization provides context for interpreting survey results and evaluating the effectiveness of existing strategies. Understanding what your organization is currently doing, and whether or not it is achieving the desired results, can help identify areas of success and areas for improvement. Related to this, has the program been continuously updated to reflect changing team players and changing business conditions?

4. Do people across the organization care about the score or the insights?

If the answer is the score – how do I say this nicely – I would suggest stepping back to see what role scores play in your CX strategy and what role they should play moving forward. If I can offer any wisdom it’s this: score-focused CX programs fail over time. Don’t let score trends paralyze modernization. To truly understand your customers and improve their experience, you need to care about the insights that come from these types of initiatives. And, broken record time, you need to be able to point to financial proof points from the actions taken. 

Just to be clear, scores are a critical part of a survey program. Understanding the impact of elements of your product/service delivery as measured by customer scores is important. Culturally, scores can be a rallying cry across the business. Advanced financial models can show how scores impact the bottom line. My “parting shot” for this topic is to just make sure the scores don’t become the program’s primary success metric.

How to Design the Best Survey for Your Business

Now that we have our bumpers in place, let’s get into the details of how your business can bowl that perfect game. These steps to survey design will get your business the cleanest, most actionable feedback, which can be combined with other omnichannel data to round out a complete view of the customer experience so you can start improving it. A short sketch that ties the five steps together follows the final step.

1. Ask the Main Metric Question First

Starting with the main metric question allows you to capture the customer’s overall perception without any bias from subsequent questions. This question – and metric – should tie to the business outcome you are trying to achieve.

2. Follow Up with a Non-Conditional Open-Ended Question

Following up the main metric question with an open-ended question encourages respondents to elaborate on their initial response. Open-ended questions allow for more conversational and qualitative feedback that provides deeper insights into the reasons behind their initial answer. See the guidance earlier in this article about ensuring this question is unconditional. 

3. Identify a Small Group of Business Drivers Related to Your Problem

This step involves selecting a focused set of business drivers or factors that are directly relevant to the business outcome you are hoping to achieve. By narrowing down the scope to a small group of key elements, you can ensure that your survey remains concise and targeted, making it easier for respondents to provide meaningful feedback. 

4. Offer to Follow Up

A recommended next step in this process is to offer to follow up or close the loop with the customer. Closing the loop is important because it demonstrates to customers that their feedback is valued and taken seriously. Research shows that when a company closes the loop with a customer, the customer is more likely to respond to subsequent surveys. It also allows you to save an at-risk customer if they have an issue you can fix. When customers see that their input leads to tangible changes or improvements in products, services, or processes, they feel heard and appreciated. 

However, you should only offer to do this if you have the staff to support it. Otherwise, you are only hurting yourself and negatively impacting the customer experience. 

5. Thank the Customer 

Always end the survey by expressing gratitude to respondents for taking the time to participate in the survey. This step is important for fostering goodwill and encouraging future engagement. A simple thank-you message at the end of the survey acknowledges the respondents’ contribution and reinforces the idea that their feedback is valuable to the business. Even better, I worked with a client who used their “thank you” page to highlight a couple of changes they made as a direct result of their survey program. 
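
To tie these five steps together, here is one illustrative way to sketch the resulting survey flow as a simple Python data structure. The field names and question wording are hypothetical examples only, not a prescribed InMoment schema.

```python
# Illustrative survey definition following the five steps above.
# Field names and wording are hypothetical examples, not a required schema.
survey = {
    "intro": "We want to hear from you.",
    "questions": [
        {   # 1. Main metric question first, tied to the business outcome
            "id": "nps",
            "type": "scale_0_10",
            "text": "How likely are you to recommend us to a friend or colleague?",
        },
        {   # 2. Non-conditional open end shown to every respondent
            "id": "why",
            "type": "open_text",
            "text": "Please tell us why you gave that score, including what "
                    "wowed you and where we need to improve.",
        },
        {   # 3. A small group of drivers tied to the business problem
            "id": "drivers",
            "type": "multi_rating",
            "text": "How satisfied were you with the following?",
            "items": ["Ease of checkout", "Delivery speed", "Product quality"],
        },
        {   # 4. Offer to follow up (only if staff can close the loop)
            "id": "follow_up",
            "type": "yes_no",
            "text": "May we contact you about your feedback?",
        },
    ],
    # 5. Thank the customer
    "confirmation": "Thank you for taking the time to share your feedback. "
                    "We use it to improve our products, service, and experience.",
}
```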

The Future of Surveys with InMoment

To reiterate, surveys need to remain an important element of your customer listening strategy. While it is easy to say they are “dead,” the truth is that their role is simply evolving to fit the modern landscape of customer feedback. Rather than being viewed as the endpoint of customer feedback, we see them as the first rung on the ladder of an integrated customer experience program – the opening frame, to go back to our bowling analogy.

For surveys to continue to be useful, they need to be integrated into a broader strategy that encompasses various feedback channels such as social media, online reviews, customer service interactions, and more. By building out an integrated customer experience program that brings in a wide variety of data sources, businesses can capture a more comprehensive understanding of the customer journey and tailor their strategies accordingly.

Think of this article as an InMoment PSA: Since surveys are still a vital channel to hear from your customers, you should make them the best they can be. 

See how Barry Nash & Company partnered with InMoment to merge traditional survey data with text analytics and market research to develop groundbreaking research and reports for the entertainment industry! 

The Shortcomings of Comment-Based Surveys

Comment-based surveys can be effective for immediately gathering feedback from customers. And when it comes to customer experience (CX), timeliness can make or break an organization’s ability to act on that feedback.

However, there are several arenas in which brands use comment-based surveys when another survey type would yield better intelligence. Today, I’d like to dive into several shortcomings that can make using comment-based surveys challenging for brands, as well as a few potential solutions for those challenges. Let’s get started.

Outlet-Level Analysis

As I discussed in my recent article on this subject, comment-based surveys are often less effective than other survey types for conducting outlet-level analysis. In other words, while brands can see how well stores, bank branches, and the like are performing generally, they usually can’t determine where individual outlets need to improve.

The reason for this has as much to do with the feedback customers leave as the survey design itself. From what I’ve seen across decades of research, customers rarely discuss more than 1-2 topics in their comments. Yes, customers may touch upon many topics as a group, but rarely are most or even a lot of those topics covered by singular comments.

What all of this ultimately means for brands using comment-based surveys to gauge outlet effectiveness is that the feedback they receive is almost always spread thin. The intelligence customers submit via this route can potentially cover many performance categories, but there’s usually not that much depth to it, making it difficult for brands to identify the deep-rooted problems or process breakages that they need to address at the unit level if they want to improve experiences.

(Un)helpful Feedback

Another reason that brands can only glean so much from comment-based surveys at the outlet level is that, much of the time, customers only provide superficial comments like “good job,” “it was terrible,” and the immortally useless “no comment.” In other words, comment-based surveys can be where specificity goes to die.

Obviously, there’s not a whole lot that the team(s) running a brand’s experience improvement program can do with information that vague. Comments like these contain no helpful observations about what went right (or wrong) with the experience that the customer is referring to. The only solution to this problem is for brands to be more direct with their surveys and ask for feedback on one process or another directly.

How to Improve Comment-Based Surveys

These shortcomings are among the biggest reasons brands should be careful about trying to use comment-based surveys to diagnose processes, identify employee coaching opportunities, and see how well outlets are adhering to organization-wide policies and procedures. However, none of this means that comment-based surveys should be abandoned. In fact, there’s a solution to these surveys’ relative lack of specificity.

Brands can encourage their customers to provide better intelligence via multimedia feedback. Options like video and image feedback enable customers to express themselves in their own terms while also giving organizations much more to work with than comment-based surveys can typically yield. Multimedia feedback can thus better allow brands to see how their regional outlets are performing, diagnose processes, and provide a meaningfully improved experience for their customers.

Click here to read my Point of View article on comment-based surveys. I take a deeper dive into when they’re effective, when they’re not, and how to use them to achieve transformational success.

The Role of the Relationship Survey in CX Programs

Most comprehensive customer experience programs are made up of several different types of studies, the two most common of which are Transactional and Relationship studies. Here we will describe the differences between these two types of studies.

Transactional or trigger-based studies are the base of most customer experience programs. This type of study is conducted among current or recent customers and is used to ascertain the customer experience for a specific transaction or interaction. This type of research looks at near or short-term evaluations of the customer experience and often focuses on operational metrics. 

In contrast, the relational or relationship customer experience study is typically conducted among a random sample of the company’s customer base. Relational customer experience is used to understand the cumulative impressions customers form about their entire customer experience with the company. Importantly, this type of customer experience research is often the chassis for ascertaining specific aspects of the experience important to predicting loyalty and other customer behaviors. 

A. Transactional Customer Experience

In a transactional customer experience study, we focus on the details of a customer’s specific recent transaction. For example: 

  • The respondent’s most recent visit to Wendy’s 
  • The customer’s visit yesterday to her local Deutsche Bank branch 
  • Last week’s call to the Blue Cross/Blue Shield customer service center 
  • The respondent’s visit, 10 days ago, to Nielsen Nissan in Chesterton, Indiana, for routine auto maintenance. 

The overall rating we ask for is the respondent’s overall evaluation of the specific transaction (visit, stay, purchase, or service). The attribute ratings are also specific to that transaction.

B. Relational Customer Experience  

A relational customer experience study is broader in coverage. Here, we ask about the totality of the relationship with a company. In a relational customer experience study, the questions relate to the overall, accumulated experience the customer has had with the company. So rather than ask about the timeliness of an oil change at Nielsen Nissan and the quality of that service, the relational survey would ask for the respondent’s overall perceptions of Nielsen Nissan’s services across all the times the customer has interacted with that dealership. 

The overall ratings are often overall satisfaction with the relationship as a whole, willingness to recommend, and likelihood to return. Attributes are similarly broader in scope. We would not ask the customer about her satisfaction with the speed of service for her last oil change; instead, we would ask about her satisfaction with the speed of service she usually gets when she visits Nielsen Nissan.

C. Sampling Differences Between Transactional and Relationship Studies

In addition to the content of the surveys, a critical difference between these two studies is the sampling frame. In a transactional customer experience study, we sample customers who have interacted with the company recently. This is also sometimes called “trigger-based” customer experience since any type of interaction with the company can “trigger” the inclusion in a transactional customer experience study. 

In a relational customer experience study, we typically sample from the entire base of customers, including people who may not have interacted with the company recently. A relational customer experience study is projectable to the entire customer base, while a transactional customer experience study covers only a subset of customers – those who have interacted recently.

When leveraging customer experience information with internal information, transactional customer experience information is often linked to operational metrics (such as wait time, hold time, staffing levels, etc.). In turn, through the use of bridge modeling, transactional research is often linked to relational customer experience, which is then linked to downstream business measures, such as revenue, profitability and shareholder value-add. 
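
To make the idea of bridge modeling a little more concrete, here is a rough, hypothetical sketch that chains two least-squares fits in Python: transactional operational metrics predicting a relational score, and the relational score predicting revenue. The data, variable names, and single-equation fits are invented for illustration; real bridge models are considerably more involved.

```python
import numpy as np

# Hypothetical data: one row per location/period.
# Transactional operational metrics -> relational score -> revenue.
wait_time  = np.array([2.0, 3.5, 5.0, 1.5, 4.0, 6.0])         # minutes
issue_rate = np.array([0.05, 0.10, 0.20, 0.02, 0.12, 0.25])   # share of visits with an issue
relational = np.array([8.5, 7.8, 6.9, 9.0, 7.5, 6.2])         # overall relationship satisfaction
revenue    = np.array([1.20, 1.05, 0.90, 1.30, 1.00, 0.80])   # $M

def ols(X, y):
    """Least-squares coefficients, with an intercept column added."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

# Stage 1: transactional operational metrics -> relational score
stage1 = ols(np.column_stack([wait_time, issue_rate]), relational)

# Stage 2: relational score -> downstream business measure
stage2 = ols(relational, revenue)

print("Stage 1 (intercept, wait_time, issue_rate):", np.round(stage1, 3))
print("Stage 2 (intercept, relational):", np.round(stage2, 3))
```

The point of the sketch is the chain itself: operational levers move relational perceptions, and relational perceptions move the business measure.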

D. Recommendations for Relationship Surveys 

Survey Content: As mentioned above, relationship surveys are meant to measure the totality of customers’ experiences with a given company. They are also meant to determine how customers are feeling about the company NOW. It is important to note that customers’ overall feelings about a company (as measured in relationship surveys) are often NOT the average of their transactional experience evaluations. This is because some transactions, especially negative ones, can have a much larger effect on overall feelings toward a company than others.

Most relationship surveys contain questions addressing: 

  • Overall Metrics such as Likelihood to Recommend the Company, Overall Satisfaction with the Company, and Likelihood to Return or Repurchase 
  • High-level brand perceptions 
  • Company service channel usage and evaluations, such as store/dealership, finance company, call center/problem resolution teams, etc. 
  • Product usage and evaluations 
  • Share of Wallet measures 
  • Marketing/communication perceptions 

Survey Sampling: Who, how often, and how many customers do you need to survey? There are no hard-and-fast rules, but remember the idea is to obtain a representative sample of your customers. With that in mind: 

Who to Survey: All customers (whether they are recently active or not) should be available for sampling. You also might want to oversample small but important groups of customers (e.g., millennials, new owners, etc.) to ensure that you receive enough returns to analyze these groups separately. However, if you do oversample you will need to weight your data back to your customer demographics to ensure representative overall results. 
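
As a simple illustration of what weighting back to your customer demographics can look like, the sketch below computes post-stratification weights in Python so that an oversampled group counts in proportion to its share of the customer base. The segment names and percentages are hypothetical.

```python
# Hypothetical example: millennials are 20% of the customer base but were
# oversampled to 40% of survey returns. Weight each response so the weighted
# sample matches the customer base again.
population_share = {"millennials": 0.20, "all_other": 0.80}
sample_share     = {"millennials": 0.40, "all_other": 0.60}

weights = {
    segment: population_share[segment] / sample_share[segment]
    for segment in population_share
}
print(weights)  # {'millennials': 0.5, 'all_other': 1.333...}

# A weighted overall score then applies these weights per respondent, e.g.:
responses = [("millennials", 9), ("all_other", 7), ("millennials", 8), ("all_other", 6)]
weighted_avg = (
    sum(weights[seg] * score for seg, score in responses)
    / sum(weights[seg] for seg, _ in responses)
)
print(round(weighted_avg, 2))
```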

How Often to Survey: While transactional CX research is usually done on a continuous basis, relationship studies are usually conducted once or twice per year. How often companies conduct relationship studies is usually determined by the number of customers available (i.e., are there enough to conduct the study twice per year?) and when and how often decisions will be made based on the findings. 

How Many to Survey: This is the question clients ask most frequently, and the basic answer is that it depends on what organizational level you need the results to be representative of. The good news is that if you are only concerned about making decisions at the entire-company level, only about 1,000 well-sampled responses are sufficient. For most large companies that is a very small percentage of their customers. However, if you want the findings to be representative of lower levels of the organization for comparison purposes (e.g., zones, districts, stores), or of certain customer groups (e.g., millennials, minorities, long-term customers, etc.), calculations need to be performed to determine the number of responses needed for these groups.

Unfortunately, as demonstrated in the chart below, as the population size (e.g., company customers, zone customers, store customers) goes down, the percentage of customers needed to represent that population goes up. For instance, to obtain +/- 3 percentage point precision for a population of 3,000,000 people, you only need 1,067 randomly sampled returns. That is just 0.04% of the population. For a population of 30,000 people, you need 1,030 returns, which is 3.4% of the population. For a population of 3,000, the number of returns needed drops to 787, but that is 26.2% of the population. For a very small population like 300, you need returns from 234 people, or 78.0% of the population. 

[Chart: number of survey returns required for +/- 3 percentage point precision, by population size]
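
The figures above are consistent with the standard sample-size formula for a proportion combined with a finite population correction, assuming a 95% confidence level and a worst-case 50/50 split (assumptions the text does not state explicitly). The sketch below reproduces them under those assumptions.

```python
def required_returns(population, margin=0.03, z=1.96, p=0.5):
    """Returns needed for a proportion estimate, with finite population correction.

    Assumes a 95% confidence level (z = 1.96) and a worst-case 50/50 split
    (p = 0.5); these assumptions reproduce the figures quoted above.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)          # finite population correction
    return round(n)

for pop in (3_000_000, 30_000, 3_000, 300):
    n = required_returns(pop)
    print(f"Population {pop:>9,}: {n:>5,} returns ({100 * n / pop:.2f}% of customers)")
```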

E. Summary 

Both transactional and relationship surveys are key parts of any comprehensive customer experience program. Transactional surveys are great for assessing the quality of specific customer touch points and making improvements in those areas. Relationship surveys allow for the assessment of the entire customer experience across all touchpoints and therefore more closely relate to customer behaviors such as loyalty, customer spend, and customer advocacy.
