Survey Design 101

When gathering accurate and useful data, how the data is collected is just as important as the results it yields. Reliable insights come from questions that are fair, unbiased, and relevant to the participants. This is why the design of a survey can ultimately determine its success.

What Is Survey Design?

Survey design is the detailed process of creating surveys that maximize the quality of the results a questionnaire can collect. Good design takes into account the kinds of questions asked, the quality of those questions, the flow and organization of the survey, and the possible biases or conflicts of both questions and participants.

Though creating a questionnaire may seem simple at first, it can be a complicated and tedious process. Questions can be asked in different ways, both in form and language. How much context or detail is provided can sway a participant’s opinion. What questions are presented first will likely influence the questions posed later in the survey, which can impact results. 

5 Steps for a Seamless Survey Design

Consider the following survey design best practices, narrowed down to five essential steps. Depending on the topic or purpose of the survey, some steps will deserve more attention than others.

Step #1: Identify the Survey Purpose

First and foremost, identify the purpose of the survey so that you can include the most relevant content in your survey design. It’s helpful to have an overarching purpose, and it’s even better to have multiple objectives that outline the details of your main goal. If you aren’t sure what your objectives should be, start asking some brainstorming questions to solidify your goals and establish a plan:

  • What is the demographic you are targeting?
  • What do you hope to discover by distributing this survey?
  • What kind of questions does this type of topic demand?
  • Do there need to be personalized questions at any point?
  • How will the answers be compiled and transformed into useful data?
  • What is your business or organization prepared to do based on the responses?
  • Who in your organization needs to be involved with the creation of the survey?

Knowing the objectives also helps you structure the survey correctly. It’s best to include the right sections within the survey beyond the actual questions. Some standard sections to include (which are often separated into different blocks for online surveys) are:

  • Introduction: The introduction needs to convey the purpose of the survey, provide instructions, set expectations for how long the survey will take, encourage honest answers, and reassure participants that their responses are secure.
  • Screeners: This section should ask questions that ensure that participants fall within the survey requirements for your objectives. This can include some appropriate demographic information, someone’s position in the company, or any other relevant information.
  • Content Questions: This is the main portion of the survey that features the most focused and topic-relevant questions.
  • Demographics: If you didn’t include demographic information in the screener section, add one after the main questions.
  • Redirect: After the questions and having participants submit their final answers, redirect them to a thank you page of some sort.

Understanding the ins and outs of the main purpose will not only make the design of the survey better, but also keep every question intentional and focused. Plus, if you completely understand the objectives and have a solid plan, you’ll be able to act on the results more easily.

Step #2: Come Up with Questions

The most extensive part of the survey is often creating the actual questions. If you’ve planned according to your objectives, it’s easier to determine how many questions are needed and how long the questionnaire will be. Depending on the feedback you’re looking for, certain types of questions will be more beneficial than others. The most common options in surveys include the following forms.

  • Open/Close-Ended Questions: Open-ended questions allow free responses that can vary widely and give more detailed answers, which yields more qualitative data. Close-ended questions offer only a limited set of answers, which yields more quantitative data.
  • Multiple Choice: One of the most common question types, this format offers limited responses but keeps things simple for participants and produces straightforward data.
  • Scale Questions: Scales are a great way to get multi-dimensional data while offering a measurable and simple set of options. Compared to multiple choice, scales offer much more range and accuracy because they measure both which direction someone leans and the intensity of that leaning.
  • Slide bars: Similar to scale questions, bars help participants indicate to what degree they feel, think, or prefer one thing over the other on a more granular level. They also provide an interactive element to the survey.
  • Ratings: Rating questions are a great way to offer a range, but specifically for satisfaction about an activity, a product, an experience, a company, etc. Make sure these questions aren’t leading if you want authentic answers.
  • Multi-Select: One way to get more detailed data out of multiple-choice questions is to allow participants to select more than one option (depending on the nature of the question, of course). If you’re attempting to measure what kinds of activities participants would like to see in the office, you can let them check multiple activities rather than just one.
  • Personal/Demographic Questions: These questions gather information specific to the individual and should be left until the end of the survey (see step 4).
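If you build surveys programmatically, the question types above can be modeled as a small schema. Below is a minimal sketch in Python; the class and field names are illustrative, not taken from any particular survey platform:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class QuestionType(Enum):
    OPEN_ENDED = auto()       # free text -> qualitative data
    MULTIPLE_CHOICE = auto()  # pick one option -> simple quantitative data
    MULTI_SELECT = auto()     # pick several options
    SCALE = auto()            # direction plus intensity (e.g., 1-5 agreement)
    SLIDER = auto()           # granular position between two poles
    RATING = auto()           # satisfaction with a product, visit, etc.

@dataclass
class Question:
    text: str
    qtype: QuestionType
    options: list[str] = field(default_factory=list)

    def allows_multiple(self) -> bool:
        # Only multi-select questions accept more than one answer.
        return self.qtype is QuestionType.MULTI_SELECT

# Hypothetical multi-select question from the office-activities example above.
q = Question(
    "Which office activities would you like to see?",
    QuestionType.MULTI_SELECT,
    ["Team lunches", "Game nights", "Volunteer days"],
)
```

A structure like this makes it easy to enforce rules later, such as rejecting a second selection on anything that is not multi-select.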

Step #3: Refine Survey Questions by Eliminating Bias Factors

Biased answers are a top concern when it comes to surveys. The best results reflect the true feelings of participants without influencing answers one way or another. Remember: people take the path of least resistance, so simple, clear, and thoughtful questions work best. These are common pitfalls to watch for if you’re trying to figure out how to create a balanced survey.

Question Wording

The language used in questions can change results: for example, the phrase “assistance to the poor” typically draws more support than “welfare.” Longer questions also tend to be more confusing and more easily misinterpreted. Simple wording and clear, short questions help participants answer more accurately.

Answer Order 

Sometimes the order of the answers provided affects results. If a survey is over the phone or in person, people struggle to remember multiple answers and sometimes choose either the first or last answer that they hear because that’s what they remember easiest.

Medium of the Survey 

Make sure that the form of the survey makes sense for the target audience and topic that you’re surveying. Things change between telephone, online, email, and in-person surveys.

Sensitive Subjects 

People don’t always want to share information about sensitive subjects and may answer dishonestly about them. Consider your phrasing and reassure participants that responses are secure and confidential.

Social Pressure

Similarly, sensitive subjects that are either politicized or contentious sometimes lead people to not answer truthfully if they worry about social repercussions. Don’t use prestige bias where you associate a topic or answer with one group, i.e., describing a point by associating it with a trusted authority and then asking someone to agree or disagree with it.

Close-Minded or Non-Exhaustive Questions

The available answers need to allow respondents to answer as truthfully as possible. When the list of answers in a survey does not accurately reflect or fit all the potential answers of a respondent, the data can be skewed. For example, if you ask, “Do you ALWAYS exercise in the morning?” someone who almost always exercises in the morning, but not every day, will answer “no,” which disrupts your data.

Open or Close-Ended Questions 

Decide whether the survey needs qualitative or quantitative data. Also note that too many open-ended questions can lead to burnout for the customer, so be mindful of how many questions you include in that format.

Length of Survey 

Speaking of burnout, length is one of the leading factors in survey completion and accuracy. The longer a survey is, the less accurate the results will be and the more likely participants are to abandon it.

Leading Questions 

Leading someone to answer one way or another doesn’t capture people’s sincere opinions. For example, asking a customer “How enjoyable was your visit with us today?” instead of “Rate your visit with us today” suggests to the customer that they at least enjoyed their visit a little and discourages honest answers. The first question is biased and leading.

Number of Questions per Page 

If there are too many questions on a page, the customer may mix up the questions and answer the wrong ones, or simply get overwhelmed and stop taking the survey. Take advantage of white space, and if you’re doing a survey online or with a program, don’t make the participants scroll for too long.

Branding 

Decide whether the survey design should reflect the company’s brand or be a blind survey. This usually depends on the purpose of the survey. If, for example, a company is gathering competitive information, removing its brand would be wise.

Step #4: Have an Intentional Question Sequence

The order of the questions matters. Many people opt for a “funnel” sequence where the questions start general, become specific, and then end general again.

  1. Broad at the Start: These questions will usually warm up the participants to the survey topics and help them familiarize themselves with the formatting and flow of the survey.
  2. Details for the Majority: The middle portion makes up most of the detailed questions that require more focus or deliberation.
  3. Personal Questions at the End: Any necessary or useful personal questions should be saved for the end, which eases participants out of the deep-concentration section and offers more closure.

There are other approaches and sequences to consider, but the funnel approach is fairly universal for most topics. Keep the questions concise and order them logically—people may get easily frustrated if the subjects bounce back and forth too much. By not jumping around excessively, you also prevent accidentally providing too much context for future questions, which can influence the responses given.

Along those same lines, listing more specific questions first can influence later questions, too. If you first ask whether someone enjoys their position at work and then follow it with a broader question about their overall work satisfaction, the first question will likely influence how the second one is answered.

It’s also important to remember that personal questions work best at the end. Studies show that too many personal questions at the beginning can make some respondents anxious that their demographics will affect the results. Saved for last, these questions are usually easier to answer, offer more of a cool down, encourage unbiased responses, and create a sense of resolution for participants.

Step #5: Test Out the Survey Design Before Distributing It

Finally, if you really want to perfect your survey research design, test, test, test. Even if you believe that the first iteration of the survey is a masterpiece, it’s essential to test the survey with a focus group or via pretesting. This helps eliminate biased questions, catches misinformation, and prevents wasting time and resources on an ineffective questionnaire. Testing should consider:

  • How long the survey takes
  • Confusing questions
  • Repetitive questions
  • Leading questions or wording issues
  • Missing questions or spelling errors
  • Miscellaneous problems that arise

Survey Design Is Easy with InMoment

Reliable data is the key to sincere, realistic, and effective improvement within a company. Businesses with this kind of feedback can make informed decisions that directly impact the people, clients, and consumers of their organization.

Now that you are prepared with practical survey design skills, you can optimize the quality, design, and effectiveness of your survey with InMoment. Our intelligent XI platform allows you to easily put together an intuitive, clear, and sharp-looking survey. Discover just how simple it is to use InMoment for all your survey design needs.

The Shortcomings of Comment-Based Surveys

Comment-based surveys can be effective for immediately gathering feedback from customers. And when it comes to customer experience (CX), timeliness can make or break an organization’s ability to act on that feedback.

However, there are several arenas in which brands use comment-based surveys when another survey type would yield better intelligence. Today, I’d like to dive into several shortcomings that can make using comment-based surveys challenging for brands, as well as a few potential solutions for those challenges. Let’s get started.

Outlet-Level Analysis

As I discussed in my recent article on this subject, comment-based surveys are often less effective than other survey types for conducting outlet-level analysis. In other words, while brands can see how well stores, bank branches, and the like are performing generally, they usually can’t determine where individual outlets need to improve.

The reason for this has as much to do with the feedback customers leave as with the survey design itself. From what I’ve seen across decades of research, customers rarely discuss more than one or two topics in their comments. Yes, customers may touch upon many topics as a group, but rarely are most of those topics covered by individual comments.

What all of this ultimately means for brands using comment-based surveys to gauge outlet effectiveness is that the feedback they receive is almost always spread thin. The intelligence customers submit via this route can potentially cover many performance categories, but there’s usually not that much depth to it, making it difficult for brands to identify the deep-rooted problems or process breakages that they need to address at the unit level if they want to improve experiences.

(Un)helpful Feedback

Another reason that brands can only glean so much from comment-based surveys at the outlet level is that, much of the time, customers only provide superficial comments like “good job,” “it was terrible,” and the immortally useless “no comment.” In other words, comment-based surveys can be where specificity goes to die.

Obviously, there’s not a whole lot that the team(s) running a brand’s experience improvement program can do with information that vague. Comments like these contain no helpful observations about what went right (or wrong) with the experience that the customer is referring to. The only solution to this problem is for brands to be more direct with their surveys and ask for feedback on one process or another directly.

How to Improve Comment-Based Surveys

These shortcomings are among the biggest reasons brands should be careful about using comment-based surveys to diagnose processes, identify employee coaching opportunities, and see how well outlets are adhering to organization-wide policies and procedures. However, none of this means that comment-based surveys should be abandoned. In fact, there’s a solution to these surveys’ relative lack of specificity.

Brands can encourage their customers to provide better intelligence via multimedia feedback. Options like video and image feedback enable customers to express themselves in their own terms while also giving organizations much more to work with than comment-based surveys can typically yield. Multimedia feedback can thus better allow brands to see how their regional outlets are performing, diagnose processes, and provide a meaningfully improved experience for their customers.

Click here to read my Point of View article on comment-based surveys. I take a deeper dive into when they’re effective, when they’re not, and how to use them to achieve transformational success.

The Role of the Relationship Survey in CX Programs

Most comprehensive customer experience programs are made up of several different types of studies, the two most common of which are Transactional and Relationship studies. Here we will describe the differences between these two types of studies.

Transactional or trigger-based studies are the base of most customer experience programs. This type of study is conducted among current or recent customers and is used to ascertain the customer experience for a specific transaction or interaction. This type of research looks at near or short-term evaluations of the customer experience and often focuses on operational metrics. 

In contrast, the relational or relationship customer experience study is typically conducted among a random sample of the company’s customer base. Relational customer experience is used to understand the cumulative impressions customers form about their entire customer experience with the company. Importantly, this type of customer experience research is often the chassis for ascertaining specific aspects of the experience important to predicting loyalty and other customer behaviors. 

A. Transactional Customer Experience

In a transactional customer experience study, we focus on the details of a customer’s specific recent transaction. For example: 

  • The respondent’s most recent visit to Wendy’s 
  • The customer’s visit yesterday to her local Deutsche Bank branch 
  • Last week’s call to the Blue Cross/Blue Shield customer service center 
  • The respondent’s visit, 10 days ago, to Nielsen Nissan in Chesterton, Indiana, for routine auto maintenance. 

The overall rating we ask for is the respondent’s overall evaluation of the specific transaction (visit, stay, purchase, or service). The attribute ratings are likewise specific to that transaction.

B. Relational Customer Experience  

A relational customer experience study is broader in coverage. Here, we ask about the totality of the relationship with a company. In a relational customer experience study, the questions relate to the overall, accumulated experience the customer has had with the company. So rather than ask about the timeliness of an oil change at Nielsen Nissan and the quality of that service, the relational survey would ask for the respondent’s overall perceptions of Nielsen Nissan’s services across all the times the customer has interacted with that dealership. 

The overall ratings are often overall satisfaction with the relationship as a whole, willingness to recommend, and likelihood to return. Attributes are similarly broader in scope. We would not ask the customer about her satisfaction with the speed of service for her last oil change; instead, we would ask about her satisfaction with the speed of service she usually gets when she visits Nielsen Nissan.

C. Sampling Differences Between Transactional and Relationship Studies

In addition to the content of the surveys, a critical difference between these two studies is the sampling frame. In a transactional customer experience study, we sample customers who have interacted with the company recently. This is also sometimes called “trigger-based” customer experience since any type of interaction with the company can “trigger” the inclusion in a transactional customer experience study. 

In a relational customer experience study, we typically sample from the entire base of customers, including people who may not have interacted with the company recently. A relational customer experience study is projectable to the entire customer base, while a transactional customer experience study covers only a subset of customers: those who have interacted recently.

When leveraging customer experience information with internal information, transactional customer experience information is often linked to operational metrics (such as wait time, hold time, staffing levels, etc.). In turn, through the use of bridge modeling, transactional research is often linked to relational customer experience, which is then linked to downstream business measures, such as revenue, profitability and shareholder value-add. 

D. Recommendations for Relationship Surveys 

Survey Content: As mentioned above, relationship surveys are meant to measure the totality of customers’ experiences with a given company. They are also meant to determine how customers feel about the company NOW. It is important to note that customers’ overall feelings about a company (as measured in relationship surveys) are often NOT the average of their transactional experience evaluations. This is because certain transactions, especially negative ones, can have a much larger effect on overall feelings toward a company than others.

Most relationship surveys contain questions addressing: 

  • Overall Metrics such as Likelihood to Recommend the Company, Overall Satisfaction with the Company, and Likelihood to Return or Repurchase 
  • High-level brand perceptions 
  • Company service channel usage and evaluations, such as store/dealership, finance company, call center/problem resolution teams, etc.
  • Product usage and evaluations 
  • Share of Wallet measures 
  • Marketing/communication perceptions 

Survey Sampling: Who, how often, and how many customers do you need to survey? There are no hard-and-fast rules, but remember the idea is to obtain a representative sample of your customers. With that in mind:

Who to Survey: All customers (whether they are recently active or not) should be available for sampling. You also might want to oversample small but important groups of customers (e.g., millennials, new owners, etc.) to ensure that you receive enough returns to analyze these groups separately. However, if you do oversample you will need to weight your data back to your customer demographics to ensure representative overall results. 
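As a sketch of that weighting step: if a group makes up 20% of your customer base but 40% of your responses because you oversampled it, each of its responses should count for less. A common approach is to weight each response by (population share ÷ sample share). The group names and numbers below are hypothetical:

```python
def post_stratification_weights(pop_shares, sample_shares):
    """Weight each group so the weighted sample matches population shares."""
    return {g: pop_shares[g] / sample_shares[g] for g in pop_shares}

# Hypothetical example: millennials are 20% of customers but were
# oversampled to 40% of responses.
pop = {"millennials": 0.20, "other": 0.80}
sample = {"millennials": 0.40, "other": 0.60}
weights = post_stratification_weights(pop, sample)

# Millennial responses get weight 0.5, so the weighted share of
# millennials in the sample returns to the 20% seen in the population.
weighted_millennial_share = (sample["millennials"] * weights["millennials"]) / (
    sample["millennials"] * weights["millennials"]
    + sample["other"] * weights["other"]
)
```

Real programs usually weight on several demographics at once (raking), but the principle is the same: the weighted sample should mirror the customer base.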

How Often to Survey: While transactional CX research is usually done on a continuous basis, relationship studies are usually conducted once or twice per year. How often companies conduct relationship studies is usually determined by the number of customers available (i.e., are there enough to conduct the study twice per year?) and when and how often decisions will be made based on the findings. 

How Many to Survey: This is the question clients ask most often, and the basic answer is that it depends on the organizational level at which you need the results to be representative. The good news is that if you only need to make decisions at the company-wide level, about 1,000 well-sampled responses is sufficient. For most large companies, that is a very small percentage of their customers. However, if you want the findings to be representative of lower levels of the organization for comparison purposes (e.g., zones, districts, stores), or of certain customer groups (e.g., millennials, minorities, long-term customers), calculations need to be performed to determine the number of responses required for each group. Unfortunately, as the population size (e.g., company customers, zone customers, store customers) goes down, the percentage of that population needed to represent it goes up. For example, to obtain +/- 3 percentage point precision:

  • Population of 3,000,000: 1,067 randomly sampled returns (just 0.04% of the population)
  • Population of 30,000: 1,030 returns (3.4%)
  • Population of 3,000: 787 returns (26.2%)
  • Population of 300: 234 returns (78.0%)
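Those figures follow from the standard sample-size formula with a finite population correction. Here is a sketch, assuming the conventional 95% confidence level (z = 1.96) and the worst-case proportion p = 0.5:

```python
def sample_size(population, margin=0.03, z=1.96, p=0.5):
    """Responses needed for +/- `margin` precision at a given confidence level.

    Uses the infinite-population sample size n0 = z^2 * p * (1 - p) / margin^2,
    then applies the finite population correction n = n0 / (1 + (n0 - 1) / N).
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return round(n0 / (1 + (n0 - 1) / population))

# Reproduces the counts quoted above: 1,067 / 1,030 / 787 / 234.
for n in (3_000_000, 30_000, 3_000, 300):
    needed = sample_size(n)
    print(f"Population {n:>9,}: {needed:>5,} returns ({needed / n:.1%})")
```

Note how the absolute number of returns barely moves for large populations; it is only for small populations (a single store’s customers, say) that you need to reach a large fraction of them.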


E. Summary 

Both transactional and relationship surveys are key parts of any comprehensive customer experience program. Transactional surveys are great for assessing the quality of specific customer touch points and making improvements in those areas. Relationship surveys allow for the assessment of the entire customer experience across all touchpoints and therefore more closely relate to customer behaviors such as loyalty, customer spend, and customer advocacy.
