Top 5 Game-Changing Experience Improvement Blogs from 2020

2020 asked us to step up our game—a lot. In fact, the last year seems to have contained multiple years, with January and February feeling like they were ages ago. Organizations have had to pivot multiple times since March to navigate the coronavirus pandemic, but savvy brands have found a secret weapon: Experience Improvement (XI) initiatives.

XI initiatives provide a pathway for brands not only to listen to how customers feel about specific experiences (like COVID-19-specific policies, curbside pickup, etc.), but also to understand what actions they need to take to improve those experiences in a timely manner. In a way, a well-designed program serves as a roadmap through uncharted territory.

But how do you successfully set up such a program? Well, you’ve come to the right place for the answer. The InMoment XI Blog is your go-to place for everything Experience Improvement, from how-to’s, to what’s next, and even stories of rockstar brands.

Here are a few of our favorite blogs from 2020 at a glance:

Top Five 2020 Blogs for Experience Improvement

  1. What Does Customer Experience Look Like in the World of Coronavirus
  2. How to Ensure Successful Survey Design during a Pandemic
  3. 3 Powerful Ways to Create Engaging Transactional Customer Surveys
  4. How to Truly Understand Customer Needs, Wants, and Expectations
  5. Why Market Research is Vital to Your CX Program in Times of Crisis (and Beyond!)

What Does Customer Experience Look Like in the World of Coronavirus

This was our flagship piece of thought leadership on Coronavirus best practices. Though our experts Jim Katzman and Eric Smuda authored this piece in March, these best practices remain vital for brands going into 2021. After all, it will still be a few more months before the vaccine can be distributed widely enough!

Click here to get the low-down on the top five ways brands can leverage their experience programs in their COVID-19 strategy.

How to Ensure Successful Survey Design during a Pandemic

One of the most common questions clients asked our expert practitioners in 2020 was, “Should we alter our survey because of Coronavirus precautions?”

Their answer: it depends. More specifically, there are three factors brands should consider before making changes to their survey. You can read about them here.

3 Powerful Ways to Create Engaging Transactional Customer Surveys

A successful listening approach includes multiple surveys, each with a specific purpose. One of the most important for understanding the experience at different touchpoints is the transactional survey.

But as it goes with everything, there are best practices, and there are practices that can stop productivity in its tracks. In this blog, we have three specific strategies you can employ for engaging, intelligence-gathering, action-inspiring transactional surveys. Check it out here!

How to Truly Understand Customer Needs, Wants, and Expectations

How do you deliver incredible experiences that make customers eager to come back for more? You first need to understand what customers expect from your brand. This is one of the fundamental functions of an Experience Improvement initiative; it is also one of the most powerful ways your program can positively impact your bottom line.

In this article on the InMoment XI Blog, strategist Eric Smuda walks you through the process he uses to help our clients understand their customers. Read more here.

Why Market Research is Vital to Your CX Program in Times of Crisis (and Beyond!)

The thing about unprecedented situations is that the information you need to guide your efforts will not be in your existing data. That makes times of crisis the ideal moment to turn to a market research solution.

In this article, Strategic Insights Team expert Radi Hindawi discusses the power of market research and three rules for brands looking to weave it into their strategy. You can find it here.

We hope you have enjoyed the content on the InMoment XI Blog this year, and our team looks forward to bringing you even more thought leadership, best practices, and customer stories in 2021!

The Shortcomings of Comment-Based Surveys

Comment-based surveys can be effective for immediately gathering feedback from customers. And when it comes to customer experience (CX), timeliness can make or break an organization’s ability to act on that feedback.

However, there are several arenas in which brands use comment-based surveys when another survey type would yield better intelligence. Today, I’d like to dive into several shortcomings that can make using comment-based surveys challenging for brands, as well as a few potential solutions for those challenges. Let’s get started.

Outlet-Level Analysis

As I discussed in my recent article on this subject, comment-based surveys are often less effective than other survey types for outlet-level analysis. In other words, while brands can see how well stores, bank branches, and the like are performing generally, they usually can’t determine where individual outlets need to improve.

The reason for this has as much to do with the feedback customers leave as with the survey design itself. From what I’ve seen across decades of research, customers rarely discuss more than one or two topics in their comments. Yes, customers as a group may touch upon many topics, but any single comment rarely covers more than a few of them.

What all of this ultimately means for brands using comment-based surveys to gauge outlet effectiveness is that the feedback they receive is almost always spread thin. The intelligence customers submit via this route can cover many performance categories, but it usually lacks depth, making it difficult for brands to identify the deep-rooted problems or process breakages they need to address at the unit level to improve experiences.

(Un)helpful Feedback

Another reason that brands can glean only so much from comment-based surveys at the outlet level is that, much of the time, customers provide only superficial comments like “good job,” “it was terrible,” and the immortally useless “no comment.” In other words, comment-based surveys can be where specificity goes to die.

Obviously, there’s not a whole lot that the team(s) running a brand’s experience improvement program can do with information that vague. Comments like these contain no helpful observations about what went right (or wrong) with the experience the customer is referring to. The only solution to this problem is for brands to be more direct with their surveys and ask for feedback on specific processes.

How to Improve Comment-Based Surveys

These shortcomings are among the biggest reasons brands should be careful about using comment-based surveys to diagnose processes, identify employee coaching opportunities, and assess how well outlets are adhering to organization-wide policies and procedures. However, none of this means that comment-based surveys should be abandoned. In fact, there’s a solution to these surveys’ relative lack of specificity.

Brands can encourage their customers to provide better intelligence via multimedia feedback. Options like video and image feedback enable customers to express themselves in their own terms while also giving organizations much more to work with than comment-based surveys can typically yield. Multimedia feedback can thus better allow brands to see how their regional outlets are performing, diagnose processes, and provide a meaningfully improved experience for their customers.

Click here to read my Point of View article on comment-based surveys. I take a deeper dive into when they’re effective, when they’re not, and how to use them to achieve transformational success.

Improving Survey Response Rates Through Incentives

From time to time, customer experience managers will hear the following questions from their internal clients: “Is our response rate too low?”; “What can we do to increase our response rate?”; or “Should we provide an incentive for people to respond?” Like many things in research, these relatively simple questions have somewhat complex answers.

When faced with these questions, the first thing to address is what issue is really being raised. Is the question really about increasing response rates (the percentage of people who respond to a survey invitation), about increasing the total number of responses at a given level of the organization (e.g., dealerships), or about improving the representativeness of the responses obtained? Improving the response rate is often not the most effective way to increase the total number of responses and/or improve representativeness.

Increasing the Number of Responses and Improving Representativeness

To increase responses at the unit level and improve representativeness, the first place to look is the sampling scheme. Is the program sampling only a small percentage of customers in an attempt to control costs? If so, it is often more economically feasible to sample more customers and not use an incentive than it is to provide an incentive to increase response rates of a smaller sample.
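To make that trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure (per-invitation cost, incentive cost, response rates) is a hypothetical assumption chosen for illustration, not a benchmark from this article.

```python
# Hypothetical comparison: a larger sample with no incentive vs. a
# smaller sample with a monetary incentive. All numbers below are
# illustrative assumptions, not industry benchmarks.

def expected_responses(sample_size, response_rate):
    """Expected number of completed surveys."""
    return sample_size * response_rate

def total_cost(sample_size, cost_per_invite, incentive_cost=0.0):
    """Cost of fielding the survey; the incentive is paid per invitation."""
    return sample_size * (cost_per_invite + incentive_cost)

# Option A: sample 10,000 customers, no incentive, assumed 10% response rate.
a_responses = expected_responses(10_000, 0.10)
a_cost = total_cost(10_000, cost_per_invite=0.50)

# Option B: sample 2,000 customers; assume a $1 incentive lifts the rate to 20%.
b_responses = expected_responses(2_000, 0.20)
b_cost = total_cost(2_000, cost_per_invite=0.50, incentive_cost=1.0)

print(f"A: {a_responses:.0f} responses at ${a_cost:,.0f} "
      f"(${a_cost / a_responses:.2f} per response)")
print(f"B: {b_responses:.0f} responses at ${b_cost:,.0f} "
      f"(${b_cost / b_responses:.2f} per response)")
```

Under these assumed numbers, the larger un-incentivized sample yields more responses at a lower cost per response, which is the point of the paragraph above; with different costs or uplift, the comparison could of course flip, which is why it is worth running for your own program.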

Another aspect of the sampling scheme to examine is whether important segments of customers are being excluded from the sampling frame. For instance, in the automotive industry it has typically been the practice that customer-pay (as opposed to warranty) customers are excluded from dealership service experience surveys, even though most dealerships do much more customer-pay service work than warranty work. This practice started because of difficulties getting access to customer-pay records. Now that mechanisms are in place for most manufacturers to obtain customer-pay records, these customers should be included in the sampling frame.

Obviously, inclusion of these customers will increase the representativeness of the returns because an important part of the dealership’s business will now be included in the responses.

Improving Response Rates

If the question is indeed about improving the response rate, or if improving the response rate is likely to be the best way to improve representativeness and/or the number of responses, providing an incentive to customers is often not the most effective tactic. The choice of whether to respond to a survey invitation is a cost-benefit decision for the customer: how much will completing the survey cost the customer versus what benefit will he or she receive? At first glance, one might think that there is no cost to the customer to respond. However, there are many costs, and these costs have been increasing over the past few decades. These include:

  • Time–People are more pressed for time than in years past, and they are solicited for research more often than before.
  • Effort–Many surveys are long and complicated.
  • Hassle/Boredom–Some customers feel “duped” by agreeing to take what they think is a short survey and then finding out it is quite long; many surveys contain boring and repetitive questions.
  • Potential for Loss of Privacy–Many customers worry that their information will not be kept confidential.
  • Potential of Being Put on Numerous Mail/E-mail/Phone Lists–Many customers are concerned that their contact information will be sold to other companies and used for marketing purposes.
  • Potential for Being Subjected to a Sales Pitch–With the increase in Selling Under the Guise of Research (“sugging”), customers are more skeptical about the legitimacy of survey invitations.

On the benefits side of the equation, in years past customers often felt special and valued because they were being asked for their opinions. Unfortunately, as survey research has proliferated, being asked for your opinion is no longer a unique experience that conveys “specialness.” Customers also seemed more motivated to contribute to the “greater good” by providing feedback about products and services than they are today. Some argue that the younger generations are less interested in the greater good and have even labeled Generation Y the “What’s in It for Me?” generation. Also, those interested in providing feedback now have many ways of doing so (e.g., blogging, posting comments at customer-generated media sites, etc.) instead of completing a survey.

Look at Both Sides of the Customer Cost/Benefit Equation

To increase response rates, researchers should look at both sides of the customer cost/benefit equation by seeking to decrease the cost to the customer and increase the benefits of participation. Some suggestions for reducing the customers’ costs are:

  • Coordinate customer touch points. Many companies inadvertently over-survey their customers because different departments or divisions conduct independent research programs.
  • Make the task as easy as possible.
  • Make the survey as short as possible, but not shorter than it needs to be. Sometimes customers interpret a very short survey as the company not really being interested in their opinions and just “going through the motions” of gathering customer feedback.
  • Make the survey as interactive and entertaining as possible, while maintaining collection of valid information.
  • Give customers the opportunity to choose how and when to respond.
  • Give customers the ability to “tell their story” rather than only answering a large number of specific closed-ended questions. Then use text analytics to gather insights from the customers’ comments.
  • Be very specific about how the information will and will not be used.
  • Avoid “nice to know” questions that are often included “because we have them responding anyway.”
  • Avoid sensitive questions (e.g., income, sexual orientation) unless they are really necessary. If they must be asked, explain to the customer why you are asking the questions and what will be done with the information.
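One of the bullets above suggests letting customers “tell their story” and then mining the comments with text analytics. As a minimal illustration of that idea, here is a keyword-based theme tagger in Python; the themes, keywords, and sample comments are all hypothetical, and production text-analytics platforms use far more sophisticated techniques.

```python
# Minimal keyword-based theme tagging for open-ended survey comments.
# Themes, keywords, and sample comments are hypothetical; real text
# analytics uses much more sophisticated natural-language processing.
from collections import Counter

THEMES = {
    "wait time": ["wait", "slow", "line", "queue"],
    "staff":     ["staff", "friendly", "rude", "helpful"],
    "price":     ["price", "expensive", "value"],
}

def tag_themes(comment: str) -> set:
    """Return the themes whose keywords appear in the comment."""
    words = set(comment.lower().split())
    return {theme for theme, keywords in THEMES.items()
            if words & set(keywords)}

comments = [
    "The staff were friendly but the wait was long",
    "Great value for the price",
    "Rude staff and a slow line",
]

# Count how often each theme is mentioned across all comments.
theme_counts = Counter(t for c in comments for t in tag_themes(c))
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Even this crude tagging turns free-form stories into countable themes, which is what lets brands keep surveys short while still learning where each outlet needs to improve.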

Ways of increasing the benefits of participation to the customer are:

  • Send customers a “thank you” and briefly explain how the information is used.
  • Show customers how the information is being used. For example, some companies have posted signs in their retail outlets telling customers what improvement efforts are being made due to customer feedback.
  • Assure customers they will get a personal follow-up if they request it and will not get one if they don’t. It is very important that companies follow up on these promises; otherwise, dissatisfied customers will become even more upset.
  • Consider allowing customers to see other customers’ feedback. People are social beings and they often want to know if their experience was typical or atypical.
  • Provide an appropriate reward with monetary value for responding.

Considerations When Using Monetary Incentives

In most circumstances, to increase response rates we recommend investigating the non-monetary methods listed above before considering use of a monetary incentive (or any incentive with monetary value – e.g., a free oil change or a discount coupon for your next purchase). If not done properly, monetary incentives have the potential to bias the responses. This brings us to the issue of what makes an incentive appropriate.

Generally, the smaller the incentive the better. This is not only because smaller incentives are more economical; it is primarily because larger incentives have more potential to bias results. There are two main concerns with large incentives. First, as incentives increase, respondents are more likely to complete the survey just to get the incentive; they may then pay little or no attention to the questions they are answering and provide bad information. Unfortunately, bad information is worse than no information at all. Second, larger incentives may bias the sample by encouraging lower-income individuals to respond at greater rates than higher-income individuals. One thing to consider when using a small monetary incentive is that it should be framed as “a small token of our appreciation.” If customers believe you are trying to compensate them for their time with a small incentive, they can become offended. Beyond size, a few other guidelines apply:

  1. If possible, provide the incentive to everyone being sampled rather than promising an incentive to those who complete the survey. In the case of cash incentives to complete a mail survey, most research has shown that inclusion of a small amount (e.g., $1) is more effective at increasing response rates than promising a larger amount (e.g., $5) upon return of the survey. There are many potential reasons for this, but probably the largest is customers’ skepticism that they will receive the promised reward.
  2. The incentive should be something of equal value to everyone, regardless of their experience. Incentives such as discount coupons for the next purchase or the promise of a free oil change have two major problems. First, they are more valuable to people who intend to return to the retailer (e.g., those who previously had a good experience) than to those who are unlikely to return, so they can bias the results. Second, customers can see them as “just another marketing ploy.”
  3. The incentive must match the methodology and the geography. Including a dollar bill with mail surveys is relatively easy in the U.S., but it is obviously difficult to do for online or phone surveys. It is also difficult to include money in Canadian mail surveys because the one- and two-dollar denominations are coins, and the added weight of including them increases postal rates.

A Quick Look at Some Common Incentives

Inclusion of a Dollar Bill with a Mail Survey. Surprisingly, when using monetary incentives, this is still one of the most effective ways to increase response rates for mail surveys. This technique is particularly appropriate for small survey programs but can become financially infeasible for large programs.

Entry into a Lottery to Win a Large Prize upon Return of the Survey. For mail surveys, this technique is generally not as effective at increasing response rates as including a dollar bill with the outgoing survey. However, for large programs it is often more economically feasible than a one-dollar incentive; for smaller programs it is less so. A lottery is also easier to implement with online and telephone surveys. There are numerous laws and regulations concerning the use of lotteries as an incentive, and it is strongly recommended that a professional promotions management company be employed to manage the lottery.

Providing Discount Coupons. As discussed above, this is generally discouraged because of the potential to bias results and be seen as a marketing effort.

Contributing to Charity in the Customer’s Name. In general, this technique is not as effective at increasing response rates as either the dollar-bill or lottery alternatives. If considering this alternative, it is important to offer a number of different charities the customer can choose from. Otherwise, the potential to bias the sample increases, because those in favor of a particular charity’s cause might respond at higher rates.

Including a monetary incentive for customers to return experience surveys is not a decision that should be made lightly; it is fraught with potential problems. In general, non-monetary ways of improving representativeness, the number of surveys returned, and response rates should be explored before considering incentives with monetary value. When considering monetary incentives, it is important to match the incentive to the size, methodology, and geography of the program. Conducting a pilot test that assesses the costs (both financial costs and results bias) and benefits (in terms of increased response rates) of several different types of monetary incentives is recommended.
