The Science and Art of CX Goal Setting

In this blog I will address a question that I’ve come across many times during my 20 years as a research consultant: “What is the best way to set goals for my CX program?”

As most of you probably know, there are several important aspects of goals you need to consider. I like using the SMART acronym for setting motivating goals because it is comprehensive and, better yet, easy to remember.

Using this acronym, goals should be:

  • Specific: Precisely state what needs to be accomplished.
  • Measurable: Clearly define what criteria will be used to determine if the goal is met and how it will be measured. Make sure measurement processes are in place and are valid.
  • Attainable: Set a goal that is challenging but realistically reachable.
  • Relevant: Make sure the goal pertains to the specific person or group trying to achieve the goal. In other words, the person's or group's behavior needs to have a significant impact on whether or not the goal is achieved.
  • Time-Based: Set a firm timeline for when the goal needs to be achieved, but make sure the timeline is realistic.

All of these attributes of goals can be defined further, but I think the trickiest one is trying to set attainable goals. Therefore, I’m going to focus on what things you should consider when setting realistic but challenging goals.

In the customer experience world, most goals are "outcome goals" rather than "performance goals."

Outcome goals usually focus on obtaining a score on a specific measure such as overall satisfaction with a given transaction (e.g., customer contact center call, product purchase experience, etc.), customer likelihood to recommend the brand, customer relationship satisfaction with the brand, or customer retention/repurchase behavior. Because outcome goals are the most prevalent, I will focus on them.

Goal Considerations

What Is Your Current Score?

One of the first things to consider is where the score is now. Is it already quite high? If so, any improvement you are targeting will be more difficult to obtain than if the score is relatively low. It is usually much easier to move a score when there is a lot of room for improvement than when the score is nearing the ceiling of the scale.

For this reason, I like to set goals in terms of "percent of opportunity." For instance, if we have a goal criterion measured on a 100-point scale, a "ten percent of opportunity" goal would translate to 5 points if the current score is 50 (100 – 50 = 50; 10% of 50 is 5) but only 2 points if the current score is 80 (100 – 80 = 20; 10% of 20 is 2).
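The "percent of opportunity" arithmetic above can be sketched as a tiny function. The function name and the 100-point ceiling are just illustrative choices matching the example in the text:

```python
def opportunity_goal(current_score, pct_of_opportunity, scale_max=100):
    """Point improvement corresponding to a given percent of the remaining opportunity."""
    room = scale_max - current_score        # how far the score can still rise
    return room * pct_of_opportunity

print(opportunity_goal(50, 0.10))  # 5.0 points of improvement
print(opportunity_goal(80, 0.10))  # 2.0 points of improvement
```

Notice how the same "ten percent of opportunity" target automatically shrinks as the score approaches the ceiling of the scale.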

You should also consider when the score is “high enough” and no improvement is needed. While most companies want to focus on continuous improvement, there does come a time when improvement efforts are unnecessary and perhaps counterproductive. 

What Are the Past Trends in the Score?

Next, consider how the score has been trending. It will obviously be more difficult to improve a score that has been declining than one that has been increasing. For instance, consider the two trend lines below.

Figure A
Figure B

These scores are mirror images of each other, with the one on the top (Figure A) decreasing an average of about two points per quarter whereas the one on the bottom (Figure B) shows an average increase of about two points per quarter. Therefore, if no improvement efforts are put in place, one can reasonably expect two different outcomes for the score in the next quarter (48 for Figure A, and 52 for Figure B). For the next quarter, a reasonable goal for the measure in Figure A might be a score of 50 (just stop the decline) whereas a goal of 54 (a little more than where the score would likely be anyway) might be appropriate for Figure B.

How Do You Consider the Variance of the Score?

You also need to consider the variance of the score. This part gets a bit "stat-sy," but bear with me. Scores are a lot easier to move if they have a wide distribution than if they are narrowly distributed. Consider the distributions of the two measures shown below.

The one on the top (Figure C) has a standard deviation of 10 points (a standard deviation is basically the average distance the individual scores are from the scores' overall average) whereas the one on the bottom (Figure D) has a standard deviation of 20 points. You can see how much narrower the distribution in Figure C is compared to the distribution in Figure D.

Figure C
Figure D

In a normal distribution, about 68% of the scores fall between one standard deviation above and below the overall average. What that means in this case is that going from 50 to 60 is moving past roughly 34% of the population for the scores represented in Figure C, but only about 19% for Figure D. For this reason, I often use something like "what is ½ of the standard deviation" as a first estimate of what I might want to use as an improvement goal.
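The percentages above can be checked with the exact normal CDF. This sketch assumes a mean of 50 and the two standard deviations from Figures C and D:

```python
from statistics import NormalDist

for sd in (10, 20):  # Figure C vs. Figure D
    dist = NormalDist(mu=50, sigma=sd)
    passed = dist.cdf(60) - dist.cdf(50)  # share of population between 50 and 60
    print(f"sd={sd}: moving 50 -> 60 passes {passed:.1%} of the population")
```

With a standard deviation of 10 the same ten-point move crosses about 34% of customers; with a standard deviation of 20 it crosses only about 19%.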

By the way, looking at the standard deviation also gives you a good sense of how to adjust performance goals for different sized scales. For instance, the standard deviation for a 100-point scale will likely be much smaller than the standard deviation for a 1000-point scale. Setting an improvement goal of 5 points for the 100-point measure might very well be equivalent to setting a 40- or 50-point improvement goal for the 1000-point measure.
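Expressing the goal in standard-deviation units is what makes it transfer across scales. The two standard deviations below are illustrative assumptions, not measured values:

```python
sd_100 = 10    # assumed SD on a 100-point scale
sd_1000 = 90   # assumed SD on a 1000-point scale

goal_in_sd = 5 / sd_100            # a 5-point goal is 0.5 SD on the 100-point scale
equivalent = goal_in_sd * sd_1000  # the same 0.5-SD goal on the 1000-point scale
print(equivalent)  # 45.0 points
```

With these assumed standard deviations, the 5-point goal lands in the 40-to-50-point range the text mentions for the larger scale.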

What Are Your Improvement Initiatives?

Finally, consider what improvement initiatives you have planned. If you aren’t going to put improvement initiatives in place, you can expect little change in your outcome measures, except for those explained by how your scores have been trending. 

Even if you do have improvement initiatives planned, you need to make sure they have time to work before the next measurement of the outcome variable. Be careful when deciding this, because it is easy to underestimate the time an improvement initiative will take. Remember, you need time to develop the initiative, implement it across your organization, let your organization put it into practice and get good at it, and then allow the implementation to affect your outcome measure. Some measures (e.g., transactional customer satisfaction) are relatively fast to show change whereas other measures (e.g., customer retention and customer loyalty) can take months or years to show effects.

The Science and Art

I titled this blog “The Science and Art of CX Goal Setting” because I think you do the “science” parts first and the “art” part second. The science is everything I have talked about until now. The art is how you put it all together. While that will vary depending on your situation, I usually start with the following thought process:

  • Is the score high enough already? If so, there is no need to set an improvement goal. Just focus on maintaining the same level.
  • What would the score be if I extrapolate out the trend? I use that as my “no improvement” starting point.
  • What is the standard deviation of the score and what “percent of opportunity” does that represent? Does this seem like a reasonable improvement goal over the “no improvement” starting point?
  • How much effort is the organization going to put into improving the processes that drive the outcome goal? How long will these efforts take to make an effect?

Taking all of these things into consideration, I adjust the goal accordingly, but again, in the end it is an "art" rather than a "science."
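As a toy illustration only, the thought process above might combine like this. The "good enough" threshold and the half-SD stretch amount are assumptions for the sketch, not a prescribed formula:

```python
def suggest_goal(current, trend_per_period, std_dev, ceiling=100, good_enough=90):
    """Rough first-pass goal combining the heuristics discussed above."""
    if current >= good_enough:
        return current                    # score is high enough: maintain, don't stretch
    baseline = current + trend_per_period # "no improvement" starting point from the trend
    stretch = 0.5 * std_dev               # half-SD first estimate of the improvement
    return min(baseline + stretch, ceiling)

print(suggest_goal(current=50, trend_per_period=-2, std_dev=10))  # 53.0
print(suggest_goal(current=50, trend_per_period=2, std_dev=10))   # 57.0
print(suggest_goal(current=92, trend_per_period=1, std_dev=10))   # 92
```

In practice the output is only a starting number: the "art" is adjusting it for the improvement initiatives actually planned and how long they take to show effects.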

Enjoy the goal setting exercise! If you’d like to learn more about how you can set CX goals and develop a comprehensive CX strategy, check out this white paper!

About Author

Dave Ensing, Ph.D. VP, Research Consulting

David provides research design consultation to clients, facilitates continuous improvement of existing studies and manages InMoment's automotive research services group. David has over 20 years of experience conducting and overseeing customer experience programs at InMoment.
