The Customer Satisfaction survey asks customers a single question: how satisfied they are with the product or service. The rating scale used for the question is typically 1 to 5.
The most common customer experience survey questions use different rating scales, which can easily become confusing: CSAT uses a simple 1 to 5, the Customer Effort Score (CES) question uses 1 to 7, and the NPS question uses 0 to 10. In the end, the scale itself does not matter much, and some organisations create their own custom metrics. However, having a standard gives customer experience professionals a consistent frame of reference.
Another consideration in measuring CSAT is how many surveys need to be sent out, and how many responses need to be received, to obtain a reliable score. You might remember from a statistics class in college that the required survey sample size n is:
n = (z*𝜎/𝜀)^2
where 𝜀 is the margin of error, z is the z-score for a given confidence level, and 𝜎 is the standard deviation.
Interestingly, the number of survey responses you need does not depend on how many customers you have in the first place. More customers does not mean you have to poll more of them. How many customers you need to poll depends instead on how much variance there is between their answers to the CSAT question (the standard deviation 𝜎 is the square root of the variance). This makes intuitive sense: if the first responses that come in are all 4's and 5's, you'll expect future responses to be in the same ballpark. If you sometimes get 1's and sometimes 4's, you'll want to keep polling to narrow in on a score. In short, more variability requires larger samples.
The second factor that drives sample size is the margin of error. For a CSAT scale of 1 to 5, let's say you'd be comfortable with +/- 0.5 points from the mean, a 10% margin of error on the 5-point scale.
Taking the common 95% confidence level, z = 1.96, and supposing a previous survey showed a standard deviation of 1.4 points, the sample size would be n = (1.96 × 1.4 / 0.5)^2 ≈ 30 responses. If a tighter margin of error of +/- 0.25 points were required, halving 𝜀 would quadruple the sample size to roughly 120.
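The calculation above is easy to script. Here is a minimal sketch in Python (the function name is mine, not from any particular survey tool); note that the result is rounded up to a whole respondent, which is why the exact outputs are 31 and 121 rather than the approximate 30 and 120 quoted above:

```python
import math

def csat_sample_size(sigma, margin, z=1.96):
    """Required responses: n = (z * sigma / margin)^2, rounded up.

    z defaults to 1.96, the z-score for a 95% confidence level.
    sigma is the standard deviation of responses; margin is the
    acceptable +/- error around the mean score.
    """
    return math.ceil((z * sigma / margin) ** 2)

# Values from the worked example: sigma = 1.4, margin = +/- 0.5 points.
print(csat_sample_size(1.4, 0.5))   # 31
# Halving the margin of error quadruples the sample size.
print(csat_sample_size(1.4, 0.25))  # 121
```

The same function makes it easy to see the other point from above: population size never appears, only the spread of the answers and the precision you want.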
The old-fashioned way for a company to compute its CSAT would be to mass email and/or direct mail a subset of customers twice a year, using a formula like this one to calculate the sample size needed to reach statistical significance. However, by the time the survey is conducted and the results compiled, the company's products, services, and processes will have evolved, and the CSAT score would already be obsolete. Closing the loop on any negative feedback would be less than timely and less effective at reducing churn. Finally, email or direct mail can be appropriate channels if that is the best way to engage with your customers, but their low response rates often require larger samples.
The demand for more accurate and timely customer insights has created a niche for software vendors to develop tools that handle the surveying, sampling, computation, and monitoring of standard customer experience metrics such as CSAT, CES, and NPS. As more and more companies have moved their products and services online, some of these vendors offer the ability to survey customers directly within the web or mobile experience. Polling customers right as they're engaging with the brand at key journey points generates more contextual feedback and a chance to remediate issues quickly.
Taking the customer pulse and computing CSAT twice a year is not enough to manage a business proactively. Tools are now available to get a real-time CSAT for anyone in the company to see and rally around.