Earlier this year I shifted my priorities and focus, and chose pleasure and purpose over other engrained habits and needs.

I was so “busy being busy” that I had forgotten the importance of slowing down and taking stock. I had neglected the basic need to actively listen and properly connect with the various voices and opinions available to me. As someone who delights in identifying patterns, sharing theories, and having an opinion, I had found myself too often recycling old narratives, and I was running the risk of becoming stale or too comfortable with my long-established talk track.

I needed some new goals, and deadlines that would force me both to free up time to learn and to create the right environment to get energised. So I began running regular “industry sessions” for my colleagues, where we could discuss the hot topics of the day.

Out of this year’s sessions came some very intriguing stories and lessons from the customer experience (CX) industry. Here are just a few that I’d like to share with you.

Personalisation

Often seen in relation to targeted marketing, we as consumers have appreciated brands’ efforts to personalise our experience. And brands know that establishing a connection (friendliness, trust, being made to feel valued) drives customer satisfaction, loyalty, and increased spend.

But being “personal” has also created opportunities to strengthen important links, even if not always executed perfectly. Starbucks—as a global brand—is the antithesis of local, and yet it tapped into the importance of being valued in a unique way through the “Can I have your name?” approach. At first disruptive and peculiar, the approach is now hard for other brands to copy.

My local train station has a Starbucks franchise, and whenever I approach the counter they know I want a flat white. Unfortunately, despite my having tried on a number of occasions to tell Eddie that my name is not Matt, that is the name that appears on the cup. Despite it being wrong, my “Britishness” can only allow it to roll on now, and secretly I enjoy the regularity of this wrong. It is human and therefore wonderfully imperfect, and in many ways more effective than communication based on algorithms.

Emerging Labels And Transparency

We have perhaps already grown a little weary of contemplating millennials, and are now seeing more articles hypothesising about Gen Z (“the hyper millennials”) and what they will bring to the party. How different will their customer expectations be from those of us Generation X-ers? Are they really that much more sensible?

A requirement for authenticity, and for brands doing the right thing, may however be the need that bonds us all together: less X, Y and Z, and more Generation C (more connected to each other through social reviews than ever before). In the space of a few months, two hip brands saw the polar effects of how quickly word of mouth can kick in.

Airbnb was hit by the news that customers who had left part way through their stays were seeing their reviews cleansed, because the bookings were being treated as cancelled. Its spokesperson described these as “isolated incidents.” In contrast, Patagonia’s promise to donate all of its Black Friday sales to local environmental causes not only swelled its tills, but boosted awareness and equity as the positive word spread.

Language Shifts And Meaning

Back in April we were debating FOMO (fear of missing out) and how brands use this emotion to drive increased traffic—and paranoia—amongst their competitors. I doubt many of us saw, however, “post-truth” (apparently Oxford Dictionaries’ word of the year) coming up on the rails.

The definition of post-truth is: “Relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”

When the Temkin Group labeled 2016 as the year of emotion, I doubt they expected System 2 parts of our brain (slow, effortful, infrequent, logical, calculating, conscious) to be so heavily defeated by System 1 (fast, automatic, frequent, emotional, stereotypic, subconscious).

I should probably out myself here as a Guardian-reading, Radio-6-listening type, but I was certainly not the only person to view the recent UK and USA votes in a state of disbelief. Out of this darkness the team was at least able to properly understand two things:

1. Negativity bias: Humans are significantly more likely to remember negative experiences, and far more people pass on a bad experience than a good one—so reduce the chances as well as you can, and don’t underestimate how motivating anger can be.

2. Confirmation bias and our manufactured echo chambers: This describes the situation in which information, ideas, or beliefs are amplified or reinforced by transmission and repetition inside an “enclosed” system, where different or competing views are censored, disallowed, or otherwise underrepresented. The moral: listen to your electorate, customers, and colleagues, and don’t ignore feedback that simply does not match your own take on the world. It could well come back to bite you.

Storytelling And Journeys In Experience

Finally, something that we have further built on is our love of stories, a recognition of their importance, and how best to narrate ideas to connect with an audience.

For example, we have agreed on the right structure for delivering meaningful communication: set up the situation that we, or our customers, are in; share the catalyst that requires a change; explain the purpose and the central question that you will be answering; give your answer; provide the evidence; and summarise with a call to arms.

Part of the reason that story structures work so well to get a message across is that this is how our brains have evolved to take in important messages. Looking at experiences from a behavioural science perspective also offers lessons on how to structure any interaction for greatest effect: get the difficult things out of the way early (but try not to churn), spread the pleasure, and end on a high.

We can all benefit from taking care of our opportunities to communicate.

Before I conclude, for all those commentators out there who take the time to share and contribute to the mix of opinions and learning available to those willing to listen, I thank you.

We should all continuously remind ourselves that CX does not exist in a bubble. While my team started by looking at more CX-specific emerging topics, such as customer effort metrics, we soon found ourselves diverted into the outer reaches of behaviour and motivation in general. That is because we recognised that many factors can influence a brand’s ability to deliver against its customer promise, its employees’ capacity to deliver that proposition consistently, and indeed its customers’ appetite to appreciate and be motivated by these efforts.

My resolution for 2017 will be to stay curious, but contribute more. I will therefore leave you with a message that resonated with me this summer. It wasn’t from the usual sages, but from the English RFU as part of its Level 1 coaching course. Whilst aimed at how we coach and develop young rugby players, the argument works for all of us who are in a position of influence:

“Our players have the capacity to outgrow us if we stand still. We may restrict them from achieving their full potential if we fail to recognise the need to continue our own development.”

Emotion is coming to the forefront of Customer Experience (CX) management, not because it’s warm and fuzzy, and not because leveraging feelings is devilishly manipulative, but because when you use emotion to drive your CX efforts, it becomes a powerful differentiator.

More companies are getting better at the functional basics of customer experience, like responding in a timely manner to questions, streamlining the purchase process, and smoothing out onboarding (not to mention creating a decent product) – which means they need something unique to offer that separates them from their competition.  

What is the most distinctive, even unforgettable thing you can offer? The way you make your customers feel. It’s for this reason that the bar for CX is inching up.

The fact that understanding and influencing emotion is a vital ingredient for business success is not surprising — it has been the heart and soul of brand efforts. It is also the foundation of the emotion-recognition techniques (measuring physiological responses) currently in pilot for some retailers and old-school ethnographic research. – Forrester 2017 Predictions: Dynamics That Will Shape The Future In The Age Of The Customer

Emotion not only carries the ability to define your company in a sea of competitors, it can also inspire viral word of mouth marketing from people who love you and want to express that to a large audience, whether because they’re influencers with their own followers, or reviewers.

Bad things are worse than good things are better

We are hardwired as human beings to be more sensitive to negative events than positive events. And this sensitivity only increases when we’re in a heightened emotional state – focusing on the negative becomes even easier.

As odd as it may sound, this is good news for those of us in the business of relieving pain points. You’ll get more appreciation from your customer by removing pain than creating delight. So, if a customer comes to you with a problem, you can expect them to be in a heightened emotional state, which means not only should you tread carefully, you’ll do well to relieve their most urgent pain points as soon as possible!

Negative consequences take an enormous toll on us as a species. In fact, we’ll go farther out of our way to avoid negative consequences than we would for positive results of equal measure (this is called “loss aversion”). The behavior is predicated on the emotional truth that something bad feels worse than something good feels better. Losing $20 might wreck your day; finding $20 may make you happier for an hour.

How does this translate to CX?

Vanguard, one of the world’s largest investment companies, was getting ready to redo its site, and rather than just considering customer acquisition or lead-generating instruction, it studied how people felt about investing. It looked at whether its target audience was new to investing or had been investing for a while, and what their emotional baggage might be around the topic in general. It discovered that, new or experienced, most people feel overwhelmed. Visit Vanguard’s site now and the design is very simple, even sparse: the company knew that visual clutter would only heighten the feeling of being overwhelmed, and the new design reduces it.

Delta Air Lines also makes a point of reducing customer pains. It set up its phone systems so that if you call in response to a text message saying your flight was canceled, the automated system puts you straight through to the appropriate person rather than routing you through a dozen exhausting options.

United Airlines has been working diligently to improve its public image by tackling some of its thorniest customer experience pitfalls, like lost luggage. The airline recently introduced a service that lets fliers follow their luggage on the United smartphone app, and get text message alerts if their bags miss their destination. Instead of being angry and frustrated by lost bags, passengers are calling this “Amazing” customer service. As one passenger told the Huffington Post:

“After I arrived, I received a text message alert that one of my two bags did not make it and would be delivered to my address within 24 hours,” she says. “I also received an email where I could track my bag, see who was delivering it and at what time. At no time did I have to wait in line or on hold for them to rectify their mistake. They simply took care of it and kept me informed every step of the way. To me, that was amazing customer service.”

Amazon offers one of the most loved customer experiences, some argue, because it provides “an unparalleled sense of emotional satisfaction.” How do they do that? Not through being especially warm and fuzzy, but by reducing pain points with features like multiple wishlists, a save-for-later area, an easily accessible cart, and even more easily accessible price comparisons, along with shipping cost reduction and the nearly instant gratification of Prime. If and when a customer does have a problem, returns are easy and customer service gets top marks.

A lot of bad customer experiences are ‘death by a thousand cuts’ annoyances. The less you exacerbate pain in an already painful situation, the better the customer’s perception of their experience will be.

Emotions lead to loyalty – the key to growing SaaS businesses

Emotion is linked to loyalty (and CX is linked to emotion). According to one study, the hotel industry has the largest percentage of customers who report feeling “valued”: 88% of those “valued” customers will advocate for the hotel brand, and more than 75% will stay with it.

The TV service provider industry, unsurprisingly, has the largest percentage of customers who report feeling annoyed. Only 8% of these annoyed people express willingness to advocate for the TV service provider, and just over 1 in 10 intend to keep their existing relationships with the provider.

For the SaaS industry, retention is a key metric for profit and growth – you can’t afford to annoy, disappoint, or frustrate your customers. Essentially, customers are five times more loyal when they feel valued than when they feel annoyed.
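The underlying calculation is simple: segment survey respondents by the emotion they report, then compare loyalty intent across segments. The sketch below is a toy illustration of that pattern with invented records (the field names and figures are assumptions for the example, not data from the study cited above):

```python
# Toy illustration: segment survey respondents by reported emotion and
# compare advocacy rates between segments. All records are invented.
from collections import defaultdict

responses = [
    {"emotion": "valued", "would_advocate": True},
    {"emotion": "valued", "would_advocate": True},
    {"emotion": "valued", "would_advocate": False},
    {"emotion": "annoyed", "would_advocate": False},
    {"emotion": "annoyed", "would_advocate": False},
]

def advocacy_rate_by_emotion(rows):
    """Map each emotion to the fraction of its respondents who would advocate."""
    counts = defaultdict(lambda: [0, 0])  # emotion -> [advocates, total]
    for r in rows:
        counts[r["emotion"]][1] += 1
        if r["would_advocate"]:
            counts[r["emotion"]][0] += 1
    return {e: adv / total for e, (adv, total) in counts.items()}

rates = advocacy_rate_by_emotion(responses)
print(rates)  # "valued" respondents: 2 of 3 advocate; "annoyed": 0 of 2
```

The same segmentation works for retention or enrichment intent; only the answer field changes.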

The most important emotions for loyalty in the U.S. are, in fact, feeling valued, appreciated, and confident.

For example, there’s something about Slack that makes you feel confident (and a bit cool) that you’re part of something that’s on the leading edge. That’s not just because Slack is relatively new – they engender this feeling on purpose with Slack release notes (which are hilarious, self-deprecating, and charmingly relatable) that make updating the app a pleasure. Not only do they manage to keep everyone up-to-date, they remove the significant pain of updating an app and replace it with a positive emotion.

Note: Positive emotions that drive behavior like repurchases and advocacy differ by country and culture, even by customer base. In the UK, Germany and France, for example, the top three loyalty-inspiring emotions are slightly (yet significantly) different.

[Chart: Positive emotions that drive behavior, by country. Source: Forrester]

Loyalty weakening emotions differ by country and culture too. U.S. customers share their loyalty-weakening emotions with their U.K. friends.

[Chart: Emotions that weaken customer loyalty, by country. Source: Forrester]

Be sure to understand the emotions of your specific customer base rather than make assumptions.

Interestingly, customer loyalty itself comes in multiple flavors. Loyalty can mean retention (the customer will maintain existing business), enrichment (the customer will buy additional products and services), or advocacy (the customer will recommend the company).

Do you know how your customers feel about their experiences with your business?

How to Measure Emotion in Customer Experience

Most CX measurement programs don’t quantify customer emotions – they focus more on metrics that reflect a rational or cognitive evaluation of experiences. Maxie Schmidt-Subramanian, senior analyst at Forrester, says businesses can begin measuring emotion in CX by first defining metrics that measure critical emotions in influential experiences (the ones with the highest impact on customer relationships).

Yes, that means you’re making it up as you go along. You have to figure out for yourself which metrics effectively measure emotion for your customers, in your context. One way to do this is by tracking sentiment in Voice of Customer data – people convey a wide range of emotions with the words they use. Some companies, like Lenovo, use text analysis software to measure changes in sentiment scores, alerting when sentiment falls below a certain threshold.

Using a sentiment analysis tool, you can track positive or negative themes and dig into specific words most often used by your customers to describe how they feel. You can also mine customer feedback and questions, or any other written message from your customer to you. Of course, the most straightforward way to get Voice of Customer data is through surveys, and if you time your surveys right (and ask in the right channel), you can begin to tell what events trigger which emotions.  
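The track-and-alert pattern described above can be sketched in a few lines. This is a minimal lexicon-based illustration, not the commercial text-analysis software Lenovo uses; the word lists and the alert threshold are invented for the example:

```python
# Minimal lexicon-based sentiment scoring with a threshold alert.
# The lexicons and threshold are invented for illustration only;
# production sentiment tools use far richer models.
POSITIVE = {"love", "great", "amazing", "easy", "helpful", "valued"}
NEGATIVE = {"hate", "broken", "frustrating", "slow", "annoyed", "angry"}

def sentiment_score(text: str) -> float:
    """Score a comment in [-1, 1]: +1 if all matched words are positive,
    -1 if all are negative, 0 if none match either lexicon."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 0.0
    return (pos - neg) / (pos + neg)

def check_alert(comments: list[str], threshold: float = -0.2) -> bool:
    """Raise an alert when average sentiment across recent comments
    falls below the threshold."""
    if not comments:
        return False
    avg = sum(sentiment_score(c) for c in comments) / len(comments)
    return avg < threshold

feedback = [
    "Support was amazing and so helpful",
    "The new update is broken and frustrating",
    "Checkout is slow and I am annoyed",
]
print(check_alert(feedback))  # two negative comments pull the average below -0.2
```

In practice you would feed the alert into whatever notifies your CX team, and tune the lexicon and threshold against your own customers’ language.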

Whichever method you choose to get your emotion metrics, the goal is the same: to define the emotional context customers have around your product, industry, and specific touch points in your sales funnel, onboarding process, and usage. From there, you can identify and alleviate pain points, gain loyalty, and win brand advocates.

Prove the value of emotion to yourself first

Emotion is a relatively “fuzzy” topic. It’s still considered soft, and it’s not taken seriously by many. So make it your mission to prove the value of emotion early in your program by first targeting the highest-emotion touch points and developing experiments to improve customers’ emotions around those experiences. Then track your success rates.

But remember, emotion is contextual, and you don’t have control over the entire context of a customer’s experience. That said, companies that value customer loyalty are willing to go to creative lengths to keep customers feeling good about their brand. Join them.

Win customers for life. Start getting Net Promoter feedback today with InMoment.


Omni-Channel Customer Feedback

You know your business inside and out. You know that listening to customers and responding to their needs is the key to staying competitive. Still, you might be struggling with where and when to survey your customers. A pop-up survey in your web app? Send them an email? What about a text message on their mobile phone? Figuring out the most effective channel to ask for feedback can be confusing.

The good news is that you have more options than ever before.  We’d like to help by giving an overview of where companies are engaging their customers, and how multiple channels can work together. Then, you’ll be better equipped to develop a plan that best meets your company’s unique needs.

Why take advantage of multiple feedback channels

Start with a customer-focused approach: when, where and how do your customers want to give you feedback? This inquiry can quickly lead to a multi-channel approach.

Fight survey fatigue

An improved survey experience helps you maintain high response rates. Not every customer wants to fill out an in-app survey, and not every customer opens email in their inbox. However, a lot of people do want to give feedback, and appreciate the opportunity to do so. So your goal is to get more and more sophisticated about the “where and when” over time.

Reach more stakeholders, in the right context

When you leverage more than one survey channel you can expand the pool of users you’re hearing from. You may have an email relationship with some customers, and in-product engagement with others. A multichannel approach also lets you choose the right channel for a given interaction, and to customize your Voice of the Customer program for your business model.

Which Customer Feedback Survey Channel is “Best”?

Is one survey channel more brand-oriented or more transaction-oriented?  Which is the best? This is a very common question. We think the most important factor here is when you survey, rather than which channel.

Here’s why. If you send an NPS survey right after purchase, you can expect that response to be more influenced by that last transaction. However, keep in mind, an NPS survey triggered by a transaction is still colored by the brand experience.

To help you think this through, here is some information about the different channels:

Email: Lower response rates, but higher rates of qualitative feedback. Think about it: How often do you take the time to open emails from businesses, let alone respond? However, those customers who do take the time to answer a customer feedback survey via email are more likely to be invested in your brand and take the time to write comments that provide more detail to the “why” behind their score.

In-app (Web or Mobile): Higher response rates, lower rates of qualitative feedback. In-app surveys can deliver contextual feedback, and we find that customers will answer the question they are asked. They are absolutely willing to provide higher level feedback when prompted in a web or mobile app. This is why customer experience management platforms offer feedback tagging, sentiment analysis, and other means of gleaning insight from the fire hose of data that many companies receive via in-app surveys. Nonetheless, fewer in-app respondents will take the time to give qualitative feedback.

The high response rate that in-app NPS surveys deliver can be a positive trade off, especially for SaaS businesses focused on reducing churn. You may prefer to get a gut impression that you can follow up on rather than radio silence from a passive or unhappy user that ignores an email survey.

SMS:  With transactions, deliveries, and services, sometimes texting is the most effective and immediate way for you to interact with customers. It also allows you to grab customers in the place they tend to spend more and more of their time – on their mobile phones.

So, really, it’s not about which is better. The question is, “Which channel or channels are the best fit for my business and my customers?”

Scenarios Where Using More than One Customer Feedback Channel Makes Sense

1. Targeting Distinct Stakeholder Groups with Different Survey Channels.

Consider the enterprise sales model as one that can benefit from both in-app and email surveys. Here we are talking about a SaaS company or other business with a very strong digital presence. In this example, your company is using the Net Promoter Score system to measure customer loyalty.

If your brand is an online product, we’ve seen huge success when you choose in-app surveys as your primary channel. This is because end users of a SaaS product relate to your company through your digital platform. They probably don’t open your marketing emails because they aren’t looking to be sold to. They just want to do their thing in your product every day. For them, it makes sense to give NPS feedback in-app, and they are most likely to respond there.

Now consider some executive stakeholders or buyers of your platform. They don’t spend as much time in your product (if any), but you definitely want to know their opinion. For this group, delivering an NPS survey via email is likely the way to go, and email gives you a higher chance of getting qualitative feedback in their response.  

So, in this case, it’s the combination of in-app and email surveys that gets you the info you need. 
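
That routing decision can be sketched in a few lines of Python. The roles, field names, and rules here are illustrative only, not a recommendation for your business:

```python
# Sketch: route an NPS survey to the channel each stakeholder is most
# likely to answer. Roles, fields, and rules are illustrative only.
def pick_survey_channel(role, monthly_logins):
    if role == "end_user" and monthly_logins > 0:
        return "in_app"   # active users respond where they already work
    if role in ("executive", "buyer"):
        return "email"    # little product usage, but opinions you need
    return "email"        # safe default for everyone else

channel = pick_survey_channel("end_user", monthly_logins=12)
```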

2. Reaching Customers Throughout their Journey

E-commerce is an interesting use case here. The e-commerce business often has a couple of different customer survey touchpoints: online and offline. Every customer needs to place an order–typically on a website or mobile app. It can be valuable to learn how a customer feels after the ordering process, and that survey can often happen in the web application.

Once the product is delivered, the customer may register delight or dissatisfaction. For e-commerce businesses, it really makes sense to capture that sentiment via email or SMS because, honestly, if the customer had a bad experience, they’re probably not going to come back to your site to give you feedback.

The power of those two surveys together—one in-app and one via email—can give you an insightful story of the customer journey, and it can only happen by tapping into multiple feedback channels.

3. Surveying Customers Across All Lines of Business

As companies evolve and develop new forms of business for growth, customers of those different products might require distinct feedback channels. A good example is a technology company that hasn’t fully migrated to the cloud and still has legacy software offerings. These types of businesses in transition have a software user base “on premise,” where the only option is to do an email survey. Newer, cloud-based offerings from the same company can opt instead for in-app surveys.

Here is another example. A media company might get in-app survey feedback from subscribers or readers who visit their website. However, the same company may find that email surveys are a better channel to reach customers that receive subscription services via home delivery.

4. Improving Response Rates among Low Engagement Customers

Supplementing one channel with another may help you get a higher response rate.  For example, if you start your feedback program with in-app surveys and you find that certain customers just aren’t using your application that frequently, or aren’t receptive to an in-app survey, then you have the flexibility to try another channel. See what those customers prefer to respond to–try an email survey, try SMS, or try surveying in a mobile app if you have one. That way, every customer’s voice is being heard on their terms.

5. Evolve to Reach Your Customers Where They Are

There are times when companies communicate with customers primarily through SMS. Think about your mobile provider, bank, airline, or ride share service. You expect to hear from them through that channel and count on the immediacy that texting provides. This is when it makes good sense to survey through SMS in addition to other channels, particularly for transaction-related feedback.

You’ve Got Choices

There are times when “it just depends.” Multi-channel customer feedback gives you the flexibility to survey customers based on the way they prefer to communicate with your business. It lets you engage a broader segment of users across multiple touch points and lines of business. You can get the big picture and each step in your customer’s journey.

And, it lets you meet your customers on their terms. Don’t risk filling your customer’s devices with unwanted messages. The sensitivity that multi-channel feedback offers can help you avoid survey fatigue. That means higher quality feedback to help you grow your company.

Start measuring Net Promoter Score in multiple channels with InMoment

Machine Learning in 5 Minutes

There's a famous quote, supposedly from Bill Gates, that goes "A breakthrough in machine learning would be worth ten Microsofts." In the next 5 minutes you'll understand exactly what machine learning is, what it can do, and why everyone is excited about it.

First and foremost:

What is machine learning, and why is it a good thing?

Machine learning is a set of statistical/mathematical tools and algorithms for training a computer to perform a specific task, for example, recognizing faces.

Two important words here are “training” and “statistical.” Training, because you are literally teaching the computer about a particular task. We emphasize statistical because the computer is working with probabilistic math. The chances of it getting the answer “correct” vary with the type and complexity of the question that it’s being trained to answer.

Different Types of Algorithms

There are a number of different types of machine learning algorithms, from the simple “Naïve Bayes” to “Neural Networks” to “Maximum Entropy” and “Decision Trees.” We’re more than happy to geek out with you about the advantages and disadvantages of the different types, talk about linear vs. non-linear learning and feed-forward systems, or argue about multi-layer hidden networks vs. explicitly exposing each layer.

Lexalytics is a machine learning company. We maintain dozens of both supervised and unsupervised machine learning models (close to 40, actually). We have dozens of person-years dedicated to gathering data sets, experimenting with state-of-the-art machine learning algorithms, and producing models that balance accuracy, broad applicability, and speed.

Lexalytics is not a general-purpose machine learning company. We are not providing you with generic algorithms that can be tuned for any machine-learning problem. We are entirely, completely, and totally focused on text. All of our machine learning algorithms, models, and techniques are optimized to help you understand the meaning of text content.

Text is Sparse

Text content requires special approaches from a machine learning perspective, in that it can have hundreds of thousands of potential dimensions (words, phrases, etc.), but tends to be very sparse in nature: say you’ve got 100,000 words in common use in the English language; any given tweet will only contain perhaps 10-12 of them. This differs from something like video content, where you have very high dimensionality but also oodles and oodles of data to work with, so it’s not quite as sparse.
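
The arithmetic is easy to sketch. The 100,000-word vocabulary is the illustrative figure from above, and the tweet is made up:

```python
# Sketch: why text is sparse. A tweet-sized message touches only a handful
# of the dimensions in a large vocabulary (the 100,000 figure is the
# illustrative number from above).
vocabulary_size = 100_000

tweet = "the new update is great but the app still crashes on login"
used_dimensions = set(tweet.split())   # each distinct word is one dimension

density = len(used_dimensions) / vocabulary_size
# Roughly ten non-zero dimensions out of 100,000: about 0.01% density.
```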

Why is this an issue? Because you can’t group content together or spot trends unless you can measure the similarities between pieces of content.

The Machine Learning Tool Belt

In order to deal with the specific complications of text, we use what’s called a “hybrid” approach, meaning that, unlike pure-play machine learning companies, we use a combination of machine learning, lists, pattern files, dictionaries, and natural language algorithms. In other words, rather than just having a variety of hammers (different machine learning algorithms), we have a nice tool belt full of different sorts of tools, each one optimal for the task at hand.

The “term du jour” seems to be “deep learning” – which is an excellent rebranding of “neural networks.” Basically, the way that deep learning works is that there are several layers that build up on top of each other in order to recognize a whole. For example, if dealing with a picture, layer 1 would see a bunch of dots, layer 2 would recognize a line, layer 3 would recognize corners connecting the lines, and the top layer would recognize that this is a square.

This explanation is an abstraction of what happens inside of deep learning for text – the internal layers are opaque math. We have taken a different approach that we believe to be superior to neural networks/deep learning – explicitly layered extraction. We have a multi-layered process for preparing the text that helps reduce the sparseness and dimensionality of the content – but as opposed to the hidden layers in a deep learning model, our layers are explicit and transparent. You can get access to every one of them and understand exactly what is happening at each step.

Machine Learning Models

To give you an idea: just to process a document in English, we use the following machine-learning models:

  • Part of Speech tagging
  • Chunking
  • Sentence Polarity
  • Concept Matrix (Semantic Model)
  • Syntax Matrix (Syntax Parsing)

All of those models help us deal with the dimensionality/sparseness problem described above. Next, we have to actually extract things, so we’ve got additional models for:

  • Named Entity Extraction
  • Anaphora Resolution (Associating pronouns with the right words)
  • Document Sentiment
  • Intention Extraction
  • Categorization

For other languages, like Mandarin Chinese, we have to actually figure out what a word is, so, we need to “tokenize” – which is another machine learning task.
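
As a toy illustration of what “explicit and transparent” layers mean in practice, here is a sketch where each stage is a plain function whose output you can inspect. These stages are simplistic stand-ins, not anyone's actual trained models:

```python
# Sketch of explicitly layered extraction: each layer is a plain function
# whose intermediate output you can inspect, unlike the hidden layers of a
# deep network. These stages are toy stand-ins, not actual trained models.
def tokenize(text):
    return text.lower().split()

def tag_parts_of_speech(tokens):
    # Toy tagger: real part-of-speech tagging is itself an ML model.
    nouns = {"battery", "phone"}
    return [(t, "NOUN" if t in nouns else "OTHER") for t in tokens]

def score_sentiment(tagged):
    # Toy polarity from tiny phrase lists (stand-in for a trained model).
    positive, negative = {"great"}, {"terrible", "dies"}
    return sum((t in positive) - (t in negative) for t, _ in tagged)

result = "Great phone but the battery dies"
for layer in (tokenize, tag_parts_of_speech, score_sentiment):
    result = layer(result)   # every intermediate result is inspectable
# A mixed review like this one nets out to a neutral score of 0.
```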

The Hybrid Approach

Some of our customers, particularly in the market analytics space and the customer experience management space, have been hand-coding categories of content for years. This means they have a lot of content bucketed into different categories. Which means that they have a really great set of content for training a machine-learning based classifier – we can do that for you too!

But, and this is a really big but, it is inefficient to do all tasks with the same tool. That’s why we also have dictionaries and pattern files, and all sorts of other good stuff like that. To sum up why we use a hybrid approach, let’s take the following example… Say you’ve trained up a sentiment classifier using 50,000 documents that does a pretty good job of agreeing with a human as to whether something is positive, negative, or neutral. Awesome!

Training the Model

What happens when a review comes in that the model scores incorrectly? There are two approaches: sometimes you have a feedback loop, and sometimes you have to collect a whole corpus of content and retrain the model.

Even in the case of the feedback loop, the behavior of the model isn’t going to change immediately, and the change can be unpredictable, because you’re just telling it “this document was scored incorrectly; it should be positive,” and the model will weigh every word in that document against everything already in the model itself.

In other words, it’s like steering a big ocean liner: you can start to turn it, but it’s going to take a while and a lot of feedback before it turns. In our approach, you simply look at which phrases were marked positive and negative, change them as appropriate, and you’re done. The behavior changes instantly.
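
A minimal sketch of that phrase-level correction, with hypothetical scores and an invented example phrase:

```python
# Sketch: phrase-level correction without retraining. Base scores and the
# example phrase are hypothetical.
base_phrase_scores = {"killer app": -0.8}   # model misreads "killer" as negative
overrides = {}

def score_phrase(phrase):
    # Explicit overrides win over the trained model's phrase scores.
    return overrides.get(phrase, base_phrase_scores.get(phrase, 0.0))

overrides["killer app"] = 0.9   # analyst fixes the phrase; no retraining
score = score_phrase("killer app")
```

The contrast with the ocean liner is the point: the override takes effect on the very next document scored.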

We like to think of it as the best of both worlds, and we think you will too.


How to Create a Balanced Survey

It would be natural to assume that companies that invest in customer experience measurement (CEM) would put customer preferences at the top of the list, but this is not always the case. Companies do not consciously ignore customers in the survey process. Rather, it’s more often a matter of doing what has come to be expected internally: populating a dashboard with metrics that provide a snapshot of performance at various levels in the organization. Overly structured surveys may do this efficiently while at the same time falling short of adequately describing customers’ actual experiences. It doesn’t have to be this way. One approach to creating more customer-centric surveys is to make sure customers are able to tell their stories. By shifting the survey balance to include some unstructured feedback, everyone wins.

Unsatisfying Customer Satisfaction Surveys

It’s ironic, but many customers who take ‘satisfaction’ surveys find the experience less than satisfying. Surveys frustrate customers and the interviewers who have to administer them. The effects can be even more harmful with self-administered questionnaires done online or through the mail, since there is nothing keeping a customer from prematurely ending an unsatisfying “exchange.”

Poorly Designed Surveys Have Real Consequences

Too often, customers are hindered from saying what’s on their minds, and interviewers are stymied in their attempts to record valuable information. Completely close-ended customer experience surveys administered using inflexible software are all too common, and they contribute to:

  • Declining response rates—Respondents fail to complete the survey. Others refuse to participate based on previous unpleasant experiences. The available respondent pool shrinks and survey costs increase.
  • Poor quality data—Respondents rush to get through surveys filled with questions that are irrelevant to them, or are forced into selecting answers which do not represent their true or complete feelings.
  • Missing or incomplete information—What company would not benefit from learning in a customer’s own words what went amiss in a service transaction, or the opposite—what went exactly right? Too many surveys simply do not provide this opportunity.

The bottom line: customers are becoming disengaged from the very feedback process designed to improve their experiences. Over time, this behavior will have a negative impact on perceptions of your brand, which you may find yourself reading about on a social media or internet rating site.

Creating the Right Kind of Survey

Today’s customers are not waiting to be asked what they think about customer experience surveys. They are telling us without reservation, and we need to give them the tools and technology that allow them to give us feedback.

A key element is more flexible surveys that not only provide better data but also create a better survey experience: in other words, surveys that are more like everyday conversations. During conversations people exchange information quickly and efficiently. They readily engage, react to each other’s statements, and naturally probe for and provide further detail. Adding open-end questions to customer surveys helps create an environment in which interesting information surfaces and customers are able to tell their stories in their own words. It’s a matter of shifting the survey balance from 100 percent close-ended, ratings-based questions to providing targeted opportunities for unstructured feedback.

Qualitative research entails primarily an open-ended exchange between interviewer and customer. We are not advocating all customer experience surveys should go to this extreme, but there is certainly room to shift the balance and let customers more freely give us the feedback they want to give and not just the ratings organizations force on them.

This does not mean giving up performance metrics. A well balanced experience survey will meet the needs of all stakeholders in the customer experience measurement process. How far a company moves along the continuum depends on a number of factors including:

  • Information goals: Is the survey’s focus on performance appraisal, diagnosis of systemic problems, rapid problem resolution or retention/relationship building?
  • How the information will be used and by whom?
  • The category/type of transaction
  • The organization’s culture

What Should We Ask?

There is not a magic formula for questions that solicit useful, unstructured feedback. It starts with deciding exactly what type of information you want, who will use it and for what purpose. General considerations are:

  • Question selection/wording
  • Placement in the survey
  • Number of questions
  • Probing and clarifying responses to best effect

Question Selection

Just as researchers agonize over the best wording for an attribute, they should also give careful thought to the wording of open-end questions. Start by matching the question to the specific information need, and then get creative. In general, the less specific or loosely defined the question is, the less specific the response will be.

Don’t be afraid to experiment with adjectives that might be considered too leading in a close-ended question. Words like unforgettable, terrific or disappointing may inspire respondents to give more focused and detailed responses. Don’t forget to communicate research concepts in customer-friendly ways and ask customers directly:

  • What stood out?
  • What matters the most to you?
  • How do we keep your business?

Consider borrowing simple projective techniques from the qualitative arsenal, e.g., “If you were the President of the company, what would you do to improve this experience?”

Placement

Data continuity will be a consideration unless you are designing a new program. Where new questions are placed in the survey may influence responses to questions that follow. Therefore, it is advisable to pretest the new questionnaire to understand these effects. An exception would be when new questions are placed at the end of the survey.

How Many Open-Ends Is Too Many?

There is not a one-size-fits-all answer, but in the same way an attribute list can become burdensome, it is possible to put too many open-end questions into a customer experience survey. A pretest will reveal the information each question produces, allowing you to judge incremental value and whether some questions are redundant.

If it turns out that there are several productive questions, consider splitting the questions up across the sample; there should still be enough information to analyze. While it is important that all respondents provide an overall rating, it is not necessary that everyone experiences the same set of open-end questions. The main point here is to make sure that respondents get the most relevant opportunities to provide their feedback.
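Splitting questions across the sample can be as simple as randomly assigning each respondent a subset of the open-end pool. A minimal sketch in Python (the question texts and subset size are illustrative assumptions, not a real survey platform's API):

```python
import random

# Hypothetical pool of open-end questions; every respondent still answers
# the overall rating, but sees only a subset of these.
OPEN_ENDS = [
    "What stood out?",
    "What matters the most to you?",
    "How do we keep your business?",
    "If you were the President of the company, what would you do to improve this experience?",
]

def assign_open_ends(respondent_id, per_respondent=2):
    """Assign a repeatable random subset of open-ends to one respondent."""
    rng = random.Random(respondent_id)  # seed on the ID so assignment is stable
    return rng.sample(OPEN_ENDS, k=per_respondent)

# Across a large sample, each question still collects roughly half the answers.
print(assign_open_ends(42))
```

Seeding on the respondent ID keeps the assignment stable if the same person re-enters the survey.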

Be realistic about the survey subject, and especially the character of the transaction, when considering which and how many open-ends to include. Low-involvement transactions, especially those done repeatedly, become routine and unmemorable. A simple question at the end of the survey, such as “Was anything else memorably positive or negative? Please tell us,” may be all that is needed.

Getting the Most Out of Open-Ends

More companies are moving customer experience surveys online, so it is important that open-end questions be as effective in self-administered formats as they are in interviewer-administered ones. The success of open-ends administered by live interviewers depends on the quality of their probing and clarifying skills. The success of open-end questions in online surveys is likewise driven by effective probing. If the response to an online open-end question is left blank or is too brief, simply trigger a prompt such as, “Can you please tell us more?”
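The blank-or-too-brief check behind an automated probe can be sketched in a few lines of Python (the word-count threshold and prompt wording are assumptions):

```python
MIN_WORDS = 5  # illustrative threshold for "too brief"

def needs_probe(response):
    """True when an open-end answer is blank or too short to be useful."""
    return len(response.split()) < MIN_WORDS

def probe(response):
    # Mirror what a live interviewer would do: ask one clarifying follow-up.
    if needs_probe(response):
        return "Can you please tell us more?"
    return None

print(probe("Fine."))  # triggers the follow-up prompt
print(probe("The barista remembered my order and greeted me by name."))  # no probe needed
```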

Technology to the Rescue

Automated text analysis uses a combination of natural language processing and other computational linguistic techniques to:

  1. Categorize and summarize text
  2. Extract information into a suitable form for additional analysis

In other words, it turns unstructured text information into structured data that can be summarized and analyzed using familiar quantitative tools. Note that automated text analysis tools are capable of far more than comment categorization (comparable to human coding).
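As a toy illustration of that unstructured-to-structured step, the sketch below categorizes comments with simple keyword rules; real text analysis tools use natural language processing rather than keyword matching, and the categories here are invented:

```python
from collections import Counter

# Invented keyword rules, standing in for a real NLP categorization model.
CATEGORIES = {
    "staff": {"friendly", "rude", "helpful", "staff"},
    "wait time": {"slow", "wait", "queue", "fast"},
    "price": {"expensive", "cheap", "price", "value"},
}

def categorize(comment):
    """Map one free-text comment onto zero or more categories."""
    words = set(comment.lower().split())
    return [cat for cat, keywords in CATEGORIES.items() if words & keywords]

comments = [
    "Staff were friendly but the queue was slow",
    "Great value and helpful staff",
]
# Structured counts that familiar quantitative tools can summarize.
tally = Counter(cat for c in comments for cat in categorize(c))
print(tally)
```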

Surveys are a Reflection of your Brand

Every interaction with your company—including a customer experience survey—is a reflection on your brand. One way to make sure the survey experience is positive is to shift the balance from completely structured to semi-structured. Open-end questions have the potential, when designed and executed well, to create a better survey experience for respondents and to generate data with significant diagnostic value. A more open-ended questionnaire design creates a survey experience that is more conversational and allows customers to tell their stories in their own words.


The real value of customer experience programs is not in gathering customer feedback, but in putting the voice of the customer to work. While there was never a positive return on investment (ROI) for simply measuring satisfaction (no more than there is a positive ROI for taking your temperature when you are sick), today’s cost/benefit driven environment has made the need for meaningful action even more acute.

At a Glance

Most organizations invest in measuring customer experience and satisfaction with an expectation that the insights derived will lead to product and service improvements and better customer experiences. Unfortunately, far too many organizations simply hand customer feedback to managers with instructions to “use the results to take action.” The consequences? Quite often, no action is taken and the anticipated improvements in customer experience fail to materialize.

Start to Utilize Your Feedback

A growing body of evidence reveals that a majority of organizations are not where they want to be when it comes to putting the voice of the customer to work. These five steps will guide you in identifying the people to involve and the actions to take so that the feedback you are receiving can be put to use.

Step 1. Identify High Priority Customer-Driven Action Items

Quite often, analysis of customer survey items – each of which represents a specific element of the customer experience – is the starting point for defining action items. Specifically, items identified as “key drivers” of overall customer satisfaction and loyalty, and those that receive relatively unfavorable customer ratings are designated as customer-driven priorities for improvement. Many organizations also look at additional Voice of Customer (VoC) data sources (e.g., inbound customer comments and complaints, user-generated media, etc.) to corroborate initial conclusions based on analysis of survey data. Overall, the analysis of customer feedback enables the organization to define customer-driven action items.

Step 2. Determine Owners of the Customer-Driven Action Items

The next step in the process involves a review of customer feedback by a cross-functional team of managers. These managers collectively determine the people and parts of the organization that impact and have some level of ownership of each action item. It is the “owners” that must take the lead in developing and implementing an appropriate action plan.

Step 3. “Drill Down” for Clarity and Granularity

The analysis of survey items often provides the starting point for customer-driven action planning and implementation. However, the survey instruments are not generally designed to provide enough detail or granularity to enable an organization to determine the specific action to take. As a result, the action-item owners are limited by an incomplete understanding of “what to do.” This leads to one of two unfortunate outcomes:

  • The actions taken to respond to the voice of the customer are misguided and ineffective
  • Managers and employees end up taking no action at all because they lack clarity regarding what the customer wants or needs

In contrast, organizations that are successful in applying customer feedback to drive improvement ask themselves a simple question before developing and implementing action plans: Do we understand what the customer wants us to do or do differently?

The third step in the process requires that owners of a customer-driven action item confirm that they have sufficient understanding of what customers actually want the company to do or do differently; social media sources can provide valuable insight into what customers want or expect. If that understanding is lacking, the group must determine the questions to address and the areas requiring “drill-down” for clarity and granularity.

Step 4. Pinpoint Policies, Processes, and Operations Associated with High-Priority Action Items

Once a customer issue is clarified and ownership for action established, a fourth critical step in the process is to identify and target the relevant business enablers. What are the organizational processes, policies, practices and other aspects of performance that are connected to the targeted element of the customer experience? The owners must answer this question to ensure that they identify and fix the “right things.”

Step 5. Develop and Implement Appropriate Action Plans

Upon completion of these first four process steps, the organization has put itself in a very good position to develop and implement an appropriate customer experience improvement plan, because:

  • The people and parts of the organization that impact the customer-driven action item have been identified
  • These owners understand what customers want the organization to do
  • The owners have pinpointed the organizational processes, practices, policies and other performance issues that need to be changed and improved

Essentially, the guesswork has been taken out of developing and implementing an appropriate customer-driven action plan. Now, it’s time for the owners to develop the plan.

Well-conceived action plans require solid information about what to change and how to change it. Integrating action items identified through the customer feedback process with operational training tools to guide action is a best practice to drive improvement. For many organizations, integrating these elements within the reporting platform is the most effective way to arm corporate and front-line managers with the tools they need to address improvement areas.

Connect to the Right People

Companies investing in capturing, crunching, and sharing insights derived from customer feedback will make some progress toward putting the voice of the customer to work. However, unless these organizations implement a process to connect customer feedback to the right people, and the right business processes, policies and activities, progress likely will be stalled.


Editor’s note: This article originally appeared on the CX Cafe Blog.

Automotive Dealership Loyalty Study Background

Purpose of the Study: To determine the relationship between dealership satisfaction, dealership customer loyalty and dealership revenues.

This was a follow-up study of 2009 and 2010 model year vehicle purchasers who returned MaritzCX’s New Vehicle Customer Study. Customers were asked about their vehicle service behaviors and vehicle repurchase behaviors since purchasing their 2009 or 2010 vehicles.

Two Data Sets

  • All Respondents (n=12,875)
    • Weighted to 2009 and 2010 vehicle sales by model
    • Used for Sales-to-Service Loyalty analyses and service usage analyses
  • Vehicle Replacers (n=5,228)
    • 5,431 had replaced their 2009 or 2010 vehicle
    • 203 respondents removed because their original brand was no longer available
    • Weighted to 2009 and 2010 vehicle sales by model
    • Used for Sales-to-Sales Loyalty and Service-to-Sales Loyalty analyses

Key Dealership Measures

Dealership Sales Satisfaction – Satisfaction with the dealership purchase experience as reported by customers in 2009 or 2010.

Overall Dealership Satisfaction – Satisfaction with the selling dealer over the lifetime of the vehicle as reported by customers in 2016.

Dealership Sales-to-Sales Loyalty – The percentage of vehicle replacers who purchased their replacement vehicle from the same dealership that sold them their 2009 or 2010 vehicle.

Dealership Sales-to-Service Loyalty – The percentage of customers who reported that they usually used their selling dealership for various types of service work.

Service-to-Sales Loyalty – The percentage of vehicle replacers who purchased their replacement vehicle from the dealership that sold them their 2009 or 2010 vehicle, broken out by where they usually had that vehicle serviced.

Sales-to-Sales Loyalty

Dealership Satisfaction and Dealership Loyalty

Both dealership sales satisfaction and overall dealership satisfaction are strongly related to dealership sales loyalty

  • Customers completely satisfied with the dealership are over four times more likely to buy from that dealership again compared to very dissatisfied customers

Dealership Loyalty and Brand Loyalty

While dealership satisfaction is more associated with dealership loyalty than vehicle brand loyalty, both show strong relationships

  • Those completely satisfied with their dealerships are about twice as likely to re-purchase the brand as those that are very dissatisfied with the dealership

Sales-to-Service Loyalty

Dealership Sales Satisfaction and Service Loyalty

Customers with higher levels of dealership sales satisfaction are about 50% more likely to have their vehicles serviced at the dealership.

Overall Dealership Satisfaction and Service Loyalty

That relationship gets stronger when looking at overall dealership satisfaction

  • Customers are two to three times more likely to service at the dealership if they rate their overall dealership experience completely satisfied vs. very dissatisfied

Dealership Satisfaction and Service Spend

As customers are less satisfied with their dealerships, service spend doubles at independent facilities and halves at the selling dealerships

Service-to-Sales Loyalty

Service Usage by Service Type

About half of customers report usually using their selling dealership for all types of service work, but this tapers off for out of warranty service

  • Independent facilities are picking up this work

Service Provider and Dealership Sales-to-Sales Loyalty

  • Dealership sales-to-sales loyalty is over 50% when customers usually have their vehicle serviced at the selling dealership
  • Dealerships really want to avoid customers servicing at other dealerships: only about 10% of customers who service their vehicles at another dealership return to the selling dealership when replacing their vehicle.

Show Me the Money – A Financial Model

Financial Impact of Dealership Satisfaction on Dealership Revenue

For the average US dealer

  • Increasing satisfaction of all customers by one “box” on a 5-point scale generates approximately $2.5 million in loyalty related revenue
  • Allowing satisfaction to fall one box for all customers equates to a loss of $4.2 million in loyalty related revenue

On a typical 100-point scale, each point of customer satisfaction relates to approximately $151,800 in additional loyalty related sales revenue for each dealership.

Model Showing the Financial Effect of Increasing Satisfaction One Level

Model Showing the Financial Effect of Decreasing Satisfaction One Level

Scaling to 100-Point Scale

To model loyalty changes on a typical 100-point satisfaction scale, we converted the 5-point scale by assigning the values of 100, 75, 50, 25, and 0 to the boxes from Completely Satisfied to Very Dissatisfied. We then extrapolated the models shown previously to the points where all customers were completely satisfied and all customers were very dissatisfied.

For all the models in this process we calculated the 100-point satisfaction score and the associated loyalty rate. These points were plotted on the graph below. Because the resulting curve is mostly linear, a trend line was fit to it and its regression equation was determined. This equation and its associated line show that every point on the 100-point scale relates to a 0.45 percentage point change in dealership sales-to-sales loyalty.

  • For the average dealership with 1,003 vehicle sales, that equates to a change in sales revenue of $151,830 per point¹

¹Assumes an average selling price of $33,639 per vehicle as reported by the National Automobile Dealers Association for 2014 (the most recent data available).
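The footnoted figure can be reproduced directly from the numbers quoted above: a 0.45 percentage point loyalty change per satisfaction point, times the average dealership’s 1,003 annual sales, times the $33,639 average selling price.

```python
# Reproducing the revenue-per-point figure from the stated inputs.
loyalty_change_per_point = 0.0045   # 0.45 percentage points per satisfaction point
annual_vehicle_sales = 1003         # average US dealership
avg_selling_price = 33_639          # NADA-reported average price, 2014

revenue_per_point = loyalty_change_per_point * annual_vehicle_sales * avg_selling_price
print(f"${revenue_per_point:,.0f}")  # → $151,830
```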


Don’t Know Option in Surveys

Editor’s Note: This blog was originally posted on CX Cafe.

Your respondents might know more than you think.

Including a “don’t know” option in a survey is a matter of ongoing debate. The “don’t know” option can be explicit, as shown with the scale, or it can be implicit through the use of skip patterns within a survey. It is a useful option to offer survey takers who genuinely do not know the answer, so they don’t get frustrated, but it can also serve as a cop-out for those who just don’t want to answer the question. So where do you draw the line?

The “don’t know” option can contribute to good survey design, because it enables skip patterns that alleviate the need to show respondents a set of questions that are not applicable. However, if the “don’t know” option is attached to attitudes concerning relevant touchpoints or facts, you may want to reconsider including it in your surveys.

So what happens if you include the “don’t know” option in your survey?

  • First, when that option is present, respondents are more likely to select it than to engage with the question.
  • Second, researchers have found that respondents do a pretty good job of answering questions in the face of uncertainty.  For example, on a fact-based question with four choices, respondents who initially said “don’t know” answered correctly far more often than the 25% expected from random guessing.
  • Third, attitudes can be more reliably “guesstimated” than facts.
  • Fourth, if respondents choose “don’t know,” multivariate analysis requires those answers to be treated as missing, leaving gaps in the data.  For missing values, we often use methods to try to recover those answers (imputation). Who do you want to estimate those underlying values? The researcher? The respondent?
  • Fifth, and finally, placing a “don’t know” option on a crowded scale, or not setting it apart from the equidistant scale points, can lead to respondent confusion and incorrect selections.
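To make the fourth point concrete, here is a minimal sketch of mean imputation, one common way a researcher might estimate the values behind “don’t know” answers (the ratings are invented):

```python
from statistics import mean

# None stands for a "don't know" response, which analysis must treat as missing.
ratings = [5, 4, None, 3, None, 5]

observed = [r for r in ratings if r is not None]
fill = mean(observed)  # the researcher's estimate of the missing values
imputed = [r if r is not None else fill for r in ratings]
print(imputed)
```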

The fear of omitting “don’t know” is that you are forcing the respondent to provide meaningless responses. However, the use of “don’t know” can lead to MORE data problems.  In general, minimize the use of “don’t know” options in your surveys for a more powerful and informative survey.

Note: There are some considerations about omitting the “don’t know” option on mandatory questions.  If too many questions force the respondent to answer, the respondent may become frustrated without a “don’t know” option. Depending on the questions you are asking, the key is to find a healthy balance between including “don’t know” in your survey and leaving it off.


How to Get High Response Rates to User Surveys on Mobile

In this age of survey fatigue, getting users to engage with a survey in any medium is challenging. Mobile apps are no exception, and have their own unique constraints. The good news is that in-app surveys can provide a streamlined mobile experience that results in super-high response rates and meaningful feedback, too.

Asking for a survey response on a mobile screen can create a high-friction experience for users.

Low screen attention. Small, cramped mobile screen. Tiny text. Question after question. Who wants to deal with that?

You need a streamlined survey solution that reduces friction but still provides rich feedback.  So, how do you overcome the constraints of mobile?

Mobile App Surveys

Net Promoter Score surveys minimize friction.

Net Promoter Score (NPS) is recognized as a powerful measure of customer happiness, and a lean, agile way to elicit meaningful feedback from users. Not familiar with NPS? Here are the basics.

The NPS survey consists of a single survey question, plus an opportunity for a quick qualitative response. Because of its simplicity, the NPS survey really shines in the mobile context. Have a look at how it works here on Android and iOS.
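For reference, the standard NPS arithmetic: respondents rate likelihood to recommend on a 0-10 scale, 9-10 are promoters, 0-6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. A sketch (the sample scores are invented):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

# Ten answers to "How likely are you to recommend us?"
print(nps([10, 9, 9, 8, 7, 7, 6, 5, 10, 3]))  # → 10
```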

Surveying mobile users via email can mean low response rates.

Until now, mobile businesses have had to rely on email surveys to get NPS user feedback, and email certainly has its place. Trouble is, you may not have a user’s email address. Even when you do, inboxes are noisy places, and readers are less likely to click through from mobile devices. Also, an NPS survey via email arrives after the fact, after your user has left your app. Their attention is elsewhere.

For high response rates, ask the powerful NPS question right in your mobile app.

You can now show your user an NPS survey in real-time, when he or she is engaged with your app on a mobile device.  Users are scoring and commenting in context – feedback is fresh and relevant, which helps make it more actionable. Typically, you will see a 40-60% response rate right off the bat.

Surveys can be triggered to suit your business needs. For example, a user could see a survey after she has logged in x number of times, taken a specific action, or 30 days after downloading the app.
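Trigger rules like these reduce to a simple predicate evaluated at runtime. A sketch in Python (the threshold values and parameter names are assumptions, not a real SDK API):

```python
from datetime import date, timedelta

def should_show_survey(login_count, took_key_action, install_date, today):
    """Show the in-app survey after 5 logins, a key action, or 30 days."""
    return (
        login_count >= 5
        or took_key_action
        or today - install_date >= timedelta(days=30)
    )

# 35 days after install, even with few logins, the survey fires.
print(should_show_survey(2, False, date(2024, 1, 1), date(2024, 2, 5)))  # → True
```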

Using an external platform to manage your NPS process has its advantages. It can scale easily, is hassle free to try and deploy, and frees up resources to focus on the “so what?”  — leveraging NPS results to improve your application. A streamlined version of an NPS platform can come free and shouldn’t break your budget.  Our tool, Wootric, is one example.

Get the ebook, The Modern Guide to Winning Customers with Net Promoter Score. Learn eight ways to leverage Net Promoter Score for customer loyalty and growth.

To maintain strong response rates, take action on user feedback.

Maybe you filled out a survey once — really took the time to give constructive feedback. Did you hear back from the company?  If you did not, how likely are you to fill out another survey from that company? Not very likely, right?

For you to continue to garner high survey response rates, your mobile users must know that their feedback matters.  An NPS platform dashboard makes it easy for you to respond. The dashboard is where you can monitor the cumulative Net Promoter Score your app is earning, and slice and dice your data. It is also where you can see individual scores and qualitative feedback from individuals.

If your users have accounts and you have the resources, you should respond directly to individuals. You can do so right from your NPS dashboard.  You can also forward feedback to a team member in Customer Support, Customer Success, or Product Management for further action. To streamline the process, you might automate an email response to the bulk of respondents depending on whether they are Promoters, Passives or Detractors.

If your users are anonymous, at a minimum, you can acknowledge in software release notes that it was user feedback that revealed that recently-squashed bug, or drove the development of xyz feature.

High response rates and rich user feedback are possible in the mobile environment.

The streamlined nature of the NPS survey is a great fit for the low attention span of the mobile user and the constraints of the small screen.  Consider in-app NPS surveys for higher response rates than email.  Be sure to show your users that you listen to their feedback; they will be more likely to answer another survey down the line.

Start measuring Net Promoter Score in your mobile app for free with InMoment

