Overview of Customer Experience Quality Metrics

Which indicators help evaluate the Customer Effort Score (and customer experience more broadly), and how to implement them in a call center.

Background. In 2010, a powerful new tool for evaluating customer experience appeared: the Customer Effort Score (CES), invented by consultants at CEB. The idea was to predict the likelihood of repeat purchases from the customer's answer to a single question. The beauty of the idea was exactly that: one question (it is well known that as the number of questions grows, fewer customers finish the survey). So CES turned out to be not just an indicator; by its very nature it allowed the maximum number of customers to be polled.

From CEB's point of view, neither the NPS index, nor the CSAT/CDSAT pair, nor any variant of CSI worked as a predictor of loyalty (that is, of willingness to keep buying or to recommend a company's products). By July 2010, the Harvard Business Review had already published the article "Stop Trying to Delight Your Customers," which concluded that the correlation between satisfaction and loyalty is weak. On closer inspection this is logically obvious, because loyalty comes in different kinds. For example, if you need to fly to Norilsk and only one airline flies there, you will have to pay for the ticket even if it costs as much as a flight to New York. Nothing can be done: forced loyalty to a monopolist.

The figures in the article showed that 20% of satisfied customers intended to stop buying, while 28% of dissatisfied customers intended to keep buying anyway. This confirmed the hypothesis that satisfaction indicators (CSAT/CDSAT and CSI) do not let marketing draw correct conclusions about future customer behavior. NPS fared a little better as a loyalty indicator, but because of its internal defects it did not help much either. Why is NPS defective?
There is quite serious criticism of this indicator online, but so as not to stray from the purpose of this article, I will mention only its two main defects:

  1. With the same NPS values, the distribution of detractors, neutrals, and promoters can be different, that is, two audiences with different loyalty are marked with the same number.
  2. The value of NPS depends very much on how much time has passed since the client contacted the company, that is, on the measurement technique.
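Defect 1 is easy to see in the arithmetic: NPS is simply the percentage of promoters minus the percentage of detractors, so audiences with very different loyalty profiles can collapse to the same number. A minimal illustration in Python (the counts are made up for the example):

```python
def nps(detractors: int, neutrals: int, promoters: int) -> float:
    """NPS = % promoters - % detractors (neutrals only affect the total)."""
    total = detractors + neutrals + promoters
    return 100 * (promoters - detractors) / total

# A polarized audience and an indifferent one get the same NPS:
a = nps(detractors=20, neutrals=30, promoters=50)  # many fans, many critics
b = nps(detractors=0, neutrals=70, promoters=30)   # almost nobody feels strongly
print(a, b)  # 30.0 30.0
```

Both audiences score 30, yet a marketer should treat them very differently.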

What did the specialists at CEB come up with? The CES question was originally formulated as: "How much effort did it take you to handle your request?" with five response options, where 5 meant "very much effort" and 1 meant "very little effort". The idea was good and the thinking was correct, but problems surfaced.

First, translation. The original wording "how much" and "request" is semantically precise and matches the intent of the task, but in Russian, for example, the literal construction "how much effort it took you to solve your request" sounds rather wild, and Russian speakers were clearly not the only ones to run into this.

Second, it turned out that the 1-to-5 scale is perceived differently in different countries. In some places people are used to 1 meaning "excellent" (roughly, first place on the podium), while elsewhere 1 is understood as the worst possible grade, below unsatisfactory.

At the same time another problem surfaced: if the wording used a verbal noun ("for the solution of your problem") instead of a verb ("to solve your problem"), it could produce false positive answers even when the problem had not actually been solved.

CES 2.0

The folks at CEB modified the question and got CES 2.0, free of the previous flaws. The new question was: "Do you agree that the company made it easy for you to solve your problem?" with seven response options instead of five. Incidentally, seven-point scales work better than five- and ten-point ones: a five-point scale does not let you give a "five minus", while on a ten-point scale it is not at all clear how an "eight" differs from a "nine".

Interestingly, neither CES 2.0 nor its predecessor has found widespread adoption in customer service and customer experience management. Perhaps too little was invested in promoting the indicator, or it proved hard for company management to grasp. It may also be that the psychological wording of the CES question provokes rejection, for example from a director of customer experience: "what do you mean, the company does not make it possible to solve the problem?" To him this implies that he is not doing his job, or doing it badly.

An important fact: the change in customer loyalty turned out to be non-linear. After a score of 5, each further point up to 7 increases loyalty by only about 2%, while below 5 each point can add about 20%. In other words, 5 is a threshold value, beyond which a business case should decide whether to invest more in solving client problems or leave everything as it is. It may well turn out that raising CES 2.0 from, say, 5 to 6 would consume so many resources that it would not pay off.

Outlook. From the author's point of view, measuring CES 2.0 should not replace the parallel measurement of NPS and satisfaction scores. If, under specific conditions, correlations between the indicators appear or, conversely, disappear, that is a good reason to put questions to the analytics department.
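The threshold effect can be turned into a quick back-of-the-envelope check for the business case. The sketch below hardcodes the article's approximate figures (about 20 percentage points of loyalty per CES point below 5, about 2 per point above); the function name and the treatment of the figures as constants are illustrative assumptions, not CEB's model:

```python
def loyalty_gain(score_from: int, score_to: int) -> float:
    """Rough loyalty gain (percentage points) from raising CES 2.0,
    using the article's figures: ~20 pp per point below the threshold
    of 5, ~2 pp per point at the threshold and above."""
    gain = 0.0
    for score in range(score_from, score_to):
        gain += 20.0 if score < 5 else 2.0
    return gain

# Moving from 4 to 5 is worth ~20 pp of loyalty; from 5 to 6, only ~2 pp.
print(loyalty_gain(4, 5), loyalty_gain(5, 6))  # 20.0 2.0
```

Comparing such a gain against the cost of the improvement is exactly the "invest more or leave it" decision described above.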

CES 3.0

Experience with CES 2.0 shows that in its current form it is not informative enough for management. Example: suppose the value has fallen (by the way, the indicator can be measured via IVR or by bots in chats and messengers); we have 20 operator groups, two support lines, and a product line of 1,000 items. We see the drop and have to dig deep to find the cause, which, moreover, may not be a single one but several overlapping ones. What is needed, therefore, is a multidimensional CES 3.0, measured simultaneously across:

  • Products or product groups
  • Topics of customer questions
  • Operator groups
  • Operator group supervisors (the same people can work differently under different supervisors)
  • Ongoing marketing campaigns
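Computationally, a multidimensional CES is a matter of averaging survey scores per slice of each dimension. A minimal sketch using only the Python standard library (all field names and data here are illustrative, not an Oki-Toki API):

```python
from collections import defaultdict
from statistics import mean

# Each survey answer carries the CES score plus the dimensions in which
# it was collected (hypothetical records for the example).
answers = [
    {"score": 6, "product": "A", "group": "g1", "supervisor": "S1"},
    {"score": 3, "product": "A", "group": "g2", "supervisor": "S2"},
    {"score": 5, "product": "B", "group": "g1", "supervisor": "S1"},
    {"score": 2, "product": "B", "group": "g2", "supervisor": "S2"},
]

def ces_by(dimension: str, data: list[dict]) -> dict[str, float]:
    """Average CES for every value of one dimension."""
    buckets = defaultdict(list)
    for row in data:
        buckets[row[dimension]].append(row["score"])
    return {key: mean(scores) for key, scores in buckets.items()}

print(ces_by("group", answers))       # {'g1': 5.5, 'g2': 2.5}
print(ces_by("supervisor", answers))  # {'S1': 5.5, 'S2': 2.5}
```

The same answers, sliced by different dimensions, immediately show where the score is being dragged down.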

This multidimensionality will allow the new CES to pinpoint problems quickly. But it will also require a completely new kind of "smart" signaling: instead of BI panels and dozens of indicators on dashboards, the automation will have to issue preliminary conclusions in plain text, along the lines of: "CES fell during the campaign for product G on 07.01 in group 3 under supervisor S". In Oki-Toki you can build an automatic script for surveying clients by any of these methods and save the result to the built-in CRM or, via integrations, send it to amoCRM, Bitrix24, or your own CRM. In addition to the metric described, Oki-Toki also offers automatic speech analytics and conversation assessment questionnaires for finding and eliminating operator errors.
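As a sketch of what such "smart" signaling could look like, the snippet below compares per-slice CES between two periods and formats a ready-made conclusion for the largest drop. All data, names, and the alert wording are hypothetical, not a description of any product's actual output:

```python
# Per-slice CES for two periods, keyed by (product, group, supervisor).
previous = {("product G", "group 3", "supervisor S"): 5.8,
            ("product A", "group 1", "supervisor T"): 6.1}
current = {("product G", "group 3", "supervisor S"): 4.2,
           ("product A", "group 1", "supervisor T"): 6.0}

def worst_drop(prev: dict, cur: dict) -> str:
    """Find the slice with the largest CES decline and phrase it
    as a preliminary conclusion rather than a raw dashboard number."""
    slice_, drop = max(
        ((key, prev[key] - cur[key]) for key in cur if key in prev),
        key=lambda item: item[1],
    )
    product, group, supervisor = slice_
    return f"CES fell by {drop:.1f} for {product} in {group} under {supervisor}"

print(worst_drop(previous, current))
# CES fell by 1.6 for product G in group 3 under supervisor S
```

The point of the sketch is the output format: a sentence a manager can act on, not a chart someone still has to interpret.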
