Quality assurance (QA) has long been considered one of those necessary evils of the contact center world. The list of reasons why reps have historically disliked QA is long and complicated, but it primarily boils down to the fact that QA was originally invented in a manufacturing context to catch flaws and mistakes. And while it is important that each widget on the assembly line of an automobile plant is attached to the car in the exact same way, the same simply cannot be said about the contact center—at least not anymore. Yes, there was a time when the call center felt a lot like an assembly line…repeat the same script, solve the same rote problems, and do it as quickly as possible. You may have heard us describe that kind of environment as the “factory of sadness.” In the factory of sadness, QA exists as a checklist to ensure a “consistent” customer experience. What it quickly becomes is a big-brother listening tool designed to catch a rep saying something wrong or missing a step in a process. Not only does this drive disengagement, but it does nothing to help your reps improve their skills.
Good news! You don’t have to be the factory of sadness. Your reps aren’t assembly line workers—they are knowledge workers serving customers with complex issues and high expectations. But what we find is that the QA process is slow to change, and in many organizations it doesn’t reflect the type of experience that leaders know their customers want. I’d be willing to bet that, in the last 12 months, you or someone in your service organization has listened to a “perfect call” according to your QA scorecard that was far, far removed from “perfect” in the eyes of the customer.
What we need now is a QA evaluation that is focused on behaviors, not check-the-box items like saying the customer’s name three times. Here at Challenger, we work with organizations to adopt this type of behavior-based QA framework, starting with a diagnostic to help select the right competencies for each organization. Having done this with a number of companies over the last five years or so, the Effortless Experience team spent some time this fall reviewing our list of competency options and culling it down to the ones that are most used by our clients and have the biggest impact on the customer experience. We also wanted to simplify areas of overlap or interdependency between competencies that can lead to the dreaded “double-ding” during an evaluation. Some of the competencies that got cut were not surprising (e.g., “resilience,” which is a fantastic trait but proves exceptionally difficult to assess via QA listening). On the other hand, there were a few eliminations that felt a little, well, BOLD. Case in point: we’ve removed “issue diagnosis” AND “issue resolution.”
“Issue resolution” was the less scandalous choice of the two. Clients frequently observe that so many things are outside a rep’s control when it comes to issue resolution that it just doesn’t seem fair to have it on the scorecard—and sometimes the QA team has no way to verify that the issue was fully resolved at all. Furthermore, issue resolution can feel a bit binary (you either resolved the issue or you didn’t), and the goal of a behavior-based framework like ours is to evaluate how well a behavior was demonstrated, not just whether it was demonstrated at all.
“Issue diagnosis,” however, was a little harder to let go. After all, reps need to correctly identify an issue before they can successfully solve it for the customer. Ultimately, as we examined the overlap with some of our other competencies, we determined that issue diagnosis is actually an outcome, not a behavior in and of itself. It depends on a host of other skills and behaviors (e.g., product knowledge, purposeful small talk, acknowledging baggage, active listening), so moving forward we’ll focus our clients on addressing those root-cause behaviors. This increases the chances that the QA evaluation actually improves rep performance instead of just pointing out errors.
Removing issue diagnosis from the scorecard and focusing on the root-cause behaviors instead is also a “win” for your supervisors, because your QA outputs are your coaching inputs. If we feed supervisors a QA evaluation focused solely on metrics or outcomes (like issue diagnosis or issue resolution), we increase the chances that “spreadsheet coaching” will take place instead of behavior-based coaching.
Ultimately, issue diagnosis and issue resolution both remain extremely important to the overall customer service experience. But if the goal of QA is to find areas where reps can improve their behaviors to drive better outcomes for customers, then we should focus the evaluation on behaviors alone.