Between a Rock and a QA Hard Place

I overheard a conversation about QA at one of our clients’ sites recently:

“Have you tried out any of the new approved greetings yet?”

“Yes.  I sound like C-3PO with most of them.”

“Same.  But the other month I got dinged because I tried doing my own thing.  I thought it was fine because it was still close to the scripts but felt more natural.  QA said it wasn’t ok.”

“Well, lesson learned.  C-3PO it is!”

In all honesty, I made up that conversation.  But are similar conversations happening at contact centers across the globe?  Absolutely.  Rote QA scorecards have reps everywhere choosing between a great customer service experience and their score.  If scores impact their incentives or shift bids, they’ll stick to the script.  Or worse, they will game the system to get their high scores…and your customers pay the price.

Frankly, your reps pay the price as well.  Who wants to feel like a robot at work?  You’ve heard us talk before about how contact centers can easily become factories of sadness, and if your reps are forced to stick to the script, follow the call flow, and never think for themselves, they will leave the job.  Maybe not tomorrow, but soon.

Competencies create an effortless experience 

At Challenger, we advocate moving to a competency-based QA model.  Rather than relying on a checklist, select the competencies that matter most to your organization and define the type of service you want your reps delivering in every interaction—but leave the details for them to determine.

This causes a certain degree of discomfort for customer service leaders.  Checklists are easy.  Subjective approaches appear more complex (and scary).

We typically see a few common root causes of this discomfort, and the good news is that we can partner with you to ease those concerns.  For example, those concerns may sound like:

Too much autonomy is terrifying.  Autonomy doesn’t mean anarchy.  Each competency comes with a set of mastery levels to help you and your leadership team define what novice looks like, what expert looks like, and everything in between.  These levels serve as guardrails, leaving your reps thinking, “I know the core competencies required of me, but I can use my discretion in how I interact with customers.”

Subjective approaches are, well, subjective.  Many of our clients are already sold on the value of moving to a competency-based model but worry that one Quality Analyst will hear aptitude while another hears incompetence.  Well…couldn’t you say the same about your customers’ subjective interpretations of their experiences?

During our Quality Transformation Engagement, we hold a working session where we calibrate together across 10 calls from one core-performing rep.  Participants evaluate two calls, then we summarize the evaluations and project the results for everyone to see.  Yes, not everyone will evaluate every call perfectly, and you won’t have complete alignment.  Even so, nearly all of our clients find they are far more aligned than they expected, even after just two calls.  The process is designed to focus on trends rather than one-off mistakes.  As a result, small discrepancies in individual evaluations tend to come out in the wash across all 10 calls, which establishes a trend in that rep’s performance rather than whatever Quality might hear if they listened to only one call.

We need a pulse on overall performance, and the score gives us that.  You’re in good company if this is where your mind is already heading!  Many of our clients still assign a value to each mastery level so they can calculate their progress (e.g., novice = 1, effective = 3, expert = 5).

Some of our clients have simplified their reporting even further.  If your goal is an Effortless Experience, nearly 100% of your workforce should be at “effective” or higher for each competency.  A rating of “effective” ensures that a customer always has a consistent experience, whether they call your organization 3 times a week or 3 times a year.

Attrition and other factors mean 100% “effective” across the board will never happen, but reporting on that percentage gives leadership the pulse they need from the Quality team.  If the Quality team sees a competency starting to slip, they can ask Supervisors to root cause the issue and emphasize that competency in future coaching sessions as appropriate.

And last but not least, the final discomfort we hear from leaders when considering moving to a competency-based QA Model:

This process sounds HARD. 

Change can be hard, and overhauling an entire QA function is a journey.  But change is also invaluable.  A rolling QA stone gathers no moss.