Why are User Feedback and Objective Quality Important WebRTC Metrics for Contact Centers?

By Carl Blume on January 28, 2019

Many contact centers solicit feedback from users about call quality just before a call ends. Gathering user feedback gives you a direct indication of call quality from the user's perspective. However, it isn't always reliable. Not only do many customers decline to give feedback, but those who do tend to rate call quality based on other factors: the outcome of the call, whether they liked the agent, how the conversation flowed, and more. The callstats.io Objective Quality metric addresses this uncertainty.


If you’re interested in learning more about the importance of WebRTC in the call center, check out our white paper: Cloud-based Contact Centers: the WebRTC Story.

What are User Feedback and Objective Quality?

User feedback is a rating given directly by the user about the quality of the call's audio. It's usually collected by an IVR system that enters the call after the agent interaction is complete. Along with measuring NPS, agent responsiveness, and other factors, these surveys can collect feedback on overall audio quality. Many cloud-based contact center services can also collect feedback from agents within the agent application.
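
For teams instrumenting this inside a WebRTC app rather than an IVR, a minimal sketch of collecting a post-call rating and forwarding it to callstats.io might look like the following. The sendUserFeedback call and its field names reflect the callstats.js client API as we recall it; the prompt logic, rating scale handling, and the conferenceID/userID values are illustrative assumptions, so check the current API reference before relying on them.

```typescript
// Minimal sketch: collect a 1–5 post-call rating in the agent or customer app
// and forward it to callstats.io. Exact callstats.js field names may differ;
// consult the current API reference.
declare const callstats: any; // provided by the callstats.js script

function submitPostCallFeedback(conferenceID: string, userID: string): void {
  // In a real app this would be a proper UI prompt, not window.prompt().
  const rating = Number(window.prompt("Rate the audio quality of this call (1–5):"));
  if (!Number.isFinite(rating) || rating < 1 || rating > 5) {
    return; // customer declined or gave an unusable answer, which is common
  }

  const feedback = {
    userID,          // participant giving the feedback
    overall: rating, // overall experience, 1 (worst) to 5 (best)
    audio: rating,   // audio-specific rating; could be a separate question
  };

  callstats.sendUserFeedback(conferenceID, feedback, (err: string, msg: string) => {
    console.log("sendUserFeedback:", err, msg);
  });
}
```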


In contrast, the callstats.io Objective Quality metric evaluates the quality of a call by giving each individual stream a score and then aggregating those scores for each participant. It detects the technical factors that can annoy customers, including out-of-sync audio, clipped or robotic-sounding audio, delay variation, throughput, jitter, and audio speaker output.
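
callstats.io derives these signals from data reported by each endpoint. The exact model isn't published in this post, but the raw inputs are available to any WebRTC application through the standard getStats() API. A rough sketch of reading the audio-related factors mentioned above from an RTCPeerConnection, with field availability varying slightly by browser:

```typescript
// Sketch: read the raw audio-quality factors (jitter, loss, concealment,
// jitter-buffer delay) that feed an objective score. Missing fields are
// simply left undefined.
interface AudioStreamFactors {
  jitterSec?: number;        // interarrival jitter, in seconds
  packetLossRatio?: number;  // lost / (lost + received)
  concealmentRatio?: number; // concealed samples / total samples received
  avgJitterBufferSec?: number;
}

async function readAudioFactors(pc: RTCPeerConnection): Promise<AudioStreamFactors[]> {
  const report = await pc.getStats();
  const factors: AudioStreamFactors[] = [];

  report.forEach((stats: any) => {
    if (stats.type !== "inbound-rtp" || stats.kind !== "audio") return;

    const lost = stats.packetsLost ?? 0;
    const received = stats.packetsReceived ?? 0;

    factors.push({
      jitterSec: stats.jitter,
      packetLossRatio: lost + received > 0 ? lost / (lost + received) : undefined,
      concealmentRatio:
        stats.totalSamplesReceived > 0
          ? stats.concealedSamples / stats.totalSamplesReceived
          : undefined,
      avgJitterBufferSec:
        stats.jitterBufferEmittedCount > 0
          ? stats.jitterBufferDelay / stats.jitterBufferEmittedCount
          : undefined,
    });
  });

  return factors;
}
```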


By evaluating user feedback and Objective Quality together, you can more effectively identify when call quality is truly poor and dig deeper to find and address the underlying issues.
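
To make "together" concrete: a call with a low user rating but a high objective score usually points at something other than the media path (the agent, the outcome of the call), while a low objective score flags a genuine transport or media problem whether or not the customer complained. A toy triage of a finished call is sketched below; the 3.5 thresholds are illustrative assumptions, not callstats.io's.

```typescript
// Toy triage of a finished call, combining the two signals.
// Thresholds are illustrative only.
type CallVerdict =
  | "ok"
  | "media-problem"               // both signals low: investigate network/media path
  | "non-media-dissatisfaction"   // media was fine; look at agent or process factors
  | "media-problem-unreported";   // poor media the customer didn't flag

function triageCall(userRating: number | null, objectiveQuality: number): CallVerdict {
  const mediaPoor = objectiveQuality < 3.5;
  if (userRating === null) {
    // No feedback given, so act on the objective signal alone.
    return mediaPoor ? "media-problem-unreported" : "ok";
  }
  if (userRating < 3.5) {
    return mediaPoor ? "media-problem" : "non-media-dissatisfaction";
  }
  return mediaPoor ? "media-problem-unreported" : "ok";
}
```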

What Causes Low User Feedback and Objective Quality Values?

Low user feedback ratings can be caused by any number of factors. Ideally, every customer would respond and evaluate the call's audio quality, but it doesn't always happen this way. First, customer feedback is limited because some individuals simply don't respond. Second, the rating may be influenced by other things going on in the customer's life, how the conversation with the agent went, or even whether they are in a rush. Third, different customers may assign different scores to calls that deliver the same quality. These factors make user feedback highly subjective and a poor standalone indicator of call quality; in practice, low user feedback scores correlate with actual call quality better than high scores do.


Objective Quality, however, provides a common yardstick for measuring call quality based on technical data collected at each endpoint, and it overcomes the subjective biases of qualitative user feedback. It combines several metrics, such as jitter, throughput, delay variation, packet loss and concealment, audio level variations, audio output, and audio sync time, to form an accurate and direct indicator of call quality. Objective Quality values allow contact center infrastructure engineers to quickly identify when there is a problem, so they can dig deeper into other metrics and find a solution.
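
The exact weighting callstats.io applies isn't spelled out here, but the general shape of such a metric (normalize each impairment, combine the penalties, and aggregate per participant) can be sketched as follows. The weights, normalization points, and 0–5 scale below are illustrative assumptions only.

```typescript
// Illustrative only: map per-stream impairments to a 0–5 score and
// average per participant. Not callstats.io's actual model or weights.
interface StreamImpairments {
  packetLossRatio: number;   // 0..1
  jitterSec: number;         // seconds
  concealmentRatio: number;  // 0..1
  avgJitterBufferSec: number;
}

function streamScore(m: StreamImpairments): number {
  // Each term maps an impairment to a penalty in [0, 1]; weights are made up.
  const penalty =
    0.4 * Math.min(m.packetLossRatio / 0.05, 1) +   // 5% loss reaches max penalty
    0.2 * Math.min(m.jitterSec / 0.1, 1) +          // 100 ms jitter reaches max
    0.3 * Math.min(m.concealmentRatio / 0.1, 1) +   // 10% concealment reaches max
    0.1 * Math.min(m.avgJitterBufferSec / 0.4, 1);  // 400 ms buffering reaches max
  return 5 * (1 - penalty); // 5 = excellent, 0 = unusable
}

function participantScore(streams: StreamImpairments[]): number {
  if (streams.length === 0) return 5;
  const total = streams.reduce((sum, s) => sum + streamScore(s), 0);
  return total / streams.length;
}
```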

Why are User Feedback and Objective Quality Important WebRTC Metrics for Contact Centers?


User feedback is a critical metric for contact centers because it provides a direct link to how the customer felt about the call. Direct feedback from the customer is the quickest indicator of call quality. If user feedback is consistently negative, it is a sign that something is very wrong, whether with agent conversations, call quality, or some other factor. It's crucial to monitor this metric and use it to guide your contact center operations. Without engaging with it and making improvements based on it, you may find that customers become wary of interacting with your contact center. This can result in customer frustration, missed business opportunities, brand degradation, and potential customer churn.


Objective Quality gives your operations team a deeper understanding of whether calls are high quality. It removes the subjectivity of human evaluation and provides a single yardstick for evaluating call quality. This metric gives you insight before your users become so frustrated that they flood you with negative feedback, so you can address poor call quality immediately and prevent frustration and churn. It also helps your contact center agents: without poor-quality calls, they can complete calls more quickly, keep customers happier, and maintain a smoother conversation. That means more business opportunities and faster time to resolution with fewer transfers, resulting in higher customer satisfaction.


If poor audio quality is sneaking up on you, frustrating your agents and your customers, start watching user feedback and Objective Quality.


Download our latest WebRTC metrics report to learn more about Objective Quality and user feedback across the industry.

Download white paper here

Tags: WebRTC, WebRTC Verticals, Amazon Connect, Contact Centers, WebRTC Monitoring, WebRTC Metrics Report, WebRTC Use Cases