A question we frequently hear from customers is: How can I measure the audio quality of a WebRTC call? It is frequently asked by contact center operations teams with a background in legacy VoIP communications. They are familiar with voice quality estimation tools that produce a MOS estimate and seek a comparable tool for estimating the quality of experience for modern WebRTC audio calls.
In this blog I’ll review some of the limitations of traditional VoIP call quality estimation methods, and explain how our Objective Quality metric can help you estimate quality of experience for WebRTC audio services.
Can Traditional MOS Models be Applied to WebRTC?
Mean Opinion Score (MOS) has its roots in the analog telephony services prevalent in the middle decades of the last century, when carriers used groups of people to listen to and rate the quality of calls. A MOS is the mean of the ratings assigned by the listeners, typically rated on an “Absolute Category Rating” (ACR) scale of 1 (poor) to 5 (excellent). Of course, this approach became impractical and prohibitively expensive, yet the need to understand quality of experience hasn’t diminished.
In response, engineers developed algorithms that render an audio quality estimate based on Mean Opinion Score (MOS) values. For example, the E-model is a popular standards-based algorithm for estimating audio quality in VoIP networks. Described in ITU-T G.107, the E-model combines media and system characteristics to determine a transmission quality rating factor (R-factor) for a call. The R-factor is then mapped to a subjective MOS.
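To make the R-factor-to-MOS relationship concrete, here is a minimal sketch of the mapping defined in ITU-T G.107, which clamps R below 0 and above 100 and applies a cubic correction in between (the function name is our own, not from the standard):

```python
def r_to_mos(r: float) -> float:
    """Map an E-model transmission rating factor (R) to an estimated MOS,
    per the mapping given in ITU-T G.107."""
    if r < 0:
        return 1.0      # unusable quality floors at MOS 1
    if r > 100:
        return 4.5      # MOS saturates at 4.5 for the narrowband E-model
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6
```

For example, the default E-model parameters for a G.711 call yield an R-factor of roughly 93.2, which this mapping converts to a MOS of about 4.4.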
The E-model can provide a relatively good estimate of call quality in legacy VoIP networks that use traditional fixed-rate, narrowband audio codecs like G.711 and certain wideband and full-band codecs. WebRTC, however, uses the Opus codec by default. Opus is a modern codec with multiple modes that can change dynamically. This use case was not contemplated in the E-model. In fact, there are no industry-standard algorithms for estimating quality of experience in real-time for Opus-based WebRTC sessions, nor is there a leading model in the scientific literature, to the best of our knowledge.
callstats.io Objective Quality Scoring Specifically Conceived for WebRTC
At callstats.io, we recognize that organizations operating contact centers, collaboration services, and other types of applications rely on WebRTC to deliver business-critical communications. And we know that poor audio quality can lead to customer frustration and lost business. To help customers monitor quality of experience, callstats.io has developed the Objective Quality (OQ) family of quality estimation metrics to assess the quality of every call monitored by our platform. We score each call on a scale of 0.0 (poor) to 3.5 (excellent) using an innovative algorithm based on fundamental relationships between network performance and the user’s perception of quality.
OQ is calculated at frequent intervals during an active call, ranging from ten to thirty seconds depending on configuration, and then carefully aggregated over the duration of the call. This matters because averaging scores calculated over a long interval, or over the entire length of a call, can be meaningless. For example, call quality can be quite good for an extended period (say, 3 minutes), then degrade so badly that the users disconnect (say, over the final 20 seconds). The average quality score for the entire length of such a call might appear acceptable, obscuring the fact that the call was terminated because of poor quality.
Armed with an OQ histogram, operations teams can evaluate not only the severity, but also the duration of low-quality episodes. An OQ histogram also allows you to correlate periods of poor quality with other factors, such as network congestion.
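The averaging pitfall is easy to illustrate with a toy example. The sketch below uses made-up per-interval scores (the real OQ estimator and its aggregation are proprietary); it shows how a call-level mean can look fine while a histogram of the same intervals exposes a low-quality tail:

```python
from collections import Counter
from statistics import mean

# Hypothetical per-interval OQ scores (0.0-3.5), one per 10-second
# interval: ~3 minutes of good quality, then a 20-second degradation
# right before the users hang up.
interval_scores = [3.3] * 18 + [0.4, 0.3]

# A single call-level average looks acceptable (~3.0 out of 3.5)...
overall = mean(interval_scores)

# ...but a histogram of the same intervals exposes the poor tail:
# 18 intervals near the top of the scale, 2 near the bottom.
buckets = Counter(round(s) for s in interval_scores)
```

This is why scoring short intervals and keeping their distribution, rather than a single average, lets operations teams see both the severity and the duration of quality drops.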
OQ comprises a family of estimators that support the audio and video codecs specified by WebRTC, plus screen sharing. The value displayed in the callstats.io dashboard for each call aggregates the scores produced by the estimators active during the call. For audio calls, only the audio estimator is used to produce the OQ score.
Customers rely on OQ to identify quality issues with their communications services. They use it to flag problem calls for investigation and to monitor the overall performance of their service.
Advancing QoE Estimators for WebRTC
We introduced OQ, the industry’s first quality estimation metric for WebRTC, in 2014. Currently at version 3, OQ continues to evolve. callstats.io has dedicated a team of engineers to advancing our WebRTC quality estimators: increasing their accuracy with support for the Opus, VP8, and VP9 codecs, and making them more actionable.
We are contributing our QoE research to the industry through our involvement in scientific forums. Most recently, callstats.io delivered the keynote at the QoE-Management 2019 workshop, co-located with the Innovation in Clouds, Internet and Networks conference in Paris.
QoE for WebRTC is an exciting topic and we look forward to sharing many further developments with you in this space.