This quarter we are focusing on improving page-load times for large conferences. Along with the data loading, we are updating the workflow and the user experience. There are a lot of changes happening under the hood, so some of these updates will be available soon while others will appear later.
The major change happening under the hood is the move to data pipeline v3. Data pipeline v2 has served us well for the last five years; however, the blitz scaling in 2020 (due to shelter-in-place orders) warranted a fresh look at the infrastructure. Furthermore, in the last year, quite a few of our customers have gone from limiting a conference call to 20 participants to hosting 500–1000 participants in a single call (a mix of real-time voice and real-time streaming).
We are re-imagining the workflows for diagnosing these large conferences, with new data tables that let you look at the issues hierarchically:
- What the issues were in the conference, aggregated across all participants
- Which users participated, from where, which networks, summary of inbound and outbound connections, media, etc
- Ability to drill down from the conference level to a single user or a subset of users based on:
- network/transport issues
- incoming media issues
- outgoing media issues
When needed, raw data will always be accessible.
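To make the drill-down idea concrete, here is a minimal sketch of filtering conference participants by issue category. The record fields, thresholds, and function names are all illustrative assumptions for this post, not the actual callstats.io schema or API:

```python
from dataclasses import dataclass

# Hypothetical participant record; the fields and thresholds below are
# illustrative, not the real callstats.io data model.
@dataclass
class Participant:
    user_id: str
    network: str
    packet_loss_pct: float      # network/transport health
    inbound_freeze_count: int   # incoming media issues
    outbound_bitrate_kbps: int  # outgoing media issues

def drill_down(participants, issue):
    """Return the subset of participants matching one issue category."""
    checks = {
        "network":  lambda p: p.packet_loss_pct > 5.0,
        "inbound":  lambda p: p.inbound_freeze_count > 0,
        "outbound": lambda p: p.outbound_bitrate_kbps < 300,
    }
    return [p for p in participants if checks[issue](p)]

conference = [
    Participant("alice", "wifi", 0.4, 0, 900),
    Participant("bob", "lte", 7.2, 3, 250),
    Participant("carol", "ethernet", 0.1, 2, 850),
]

# Drill down from the full conference to users with specific issues.
print([p.user_id for p in drill_down(conference, "network")])   # → ['bob']
print([p.user_id for p in drill_down(conference, "inbound")])   # → ['bob', 'carol']
```

The same predicate-based filtering generalizes to the "arbitrary constraints" mentioned below: any subset of users across a group of calls can be selected by composing such checks.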
The callstats team has been working tirelessly behind the scenes to get this into your hands, and we are extremely excited about these workflow improvements. Furthermore, this will be a stepping stone toward slicing and dicing the data based on arbitrary constraints. What this really means is that you will be able to break out a subset of users from a group of calls and analyse them.
We will discuss the new data pipelines in more detail in a future blog post, but I wanted to use this opportunity to highlight that this is an important update for the product and will roll out in phases.
This feature was built by Mark, Dejan, Tamas, Gowtham, Paul, Karthik, and Eljas.