Hi there, I made a thread the other day about some large scale webinar sessions that appear to have issues once the attendee count goes above roughly 400 people ( Large scale (250+ people) webinar sessions lead to very degraded quality and crashes? ). I haven’t gotten any responses to that thread or to any of my support emails, so I am making this new thread here.
I have not fully ruled out that the issues we’re encountering are on my application’s end. But I do see on the OpenVidu pricing page that “official” support for “large scale sessions” is still marked as coming soon, so I’m wondering if OpenVidu is currently just not optimized for these kinds of 1:400+ person live events. If that is the case, what is the timeline for official support for “large scale sessions”?
You are right: large scale sessions such as one presenter sending media to 500 subscribers are setups that are currently not optimized in OpenVidu, and quality might be affected.
The main limitation right now is that one session must be contained in a single Media Node. So for sessions with hundreds or thousands of users, that single node becomes a bottleneck.
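To make the bottleneck concrete, here is some back-of-envelope fan-out math. All the numbers (bitrate, subscriber count) are illustrative assumptions on my part, not OpenVidu internals; the point is simply that in an SFU topology the single Media Node hosting the session must forward one outbound copy of the stream per subscriber:

```python
# Rough SFU fan-out math for a 1-to-N webinar (illustrative numbers,
# not measured OpenVidu figures): the single Media Node hosting the
# session forwards one outbound stream per subscriber.
publishers = 1
subscribers = 500
video_bitrate_kbps = 1000  # assumed per-stream bitrate

outbound_streams = publishers * subscribers
egress_mbps = outbound_streams * video_bitrate_kbps / 1000

print(outbound_streams)  # 500 streams leaving one node
print(egress_mbps)       # 500.0 Mbps of egress concentrated on that node
```

Since no other node in the cluster can share that load for the same session, the whole burden lands on one machine regardless of cluster size.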
That being said, there are some things that can be done to increase the quantity and quality of streams in big sessions. Using mediasoup in OpenVidu Enterprise will provide more total capacity, as mediasoup streams require fewer resources than Kurento streams. Besides that, mediasoup supports simulcast, which ensures that each subscriber gets the best video quality their own network allows, without affecting the quality of the publisher or of other subscribers.
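The simulcast idea can be sketched in a few lines. This is my own simplified illustration, not mediasoup’s actual layer-switching algorithm, and the layer bitrates are assumed values: the publisher uploads several encodings once, and the SFU forwards to each subscriber the highest layer that fits that subscriber’s downlink.

```python
# Simplified sketch of simulcast layer selection (illustrative only,
# not mediasoup's real algorithm). The publisher sends all layers;
# the SFU picks per subscriber.
SIMULCAST_LAYERS_KBPS = [150, 500, 1500]  # assumed low/mid/high encodings

def pick_layer(downlink_kbps: int) -> int:
    """Return the best layer bitrate not exceeding the subscriber's bandwidth."""
    fitting = [layer for layer in SIMULCAST_LAYERS_KBPS if layer <= downlink_kbps]
    # A very poor subscriber still gets the lowest layer rather than nothing.
    return max(fitting) if fitting else min(SIMULCAST_LAYERS_KBPS)

print(pick_layer(2000))  # fast subscriber -> 1500
print(pick_layer(600))   # average subscriber -> 500
print(pick_layer(100))   # constrained subscriber -> 150
```

This is why one slow viewer no longer drags down the publisher or the other 499 subscribers: each receiver is served independently from the layer set.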
Of course, vertical scalability will provide more capacity in a single session. A bigger machine serving as Media Node should be able to handle more streams, and therefore bigger sessions.
Regarding OpenVidu’s roadmap, native support of large scale sessions will probably be implemented around Q4 2022.
Thanks for your response @pabloFuente .
I did notice (in the other thread I linked) that my media server did not seem close to max CPU utilization (it peaked around 25%). So I’m wondering what exactly the issue is in that case, given that you mentioned vertical scaling should provide more capacity. Could this be an issue of AWS limiting network bandwidth?
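To sanity-check my own bandwidth theory, here is a rough estimate. Every number here is an assumption (per-subscriber bitrate, instance class); the takeaway is that even when egress sits under an instance’s nominal “up to” cap, AWS instances with burstable networking have lower sustained baselines that a long webinar could exhaust:

```python
# Back-of-envelope egress estimate for a 1-to-400 session
# (all numbers assumed, not measured).
subscribers = 400
stream_mbps = 1.5          # assumed per-subscriber video+audio bitrate
instance_limit_gbps = 5    # e.g. an "up to 5 Gigabit" class instance

egress_gbps = subscribers * stream_mbps / 1000
print(egress_gbps)                        # 0.6 Gbps
print(egress_gbps < instance_limit_gbps)  # True: under the nominal cap,
# but "up to" instances throttle to a lower sustained baseline once
# network burst credits run out, which CPU graphs would never show.
```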
Did the cluster in your test have only 1 Media Node? Because the CPU load graph in Kibana represents the average load, which means that if your cluster had 4 Media Nodes and a massive session was being held in one of them, that node’s load could be 100% while the cluster average showed only 25%.
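The averaging effect is easy to see with toy numbers (illustrative, not taken from any real cluster):

```python
# Why a 25% cluster average can hide a saturated node: the dashboard
# shows the mean across Media Nodes, so one maxed-out node in a
# 4-node cluster barely moves the average (illustrative values).
node_loads = [100, 0, 0, 0]  # one node hosting the entire big session

average = sum(node_loads) / len(node_loads)
print(average)           # 25.0 -> the graph looks healthy
print(max(node_loads))   # 100  -> the node that matters is saturated
```

So it is worth checking per-node metrics rather than the cluster-wide average when diagnosing a single large session.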
I can’t say for certain at this point, but I will keep that in mind going forward. By default we only run 1 Media Node, but we do have autoscaling set up to create new Media Nodes once CPU usage hits 70%.
Very much looking forward to “official” support for large scale sessions, because our app will really need that functionality in the future.