Ping request timeouts

Hello guys

A few weeks ago a teammate contacted you via email to follow up on this error, but I preferred to post here so it can help others.

We are currently using OpenVidu Pro 2.11.0 running on AWS (a t2.large instance for both OpenVidu and Kurento).

We are getting errors in the WebSocket requests. There are a couple of users who run into this problem in every session. Here are some screenshots of the web console logs:


When this happens, the affected user stops hearing the others, and their subscribers stop hearing them. We checked the network conditions of these users:

  1. User 1, Upload 5Mbps, Download 1Mbps, Ping ~11ms
  2. User 2, Upload 30Mbps, Download 30Mbps, Ping ~6ms

At the time of the test with these users there was only 1 active session with 6 participants. Every participant publishes their audio stream; on average, 4 are muted and 2 have their mics open.

Kibana reports up to 6% CPU usage and 34% memory usage (the same usage as when there are no active sessions).

We asked the users to run a test in the OpenVidu Pro monitor. In that test they did not get as many timeouts as they do in a session of our application, and when a timeout was detected the WebSocket reconnection succeeded.

To track down the source of this problem, you recommended that we check the KMS and OpenVidu logs. I have some questions related to this and to the problem itself:

  1. Should we run KMS and OpenVidu with a specific flag in order to get the proper information?

  2. Is it possible to display these logs in Kibana?

  3. What could be the reason WebSocket reconnection is not succeeding in our application?

  4. What other information can we provide you in order to solve this problem?

Thanks for your attention :slight_smile:

Hi,

  1. Running OpenVidu Server with the default logging configuration should be sufficient to get any useful log upon errors. Kurento logging can be configured in more depth by following these instructions: https://doc-kurento.readthedocs.io/en/latest/features/logging.html#logging-levels-and-components

  2. These logs do not end up in Kibana automatically. That can be done, but it must be implemented outside of OpenVidu, preferably with Logstash.

3/4) Some further information about your setup would help in solving your problems:

  • Are you deploying OpenVidu Pro on AWS through the Marketplace by following these exact instructions (any deviation from them is important to know)? https://openvidu.io/docs/openvidu-pro/deploying-openvidu-pro/
  • The couple of users that are having problems: which exact browser are they using? We have sometimes seen that “ICE failed” error when OpenVidu Server networking is not completely right and certain versions of Firefox are used. If they use another browser, do their connections work fine?

Regards

Hi Pablo, sorry for the delay, but we’re in the process of configuring ElasticSearch + Logstash in order to get more information about the problems.

We want to run some preliminary tests with this tool to check the ICE state with some users. We would like to use our TURN/STUN server address for these tests. Where can we find this information?

You can set your own custom ICE server configuration in the browser using the OpenVidu.setAdvancedConfiguration method: https://openvidu.io/api/openvidu-browser/classes/openvidu.html#setadvancedconfiguration
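In case it helps, here is a minimal sketch of what that call could look like (the host, port and credentials are placeholders, not values taken from your deployment):

```typescript
import { OpenVidu } from 'openvidu-browser';

const OV = new OpenVidu();

// Placeholder STUN/TURN configuration: replace the host, port and credentials
// with your own values. This overrides the ICE servers the browser will use.
OV.setAdvancedConfiguration({
  iceServers: [
    { urls: 'stun:YOUR_TURN_HOST:3478' },
    {
      urls: 'turn:YOUR_TURN_HOST:3478',
      username: 'YOUR_TURN_USERNAME',
      credential: 'YOUR_TURN_CREDENTIAL'
    }
  ]
});

// Initialize the session as usual afterwards
const session = OV.initSession();
```

As far as I understand, this only changes what the browser offers as ICE servers; it does not reconfigure anything on the OpenVidu deployment itself.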

That is if you want to use custom ICE servers in OpenVidu, of course. What do you mean when you say “Where can we find this information?”

OK, from reading this post I thought a TURN/STUN server was already configured when we deployed OpenVidu Pro, and I wanted to check the address of that TURN/STUN server, but now I don’t know if that is correct :sweat_smile:

Sure, the OpenVidu Pro stack deploys a COTURN server alongside OpenVidu Server.
The public IP of the COTURN server is the same as that of OpenVidu Server Pro. The port is the default one (3478).


STUN will be available immediately. For testing TURN capabilities, you need a pair of credentials. The only way to get them is by generating a token (you will see them inside the token). That allows OpenVidu Server to automatically manage the TURN credentials lifecycle for users, as you will have read in the Medium article.
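Once you have read the username and credential out of a token, a quick way to exercise them is to force relay-only candidate gathering and check whether any candidates show up. This is a rough sketch using the standard WebRTC API, nothing OpenVidu-specific; the helper name and placeholder values are made up:

```typescript
// Rough sketch: force relay-only candidate gathering against the COTURN server.
// If any candidate is logged, the TURN server accepted the credentials.
// Host, username and credential are placeholders read from an OpenVidu token.
async function testTurnRelay(host: string, username: string, credential: string): Promise<void> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: `turn:${host}:3478`, username, credential }],
    iceTransportPolicy: 'relay' // only gather candidates that go through TURN
  });

  pc.onicecandidate = (event) => {
    if (event.candidate) {
      console.log('Relay candidate gathered:', event.candidate.candidate);
    } else {
      console.log('ICE gathering finished');
    }
  };

  // A data channel is enough to trigger ICE gathering without any media
  pc.createDataChannel('turn-test');
  await pc.setLocalDescription(await pc.createOffer());
}

// Example usage with placeholder values:
// testTurnRelay('YOUR_OPENVIDU_PUBLIC_IP', 'TURN_USERNAME_FROM_TOKEN', 'TURN_CREDENTIAL_FROM_TOKEN');
```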

Thank you Pablo :slight_smile:

Another quick question: in case we get an ICE failed error, could the information provided by chrome://webrtc-internals/ (in Chrome) and about:webrtc (in Firefox) help us diagnose the problem? I just want to be sure we have enough information when we perform our tests.

Sure, both those pages are really useful when debugging WebRTC connections.
To debug a STUN/TURN server, I really recommend this page: https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/

It allows you to test that your STUN/TURN server is in fact working fine.
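If you prefer to script that check, something along these lines does roughly what the trickle-ICE page does and reports which candidate types were gathered ('srflx' means STUN worked, 'relay' means TURN worked). It is only a sketch using the plain WebRTC API; the function name and placeholder values are our own:

```typescript
// Rough sketch of a trickle-ICE style check: gather ICE candidates against the
// given servers and report which candidate types were produced
// ('host', 'srflx' for STUN, 'relay' for TURN).
async function gatherCandidateTypes(servers: RTCIceServer[]): Promise<Set<string>> {
  const types = new Set<string>();
  const pc = new RTCPeerConnection({ iceServers: servers });

  const gatheringDone = new Promise<void>((resolve) => {
    pc.onicecandidate = (event) => {
      if (!event.candidate) {
        resolve(); // a null candidate means gathering has finished
        return;
      }
      // The candidate type is encoded in the SDP string, e.g. "... typ srflx ..."
      const match = /typ (\w+)/.exec(event.candidate.candidate);
      if (match) {
        types.add(match[1]);
      }
    };
  });

  pc.createDataChannel('ice-test');
  await pc.setLocalDescription(await pc.createOffer());
  await gatheringDone;
  pc.close();
  return types;
}

// Example usage with placeholder values:
// gatherCandidateTypes([
//   { urls: 'stun:YOUR_OPENVIDU_IP:3478' },
//   { urls: 'turn:YOUR_OPENVIDU_IP:3478', username: 'USER', credential: 'PASS' }
// ]).then((types) => console.log('Candidate types found:', [...types]));
```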