OpenVidu-Call not working properly

I’ve just tested again and it works for me.

@pabloFuente, could you please test OpenVidu Call at https://www.fourmodules.com:4443/?

Regards

Yes, working perfectly fine for me between a laptop and a phone.

Yes, after opening those ports it is now working fine on my end as well.
Thanks

Just to recap and post a proper solution in this forum thread. If you have problems receiving remote video streams:

Please open all the ports on your server(s) exactly as stated in the deployment documentation.
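For example, on an Ubuntu host protected by ufw, the rules would look roughly like the sketch below. The port list reflects the 2.x on-premises guide as I remember it; treat it as an assumption and double-check the deployment documentation for your exact version, since the ranges may differ:

```shell
# Illustrative only -- confirm the exact port list against the deployment docs
sudo ufw allow 22/tcp            # SSH
sudo ufw allow 80/tcp            # HTTP (redirect + Let's Encrypt validation)
sudo ufw allow 443/tcp           # HTTPS (OpenVidu server + app)
sudo ufw allow 3478/tcp          # TURN
sudo ufw allow 3478/udp          # TURN
sudo ufw allow 40000:57000/tcp   # media connections
sudo ufw allow 40000:57000/udp   # media connections
sudo ufw allow 57001:65535/tcp   # TURN relay
sudo ufw allow 57001:65535/udp   # TURN relay
```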

Yes, especially when the browser console shows the error “ICE failed, your TURN server appears to be broken”.

Hello there,

I think I have a similar issue with the call app. I read through the posts and tried all the proposed solutions, but I have not been able to make it work.

Here are the app logs from my 2 last tests (I got the same result in the previous ones):

root@m-ln-openvisu:/opt/openvidu# docker-compose logs app
Attaching to openvidu_app_1
app_1 | OPENVIDU URL: https://172.17.0.1:433
app_1 | OPENVIDU SECRET: mysec
app_1 | Restarting nginx: nginx.

root@m-ln-openvisu:/opt/openvidu# docker-compose logs app
Attaching to openvidu_app_1
app_1 | OPENVIDU URL: http://172.17.0.1:5443
app_1 | OPENVIDU SECRET: mysec
app_1 | Restarting nginx: nginx.

From the logs in the OpenVidu server, it seems that the application is not able to locate the OpenVidu server, since I don't see any request for a new session.

The error in the application is:

Thank you.
Regards
Antonello

Hello Antonello,

Have you executed this command to get the Docker IP?

ip -4 addr show docker0 | grep -Po 'inet \K[\d.]+'

Can you show us the entire application log, so we can see why the OpenVidu Call backend is not able to connect to openvidu-server?
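As an aside, the `\K` in that Perl-compatible pattern makes grep discard everything matched so far, so only the IPv4 address after "inet " is printed. A quick sanity check with a canned sample line:

```shell
# Feed grep a sample "ip -4 addr show docker0" output line to see what \K keeps:
echo "    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0" \
  | grep -Po 'inet \K[\d.]+'
# prints 172.17.0.1  (the /16 suffix is cut off because / is not in [\d.])
```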

Hello Micael,

sorry for the delay; here is the required info:

ip -4 addr show docker0 | grep -Po 'inet \K[\d.]+'
172.17.0.1

Here is the output of docker-compose logs after I tried to use the app. The domain and password have been changed; if you need access to my server, please tell me and I will send the information in a private conversation. I restarted the server, added the media node and tried to use the app.

Creating network "openvidu_default" with the default driver
Creating openvidu_openvidu-server_1 ...
Creating openvidu_redis_1 ...
Creating openvidu_elasticsearch_1 ...
Creating openvidu_kibana_1 ...
Creating openvidu_coturn_1 ...
Creating openvidu_app_1 ...
Creating openvidu_nginx_1 ...
Creating openvidu_coturn_1 ... done
Creating openvidu_nginx_1 ... done
Creating openvidu_openvidu-server_1 ... done
Creating openvidu_redis_1 ... done
Creating openvidu_kibana_1 ... done
Creating openvidu_app_1 ... done
Creating openvidu_elasticsearch_1 ... done
Attaching to openvidu_openvidu-server_1
openvidu-server_1 |
openvidu-server_1 |
openvidu-server_1 | =======================================
openvidu-server_1 | = WAIT KIBANA =
openvidu-server_1 | =======================================
openvidu-server_1 |
openvidu-server_1 | Waiting for kibana in 'http://127.0.0.1:5601/app/kibana' URL...
openvidu-server_1 | Waiting for kibana in 'http://127.0.0.1:5601/app/kibana' URL...
...[OMITTED]
openvidu-server_1 | Waiting for kibana in 'http://127.0.0.1:5601/app/kibana' URL...
openvidu-server_1 | Waiting for kibana in 'http://127.0.0.1:5601/app/kibana' URL...
openvidu-server_1 | ==== Kibana is Ready ====
openvidu-server_1 |
openvidu-server_1 | =======================================
openvidu-server_1 | = LAUNCH OPENVIDU-SERVER =
openvidu-server_1 | =======================================
openvidu-server_1 |
openvidu-server_1 | ______________________________________________________________
openvidu-server_1 | [OpenVidu ASCII-art logo, garbled in the paste]
openvidu-server_1 | version 2.14.0
openvidu-server_1 | ______________________________________________________________
openvidu-server_1 |
openvidu-server_1 | [INFO] 2020-05-27 08:05:01,703 [main] io.openvidu.server.pro.OpenViduServerPro - Starting OpenViduServerPro on m-ln-openvidu-server with PID 260 (/opt/openvidu/openvidu-server.jar started by root in /opt/openvidu)
openvidu-server_1 | [INFO] 2020-05-27 08:05:01,716 [main] io.openvidu.server.pro.OpenViduServerPro - No active profile set, falling back to default profiles: default
openvidu-server_1 | [INFO] 2020-05-27 08:05:02,397 [main] io.openvidu.server.config.OpenviduConfig - Configuration properties read from file /opt/openvidu/.env
openvidu-server_1 | [INFO] 2020-05-27 08:05:02,734 [main] io.openvidu.server.pro.OpenViduServerPro - Started OpenViduServerPro in 5.556 seconds (JVM running for 9.97)
openvidu-server_1 | [INFO] 2020-05-27 08:05:02,760 [main] io.openvidu.server.OpenViduServer -
openvidu-server_1 |
openvidu-server_1 |
openvidu-server_1 | Configuration properties
openvidu-server_1 | ------------------------
openvidu-server_1 |
openvidu-server_1 | * CERTIFICATE_TYPE=letsencrypt
openvidu-server_1 | * DOMAIN_OR_PUBLIC_IP=ov.mysuperdomain
openvidu-server_1 | * HTTPS_PORT=443
openvidu-server_1 | * KMS_URIS=[]
openvidu-server_1 | * OPENVIDU_CDR=false
openvidu-server_1 | * OPENVIDU_CDR_PATH=/opt/openvidu/cdr
openvidu-server_1 | * OPENVIDU_PRO_CLUSTER=true
openvidu-server_1 | * OPENVIDU_PRO_CLUSTER_AUTODISCOVERY=true
openvidu-server_1 | * OPENVIDU_PRO_CLUSTER_ENVIRONMENT=on_premise
openvidu-server_1 | * OPENVIDU_PRO_CLUSTER_ID=
openvidu-server_1 | * OPENVIDU_PRO_CLUSTER_LOAD_STRATEGY=streams
openvidu-server_1 | * OPENVIDU_PRO_CLUSTER_MEDIA_NODES=1
openvidu-server_1 | * OPENVIDU_PRO_CLUSTER_MODE=manual
openvidu-server_1 | * OPENVIDU_PRO_CLUSTER_PATH=/opt/openvidu/cluster
openvidu-server_1 | * OPENVIDU_PRO_CLUSTER_TEST=false
openvidu-server_1 | * OPENVIDU_PRO_LICENSE=XXXXXXX
openvidu-server_1 | * OPENVIDU_PRO_STATS_MONITORING_INTERVAL=30
openvidu-server_1 | * OPENVIDU_PRO_STATS_WEBRTC_INTERVAL=30
openvidu-server_1 | * OPENVIDU_RECORDING=false
openvidu-server_1 | * OPENVIDU_RECORDING_AUTOSTOP_TIMEOUT=120
openvidu-server_1 | * OPENVIDU_RECORDING_COMPOSED_URL=
openvidu-server_1 | * OPENVIDU_RECORDING_CUSTOM_LAYOUT=/opt/openvidu/custom-layout
openvidu-server_1 | * OPENVIDU_RECORDING_NOTIFICATION=publisher_moderator
openvidu-server_1 | * OPENVIDU_RECORDING_PATH=/opt/openvidu/recordings
openvidu-server_1 | * OPENVIDU_RECORDING_PUBLIC_ACCESS=false
openvidu-server_1 | * OPENVIDU_RECORDING_VERSION=2.9.0
openvidu-server_1 | * OPENVIDU_SECRET=mysupersecret
openvidu-server_1 | * OPENVIDU_SESSIONS_GARBAGE_INTERVAL=900
openvidu-server_1 | * OPENVIDU_SESSIONS_GARBAGE_THRESHOLD=3600
openvidu-server_1 | * OPENVIDU_STREAMS_VIDEO_MAX_RECV_BANDWIDTH=1000
openvidu-server_1 | * OPENVIDU_STREAMS_VIDEO_MAX_SEND_BANDWIDTH=1000
openvidu-server_1 | * OPENVIDU_STREAMS_VIDEO_MIN_RECV_BANDWIDTH=300
openvidu-server_1 | * OPENVIDU_STREAMS_VIDEO_MIN_SEND_BANDWIDTH=300
openvidu-server_1 | * OPENVIDU_WEBHOOK=false
openvidu-server_1 | * OPENVIDU_WEBHOOK_ENDPOINT=
openvidu-server_1 | * OPENVIDU_WEBHOOK_EVENTS=[sessionCreated,sessionDestroyed,participantJoined,participantLeft,webrtcConnectionCreated,webrtcConnectionDestroyed,recordingStatusChanged,filterEventDispatched,mediaNodeStatusChanged]
openvidu-server_1 | * OPENVIDU_WEBHOOK_HEADERS=[]
openvidu-server_1 |
openvidu-server_1 |
openvidu-server_1 |
openvidu-server_1 | [WARN] 2020-05-27 08:05:02,780 [main] io.openvidu.server.pro.OpenViduServerPro - You have set property server.port (or SERVER_PORT). This will serve OpenVidu Server Pro on your host at port 5443. But property HTTPS_PORT (443) still configures the port that should be used to connect to OpenVidu Server from outside. Bear this in mind when configuring a proxy in front of OpenVidu Server
openvidu-server_1 | [INFO] 2020-05-27 08:05:03,151 [main] io.openvidu.server.pro.OpenViduServerPro - Starting OpenViduServerPro on m-ln-openvidu-server with PID 260 (/opt/openvidu/openvidu-server.jar started by root in /opt/openvidu)
openvidu-server_1 | [INFO] 2020-05-27 08:05:03,154 [main] io.openvidu.server.pro.OpenViduServerPro - No active profile set, falling back to default profiles: default
openvidu-server_1 | [INFO] 2020-05-27 08:05:10,709 [main] io.openvidu.server.config.OpenviduConfig - Configuration properties read from file /opt/openvidu/.env
openvidu-server_1 | [INFO] 2020-05-27 08:05:12,343 [main] org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat initialized with port(s): 5443 (http)
openvidu-server_1 | [INFO] 2020-05-27 08:05:12,381 [main] org.apache.coyote.http11.Http11NioProtocol - Initializing ProtocolHandler ["http-nio-0.0.0.0-5443"]
openvidu-server_1 | [INFO] 2020-05-27 08:05:12,388 [main] org.apache.catalina.core.StandardService - Starting service [Tomcat]
openvidu-server_1 | [INFO] 2020-05-27 08:05:12,389 [main] org.apache.catalina.core.StandardEngine - Starting Servlet engine: [Apache Tomcat/9.0.30]
openvidu-server_1 | [INFO] 2020-05-27 08:05:12,602 [main] org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext
openvidu-server_1 | [INFO] 2020-05-27 08:05:12,603 [main] org.springframework.web.context.ContextLoader - Root WebApplicationContext: initialization completed in 9284 ms
openvidu-server_1 | [INFO] 2020-05-27 08:05:13,153 [main] io.openvidu.server.config.OpenviduConfig - Configuration properties read from file /opt/openvidu/.env
openvidu-server_1 | [INFO] 2020-05-27 08:05:13,236 [main] io.openvidu.server.pro.OpenViduServerPro - OpenVidu Pro is deployed in 'on_premise' environment
openvidu-server_1 | [WARN] 2020-05-27 08:05:13,236 [main] io.openvidu.server.pro.OpenViduServerPro - OpenVidu Pro cluster mode is 'manual'. There will be no automatic instances management
openvidu-server_1 | [INFO] 2020-05-27 08:05:13,310 [main] io.openvidu.server.pro.OpenViduServerPro - OpenVidu Pro cluster mode enabled
openvidu-server_1 | [INFO] 2020-05-27 08:05:13,331 [main] io.openvidu.server.pro.OpenViduServerPro - Cluster identifier got from disk: clu_XpmfkZJC
openvidu-server_1 | [INFO] 2020-05-27 08:05:13,342 [main] io.openvidu.server.pro.OpenViduServerPro - OpenVidu Pro is using 'streams' load strategy
openvidu-server_1 | [INFO] 2020-05-27 08:05:13,368 [main] io.openvidu.server.pro.OpenViduServerPro - OpenVidu Pro Elasticsearch service is enabled
openvidu-server_1 | [INFO] 2020-05-27 08:05:15,016 [main] io.openvidu.server.pro.cdr.CDRLoggerElasticSearch - Elasticsearch is accessible at 127.0.0.1:9200
openvidu-server_1 | [INFO] 2020-05-27 08:05:15,105 [main] io.openvidu.server.pro.cdr.CDRLoggerElasticSearch - Elasticsearch version is 7.6.2
openvidu-server_1 | [INFO] 2020-05-27 08:05:15,171 [main] io.openvidu.server.pro.cdr.CDRLoggerElasticSearch - Elasticsearch index "openvidu" already exists
openvidu-server_1 | [INFO] 2020-05-27 08:05:15,172 [main] io.openvidu.server.pro.OpenViduServerPro - OpenVidu Pro CDR service is disabled (may be enabled with 'OPENVIDU_CDR=true')
openvidu-server_1 | [INFO] 2020-05-27 08:05:15,172 [main] io.openvidu.server.pro.OpenViduServerPro - OpenVidu Pro Webhook service is disabled (may be enabled with 'OPENVIDU_WEBHOOK=true')
openvidu-server_1 | [INFO] 2020-05-27 08:05:15,180 [main] io.openvidu.server.kurento.kms.KmsManager - OpenVidu Server Pro is deployed with 'OPENVIDU_PRO_CLUSTER_MODE' set to 'manual'. Initializing Media Nodes defined in parameter 'KMS_URIS': []
openvidu-server_1 | [INFO] 2020-05-27 08:05:15,188 [main] io.openvidu.server.kurento.kms.KmsManager - No KMS uris were defined
openvidu-server_1 | [INFO] 2020-05-27 08:05:15,365 [main] io.openvidu.server.recording.service.RecordingManager - OpenVidu recording service is disabled
openvidu-server_1 | [INFO] 2020-05-27 08:05:15,523 [main] io.openvidu.server.coturn.CoturnCredentialsService - COTURN IP: 95.39.37.107
openvidu-server_1 | [INFO] 2020-05-27 08:05:15,523 [main] io.openvidu.server.coturn.CoturnCredentialsService - COTURN Redis DB accessible with string "ip=127.0.0.1 dbname=0 password=turn connect_timeout=30"
openvidu-server_1 | [INFO] 2020-05-27 08:05:15,523 [main] io.openvidu.server.coturn.CoturnCredentialsService - Cleaning COTURN DB...
openvidu-server_1 | [INFO] 2020-05-27 08:05:15,524 [main] io.openvidu.server.coturn.CoturnCredentialsService - Path of COTURN log files: /var/log/
openvidu-server_1 | [ERROR] 2020-05-27 08:05:15,544 [main] io.openvidu.server.coturn.CoturnCredentialsService - COTURN DB is not empty
openvidu-server_1 | [INFO] 2020-05-27 08:05:15,545 [main] io.openvidu.server.coturn.CoturnCredentialsService - Using COTURN credentials service for BASH environment
openvidu-server_1 | [INFO] 2020-05-27 08:05:15,574 [main] io.openvidu.server.pro.utils.GeoLocationByIpPro - Trying to load user location database…
openvidu-server_1 | [INFO] 2020-05-27 08:05:16,962 [Thread-3] io.openvidu.server.pro.account.LambdaService - License GET HTTP status: HTTP/1.1 200 OK
openvidu-server_1 | [INFO] 2020-05-27 08:05:17,343 [main] io.openvidu.server.pro.utils.GeoLocationByIpPro - Database was loaded successfully
openvidu-server_1 | [INFO] 2020-05-27 08:05:17,363 [main] io.openvidu.server.core.SessionManager - Garbage collector for non active sessions initialized. Running every 900 seconds and cleaning up non active Sessions more than 3600 seconds old
openvidu-server_1 | [INFO] 2020-05-27 08:05:17,628 [main] org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler - Initializing ExecutorService 'jsonrpcTaskScheduler'
openvidu-server_1 | [INFO] 2020-05-27 08:05:17,837 [main] io.openvidu.server.pro.config.KibanaConfig - Kibana is accessible at http://127.0.0.1/kibana
openvidu-server_1 | [INFO] 2020-05-27 08:05:17,838 [main] io.openvidu.server.pro.config.KibanaConfig - Kibana version is 7.6.2
openvidu-server_1 | [INFO] 2020-05-27 08:05:17,922 [main] io.openvidu.server.pro.config.KibanaConfig - Kibana already has default dashboards imported [OpenVidu Recordings, OpenVidu Sessions]
openvidu-server_1 | [INFO] 2020-05-27 08:05:18,536 [main] org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor - Initializing ExecutorService 'applicationTaskExecutor'
openvidu-server_1 | [INFO] 2020-05-27 08:05:18,804 [main] org.springframework.boot.autoconfigure.web.servlet.WelcomePageHandlerMapping - Adding welcome page: class path resource [static/index.html]
openvidu-server_1 | [INFO] 2020-05-27 08:05:19,387 [main] org.springframework.security.web.DefaultSecurityFilterChain - Creating filter chain: any request, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@15888343, org.springframework.security.web.context.SecurityContextPersistenceFilter@5965d37, org.springframework.security.web.header.HeaderWriterFilter@6f01b95f, org.springframework.web.filter.CorsFilter@7cc0cdad, org.springframework.security.web.authentication.logout.LogoutFilter@740fb309, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@d554c5f, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@4201c465, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@710636b0, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@33ecda92, org.springframework.security.web.session.SessionManagementFilter@1a245833, org.springframework.security.web.access.ExceptionTranslationFilter@5c80cf32, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@4b8d604b]
openvidu-server_1 | [INFO] 2020-05-27 08:05:19,397 [main] org.springframework.security.web.DefaultSecurityFilterChain - Creating filter chain: any request, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4b520ea8, org.springframework.security.web.context.SecurityContextPersistenceFilter@69c81773, org.springframework.security.web.header.HeaderWriterFilter@dc7df28, org.springframework.web.filter.CorsFilter@7cc0cdad, org.springframework.security.web.authentication.logout.LogoutFilter@25df00a0, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@5a5a729f, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@4d14b6c2, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@7e990ed7, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@16150369, org.springframework.security.web.session.SessionManagementFilter@30f842ca, org.springframework.security.web.access.ExceptionTranslationFilter@55de24cc, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@97e93f1]
openvidu-server_1 | [INFO] 2020-05-27 08:05:19,495 [main] org.apache.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-0.0.0.0-5443"]
openvidu-server_1 | [INFO] 2020-05-27 08:05:19,578 [main] org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat started on port(s): 5443 (http) with context path ''
openvidu-server_1 | [INFO] 2020-05-27 08:05:19,580 [main] io.openvidu.server.pro.OpenViduServerPro - Started OpenViduServerPro in 16.711 seconds (JVM running for 26.816)
openvidu-server_1 | [INFO] 2020-05-27 08:05:19,582 [main] io.openvidu.server.OpenViduServer -
openvidu-server_1 |
openvidu-server_1 | ----------------------------------------------------
openvidu-server_1 |
openvidu-server_1 | OpenVidu is ready!
openvidu-server_1 | ---------------------------
openvidu-server_1 |
openvidu-server_1 | * OpenVidu Server: https://ov.mysuperdomain/
openvidu-server_1 |
openvidu-server_1 | * OpenVidu Dashboard: https://ov.mysuperdomain/dashboard/
openvidu-server_1 |
openvidu-server_1 | ----------------------------------------------------
openvidu-server_1 |
openvidu-server_1 | [INFO] 2020-05-27 08:05:19,586 [main] io.openvidu.server.pro.monitoring.OpenViduServerProMonitor - Starting server monitoring with an interval of 30s
openvidu-server_1 | [INFO] 2020-05-27 08:05:19,588 [main] io.openvidu.server.pro.monitoring.OpenViduServerProMonitor - Local host net, cpu and mem usage is now being monitored (in an interval of 30 seconds)
openvidu-server_1 | [INFO] 2020-05-27 08:05:20,366 [/O dispatcher 1] io.openvidu.server.pro.cdr.CDRLoggerElasticSearch - New event of type “monitoringStats” sent to Elasticsearch: {“timestamp”:1590566719666,“cpu”:2.040009384864154,“mem”:{“used”:2791984.0,“percentage”:70.81796229784298},“net”:{“eth0”:{“rxBytes”:0,“txBytes”:0},“veth5b08047”:{“rxBytes”:0,“txBytes”:0},“vethd6a75e1”:{“rxBytes”:0,“txBytes”:0},“vethe958b35”:{“rxBytes”:0,“txBytes”:0}},“elastic_type”:“monitoringStats”}
openvidu-server_1 | [INFO] 2020-05-27 08:05:49,772 [/O dispatcher 1] io.openvidu.server.pro.cdr.CDRLoggerElasticSearch - New event of type “monitoringStats” sent to Elasticsearch: {“timestamp”:1590566749734,“cpu”:6.1093489148580975,“mem”:{“used”:2753036.0,“percentage”:69.830056208275},“net”:{“eth0”:{“rxBytes”:4040,“txBytes”:1388},“veth5b08047”:{“rxBytes”:17803,“txBytes”:41596},“vethd6a75e1”:{“rxBytes”:0,“txBytes”:0},“vethe958b35”:{“rxBytes”:41036,“txBytes”:17352}},“elastic_type”:“monitoringStats”}
openvidu-server_1 | [INFO] 2020-05-27 08:06:19,803 [/O dispatcher 1] io.openvidu.server.pro.cdr.CDRLoggerElasticSearch - New event of type “monitoringStats” sent to Elasticsearch: {“timestamp”:1590566779762,“cpu”:8.31197724991636,“mem”:{“used”:2652396.0,“percentage”:67.27734826809521},“net”:{“eth0”:{“rxBytes”:7220,“txBytes”:2128},“veth5b08047”:{“rxBytes”:35341,“txBytes”:83342},“vethd6a75e1”:{“rxBytes”:0,“txBytes”:0},“vethe958b35”:{“rxBytes”:82127,“txBytes”:34439}},“elastic_type”:“monitoringStats”}
openvidu-server_1 | [INFO] 2020-05-27 08:06:49,833 [/O dispatcher 1] io.openvidu.server.pro.cdr.CDRLoggerElasticSearch - New event of type “monitoringStats” sent to Elasticsearch: {“timestamp”:1590566809788,“cpu”:5.470612930310663,“mem”:{“used”:2653856.0,“percentage”:67.31438079584424},“net”:{“eth0”:{“rxBytes”:10626,“txBytes”:2976},“veth5b08047”:{“rxBytes”:52813,“txBytes”:124943},“vethd6a75e1”:{“rxBytes”:0,“txBytes”:0},“vethe958b35”:{“rxBytes”:123073,“txBytes”:51460}},“elastic_type”:“monitoringStats”}
openvidu-server_1 | [INFO] 2020-05-27 08:06:58,717 [0.0-5443-exec-1] org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring DispatcherServlet ‘dispatcherServlet’
openvidu-server_1 | [INFO] 2020-05-27 08:06:58,718 [0.0-5443-exec-1] org.springframework.web.servlet.DispatcherServlet - Initializing Servlet ‘dispatcherServlet’
openvidu-server_1 | [INFO] 2020-05-27 08:06:58,736 [0.0-5443-exec-1] org.springframework.web.servlet.DispatcherServlet - Completed initialization in 18 ms
openvidu-server_1 | [INFO] 2020-05-27 08:06:59,397 [0.0-5443-exec-1] io.openvidu.server.pro.rest.ConfigRestControllerPro - REST API: GET /pro/media-nodes
openvidu-server_1 | [INFO] 2020-05-27 08:07:00,064 [0.0-5443-exec-2] io.openvidu.server.pro.rest.ConfigRestControllerPro - REST API: POST /pro/media-nodes (async)
openvidu-server_1 | [INFO] 2020-05-27 08:07:00,262 [/O dispatcher 1] io.openvidu.server.pro.cdr.CDRLoggerElasticSearch - New event of type “cdr” sent to Elasticsearch: {“timestamp”:1590566820079,“id”:“kms_SY4U4oZw”,“environmentId”:null,“ip”:“10.80.250.22”,“uri”:“ws://10.80.250.22:8888/kurento”,“newStatus”:“launching”,“oldStatus”:null,“clusterId”:“clu_XpmfkZJC”,“event”:“mediaNodeStatusChanged”,“elastic_type”:“cdr”}
openvidu-server_1 | [INFO] 2020-05-27 08:07:00,334 [rbeatExec-e1-t0] org.kurento.jsonrpc.client.JsonRpcClientNettyWebSocket - [KurentoClient] Connecting native client
openvidu-server_1 | [INFO] 2020-05-27 08:07:00,335 [rbeatExec-e1-t0] org.kurento.jsonrpc.client.JsonRpcClientNettyWebSocket - [KurentoClient] Creating new NioEventLoopGroup
openvidu-server_1 | [INFO] 2020-05-27 08:07:01,001 [ntLoopGroup-2-1] org.kurento.jsonrpc.client.JsonRpcClientNettyWebSocket - [KurentoClient] Initiating new Netty channel. Will create new handler too!
openvidu-server_1 | [INFO] 2020-05-27 08:07:01,280 [0.0-5443-exec-2] io.openvidu.server.kurento.kms.KmsManager - All KMSs in [ws://10.80.250.22:8888/kurento] are within reach of OpenVidu Server
openvidu-server_1 | [INFO] 2020-05-27 08:07:01,282 [EventExec-e2-t0] io.openvidu.server.kurento.kms.KmsManager - Kurento Client "connected" event for KMS ws://10.80.250.22:8888/kurento [org.kurento.client.KurentoClient@3df61c75]
openvidu-server_1 | [INFO] 2020-05-27 08:07:01,475 [0.0-5443-exec-2] io.openvidu.server.pro.account.ClusterUsageService - Media Node kms_SY4U4oZw registered into usage service
openvidu-server_1 | [INFO] 2020-05-27 08:07:01,529 [/O dispatcher 1] io.openvidu.server.pro.cdr.CDRLoggerElasticSearch - New event of type “cdr” sent to Elasticsearch: {“timestamp”:1590566821281,“id”:“kms_SY4U4oZw”,“environmentId”:null,“ip”:“10.80.250.22”,“uri”:“ws://10.80.250.22:8888/kurento”,“newStatus”:“running”,“oldStatus”:“launching”,“clusterId”:“clu_XpmfkZJC”,“event”:“mediaNodeStatusChanged”,“elastic_type”:“cdr”}
openvidu-server_1 | [INFO] 2020-05-27 08:07:02,009 [0.0-5443-exec-3] io.openvidu.server.pro.rest.ConfigRestControllerPro - REST API: GET /pro/media-nodes
openvidu-server_1 | [INFO] 2020-05-27 08:07:19,845 [/O dispatcher 1] io.openvidu.server.pro.cdr.CDRLoggerElasticSearch - New event of type “monitoringStats” sent to Elasticsearch: {“timestamp”:1590566839809,“cpu”:12.104152712659076,“mem”:{“used”:2672908.0,“percentage”:67.79762991822406},“net”:{“eth0”:{“rxBytes”:35441,“txBytes”:46231},“veth5b08047”:{“rxBytes”:71187,“txBytes”:167907},“vethd6a75e1”:{“rxBytes”:4681,“txBytes”:11230},“vethe958b35”:{“rxBytes”:165179,“txBytes”:68705}},“elastic_type”:“monitoringStats”}
openvidu-server_1 | [INFO] 2020-05-27 08:07:49,890 [/O dispatcher 1] io.openvidu.server.pro.cdr.CDRLoggerElasticSearch - New event of type “monitoringStats” sent to Elasticsearch: {“timestamp”:1590566869846,“cpu”:3.912512546002007,“mem”:{“used”:2675440.0,“percentage”:67.86185345264909},“net”:{“eth0”:{“rxBytes”:42954,“txBytes”:53281},“veth5b08047”:{“rxBytes”:88659,“txBytes”:209532},“vethd6a75e1”:{“rxBytes”:6233,“txBytes”:14779},“vethe958b35”:{“rxBytes”:206358,“txBytes”:85726}},“elastic_type”:“monitoringStats”}
openvidu-server_1 | [INFO] 2020-05-27 08:08:17,334 [Thread-3] io.openvidu.server.pro.account.LambdaService - License GET HTTP status: HTTP/1.1 200 OK
openvidu-server_1 | [INFO] 2020-05-27 08:08:19,926 [/O dispatcher 1] io.openvidu.server.pro.cdr.CDRLoggerElasticSearch - New event of type “monitoringStats” sent to Elasticsearch: {“timestamp”:1590566899870,“cpu”:3.44363605954173,“mem”:{“used”:2676900.0,“percentage”:67.89888598039812},“net”:{“eth0”:{“rxBytes”:59506,“txBytes”:69282},“veth5b08047”:{“rxBytes”:106131,“txBytes”:251158},“vethd6a75e1”:{“rxBytes”:11020,“txBytes”:25497},“vethe958b35”:{“rxBytes”:247581,“txBytes”:102789}},“elastic_type”:“monitoringStats”}
openvidu-server_1 | [INFO] 2020-05-27 08:08:49,921 [/O dispatcher 1] io.openvidu.server.pro.cdr.CDRLoggerElasticSearch - New event of type “monitoringStats” sent to Elasticsearch: {“timestamp”:1590566929889,“cpu”:3.8991962491627596,“mem”:{“used”:2677428.0,“percentage”:67.91227856577585},“net”:{“eth0”:{“rxBytes”:66585,“txBytes”:78184},“veth5b08047”:{“rxBytes”:123603,“txBytes”:292783},“vethd6a75e1”:{“rxBytes”:18115,“txBytes”:28977},“vethe958b35”:{“rxBytes”:288537,“txBytes”:119810}},“elastic_type”:“monitoringStats”}
^CERROR: Aborting.
root@m-ln-openvidu-server:/opt/openvidu#

Regards, Antonello

In our case too, after some time it shows a gateway error; we then restart openvidu-server and it starts working, but again after some time the gateway error comes back:

[root@server openvidu]# docker-compose logs -f nginx

nginx_1            | 2020/06/02 09:02:56 [warn] 65#65: *527 upstream server temporarily disabled while reading response header from upstream, client: 124.253.47.67, server: www.fourmodules.com, request: "POST /api/tokens HTTP/1.1", upstream: "http://[::1]:5443/api/tokens", host: "www.fourmodules.com:4444"
nginx_1            | 2020/06/02 09:02:56 [error] 65#65: *527 upstream timed out (110: Operation timed out) while reading response header from upstream, client: 124.253.47.67, server: www.fourmodules.com, request: "POST /api/tokens HTTP/1.1", upstream: "http://[::1]:5443/api/tokens", host: "www.fourmodules.com:4444"
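One detail worth checking from those nginx lines: the upstream is the IPv6 loopback ([::1]:5443). A quick, hedged sketch to compare both loopbacks directly (OPENVIDUAPP is the fixed REST user; MY_SECRET is a placeholder for your real OPENVIDU_SECRET):

```shell
# Compare IPv4 vs IPv6 loopback, since nginx proxies to [::1]:5443.
# MY_SECRET is a placeholder -- substitute your OPENVIDU_SECRET.
curl -s -o /dev/null -w "IPv4 127.0.0.1:5443 -> %{http_code}\n" --max-time 5 \
  -u OPENVIDUAPP:MY_SECRET http://127.0.0.1:5443/api/sessions || true
curl -s -o /dev/null -w "IPv6 [::1]:5443     -> %{http_code}\n" --max-time 5 \
  -u OPENVIDUAPP:MY_SECRET "http://[::1]:5443/api/sessions" || true
# 000 means no HTTP response at all (connection refused or timed out)
```

If only the IPv6 probe fails, the backend is reachable but not on the address nginx is using.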

Please, can you send us the OpenVidu server logs? You have hit a bug we are unable to reproduce ourselves.

https://www.fourmodules.com:4444/#/demo

<html>
<head><title>504 Gateway Time-out</title></head>
<body>
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx/1.18.0</center>
</body>
</html>
#docker-compose logs openvidu-server

openvidu-server_1  | [INFO] 2020-06-02 09:01:56,197 [0.0-5443-exec-4] io.openvidu.server.coturn.CoturnCredentialsService - Creating COTURN user
openvidu-server_1  | [INFO] 2020-06-02 09:02:35,364 [Timer-0] io.openvidu.server.core.SessionManager - Running non active sessions garbage collector...
openvidu-server_1  | [INFO] 2020-06-02 09:07:09,903 [0.0-5443-exec-8] io.openvidu.server.rest.SessionRestController - REST API: POST /api/sessions {customSessionId=abc1739}
openvidu-server_1  | [INFO] 2020-06-02 09:07:10,686 [0.0-5443-exec-7] io.openvidu.server.rest.SessionRestController - REST API: POST /api/tokens {session=abc1739}
openvidu-server_1  | [INFO] 2020-06-02 09:07:10,686 [0.0-5443-exec-7] io.openvidu.server.coturn.CoturnCredentialsService - Creating COTURN user
openvidu-server_1  | [INFO] 2020-06-02 09:17:35,363 [Timer-0] io.openvidu.server.core.SessionManager - Running non active sessions garbage collector...
openvidu-server_1  | [INFO] 2020-06-02 09:21:39,766 [0.0-5443-exec-6] io.openvidu.server.rest.SessionRestController - REST API: POST /api/sessions {customSessionId=abc1739}
openvidu-server_1  | [INFO] 2020-06-02 09:21:40,443 [0.0-5443-exec-8] io.openvidu.server.rest.SessionRestController - REST API: POST /api/tokens {session=abc1739}
openvidu-server_1  | [INFO] 2020-06-02 09:21:40,443 [0.0-5443-exec-8] io.openvidu.server.coturn.CoturnCredentialsService - Creating COTURN user
openvidu-server_1  | [INFO] 2020-06-02 09:23:05,661 [0.0-5443-exec-6] io.openvidu.server.rest.SessionRestController - REST API: POST /api/sessions {customSessionId=demo}
openvidu-server_1  | [INFO] 2020-06-02 09:23:05,661 [0.0-5443-exec-6] io.openvidu.server.rest.SessionRestController - New session demo initialized [abc1739, demo]
openvidu-server_1  | [INFO] 2020-06-02 09:23:05,734 [0.0-5443-exec-3] io.openvidu.server.rest.SessionRestController - REST API: POST /api/tokens {session=demo}
openvidu-server_1  | [INFO] 2020-06-02 09:23:05,734 [0.0-5443-exec-3] io.openvidu.server.coturn.CoturnCredentialsService - Creating COTURN user
openvidu-server_1  | [INFO] 2020-06-02 09:24:06,426 [0.0-5443-exec-2] io.openvidu.server.rest.SessionRestController - REST API: POST /api/sessions {customSessionId=demo}
openvidu-server_1  | [INFO] 2020-06-02 09:24:06,494 [.0-5443-exec-10] io.openvidu.server.rest.SessionRestController - REST API: POST /api/tokens {session=demo}
openvidu-server_1  | [INFO] 2020-06-02 09:24:06,494 [.0-5443-exec-10] io.openvidu.server.coturn.CoturnCredentialsService - Creating COTURN user
openvidu-server_1  | [INFO] 2020-06-02 09:32:35,364 [Timer-0] io.openvidu.server.core.SessionManager - Running non active sessions garbage collector...
openvidu-server_1  | [INFO] 2020-06-02 09:47:35,363 [Timer-0] io.openvidu.server.core.SessionManager - Running non active sessions garbage collector...
openvidu-server_1  | [INFO] 2020-06-02 10:02:35,364 [Timer-0] io.openvidu.server.core.SessionManager - Running non active sessions garbage collector...
openvidu-server_1  | [ERROR] 2020-06-02 10:02:50,364 [Timer-0] io.openvidu.server.core.SessionManager - Timeout waiting for Session closing lock to be available for garbage collector to clean session abc1739
openvidu-server_1  | [INFO] 2020-06-02 10:17:35,363 [Timer-0] io.openvidu.server.core.SessionManager - Running non active sessions garbage collector...
openvidu-server_1  | [ERROR] 2020-06-02 10:17:50,364 [Timer-0] io.openvidu.server.core.SessionManager - Timeout waiting for Session closing lock to be available for garbage collector to clean session abc1739
openvidu-server_1  | [INFO] 2020-06-02 10:32:35,363 [Timer-0] io.openvidu.server.core.SessionManager - Running non active sessions garbage collector...
openvidu-server_1  | [ERROR] 2020-06-02 10:32:50,364 [Timer-0] io.openvidu.server.core.SessionManager - Timeout waiting for Session closing lock to be available for garbage collector to clean session demo
openvidu-server_1  | [ERROR] 2020-06-02 10:33:05,364 [Timer-0] io.openvidu.server.core.SessionManager - Timeout waiting for Session closing lock to be available for garbage collector to clean session abc1739
openvidu-server_1  | [INFO] 2020-06-02 10:37:32,269 [0.0-5443-exec-6] io.openvidu.server.rest.SessionRestController - REST API: POST /api/sessions {customSessionId=demo}
openvidu-server_1  | [INFO] 2020-06-02 10:37:32,337 [0.0-5443-exec-2] io.openvidu.server.rest.SessionRestController - REST API: POST /api/tokens {session=demo}
openvidu-server_1  | [INFO] 2020-06-02 10:37:32,337 [0.0-5443-exec-2] io.openvidu.server.coturn.CoturnCredentialsService - Creating COTURN user
openvidu-server_1  | [INFO] 2020-06-02 10:38:50,612 [0.0-5443-exec-6] io.openvidu.server.rest.SessionRestController - REST API: POST /api/sessions {customSessionId=demo}
openvidu-server_1  | [INFO] 2020-06-02 10:38:50,680 [0.0-5443-exec-6] io.openvidu.server.rest.SessionRestController - REST API: POST /api/tokens {session=demo}
openvidu-server_1  | [INFO] 2020-06-02 10:38:50,680 [0.0-5443-exec-6] io.openvidu.server.coturn.CoturnCredentialsService - Creating COTURN user
openvidu-server_1  | [INFO] 2020-06-02 10:47:35,363 [Timer-0] io.openvidu.server.core.SessionManager - Running non active sessions garbage collector...
openvidu-server_1  | [ERROR] 2020-06-02 10:47:50,364 [Timer-0] io.openvidu.server.core.SessionManager - Timeout waiting for Session closing lock to be available for garbage collector to clean session demo
openvidu-server_1  | [ERROR] 2020-06-02 10:48:05,364 [Timer-0] io.openvidu.server.core.SessionManager - Timeout waiting for Session closing lock to be available for garbage collector to clean session abc1739
[root@server openvidu]#

Thank you for the logs.

Can you get more log lines?

Please check the complete log at: https://www.fourmodules.com/log.txt

I have just installed the new OpenVidu version (the dockerized one) and I get the same error: the openvidu-server dashboard works fine and I can access the OpenVidu Call frontend, but the backend is not connecting to openvidu-server. I am trying the bundled OpenVidu Call before deploying my own application (with updated openvidu-browser and openvidu-node-client).

Here are my app, coturn and nginx logs:
App logs:
https://pastebin.com/dzwU1BUt
Coturn logs:
https://pastebin.com/uD1CP6v4
Nginx logs:
https://pastebin.com/2QCRrqrB

What strikes me as strange:

In the coturn logs:

coturn_1 | CONFIG ERROR: Empty cli-password, and so telnet cli interface is disabled! Please set a non empty cli-password!
coturn_1 | 0: WARNING: cannot find certificate file: turn_server_cert.pem (1)
coturn_1 | 0: WARNING: cannot start TLS and DTLS listeners because certificate file is not set properly
coturn_1 | 0: WARNING: cannot find private key file: turn_server_pkey.pem (1)
coturn_1 | 0: WARNING: cannot start TLS and DTLS listeners because private key file is not set properly

In the app logs:

app_1 | isAxiosError: true,
app_1 | toJSON: [Function]
app_1 | }
app_1 | (node:21) UnhandledPromiseRejectionWarning: RangeError [ERR_HTTP_INVALID_STATUS_CODE]: Invalid status code: undefined
app_1 | at ServerResponse.writeHead (_http_server.js:248:11)
app_1 | at ServerResponse._implicitHeader (_http_server.js:239:8)
app_1 | at write_ (_http_outgoing.js:650:9)
app_1 | at ServerResponse.end (_http_outgoing.js:761:5)
app_1 | at ServerResponse.C.send (/opt/openvidu-call/openvidu-call-server.js:302:2782)
app_1 | at p (/opt/openvidu-call/openvidu-call-server.js:320:3586)
app_1 | at /opt/openvidu-call/openvidu-call-server.js:320:4257
app_1 | at /opt/openvidu-call/openvidu-call-server.js:320:2946
app_1 | at Object.throw (/opt/openvidu-call/openvidu-call-server.js:320:3051)
app_1 | at r (/opt/openvidu-call/openvidu-call-server.js:320:1816)
app_1 | (node:21) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 23)
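The `ERR_HTTP_INVALID_STATUS_CODE` crash happens when an axios request fails without ever receiving an HTTP response (connection refused, wrong URL, TLS error), so `error.response` is undefined, and the backend then passes that undefined status straight to `res.writeHead()`. A defensive sketch of the guard (illustrative code, not the actual openvidu-call backend):

```javascript
// Derive a safe HTTP status from a possibly response-less axios error.
// Passing undefined to res.writeHead() throws ERR_HTTP_INVALID_STATUS_CODE,
// which is exactly the unhandled rejection shown in the logs above.
function statusFromAxiosError(error) {
  const status = error && error.response && error.response.status;
  // Only accept a real numeric HTTP status; otherwise report 503
  // (the upstream OpenVidu server was unreachable).
  return Number.isInteger(status) && status >= 100 && status <= 599
    ? status
    : 503;
}

// A connection-refused error carries no response object at all:
console.log(statusFromAxiosError({ isAxiosError: true })); // 503
// A real upstream rejection keeps its status:
console.log(statusFromAxiosError({ response: { status: 401 } })); // 401
```

Either way, the crash is only a symptom: the root cause is the backend failing to reach openvidu-server at the configured `OPENVIDU URL`.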

I uninstalled and followed the deployment instructions for version 2.13 (already dockerized, but with another openvidu-call version) on the same machine, and it works fine. I will try to deploy my own app (Angular frontend and Node backend) against this 2.13 server.

Hi,

We found out yesterday that if the redis database does not respond, the creation of a COTURN user (and therefore the creation of a token, which carries the COTURN credentials) may get stuck, preventing many operations on that session from executing properly (exactly what the session garbage collector is showing in your logs).

A non-responding redis database may be caused by an overloaded system, and we suspect this can be related to this official redis issue that some people have experienced: https://github.com/docker-library/redis/issues/217 (a crypto-mining malware that can be injected into the redis container if system administrators do not properly close all non-required ports of their OpenVidu deployments).

For the next release we have already modified the official redis container to protect it with a password, preventing this attack even for OpenVidu deployments with too many ports open.
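For reference, and only if you run a standalone redis outside the OpenVidu stack: a single directive in `redis.conf` rejects unauthenticated clients (the value below is a placeholder):

```
# redis.conf — require a password before accepting any command
requirepass replace_with_a_long_random_secret
```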

There is already a beta version of the openvidu-server container available that will never lock a session if the COTURN user creation process does not return (instead, a descriptive error is returned, listing all processes on the host machine to provide a snapshot of the server status):

openvidu/openvidu-server:2.14.1-beta6
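The "fail fast with a descriptive error instead of locking" behaviour can be sketched as a generic promise timeout. All names here are illustrative: openvidu-server itself is Java, not Node, so this only demonstrates the pattern:

```javascript
// Wrap a potentially hanging operation (e.g. creating a COTURN user while
// redis is unresponsive) so callers get a descriptive error after `ms`
// milliseconds instead of holding the session lock indefinitely.
function withTimeout(promise, ms, label) {
  let timer;
  const timeout = new Promise((resolve, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} did not respond within ${ms} ms`)),
      ms
    );
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// A call that never resolves now fails fast instead of blocking forever:
withTimeout(new Promise(() => {}), 100, "COTURN user creation")
  .catch((err) => console.error(err.message));
```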


When will the next version be available for production use?

It’s scheduled for next week (June 8-14th).

Thanks for the information and the work. I assume this malware is a problem related to the redis container version included in OV 2.14.0 and not in OV 2.13.0.

In any case, could this malware also be injected through the ports that are necessary for an OpenVidu system?

I am developing against a 2.13.0 infrastructure right now (always with all non-required ports closed :). But I will give OV 2.14.0 another try and report back in the coming weeks.

Today, when I opened the demo class again, it is showing this again:

There was an error getting the token:
<html> <head><title>504 Gateway Time-out</title></head> <body> <center><h1>504 Gateway Time-out</h1></center> <hr><center>nginx/1.18.0</center> </body> </html> 
Errors in the redis logs are:
#docker-compose logs -f redis

redis_1            | 1:S 04 Jun 2020 07:40:19.207 * Connecting to MASTER 212.47.251.177:8886
redis_1            | 1:S 04 Jun 2020 07:40:19.207 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 07:40:20.210 # Error condition on socket for SYNC: Connection refused
redis_1            | 1:S 04 Jun 2020 07:40:20.212 * Connecting to MASTER 212.47.251.177:8886
redis_1            | 1:S 04 Jun 2020 07:40:20.212 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 07:40:21.214 # Error condition on socket for SYNC: Connection refused
redis_1            | 1:S 04 Jun 2020 07:40:21.215 * Connecting to MASTER 212.47.251.177:8886
redis_1            | 1:S 04 Jun 2020 07:40:21.215 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 07:40:22.219 # Error condition on socket for SYNC: Connection refused
redis_1            | 1:S 04 Jun 2020 07:40:22.220 * Connecting to MASTER 212.47.251.177:8886
redis_1            | 1:S 04 Jun 2020 07:40:22.220 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 07:40:23.065 # Module ./red2.so failed to load: ./red2.so: cannot open shared object file: No such file or directory
redis_1            | 1:M 04 Jun 2020 07:40:23.222 # Setting secondary replication ID to 400b31663338c50e962e7f3b027b5990e2f7068f, valid up to offset: 1. New replication ID is 195363cf9c5c03965ae878930fd0e1e62ff98fa5
redis_1            | 1:M 04 Jun 2020 07:40:23.222 * Discarding previously cached master state.
redis_1            | 1:M 04 Jun 2020 07:40:23.223 * MASTER MODE enabled (user request from 'id=670 addr=212.8.247.179:34560 fd=13 name= age=8 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=34 qbuf-free=32734 obl=0 oll=0 omem=0 events=r cmd=slaveof')
redis_1            | 1:S 04 Jun 2020 07:47:48.633 * Before turning into a replica, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
redis_1            | 1:S 04 Jun 2020 07:47:48.635 * REPLICAOF 212.47.251.177:8886 enabled (user request from 'id=675 addr=95.214.11.231:43336 fd=13 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=48 qbuf-free=32720 obl=0 oll=0 omem=0 events=r cmd=slaveof')
redis_1            | 1:S 04 Jun 2020 07:47:49.221 * Connecting to MASTER 212.47.251.177:8886
redis_1            | 1:S 04 Jun 2020 07:47:49.221 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 07:47:50.222 # Error condition on socket for SYNC: Connection refused
redis_1            | 1:S 04 Jun 2020 07:47:50.225 * Connecting to MASTER 212.47.251.177:8886
redis_1            | 1:S 04 Jun 2020 07:47:50.225 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 07:47:51.226 # Error condition on socket for SYNC: Connection refused
redis_1            | 1:S 04 Jun 2020 07:47:51.230 * Connecting to MASTER 212.47.251.177:8886
redis_1            | 1:S 04 Jun 2020 07:47:51.230 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 07:47:52.232 # Error condition on socket for SYNC: Connection refused
redis_1            | 1:S 04 Jun 2020 07:47:52.236 * Connecting to MASTER 212.47.251.177:8886
redis_1            | 1:S 04 Jun 2020 07:47:52.236 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 07:47:53.239 # Error condition on socket for SYNC: Connection refused
redis_1            | 1:S 04 Jun 2020 07:47:53.240 * Connecting to MASTER 212.47.251.177:8886
redis_1            | 1:S 04 Jun 2020 07:47:53.240 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 07:47:54.242 # Error condition on socket for SYNC: Connection refused
redis_1            | 1:S 04 Jun 2020 07:47:54.244 * Connecting to MASTER 212.47.251.177:8886
redis_1            | 1:S 04 Jun 2020 07:47:54.244 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 07:47:55.247 # Error condition on socket for SYNC: Connection refused
redis_1            | 1:S 04 Jun 2020 07:47:55.247 * Connecting to MASTER 212.47.251.177:8886
redis_1            | 1:S 04 Jun 2020 07:47:55.247 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 07:47:55.948 # Module ./red2.so failed to load: ./red2.so: cannot open shared object file: No such file or directory
redis_1            | 1:M 04 Jun 2020 07:47:56.108 # Setting secondary replication ID to 195363cf9c5c03965ae878930fd0e1e62ff98fa5, valid up to offset: 1. New replication ID is d11ac562e6f84cba048b6f774c43024d9710549e
redis_1            | 1:M 04 Jun 2020 07:47:56.108 * Discarding previously cached master state.
redis_1            | 1:M 04 Jun 2020 07:47:56.108 * MASTER MODE enabled (user request from 'id=675 addr=95.214.11.231:43336 fd=13 name= age=8 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=34 qbuf-free=32734 obl=0 oll=0 omem=0 events=r cmd=slaveof')
redis_1            | 1:S 04 Jun 2020 08:30:09.592 * Before turning into a replica, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
redis_1            | 1:S 04 Jun 2020 08:30:09.594 * REPLICAOF 64.227.10.112:41403 enabled (user request from 'id=679 addr=64.227.10.112:33862 fd=13 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=48 qbuf-free=32720 obl=0 oll=0 omem=0 events=r cmd=slaveof')
redis_1            | 1:S 04 Jun 2020 08:30:09.646 * Connecting to MASTER 64.227.10.112:41403
redis_1            | 1:S 04 Jun 2020 08:30:09.646 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 08:30:09.647 # Error condition on socket for SYNC: Connection refused
redis_1            | 1:S 04 Jun 2020 08:30:10.649 * Connecting to MASTER 64.227.10.112:41403
redis_1            | 1:S 04 Jun 2020 08:30:10.650 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 08:30:10.651 # Error condition on socket for SYNC: Connection refused
redis_1            | 1:S 04 Jun 2020 08:30:11.653 * Connecting to MASTER 64.227.10.112:41403
redis_1            | 1:S 04 Jun 2020 08:30:11.653 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 08:30:11.654 * Non blocking connect for SYNC fired the event.
redis_1            | 1:S 04 Jun 2020 08:30:11.655 * Master replied to PING, replication can continue...
redis_1            | 1:S 04 Jun 2020 08:30:11.657 * Trying a partial resynchronization (request d11ac562e6f84cba048b6f774c43024d9710549e:1).
redis_1            | 1:S 04 Jun 2020 08:30:11.661 * Full resync from master: ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ:0
redis_1            | 1:S 04 Jun 2020 08:30:11.661 * Discarding previously cached master state.
redis_1            | 1:S 04 Jun 2020 08:30:11.662 * MASTER <-> REPLICA sync: receiving 1578199 bytes from master
redis_1            | 1:S 04 Jun 2020 08:30:11.720 * MASTER <-> REPLICA sync: Flushing old data
redis_1            | 1:S 04 Jun 2020 08:30:11.721 * MASTER <-> REPLICA sync: Loading DB in memory
redis_1            | 1:S 04 Jun 2020 08:30:11.721 # Wrong signature trying to load DB from file
redis_1            | 1:S 04 Jun 2020 08:30:11.721 # Failed trying to load the MASTER synchronization DB from disk
redis_1            | 1:S 04 Jun 2020 08:30:12.656 * Connecting to MASTER 64.227.10.112:41403
redis_1            | 1:S 04 Jun 2020 08:30:12.657 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 08:30:12.658 * Non blocking connect for SYNC fired the event.
redis_1            | 1:M 04 Jun 2020 08:30:13.689 # Setting secondary replication ID to d11ac562e6f84cba048b6f774c43024d9710549e, valid up to offset: 1. New replication ID is 01840ff4719957399f06b79ecdea210e915a69e7
redis_1            | 1:M 04 Jun 2020 08:30:13.689 * MASTER MODE enabled (user request from 'id=679 addr=64.227.10.112:33862 fd=13 name= age=4 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=34 qbuf-free=32734 obl=0 oll=0 omem=0 events=r cmd=slaveof')
redis_1            | 1:S 04 Jun 2020 08:30:13.721 * Before turning into a replica, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
redis_1            | 1:S 04 Jun 2020 08:30:13.721 * REPLICAOF 64.227.10.112:54995 enabled (user request from 'id=681 addr=64.227.10.112:33938 fd=13 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=48 qbuf-free=32720 obl=0 oll=0 omem=0 events=r cmd=slaveof')
redis_1            | 1:S 04 Jun 2020 08:30:14.665 * Connecting to MASTER 64.227.10.112:54995
redis_1            | 1:S 04 Jun 2020 08:30:14.665 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 08:30:14.666 # Error condition on socket for SYNC: Connection refused
redis_1            | 1:S 04 Jun 2020 08:30:15.670 * Connecting to MASTER 64.227.10.112:54995
redis_1            | 1:S 04 Jun 2020 08:30:15.670 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 08:30:15.671 # Error condition on socket for SYNC: Connection refused
redis_1            | 1:S 04 Jun 2020 08:30:16.673 * Connecting to MASTER 64.227.10.112:54995
redis_1            | 1:S 04 Jun 2020 08:30:16.673 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 08:30:16.674 * Non blocking connect for SYNC fired the event.
redis_1            | 1:S 04 Jun 2020 08:30:16.675 * Master replied to PING, replication can continue...
redis_1            | 1:S 04 Jun 2020 08:30:16.678 * Trying a partial resynchronization (request 01840ff4719957399f06b79ecdea210e915a69e7:1).
redis_1            | 1:S 04 Jun 2020 08:30:16.679 * Full resync from master: ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ:0
redis_1            | 1:S 04 Jun 2020 08:30:16.679 * Discarding previously cached master state.
redis_1            | 1:S 04 Jun 2020 08:30:16.680 * MASTER <-> REPLICA sync: receiving 42680 bytes from master
redis_1            | 1:S 04 Jun 2020 08:30:16.681 * MASTER <-> REPLICA sync: Flushing old data
redis_1            | 1:S 04 Jun 2020 08:30:16.681 * MASTER <-> REPLICA sync: Loading DB in memory
redis_1            | 1:S 04 Jun 2020 08:30:16.681 # Wrong signature trying to load DB from file
redis_1            | 1:S 04 Jun 2020 08:30:16.681 # Failed trying to load the MASTER synchronization DB from disk
redis_1            | 1:S 04 Jun 2020 08:30:17.678 * Connecting to MASTER 64.227.10.112:54995
redis_1            | 1:S 04 Jun 2020 08:30:17.679 * MASTER <-> REPLICA sync started
redis_1            | 1:S 04 Jun 2020 08:30:17.680 * Non blocking connect for SYNC fired the event.
redis_1            | 1:S 04 Jun 2020 08:30:18.682 * Module 'system' loaded from /tmp/exp_lin.so
redis_1            | 1:M 04 Jun 2020 08:30:18.683 # Setting secondary replication ID to 01840ff4719957399f06b79ecdea210e915a69e7, valid up to offset: 1. New replication ID is df6613a24bc307d6ea11c11ea2d0a154440586c9
redis_1            | 1:M 04 Jun 2020 08:30:18.683 * MASTER MODE enabled (user request from 'id=681 addr=64.227.10.112:33938 fd=13 name= age=5 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=34 qbuf-free=32734 obl=0 oll=0 omem=0 events=r cmd=slaveof')
redis_1            | sh: 1: killall: not found
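This redis log shows the classic compromise pattern discussed above: external clients (212.47.251.177, 95.214.11.231, 64.227.10.112) force replication from attacker-controlled "masters", the resync delivers a payload with a bad RDB signature, modules are loaded (`red2.so`, `/tmp/exp_lin.so`), and a `killall` is attempted. A small heuristic scan for these markers, with patterns taken from the lines in this thread (not an official detection tool — treat any match as a reason to rebuild the host, not just restart the container):

```javascript
// Scan captured redis log output for indicators of the unauthenticated-redis
// attack: externally forced replication, module loads from /tmp, and the
// post-exploit `killall` attempt. A clean deployment should match nothing.
const SUSPICIOUS = [
  /REPLICAOF \d+\.\d+\.\d+\.\d+:\d+ enabled \(user request/, // forced replication
  /Module .*\.so failed to load/,                            // injected module (failed)
  /Module .* loaded from \/tmp\//,                           // injected module (loaded)
  /killall: not found/,                                      // post-exploit cleanup
];

function flagSuspiciousLines(log) {
  return log
    .split("\n")
    .filter((line) => SUSPICIOUS.some((re) => re.test(line)));
}

// Example with two lines from the logs above:
const sample = [
  "1:S 04 Jun 2020 08:30:18.682 * Module 'system' loaded from /tmp/exp_lin.so",
  "1:S 04 Jun 2020 07:40:19.207 * MASTER <-> REPLICA sync started",
].join("\n");
console.log(flagSuspiciousLines(sample).length); // 1
```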