OpenVidu 3.1.0 Single Node Installation Issue

Hello,

We are trying to install OpenVidu Single Node 3.1.0 on an Ubuntu 24.04.1 LTS server. The installation process completes without any errors.

We used the command:

sh <(curl -fsSL http://get.openvidu.io/community/singlenode/3.1.0/install.sh)

However, after the installation, when attempting to start OpenVidu, we get an error (Failed to start openvidu.service). The logs are at the end.

Does anyone have any insights on what might be going wrong?

------------------- LOGS --------------------

sh <(curl -fsSL http://get.openvidu.io/community/singlenode/3.1.0/install.sh)

Docker already installed. Check you have the latest version for best compatibility
Docker Compose already installed. Check you have the latest version for best compatibility
Synchronizing state of docker.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install enable docker
Stopping 'docker.service', but its triggering units are still active:
docker.socket
Waiting for Docker to start…
Docker started successfully.
3.1.0: Pulling from openvidu/openvidu-installer
Digest: sha256:9b8be46ca4953bdea9d15f0069067dcde3cc6ada64470b0d8ac6b75bd66dd9c7
Status: Image is up to date for openvidu/openvidu-installer:3.1.0
docker.io/openvidu/openvidu-installer:3.1.0

OpenVidu deployment installer

  • OpenVidu version: 3.1.0
  • Edition: COMMUNITY
  • Deployment type: single_node

Welcome to the ‘OpenVidu Community’ installer.
You are going to install the ‘Single Node’ deployment of the ‘OpenVidu Community’ edition.
Make sure of the following things before installing:

1. This installer is being executed in the machine where OpenVidu will be installed.
2. You have a FQDN (Fully Qualified Domain Name) pointing to the IP of this machine.

Do you want to continue? … Yes
Select which certificate type to use … Own Certificate
Write the domain name or IP address of your cluster … telemedicina.vskysamu.com.br
Certificates for domain: telemedicina.vskysamu.com.br
Write the private key of your own certificate … …(1711 bytes)
Write the public key of your own certificate … …(2219 bytes)
(Optional) Write the domain name of your TURN server to allow TLS over TURN …
Which modules do you want to enable? … Observability, Default App
Write the Public IP of this node (If empty, the public IP will be detected automatically) … 54.94.30.253
Write the LiveKit API Key (Generated if empty) …
Write the LiveKit API Secret (Generated if empty) …
Write the Dashboard Admin User (‘admin’ if empty) …
Write the Dashboard Admin Password (Generated if empty) …
Write the Redis Password (Generated if empty) …
Write the Minio Access Key (‘minioadmin’ if empty) …
Write the Minio Secret Key (Generated if empty) …
Write the Mongo Admin User (‘mongoadmin’ if empty) …
Write the Mongo Admin Password (Generated if empty) …
Write the Mongo Replica Set Key (Generated if empty. It is a password for the replica set) …
Write the Grafana Admin User (‘admin’ if empty) …
Write the Grafana Admin Password (Generated if empty) …
Write the default app (OpenVidu Call) admin username (‘calladmin’ if empty) …
Write the default app (OpenVidu Call) admin password (Generated if empty) …
Write the default app (OpenVidu Call) user to access the app (‘calluser’ if empty) …
Write the default app (OpenVidu Call) password for the user (Generated if empty) …
Do you want to continue? … Yes
Docker already installed. Check you have the latest version for best compatibility
Docker Compose already installed. Check you have the latest version for best compatibility
Synchronizing state of docker.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install enable docker
Pulling setup-volumes … done
Pulling openvidu-init … done
Pulling dashboard … done
Pulling loki … done
Pulling minio … done
Pulling promtail … done
Pulling egress … done
Pulling ingress … done
Pulling openvidu … done
Pulling prometheus … done
Pulling grafana … done
Pulling caddy … done
Pulling mongo … done
Pulling redis … done
Pulling app … done
Increasing network buffer size for media traffic
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.rmem_default = 2500000
net.core.wmem_default = 2500000
net.ipv4.udp_wmem_min = 2500000
net.ipv4.udp_rmem_min = 2500000
Network buffer size adjustments applied and persisted.
It has been installed as a systemd service and it is not started yet.
To start OpenVidu Community, run:

systemctl start openvidu

OpenVidu Community is installed at /opt/openvidu.

cd /opt/openvidu

To check the status of OpenVidu Community and its logs, run:

cd /opt/openvidu
docker ps
docker-compose logs -f

To stop OpenVidu Community, run:

systemctl stop openvidu

root@ip-172-31-100-12:/opt/openvidu# systemctl start openvidu

Job for openvidu.service failed because the control process exited with error code.
See "systemctl status openvidu.service" and "journalctl -xeu openvidu.service" for details.


root@ip-172-31-100-12:/opt/openvidu# systemctl status openvidu.service
× openvidu.service - OpenVidu Community - Single Node
Loaded: loaded (/etc/systemd/system/openvidu.service; enabled; preset: enabled)
Active: failed (Result: exit-code) since Tue 2025-03-25 16:45:15 -03; 58s ago
Duration: 17min 50.649s
Process: 82647 ExecStartPre=/usr/local/bin/docker-compose down (code=exited, status=1/FAILURE)
CPU: 363ms

Mar 25 16:45:14 ip-172-31-100-12 systemd[1]: Failed to start openvidu.service - OpenVidu Community - Single Node.
Mar 25 16:45:15 ip-172-31-100-12 systemd[1]: openvidu.service: Scheduled restart job, restart counter is at 5.
Mar 25 16:45:15 ip-172-31-100-12 systemd[1]: openvidu.service: Start request repeated too quickly.
Mar 25 16:45:15 ip-172-31-100-12 systemd[1]: openvidu.service: Failed with result 'exit-code'.
Mar 25 16:45:15 ip-172-31-100-12 systemd[1]: Failed to start openvidu.service - OpenVidu Community - Single Node.


root@ip-172-31-100-12:/opt/openvidu# docker-compose logs -f
ERROR: Missing mandatory value for "command" option interpolating /bin/sh -c "
mkdir -p /data/egress_data/home/egress/backup_storage &&
chown 1001:1001 /data/egress_data/home &&
chown 1001:1001 /data/egress_data/home/egress &&
chown 1001:1001 /data/egress_data/home/egress/backup_storage &&
mkdir -p /data/minio_data/data &&
mkdir -p /data/mongo_data/data &&
echo ${MONGO_REPLICA_SET_KEY:?mandatory} > /data/mongo_data/replica.key &&
chown 999:999 /data/mongo_data /data/mongo_data/data /data/mongo_data/replica.key &&
chmod 600 /data/mongo_data/replica.key &&
chown 1001:1001 /data/minio_data /data/minio_data/data &&
mkdir -p /data/prometheus_data/prometheus &&
chown 65534:65534 /data/prometheus_data/prometheus &&
mkdir -p /data/loki_data/data &&
chown 10001:10001 /data/loki_data /data/loki_data/data &&
mkdir -p /data/grafana_data/data &&
chown 472:0 /data/grafana_data /data/grafana_data/data
"
in service "setup-volumes": mandatory

Hello @PauloBarros

What output do you get from this command?

journalctl -f -u openvidu
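If the unit has already stopped retrying, you can also dump the most recent attempt without following the log:

journalctl -u openvidu --no-pager -n 200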

Hi @cruizba!
This is the output.
There may be an error in the Docker startup script: it cannot access the configuration variables.

Mar 26 08:30:53 ip-172-31-100-12 systemd[1]: Starting openvidu.service - OpenVidu Community - Single Node…
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: The MONGO_INTERNAL_PORT variable is not set. Defaulting to a blank string.
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: The MONGO_ADMIN_USERNAME variable is not set. Defaulting to a blank string.
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: The MONGO_ADMIN_PASSWORD variable is not set. Defaulting to a blank string.
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: Missing mandatory value for "command" option interpolating /bin/sh -c "
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: mkdir -p /data/egress_data/home/egress/backup_storage &&
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: chown 1001:1001 /data/egress_data/home &&
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: chown 1001:1001 /data/egress_data/home/egress &&
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: chown 1001:1001 /data/egress_data/home/egress/backup_storage &&
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: mkdir -p /data/minio_data/data &&
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: mkdir -p /data/mongo_data/data &&
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: echo ${MONGO_REPLICA_SET_KEY:?mandatory} > /data/mongo_data/replica.key &&
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: chown 999:999 /data/mongo_data /data/mongo_data/data /data/mongo_data/replica.key &&
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: chmod 600 /data/mongo_data/replica.key &&
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: chown 1001:1001 /data/minio_data /data/minio_data/data &&
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: mkdir -p /data/prometheus_data/prometheus &&
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: chown 65534:65534 /data/prometheus_data/prometheus &&
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: mkdir -p /data/loki_data/data &&
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: chown 10001:10001 /data/loki_data /data/loki_data/data &&
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: mkdir -p /data/grafana_data/data &&
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: chown 472:0 /data/grafana_data /data/grafana_data/data
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: "
Mar 26 08:30:54 ip-172-31-100-12 docker-compose[114260]: in service "setup-volumes": mandatory
Mar 26 08:30:54 ip-172-31-100-12 systemd[1]: openvidu.service: Control process exited, code=exited, status=1/FAILURE
Mar 26 08:30:54 ip-172-31-100-12 systemd[1]: openvidu.service: Failed with result 'exit-code'.
Mar 26 08:30:54 ip-172-31-100-12 systemd[1]: Failed to start openvidu.service - OpenVidu Community - Single Node.

Yes. It’s quite strange.

Can you try the following?

  1. Uninstall:
sudo su
systemctl stop openvidu
rm -rf /opt/openvidu/
rm /etc/systemd/system/openvidu.service
rm /etc/sysctl.d/50-openvidu.conf
  2. Reinstall again:
sh <(curl -fsSL http://get.openvidu.io/community/singlenode/3.1.0/install.sh)

Just in case any error happened in the installation.
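After the reinstall, it is also worth a quick check that the main files were recreated, for example:

ls -l /opt/openvidu/config/openvidu.env /etc/systemd/system/openvidu.service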

Also, which versions of Docker and Docker Compose are you using?
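You can print them with:

docker --version
docker-compose --version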

If it keeps failing, can you please share the content of /opt/openvidu/config/openvidu.env (censor the secrets) and the content of /etc/systemd/system/openvidu.service?

Take into account that the service runs via /etc/systemd/system/openvidu.service; there you can see the Docker Compose command being run.
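You can print the exact unit systemd is using with:

systemctl cat openvidu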

Hi @cruizba ,

We uninstalled and reinstalled, and the errors are exactly the same.

Docker version 28.0.1, build 068a01e
docker-compose version 1.29.2, build unknown

Below are the contents of the configuration files:

– openvidu.env –

# OpenVidu Community Configuration File
# ------------------------------------
# NOTES:
# ------------------------------------
# The parameters defined here are global and can be utilized by any configuration
# file within this directory.
#
# No need to quote assignment values, even if they contain spaces.
# Values are stored exactly as written, so avoid using quotes.
# ------------------------------------

# ------------------------------------
# Domain name configuration
# ------------------------------------
# The domain name for the deployment. Use this domain name to access
# OpenVidu APIs and services.
DOMAIN_NAME=xxx
# ------------------------------------
# Certificate Configuration
# ------------------------------------

# ------------------------------------
# LiveKit Global Configuration
# ------------------------------------
# Global LiveKit API Key and Secret used for apps to connect to OpenVidu.
LIVEKIT_API_KEY=xxxx
LIVEKIT_API_SECRET=xxxx

# ------------------------------------
# Redis Global Configuration
# -----------------------------------
# Redis password.
REDIS_PASSWORD=xxxx

# ------------------------------------
# Minio Global Configuration
# ------------------------------------
# Minio access key and secret key.
MINIO_ACCESS_KEY=xxxx
MINIO_SECRET_KEY=xxxx

# ------------------------------------
# MongoDB Global Configuration
# ------------------------------------
# MongoDB admin username and password.
MONGO_ADMIN_USERNAME=xxxx
MONGO_ADMIN_PASSWORD=xxxx

# ------------------------------------
# Dashboard Global Configuration
# ------------------------------------
# Dashboard admin username and password.
DASHBOARD_ADMIN_USERNAME=xxxx
DASHBOARD_ADMIN_PASSWORD=xxxx

# ------------------------------------
# Observability Global Configuration
# ------------------------------------
# Grafana admin username and password.
GRAFANA_ADMIN_USERNAME=xxxx
GRAFANA_ADMIN_PASSWORD=xxxx

# ------------------------------------
# External S3 configuration
# ------------------------------------
# By default OpenVidu works with Minio as the S3 provider.
# If you want to use an external S3 provider, you need
# to provide the following configuration
# and the bucket names you want to use.

# External S3 Endpoint URL.
# Example: https://s3.us-east-2.amazonaws.com
# Note that in AWS S3, the endpoint URL is different for each region.
# Check: https://docs.aws.amazon.com/general/latest/gr/s3.html
EXTERNAL_S3_ENDPOINT=

# External S3 Access Key and Secret Key.
EXTERNAL_S3_ACCESS_KEY=
EXTERNAL_S3_SECRET_KEY=

# External S3 Region.
EXTERNAL_S3_REGION=

# Use path style access for S3.
# Valid values are: true, false
EXTERNAL_S3_PATH_STYLE_ACCESS=

# Application data bucket.
# It will be used by:
#   - Egress service to store recordings.
#   - Default App (OpenVidu Call) to store recordings and files.
EXTERNAL_S3_BUCKET_APP_DATA=

# ------------------------------------
# OpenVidu Enabled Modules
# ------------------------------------
# List of enabled modules.
# All values are:
# ENABLED_MODULES=app,observability
ENABLED_MODULES=observability,app

# ------------------------------------------------------
# ------------------------------------------------------
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# !!! DANGER ZONE !!! DANGER ZONE !!!!!! DANGER ZONE !!!
# !!! DANGER ZONE !!! DANGER ZONE !!!!!! DANGER ZONE !!!
# !!! DANGER ZONE !!! DANGER ZONE !!!!!! DANGER ZONE !!!
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# ------------------------------------------------------
# ------------------------------------------------------
# The following parameters are used to configure
# infrastructure related parameters. Changing these
# parameters may involve changing the configuration
# of more than one service and may lead to unexpected
# behavior. Only change these parameters if you know
# what you are doing.
# ------------------------------------------------------

# ------------------------------------
# MongoDB Replica Set Key
# ------------------------------------
# This key is used internally by MongoDB for replication.
MONGO_REPLICA_SET_KEY=xxxxxx

# ------------------------------------
# Caddy proxy related ports
# ------------------------------------
# This port is used by caddy to serve all the services
# at the specified port through HTTPS and to expose RTMPS and TURN with TLS.
# If you change it, you must also change the port binding
# in the docker-compose file of the Caddy service.
CADDY_HTTPS_PUBLIC_PORT=443

# This port is used by caddy to redirect from HTTP to HTTPS.
# If you change it, you must also change the port binding
# in the docker-compose file of the Caddy service.
CADDY_HTTP_PUBLIC_PORT=80

# This port is used by caddy to serve as RTMPS.
# It is used when you create an Ingress for RTMP.
# If you change it, you must also change the port binding
# in the docker-compose file of the Caddy service.
CADDY_RTMPS_PUBLIC_PORT=1935

# This port is used by caddy to serve MinIO API
# at the specified port through HTTPS using the DOMAIN_NAME.
# If you change it, you must also change the port binding
# in the docker-compose file of the Caddy service.
CADDY_MINIO_PUBLIC_PORT=9000

# This port is used internally by caddy to serve all the services
# at the specified port through HTTP.
# If you change it, you must change the following:
#    1. Update the port binding in the docker-compose file
#       of the Caddy service if the port is changed.
#    2. Change the port of LIVEKIT_URL_PRIVATE if you are using
#       the default app OpenVidu Call at app.env
CADDY_HTTP_INTERNAL_PORT=7880

# ------------------------------------
# OpenVidu LiveKit WebRTC related ports
# ------------------------------------

# This port is used by LiveKit to serve TURN over UDP.
LIVEKIT_TURN_PUBLIC_UDP_PORT=443

# This port is used by LiveKit to serve TURN over TLS.
# It is reverse proxied by Caddy to serve it
# at the TURN_DOMAIN_NAME with TLS.
# When TURN over UDP is not possible, TURN with TLS is used.
LIVEKIT_TURN_TLS_INTERNAL_PORT=5349

# This port is used by LiveKit to serve WebRTC over TCP
# When a UDP connection is not possible, TCP may be used.
LIVEKIT_WEBRTC_PUBLIC_TCP_PORT=7881

# This port range is used by LiveKit to serve TURN relay ports
# for relay candidates.
LIVEKIT_TURN_RELAY_INTERNAL_PORT_RANGE_START=40000
LIVEKIT_TURN_RELAY_INTERNAL_PORT_RANGE_END=50000

# This port range is used by LiveKit to serve WebRTC over UDP
LIVEKIT_WEBRTC_PUBLIC_UDP_PORT_RANGE_START=50000
LIVEKIT_WEBRTC_PUBLIC_UDP_PORT_RANGE_END=60000

# ------------------------------------
# Other OpenVidu LiveKit related ports
# ------------------------------------

# This port is used by LiveKit to serve the API.
# It is reverse proxied by Caddy to serve it
# at CADDY_HTTP_INTERNAL_PORT with HTTP and
# at CADDY_HTTPS_PUBLIC_PORT with HTTPS.
LIVEKIT_API_INTERNAL_PORT=7780

# This port is used by LiveKit to serve RTMP.
# It is reverse proxied by Caddy to serve it
# at the DOMAIN_NAME with TLS (RTMPS).
LIVEKIT_RTMP_INTERNAL_PORT=1945

# This port is used by LiveKit to serve WHIP.
# It is reverse proxied by Caddy to serve it
# at the DOMAIN_NAME with HTTPS.
LIVEKIT_WHIP_INTERNAL_PORT=8080

# This port is used by LiveKit to serve prometheus metrics.
# It is used internally by the Prometheus service.
LIVEKIT_PROMETHEUS_INTERNAL_PORT=6789

# This port is used internally by the Ingress service itself.
# It is used for internal communication.
LIVEKIT_INGRESS_HTTP_RELAY_INTERNAL_PORT=9091

# Port used by the Ingress service. Can be used to
# check if the service is healthy.
LIVEKIT_INGRESS_HEALTH_INTERNAL_PORT=9092

# This port is used by the Ingress service to announce itself
# for WHIP Ingress. When Ingress is used with WHIP, this port
# will handle the WEBRTC traffic.
LIVEKIT_INGRESS_RTC_UDP_PORT=7885

# Port used by the Egress service. Can be used to
# check if the service is healthy.
LIVEKIT_EGRESS_HEALTH_INTERNAL_PORT=9093

# ------------------------------------
# Other OpenVidu services ports
# ------------------------------------
# This port is used by Redis to serve itself at the specified port.
# It is used internally by OpenVidu LiveKit, Ingress and Egress services.
# If you change it, you must also change the port binding
# in the docker-compose file of the Redis service.
REDIS_INTERNAL_PORT=7000

# This port is used by Minio to serve its API.
# It is used internally by the Egress service to
# connect to MinIO and reverse proxied by Caddy
# to serve it at CADDY_MINIO_PUBLIC_PORT with HTTPS.
# If you change it, you must also change the port binding
# in the docker-compose file of the Minio service.
MINIO_API_INTERNAL_PORT=9100

# This port is used by Minio to serve its WEB GUI console.
# It is reverse proxied by Caddy to serve it
# at the DOMAIN_NAME with HTTPS.
MINIO_CONSOLE_INTERNAL_PORT=9101

# This port is used by MongoDB to serve itself.
# It is used internally by the Dashboard service and OpenVidu LiveKit.
# If you change it, you must also change the port binding
# in the docker-compose file of the MongoDB service.
MONGO_INTERNAL_PORT=20000

# This port is used by OpenVidu Dashboard to serve itself.
# It is reverse proxied by Caddy to serve it
# at the DOMAIN_NAME with HTTPS.
DASHBOARD_INTERNAL_PORT=5000

# This port is used by Grafana to serve itself.
# It is reverse proxied by Caddy to serve it
# at the DOMAIN_NAME with HTTPS.
GRAFANA_INTERNAL_PORT=3000

# This port is used by Loki to serve itself.
# It is used internally by the Grafana service.
LOKI_INTERNAL_HTTP_PORT=3100

# This port is used by Loki to serve itself via gRPC.
# It is used internally by the service itself.
LOKI_INTERNAL_GRPC_PORT=9095

# This port is used by the Default App (OpenVidu Call)
# to serve itself. It is reverse proxied by Caddy to serve it
# at the DOMAIN_NAME with HTTPS.
# If you change it, you must also change the port binding
# in the docker-compose override file of the Default App service.
DEFAULT_APP_INTERNAL_PORT=6080

– openvidu.service –

[Unit]
Description=OpenVidu Community - Single Node
After=docker.service
Requires=docker.service

[Service]
LimitNOFILE=500000
Restart=always
WorkingDirectory=/opt/openvidu

# Environment variables
Environment="RUNNING_SYSTEMD=true"
Environment="COMPOSE_ENV_FILES=config/openvidu.env"
# Shutdown container (if running) when unit is started
ExecStartPre=/usr/local/bin/docker-compose down
ExecStart=/usr/local/bin/docker-compose up
ExecStop=/usr/local/bin/docker-compose down

[Install]
WantedBy=multi-user.target

I think the problem is the docker-compose version. It is too old.

docker-compose 1.29 is discontinued and probably incompatible with this deployment in many respects. In particular, the openvidu.service unit you shared passes the configuration through the COMPOSE_ENV_FILES environment variable, which is a Compose v2 feature, so v1.29 most likely never loads config/openvidu.env and all those variables end up blank.
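The error message also fits this explanation: ${MONGO_REPLICA_SET_KEY:?mandatory} uses the :? expansion, which aborts with the given message when the variable is unset or empty, and Compose interpolation follows the same syntax. You can reproduce the behavior in a plain shell:

unset MONGO_REPLICA_SET_KEY
sh -c 'echo ${MONGO_REPLICA_SET_KEY:?mandatory}'
# fails with an error like: MONGO_REPLICA_SET_KEY: mandatory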

Remove that docker-compose version.

After that, reinstall with this command:

curl -L "https://github.com/docker/compose/releases/download/v2.33.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

After installing it, try reinstalling OpenVidu and see if everything works.

@Rubens_Barroso_de_Ol @PauloBarros

Were you able to fix the deployment by updating Docker Compose?

Hi @cruizba ,

Yes, OpenVidu is running correctly. Now we are moving forward with installing a reverse proxy to test HTTPS access. Our server is hosted on AWS, and we will access OpenVidu through a client application.

Thanks for the help.

Nice! FYI, OpenVidu 3.2.0 will have a flag to make installation behind an external proxy easier. Stay tuned for new releases!