Deployment using Docker

Hi,

Has anyone been able to host OpenVidu v3 on a Linux server using the Docker setup?
I was trying to deploy it with my own SSL certificate, configured through Caddy, and it is not working as expected.

Can anyone suggest the proper way to deploy the setup with an own certificate using Docker?

I was referring to the linked guide for the Single Node install - OpenVidu

Take a look at this:

Also, in the installation wizard you can select Own Certificate, and the wizard will ask you for the private and public certificates and install them automatically.

Do you have any issue with your certificates? Make sure both are in PEM format.
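If you want to double-check the files before running the wizard, a couple of standard openssl commands can confirm they are valid PEM files (the paths below are just placeholders):

openssl x509 -in /path/to/certificate.pem -noout -text   # should print the certificate details
openssl pkey -in /path/to/private-key.pem -noout         # should exit without errors if the key parses

Both files should be plain text starting with a -----BEGIN ...----- header.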

The installation completed successfully after following all the given steps; the problem is that all services keep exiting and restarting. None of the services is stable.

Which containers are restarting? Can you show the logs of some of them?

Are you starting the service with:

systemctl start openvidu
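If some containers keep restarting, a quick way to list them is the plain Docker CLI:

docker ps -a --format 'table {{.Names}}\t{{.Status}}'
docker ps --filter "status=restarting"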

docker logs caddy
{“level”:“info”,“ts”:1728450712.834611,“msg”:“using provided configuration”,“config_file”:“/etc/caddy.yaml”,“config_adapter”:“yaml”}
{“level”:“info”,“ts”:1728450712.837943,“logger”:“admin”,“msg”:“admin endpoint started”,“address”:“localhost:2019”,“enforce_origin”:false,“origins”:[“//localhost:2019”,“//[::1]:2019”,“//127.0.0.1:2019”]}
{“level”:“info”,“ts”:1728450712.8373818,“msg”:“redirected default logger”,“from”:“stderr”,“to”:“stdout”}
{“level”:“info”,“ts”:1728450712.8387446,“logger”:“http.auto_https”,“msg”:“enabling automatic HTTP->HTTPS redirects”,“server_name”:“minio”}
{“level”:“info”,“ts”:1728450712.8541481,“msg”:“warning: "certutil" is not available, install "certutil" with "apt install libnss3-tools" or "yum install nss-tools" and try again”}
{“level”:“info”,“ts”:1728450712.854174,“msg”:“define JAVA_HOME environment variable to use the Java trust”}
{“level”:“info”,“ts”:1728450712.8543758,“logger”:“http”,“msg”:“enabling HTTP/3 listener”,“addr”:“:9000”}
{“level”:“info”,“ts”:1728450712.8547802,“logger”:“http.log”,“msg”:“server running”,“name”:“minio”,“protocols”:[“h1”,“h2”,“h3”]}
{“level”:“info”,“ts”:1728450712.8548253,“logger”:“http.log”,“msg”:“server running”,“name”:“public”,“protocols”:[“h1”,“h2”,“h3”]}
Error: loading initial config: loading new config: http app module: start: listening on :80: listen tcp :80: bind: address already in use
{“level”:“info”,“ts”:1728450713.1554487,“msg”:“using provided configuration”,“config_file”:“/etc/caddy.yaml”,“config_adapter”:“yaml”}
{“level”:“info”,“ts”:1728450713.1578002,“msg”:“redirected default logger”,“from”:“stderr”,“to”:“stdout”}
{“level”:“info”,“ts”:1728450713.1583624,“logger”:“admin”,“msg”:“admin endpoint started”,“address”:“localhost:2019”,“enforce_origin”:false,“origins”:[“//localhost:2019”,“//[::1]:2019”,“//127.0.0.1:2019”]}
{“level”:“info”,“ts”:1728450713.15939,“logger”:“http.auto_https”,“msg”:“enabling automatic HTTP->HTTPS redirects”,“server_name”:“minio”}
{“level”:“info”,“ts”:1728450713.1645668,“msg”:“warning: "certutil" is not available, install "certutil" with "apt install libnss3-tools" or "yum install nss-tools" and try again”}
{“level”:“info”,“ts”:1728450713.1645827,“msg”:“define JAVA_HOME environment variable to use the Java trust”}
{“level”:“debug”,“ts”:1728450713.1647186,“logger”:“layer4”,“msg”:“listening”,“address”:“tcp/[::]:443”}
{“level”:“debug”,“ts”:1728450713.1647341,“logger”:“layer4”,“msg”:“listening”,“address”:“tcp/[::]:1935”}
{“level”:“info”,“ts”:1728450713.1647668,“logger”:“http.log”,“msg”:“server running”,“name”:“public”,“protocols”:[“h1”,“h2”,“h3”]}
Error: loading initial config: loading new config: http app module: start: listening on :80: listen tcp :80: bind: address already in use
{“level”:“info”,“ts”:1728450713.6162953,“msg”:“using provided configuration”,“config_file”:“/etc/caddy.yaml”,“config_adapter”:“yaml”}
{“level”:“info”,“ts”:1728450713.6185207,“msg”:“redirected default logger”,“from”:“stderr”,“to”:“stdout”}
{“level”:“info”,“ts”:1728450713.6188912,“logger”:“admin”,“msg”:“admin endpoint started”,“address”:“localhost:2019”,“enforce_origin”:false,“origins”:[“//localhost:2019”,“//[::1]:2019”,“//127.0.0.1:2019”]}
{“level”:“info”,“ts”:1728450713.619433,“logger”:“http.auto_https”,“msg”:“enabling automatic HTTP->HTTPS redirects”,“server_name”:“minio”}
{“level”:“info”,“ts”:1728450713.6262913,“msg”:“warning: "certutil" is not available, install "certutil" with "apt install libnss3-tools" or "yum install nss-tools" and try again”}
{“level”:“info”,“ts”:1728450713.6264093,“msg”:“define JAVA_HOME environment variable to use the Java trust”}
{“level”:“info”,“ts”:1728450713.6265798,“logger”:“http”,“msg”:“enabling HTTP/3 listener”,“addr”:“:9000”}
{“level”:“info”,“ts”:1728450713.6267512,“logger”:“http.log”,“msg”:“server running”,“name”:“minio”,“protocols”:[“h1”,“h2”,“h3”]}
{“level”:“info”,“ts”:1728450713.6268094,“logger”:“http.log”,“msg”:“server running”,“name”:“public”,“protocols”:[“h1”,“h2”,“h3”]}
Error: loading initial config: loading new config: http app module: start: listening on :80: listen tcp :80: bind: address already in use
{“level”:“info”,“ts”:1728450714.2447345,“msg”:“using provided configuration”,“config_file”:“/etc/caddy.yaml”,“config_adapter”:“yaml”}
{“level”:“info”,“ts”:1728450714.2471874,“msg”:“redirected default logger”,“from”:“stderr”,“to”:“stdout”}
{“level”:“info”,“ts”:1728450714.2479267,“logger”:“admin”,“msg”:“admin endpoint started”,“address”:“localhost:2019”,“enforce_origin”:false,“origins”:[“//localhost:2019”,“//[::1]:2019”,“//127.0.0.1:2019”]}
{“level”:“info”,“ts”:1728450714.2488184,“logger”:“http.auto_https”,“msg”:“enabling automatic HTTP->HTTPS redirects”,“server_name”:“minio”}
{“level”:“info”,“ts”:1728450714.250211,“logger”:“http”,“msg”:“enabling HTTP/3 listener”,“addr”:“:9000”}
{“level”:“info”,“ts”:1728450714.2506592,“logger”:“http.log”,“msg”:“server running”,“name”:“minio”,“protocols”:[“h1”,“h2”,“h3”]}
{“level”:“info”,“ts”:1728450714.2507212,“logger”:“http.log”,“msg”:“server running”,“name”:“public”,“protocols”:[“h1”,“h2”,“h3”]}
Error: loading initial config: loading new config: http app module: start: listening on :80: listen tcp :80: bind: address already in use

egress

+ rm -rf '/home/egress/tmp/*'
+ rm -rf /var/run/pulse /var/lib/pulse /home/egress/.config/pulse /home/egress/.cache/xdgr/pulse
+ pulseaudio -D --verbose --exit-idle-time=-1 --disallow-exit
I: [pulseaudio] main.c: Daemon startup successful.
+ exec egress
    2024-10-09T05:11:53.392Z INFO egress redis/redis.go:142 connecting to redis {“nodeID”: “NE_2FB4Mr4BQAQz”, “clusterID”: “”, “simple”: true, “addr”: “localhost:7000”}
    2024-10-09T05:11:53.395Z INFO egress stats/monitor.go:169 cpu available: 8.000000 max cost: 0.010000 {“nodeID”: “NE_2FB4Mr4BQAQz”, “clusterID”: “”}
    2024-10-09T05:11:53.395Z INFO egress service/service.go:143 service ready {“nodeID”: “NE_2FB4Mr4BQAQz”, “clusterID”: “”}
    2024-10-09T05:11:54.581Z INFO egress server/main.go:142 exit requested, finishing recording then shutting down {“nodeID”: “NE_2FB4Mr4BQAQz”, “clusterID”: “”, “signal”: “terminated”}
    2024-10-09T05:11:54.582Z INFO egress service/service.go:145 shutting down {“nodeID”: “NE_2FB4Mr4BQAQz”, “clusterID”: “”}
    2024-10-09T05:11:54.582Z INFO egress service/service.go:209 closing server {“nodeID”: “NE_2FB4Mr4BQAQz”, “clusterID”: “”}

docker logs dashboard
Pinging MongoDB…
MongoDB is reachable
OpenVidu Dashboard listening on port 5000

docker logs -f minio
05:11:00.42 INFO ==>
05:11:00.43 INFO ==> Welcome to the Bitnami minio container
05:11:00.43 INFO ==> Subscribe to project updates by watching GitHub - bitnami/containers: Bitnami container images
05:11:00.43 INFO ==> Submit issues and feature requests at Issues · bitnami/containers · GitHub
05:11:00.43 INFO ==> Upgrade to Tanzu Application Catalog for production environments to access custom-configured and pre-packaged software components. Gain enhanced features, including Software Bill of Materials (SBOM), CVE scan result reports, and VEX documents. To learn more, visit VMware Tanzu Application Catalog
05:11:00.43 INFO ==>
05:11:00.43 INFO ==> ** Starting MinIO setup **
minio 05:11:00.44 INFO ==> Starting MinIO in background…
minio 05:11:05.48 INFO ==> Adding local Minio host to ‘mc’ configuration…
minio 05:11:05.51 INFO ==> Creating default buckets…
minio 05:11:05.53 INFO ==> Bucket local/openvidu already exists, skipping creation.
minio 05:11:05.55 INFO ==> Stopping MinIO…
05:11:05.58 INFO ==> ** MinIO setup finished! **

minio 05:11:05.58 INFO ==> ** Starting MinIO **
MinIO Object Storage Server
Copyright: 2015-2024 MinIO, Inc.
License: GNU AGPLv3 - GNU Affero General Public License - GNU Project - Free Software Foundation
Version: DEVELOPMENT.2024-06-13T22-53-53Z (go1.21.11 linux/amd64)

API: http://localhost:9100
WebUI: https://openvidu.test.com/minio-console/

Docs: MinIO Object Storage for Linux — MinIO Object Storage for Linux
Status: 1 Online, 0 Offline.
STARTUP WARNINGS:

  • The standard parity is set to 0. This can lead to data loss.

Mongo Logs

{“t”:{“$date”:“2024-10-09T05:10:08.749+00:00”},“s”:“I”, “c”:“ASIO”, “id”:22582, “ctx”:“MigrationUtil-TaskExecutor”,“msg”:“Killing all outstanding egress activity.”}
{“t”:{“$date”:“2024-10-09T05:10:08.749+00:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784923, “ctx”:“SignalHandler”,“msg”:“Shutting down the ServiceEntryPoint”}
{“t”:{“$date”:“2024-10-09T05:10:08.749+00:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784927, “ctx”:“SignalHandler”,“msg”:“Shutting down the HealthLog”}
{“t”:{“$date”:“2024-10-09T05:10:08.749+00:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784928, “ctx”:“SignalHandler”,“msg”:“Shutting down the TTL monitor”}
{“t”:{“$date”:“2024-10-09T05:10:08.749+00:00”},“s”:“I”, “c”:“INDEX”, “id”:3684100, “ctx”:“SignalHandler”,“msg”:“Shutting down TTL collection monitor thread”}
{“t”:{“$date”:“2024-10-09T05:10:08.749+00:00”},“s”:“I”, “c”:“INDEX”, “id”:3684101, “ctx”:“SignalHandler”,“msg”:“Finished shutting down TTL collection monitor thread”}
{“t”:{“$date”:“2024-10-09T05:10:08.749+00:00”},“s”:“I”, “c”:“CONTROL”, “id”:6278511, “ctx”:“SignalHandler”,“msg”:“Shutting down the Change Stream Expired Pre-images Remover”}
{“t”:{“$date”:“2024-10-09T05:10:08.749+00:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784929, “ctx”:“SignalHandler”,“msg”:“Acquiring the global lock for shutdown”}
{“t”:{“$date”:“2024-10-09T05:10:08.749+00:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784930, “ctx”:“SignalHandler”,“msg”:“Shutting down the storage engine”}
{“t”:{“$date”:“2024-10-09T05:10:08.749+00:00”},“s”:“I”, “c”:“STORAGE”, “id”:22320, “ctx”:“SignalHandler”,“msg”:“Shutting down journal flusher thread”}
{“t”:{“$date”:“2024-10-09T05:10:08.750+00:00”},“s”:“I”, “c”:“STORAGE”, “id”:22321, “ctx”:“SignalHandler”,“msg”:“Finished shutting down journal flusher thread”}
{“t”:{“$date”:“2024-10-09T05:10:08.750+00:00”},“s”:“I”, “c”:“STORAGE”, “id”:22322, “ctx”:“SignalHandler”,“msg”:“Shutting down checkpoint thread”}
{“t”:{“$date”:“2024-10-09T05:10:08.750+00:00”},“s”:“I”, “c”:“STORAGE”, “id”:22323, “ctx”:“SignalHandler”,“msg”:“Finished shutting down checkpoint thread”}
{“t”:{“$date”:“2024-10-09T05:10:08.750+00:00”},“s”:“I”, “c”:“STORAGE”, “id”:22261, “ctx”:“SignalHandler”,“msg”:“Timestamp monitor shutting down”}
{“t”:{“$date”:“2024-10-09T05:10:08.750+00:00”},“s”:“I”, “c”:“STORAGE”, “id”:20282, “ctx”:“SignalHandler”,“msg”:“Deregistering all the collections”}
{“t”:{“$date”:“2024-10-09T05:10:08.750+00:00”},“s”:“I”, “c”:“STORAGE”, “id”:22317, “ctx”:“SignalHandler”,“msg”:“WiredTigerKVEngine shutting down”}
{“t”:{“$date”:“2024-10-09T05:10:08.750+00:00”},“s”:“I”, “c”:“STORAGE”, “id”:22318, “ctx”:“SignalHandler”,“msg”:“Shutting down session sweeper thread”}
{“t”:{“$date”:“2024-10-09T05:10:08.750+00:00”},“s”:“I”, “c”:“STORAGE”, “id”:22319, “ctx”:“SignalHandler”,“msg”:“Finished shutting down session sweeper thread”}
{“t”:{“$date”:“2024-10-09T05:10:08.754+00:00”},“s”:“I”, “c”:“STORAGE”, “id”:4795902, “ctx”:“SignalHandler”,“msg”:“Closing WiredTiger”,“attr”:{“closeConfig”:“leak_memory=true,use_timestamp=false,”}}
{“t”:{“$date”:“2024-10-09T05:10:08.755+00:00”},“s”:“I”, “c”:“WTCHKPT”, “id”:22430, “ctx”:“SignalHandler”,“msg”:“WiredTiger message”,“attr”:{“message”:{“ts_sec”:1728450608,“ts_usec”:755390,“thread”:“1:0x7f1b8ace3640”,“session_name”:“close_ckpt”,“category”:“WT_VERB_CHECKPOINT_PROGRESS”,“category_id”:6,“verbose_level”:“DEBUG_1”,“verbose_level_id”:1,“msg”:“saving checkpoint snapshot min: 4, snapshot max: 4 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 50”}}}
{“t”:{“$date”:“2024-10-09T05:10:08.811+00:00”},“s”:“I”, “c”:“WTRECOV”, “id”:22430, “ctx”:“SignalHandler”,“msg”:“WiredTiger message”,“attr”:{“message”:{“ts_sec”:1728450608,“ts_usec”:811530,“thread”:“1:0x7f1b8ace3640”,“session_name”:“WT_CONNECTION.close”,“category”:“WT_VERB_RECOVERY_PROGRESS”,“category_id”:30,“verbose_level”:“DEBUG_1”,“verbose_level_id”:1,“msg”:“shutdown checkpoint has successfully finished and ran for 56 milliseconds”}}}
{“t”:{“$date”:“2024-10-09T05:10:08.811+00:00”},“s”:“I”, “c”:“WTRECOV”, “id”:22430, “ctx”:“SignalHandler”,“msg”:“WiredTiger message”,“attr”:{“message”:{“ts_sec”:1728450608,“ts_usec”:811689,“thread”:“1:0x7f1b8ace3640”,“session_name”:“WT_CONNECTION.close”,“category”:“WT_VERB_RECOVERY_PROGRESS”,“category_id”:30,“verbose_level”:“DEBUG_1”,“verbose_level_id”:1,“msg”:“shutdown was completed successfully and took 56ms, including 0ms for the rollback to stable, and 56ms for the checkpoint.”}}}
{“t”:{“$date”:“2024-10-09T05:10:08.856+00:00”},“s”:“I”, “c”:“STORAGE”, “id”:4795901, “ctx”:“SignalHandler”,“msg”:“WiredTiger closed”,“attr”:{“durationMillis”:102}}
{“t”:{“$date”:“2024-10-09T05:10:08.856+00:00”},“s”:“I”, “c”:“STORAGE”, “id”:22279, “ctx”:“SignalHandler”,“msg”:“shutdown: removing fs lock…”}
{“t”:{“$date”:“2024-10-09T05:10:08.856+00:00”},“s”:“I”, “c”:“-”, “id”:4784931, “ctx”:“SignalHandler”,“msg”:“Dropping the scope cache for shutdown”}
{“t”:{“$date”:“2024-10-09T05:10:08.856+00:00”},“s”:“I”, “c”:“FTDC”, “id”:20626, “ctx”:“SignalHandler”,“msg”:“Shutting down full-time diagnostic data capture”}
{“t”:{“$date”:“2024-10-09T05:10:08.857+00:00”},“s”:“I”, “c”:“CONTROL”, “id”:20565, “ctx”:“SignalHandler”,“msg”:“Now exiting”}
{“t”:{“$date”:“2024-10-09T05:10:08.857+00:00”},“s”:“I”, “c”:“CONTROL”, “id”:8423404, “ctx”:“SignalHandler”,“msg”:“mongod shutdown complete”,“attr”:{“Summary of time elapsed”:{“Statistics”:{“Enter terminal shutdown”:“0 ms”,“Step down the replication coordinator for shutdown”:“0 ms”,“Time spent in quiesce mode”:“0 ms”,“Shut down FLE Crud subsystem”:“0 ms”,“Shut down MirrorMaestro”:“0 ms”,“Shut down WaitForMajorityService”:“1 ms”,“Shut down the logical session cache”:“0 ms”,“Shut down the transport layer”:“0 ms”,“Shut down the global connection pool”:“0 ms”,“Shut down the flow control ticket holder”:“0 ms”,“Kill all operations for shutdown”:“0 ms”,“Shut down all tenant migration access blockers on global shutdown”:“0 ms”,“Shut down all open transactions”:“0 ms”,“Acquire the RSTL for shutdown”:“0 ms”,“Shut down the IndexBuildsCoordinator and wait for index builds to finish”:“0 ms”,“Shut down the replica set monitor”:“0 ms”,“Shut down the migration util executor”:“0 ms”,“Shut down the health log”:“0 ms”,“Shut down the TTL monitor”:“0 ms”,“Shut down expired pre-images and documents removers”:“0 ms”,“Shut down the storage engine”:“107 ms”,“Wait for the oplog cap maintainer thread to stop”:“0 ms”,“Shut down full-time data capture”:“0 ms”,“shutdownTask total elapsed time”:“109 ms”}}}}
{“t”:{“$date”:“2024-10-09T05:10:08.857+00:00”},“s”:“I”, “c”:“CONTROL”, “id”:23138, “ctx”:“SignalHandler”,“msg”:“Shutting down”,“attr”:{“exitCode”:0}}

@Shunmugam do you have other services running at the same time on your machine?

I see that Caddy can't bind to ports 80 and 443, which are needed by OpenVidu to work.
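
If you are not sure which process is holding those ports, you can check it on the host (outside the containers), for example:

sudo ss -tulpn | grep -E ':80|:443'
sudo lsof -i :80 -i :443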

Logs after the ports were freed; all services are still exiting and restarting.

docker logs caddy -f

{“level”:“info”,“ts”:1728454692.4235513,“msg”:“using provided configuration”,“config_file”:“/etc/caddy.yaml”,“config_adapter”:“yaml”}
{“level”:“info”,“ts”:1728454692.4267817,“msg”:“redirected default logger”,“from”:“stderr”,“to”:“stdout”}
{“level”:“info”,“ts”:1728454692.4272146,“logger”:“admin”,“msg”:“admin endpoint started”,“address”:“localhost:2019”,“enforce_origin”:false,“origins”:[“//127.0.0.1:2019”,“//localhost:2019”,“//[::1]:2019”]}
{“level”:“info”,“ts”:1728454692.4277232,“logger”:“http.auto_https”,“msg”:“enabling automatic HTTP->HTTPS redirects”,“server_name”:“minio”}
{“level”:“info”,“ts”:1728454692.4327595,“msg”:“warning: "certutil" is not available, install "certutil" with "apt install libnss3-tools" or "yum install nss-tools" and try again”}
{“level”:“info”,“ts”:1728454692.4327745,“msg”:“define JAVA_HOME environment variable to use the Java trust”}
{“level”:“info”,“ts”:1728454692.4331384,“logger”:“http”,“msg”:“enabling HTTP/3 listener”,“addr”:“:9000”}
{“level”:“info”,“ts”:1728454692.433322,“logger”:“http.log”,“msg”:“server running”,“name”:“minio”,“protocols”:[“h1”,“h2”,“h3”]}
{“level”:“info”,“ts”:1728454692.4333541,“logger”:“http.log”,“msg”:“server running”,“name”:“public”,“protocols”:[“h1”,“h2”,“h3”]}
{“level”:“info”,“ts”:1728454692.4333742,“logger”:“http.log”,“msg”:“server running”,“name”:“remaining_auto_https_redirects”,“protocols”:[“h1”,“h2”,“h3”]}
{“level”:“info”,“ts”:1728454692.4333806,“logger”:“http”,“msg”:“enabling automatic TLS certificate management”,“domains”:[“openvidu.test.com”]}
{“level”:“debug”,“ts”:1728454692.433412,“logger”:“layer4”,“msg”:“listening”,“address”:“tcp/[::]:443”}
{“level”:“debug”,“ts”:1728454692.4334257,“logger”:“layer4”,“msg”:“listening”,“address”:“tcp/[::]:1935”}
{“level”:“info”,“ts”:1728454692.4335387,“msg”:“autosaved config (load with --resume flag)”,“file”:“/root/.config/caddy/autosave.json”}
{“level”:“info”,“ts”:1728454692.4335444,“msg”:“serving initial configuration”}
{“level”:“info”,“ts”:1728454693.9531095,“msg”:“shutting down apps, then terminating”,“signal”:“SIGTERM”}
{“level”:“warn”,“ts”:1728454693.953163,“msg”:“exiting; byeee!! :wave:”,“signal”:“SIGTERM”}
{“level”:“info”,“ts”:1728454693.953229,“logger”:“http”,“msg”:“servers shutting down with eternal grace period”}
{“level”:“info”,“ts”:1728454693.9534879,“logger”:“admin”,“msg”:“stopped previous server”,“address”:“localhost:2019”}
{“level”:“info”,“ts”:1728454693.9535034,“msg”:“shutdown complete”,“signal”:“SIGTERM”,“exit_code”:0}

I see. It's quite strange, because it looks like the containers are handling a SIGTERM, as if something external were stopping them. Do you have some kind of daemon stopping your containers or something similar?

Did you try to deploy on a clean machine?
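
If it keeps happening, one thing you could try (just a suggestion) is to watch the Docker events stream while the containers go down; it shows which containers receive stop/kill events and when:

docker events --filter 'type=container' --filter 'event=stop' --filter 'event=kill'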

Is it reliable to run the beta version on a server? We have also tried using Nginx as the proxy instead of Caddy. I made some changes to the compose files and started each service one by one, in order.

Now, except for egress, I am able to run all the other services.

egress is throwing the error below:

+ rm -rf '/home/egress/tmp/*'
+ rm -rf /var/run/pulse /var/lib/pulse /home/egress/.config/pulse /home/egress/.cache/xdgr/pulse
+ pulseaudio -D --verbose --exit-idle-time=-1 --disallow-exit
E: [pulseaudio] main.c: Daemon startup failed.

This is my customized docker-compose.yaml:

services:
  mongo:
    image: docker.io/mongo:7.0.11
    container_name: mongo
    network_mode: host
    restart: always
    environment:
      - MONGO_INITDB_ROOT_USERNAME=mongoadmin
      - MONGO_INITDB_ROOT_PASSWORD=mongoadmin
    volumes:
      - ./data/mongo_data/data:/data/db   
    command: "mongod --bind_ip_all --port 20000"
    logging:
      options:
        max-size: "${DOCKER_LOGS_MAX_SIZE:-200M}"
    labels:
      - logging=true

  minio:
    image: docker.io/bitnami/minio:2024.6.13
    container_name: minio
    network_mode: host
    restart: unless-stopped
    environment:
      - MINIO_ROOT_USER=minioadmin
      - MINIO_ROOT_PASSWORD=minioadmin
      - MINIO_CONSOLE_SUBPATH=/minio-console
      - MINIO_BROWSER_REDIRECT_URL=https://openvidu.test.com/minio-console/
      - MINIO_DEFAULT_BUCKETS=openvidu
      - MINIO_API_PORT_NUMBER=9100
      - MINIO_CONSOLE_PORT_NUMBER=9101
    volumes:
      - ./data/minio_data/data:/bitnami/minio/data
    logging:
      options:
        max-size: "${DOCKER_LOGS_MAX_SIZE:-200M}"
    labels:
      - logging=true

  redis:
    image: docker.io/redis:7.2.5-alpine
    container_name: redis
    network_mode: host
    restart: unless-stopped
    command: >
      redis-server
      --bind 0.0.0.0
      --port 7000
      --requirepass redispassword
    logging:
      options:
        max-size: "${DOCKER_LOGS_MAX_SIZE:-200M}"
    labels:
      - logging=true


  egress:
    image: docker.io/livekit/egress:v1.8.2
    restart: unless-stopped
    container_name: egress
    environment:
      - EGRESS_CONFIG_FILE=/etc/egress.yaml
    network_mode: "host"
    volumes:
      - ./config/egress.yaml:/etc/egress.yaml
      - ./data/egress_data/home/egress:/home/egress/
    logging:
      options:
        max-size: "${DOCKER_LOGS_MAX_SIZE:-200M}"
    labels:
      - logging=true

  dashboard:
    image: docker.io/openvidu/openvidu-dashboard:3.0.0-beta2
    container_name: dashboard
    network_mode: host
    restart: unless-stopped
    environment:
      - SERVER_PORT=5000
      - ADMIN_USERNAME=admin
      - ADMIN_PASSWORD=admin
      - DATABASE_URL=mongodb://mongoadmin:mongoadmin@localhost:20000
    logging:
      options:
        max-size: "${DOCKER_LOGS_MAX_SIZE:-200M}"
    labels:
      - logging=true

  # Observability stack
  prometheus:
    image: docker.io/prom/prometheus:v2.50.1
    profiles:
      - observability
    restart: unless-stopped
    container_name: prometheus
    command:
      - --config.file=/etc/prometheus/prometheus.yaml
      - --storage.tsdb.retention.time=32d
    network_mode: host
    volumes:
      - ./config/prometheus.yaml:/etc/prometheus/prometheus.yaml
      - ./data/prometheus_data/prometheus:/prometheus

  promtail:
    image: docker.io/grafana/promtail:2.8.9
    profiles:
      - observability
    restart: unless-stopped
    container_name: promtail
    command: -config.file=/etc/promtail/promtail.yaml
    network_mode: host
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - ./config/promtail.yaml:/etc/promtail/promtail.yaml

  loki:
    image: docker.io/grafana/loki:2.8.9
    profiles:
      - observability
    restart: unless-stopped
    container_name: loki
    command: -config.file=/etc/loki/loki.yaml
    network_mode: host
    volumes:
      - ./config/loki.yaml:/etc/loki/loki.yaml
      - ./data/loki_data/data:/loki

  grafana:
    image: docker.io/grafana/grafana:10.3.3
    profiles:
      - observability
    restart: unless-stopped
    container_name: grafana
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=password
      - GF_DASHBOARDS_DEFAULT_HOME_DASHBOARD_PATH=/etc/grafana/provisioning/dashboards/openvidu_metrics.json
      - GF_SERVER_ROOT_URL=https://openvidu.test.com/grafana/
      - GF_SERVER_SERVE_FROM_SUB_PATH=true
    network_mode: host
    volumes:
      - ./data/grafana_data/data:/var/lib/grafana
      - ./config/grafana_config:/etc/grafana/provisioning

  ingress:
    image: docker.io/livekit/ingress:v1.2.0
    restart: unless-stopped
    container_name: ingress
    environment:
      - INGRESS_CONFIG_FILE=/etc/ingress.yaml
    network_mode: host
    volumes:
      - ./config/ingress.yaml:/etc/ingress.yaml
    logging:
      options:
        max-size: "${DOCKER_LOGS_MAX_SIZE:-200M}"
    labels:
      - logging=true


  openvidu:
    image: docker.io/openvidu/openvidu-server:3.0.0-beta2
    restart: unless-stopped
    container_name: openvidu
    command: --config /etc/livekit.yaml
    network_mode: host
    volumes:
      - ./config/livekit.yaml:/etc/livekit.yaml
    logging:
      options:
        max-size: "${DOCKER_LOGS_MAX_SIZE:-200M}"
    labels:
      - logging=true


  nginx:
    image: nginx:latest
    container_name: nginx-proxy
    network_mode: host
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./owncert:/etc/nginx/certs

nginx.conf

worker_processes 1;

events {
    worker_connections 1024;
}
http {
    # React application server block
    server {
        listen          443 ssl;
        server_name     openvidu.test.com;
        ssl_certificate /etc/nginx/certs/openvidu.test.com.cert;
        ssl_certificate_key /etc/nginx/certs/openvidu.test.com.key;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_prefer_server_ciphers on;

        # Proxy WebSocket connections for OpenVidu
        location / {
            proxy_pass http://localhost:7880;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        # Minio console
        location /minio-console {
            proxy_pass http://localhost:9101;
            proxy_set_header Host $host;
        }

        # Grafana
        location /grafana {
            proxy_pass http://localhost:3000;
            proxy_set_header Host $host;
        }


        # WHIP service proxy
        location /whip {
            proxy_pass http://localhost:8080;
            proxy_set_header Host $host;
        }

        # Dashboard
        location /dashboard {
            proxy_pass http://localhost:5000;
            proxy_set_header Host $host;
        }

        # Health check
        location /health/caddy {
            return 200 "OK";
        }
    }

    # RTMP Proxying
    server {
        listen 1935;
        server_name openvidu.test.com;

        location / {
            proxy_pass http://localhost:1945;
            proxy_set_header Host $host;
        }
    }

    # Minio Proxy
    server {
        listen 9000;
        server_name openvidu.test.com;

        location / {
            proxy_pass http://localhost:9100;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}

@cruizba - If possible, can you share the Docker deployment files you used to run the OpenVidu setup? Please also briefly describe the steps you followed.

When I tried it on my Ubuntu server, the setup-service container kept exiting. I did some manual work, like removing that setup and running its commands in the terminal, to get it done.

Later I removed the depends_on entries from the docker-compose file, which helped me get all the other services up and running. But the issue is that I am not able to log in to the dashboard.

The docker-compose file is the one generated by the installer. It should work without any changes. If it does not, it's probably because there are some edge cases we are not taking into account.

I would like to know which cloud provider and Linux distribution you are using, and which Docker version you are using.

@VAISHAKH_VM If you can provide this information, it would help us detect the edge cases we are not contemplating.
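
For reference, the output of these commands would be enough:

cat /etc/os-release
uname -r
docker version
docker compose version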

@VAISHAKH_VM @Shunmugam

Before the official release I will try to upload some video tutorials to the main OpenVidu YouTube channel on how to install every OpenVidu deployment, so people have a clear path to follow. We've improved the docs, but I think a video would clarify a lot.

I've seen that people tend to skip a lot of the documentation (I understand it, no problem with that).

Expect OpenVidu v3 to be stable in Q4 2024.

Just in case you’ve missed something:

  1. Ensure Docker and Docker Compose are on the latest version.
  2. Check that the ports stated in the documentation are not used by any other process.
  3. Read the docs carefully.
  4. After the install, remember to run OpenVidu with systemctl start openvidu, not with docker compose up. In beta3 I will forbid this possibility, so users use systemctl instead of docker compose up (see the commands below).
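
For reference, the expected lifecycle with systemd would be something like this (assuming the unit installed by the installer is named openvidu, as above):

systemctl start openvidu      # start all OpenVidu containers
systemctl status openvidu     # check the service state
journalctl -u openvidu -f     # follow the service logs
systemctl stop openvidu       # stop everything cleanly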

Please report any inconvenience; feedback is very welcome. We want to make the installation as smooth as possible.

Best Regards

@cruizba

In my case

  1. Ensure Docker and Docker Compose are on the latest version.
    Docker and Docker Compose are on the latest versions.

  2. Check that the ports stated in the documentation are not used by any other process.
    Since we are using a fresh server for OpenVidu, no other services are running. The server is dedicated to OpenVidu.

  3. Read the docs carefully.
    Done.

  4. After the install, remember to run OpenVidu with systemctl start openvidu, not with docker compose up.
    Upon a proper installation, once I ran systemctl start openvidu my containers kept exiting, the same as when running docker compose up.

@VAISHAKH_VM check your DM please.
