Issue with OPENVIDU_URL variable

Hello everyone,

I am reaching out regarding an issue I am encountering while deploying version 3.5.0 using AWS HA.

Specifically, the OPENVIDU_URL variable points to a single IP address, but requests to it return no data. This behavior causes the error shown in the attached file.

Could you please help me investigate why this is happening?

Best regards, Matteo.

You are using the Angular component, right?

I think this issue is related to this: OpenVidu 3.5: Components (Angular) can’t join in split-VM setup

Take a look at the workaround in the first message.

Hi,

yes, I’m using the “OpenVidu platform” Angular Components solution.

I’ll take a look at the link you suggested; however, is there any problem with my deployment?

Is it correct that when I open my OPENVIDU_URL value in the browser I get this?

Shouldn’t I be seeing the Meet App or something else?

Thank you, Matteo.

No, it should open OpenVidu Meet.

I would check whether every Docker container is running correctly on every Master Node.

SSH into every Master Node and execute:

sudo su
docker ps

See if there is any restarting service.
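If there are many containers, `docker ps` can filter directly on the restarting state (a quick sketch; run as root, as above):

```shell
# Print only containers stuck in a restart loop.
# On a healthy Master Node this should output just the table header.
docker ps --filter "status=restarting" --format "table {{.Names}}\t{{.Status}}"
```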

In that case, restart with:

systemctl restart openvidu

EDIT:

The ERR_TIMED_OUT error also suggests that the DNS may not be configured to point to the Load Balancer. Please verify it.
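One quick way to check the record from your machine (the domain below is a placeholder for your actual OPENVIDU_URL host):

```shell
# Resolve the domain; an empty result or a stale IP means the DNS
# record has not propagated to the Load Balancer address yet.
getent hosts your-openvidu-domain.example.com
```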

Hi,

it was the DNS that hadn’t kicked in yet; now I can see the Meet App.

I tried to modify my caddy.yaml configuration on Master Node 1; however, I don’t know where to change the configuration. Could you help me, please?

# Caddy configuration file
# ----------------------
# NOTES:
# ----------------------
# WARNING: Take into account that this file will be overwritten by OpenVidu when you upgrade it.
#
# This file uses the same interpolation rules as Docker Compose.
# For more details, refer to the documentation: https://docs.docker.com/compose/compose-file/12-interpolation/
#
# - ${openvidu.CONFIG_PARAMETER} must be used to access environment variables from openvidu.env.
# - ${master_node.CONFIG_PARAMETER} must be used to access environment variables from master_node.env.
# ----------------------
admin:
  listen: unix//var/run/caddy/admin.sock
storage:
  module: redis
  client_type: failover
  address:
    - master-node-1:${master_node.REDIS_SENTINEL_INTERNAL_PORT:?mandatory}
    - master-node-2:${master_node.REDIS_SENTINEL_INTERNAL_PORT:?mandatory}
    - master-node-3:${master_node.REDIS_SENTINEL_INTERNAL_PORT:?mandatory}
    - master-node-4:${master_node.REDIS_SENTINEL_INTERNAL_PORT:?mandatory}
  master_name: openvidu
  password: ${master_node.REDIS_PASSWORD:?mandatory}
  password_sentinel: ${master_node.REDIS_PASSWORD:?mandatory}
apps:
  
  layer4:
    servers:
      rtmp:
        listen:
          - ":${openvidu.CADDY_RTMPS_PUBLIC_PORT:?mandatory}"
        routes:
          - handle:
            - handler: proxy
              # @id is used to dynamically update upstreams
              # by the operator
              "@id": ov-rtmp
              upstreams:
                - dial: []
      turn:
        listen:
          - ":${openvidu.LIVEKIT_TURN_TLS_INTERNAL_PORT:?mandatory}"
        routes:
          - handle:
            - handler: proxy
              # @id is used to dynamically update upstreams
              # by the operator
              "@id": ov-turn
              upstreams:
                - dial: []
  http:
    http_port: ${openvidu.CADDY_HTTP_PUBLIC_PORT:?mandatory}
    https_port: ${openvidu.CADDY_HTTPS_PUBLIC_PORT:?mandatory}
    servers:

      # Redirect HTTP to HTTPS
      redirect:
        listen:
          - ":${openvidu.CADDY_HTTP_PUBLIC_PORT:?mandatory}"
        routes:
          - handle:
            - handler: static_response
              status_code: 301
              headers:
                Location:
                  - https://{http.request.host}:${openvidu.CADDY_HTTPS_PUBLIC_PORT:?mandatory}{http.request.uri}

      public:
        listen:
          - ":${openvidu.CADDY_HTTP_INTERNAL_PORT:?mandatory}"
        logs:
          default_logger_name: default
        automatic_https:
          disable: true
        routes:
          - handle:
              - handler: subroute
                # @id is used to dynamically update openvidu route
                # by the operator
                "@id": "ov-public-handler"
                routes: []
          # Ingress WHIP service
          - handle:
              - handler: subroute
                routes:
                  - handle:
                      - handler: reverse_proxy
                        # @id is used to dynamically update upstreams
                        # by the operator
                        "@id": ov-whip
                        health_checks:
                          active:
                            expect_status: 404
                            interval: 5s
                            timeout: 2s
                            uri: /
                        load_balancing:
                          retries: 2
                          selection_policy:
                            policy: round_robin
                        upstreams: []
                    match:
                      - path:
                          - /whip
          # Dashboard
          - handle:
              - handler: subroute
                routes:
                  - handle:
                      - handler: rewrite
                        strip_path_prefix: /dashboard
                      - handler: reverse_proxy
                        health_checks:
                          active:
                            expect_status: 200
                            interval: 5s
                            timeout: 2s
                            uri: /dashboard/api/healthcheck
                        load_balancing:
                          retries: 2
                          selection_policy:
                            policy: round_robin
                        upstreams:
                          - dial: master-node-1:${openvidu.DASHBOARD_INTERNAL_PORT:?mandatory}
                          - dial: master-node-2:${openvidu.DASHBOARD_INTERNAL_PORT:?mandatory}
                          - dial: master-node-3:${openvidu.DASHBOARD_INTERNAL_PORT:?mandatory}
                          - dial: master-node-4:${openvidu.DASHBOARD_INTERNAL_PORT:?mandatory}
                    match:
                      - path:
                          - /dashboard/*
                          - /dashboard
          # Minio Console
          - handle:
              - handler: subroute
                routes:
                  - handle:
                      - handler: static_response
                        headers:
                          Location:
                            - "/minio-console/"
                        status_code: 302
                    match:
                      - path:
                          - /minio-console
          - handle:
              - handler: subroute
                routes:
                  - handle:
                      - handler: rewrite
                        strip_path_prefix: /minio-console
                      - handler: reverse_proxy
                        health_checks:
                          active:
                            expect_status: 200
                            interval: 5s
                            timeout: 2s
                            uri: /minio-console/
                        load_balancing:
                          retries: 2
                          selection_policy:
                            policy: round_robin
                        upstreams:
                          - dial: master-node-1:${openvidu.MINIO_CONSOLE_INTERNAL_PORT:?mandatory}
                          - dial: master-node-2:${openvidu.MINIO_CONSOLE_INTERNAL_PORT:?mandatory}
                          - dial: master-node-3:${openvidu.MINIO_CONSOLE_INTERNAL_PORT:?mandatory}
                          - dial: master-node-4:${openvidu.MINIO_CONSOLE_INTERNAL_PORT:?mandatory}
                    match:
                      - path:
                          - /minio-console/*
          # OpenVidu v2 compatibility API
          - handle:
              - handler: subroute
                routes:
                  - handle:
                      - handler: reverse_proxy
                        health_checks:
                          active:
                            expect_status: 401
                            interval: 5s
                            timeout: 2s
                            uri: /openvidu/api/health
                        load_balancing:
                          retries: 2
                          selection_policy:
                            policy: ip_hash
                        upstreams:
                          - dial: master-node-1:${openvidu.OPENVIDU_V2COMPAT_INTERNAL_PORT:?mandatory}
                          - dial: master-node-2:${openvidu.OPENVIDU_V2COMPAT_INTERNAL_PORT:?mandatory}
                          - dial: master-node-3:${openvidu.OPENVIDU_V2COMPAT_INTERNAL_PORT:?mandatory}
                          - dial: master-node-4:${openvidu.OPENVIDU_V2COMPAT_INTERNAL_PORT:?mandatory}
                    match:
                      - path:
                          - /openvidu/api/*
                          - /openvidu/recordings/*
                          - /openvidu/ws/*
          # Grafana
          - handle:
              - handler: subroute
                routes:
                  - handle:
                      - handler: reverse_proxy
                        health_checks:
                          active:
                            expect_status: 200
                            interval: 5s
                            timeout: 2s
                            uri: /api/health
                        load_balancing:
                          retries: 2
                          selection_policy:
                            policy: ip_hash
                        upstreams:
                          - dial: master-node-1:${openvidu.GRAFANA_INTERNAL_PORT:?mandatory}
                          - dial: master-node-2:${openvidu.GRAFANA_INTERNAL_PORT:?mandatory}
                          - dial: master-node-3:${openvidu.GRAFANA_INTERNAL_PORT:?mandatory}
                          - dial: master-node-4:${openvidu.GRAFANA_INTERNAL_PORT:?mandatory}
                    match:
                      - path:
                          - /grafana/*
                          - /grafana
          # Health check Caddy
          - handle:
              - handler: subroute
                routes:
                  - handle:
                      - handler: static_response
                        status_code: 200
                        body: "OK"
                    match:
                      - path:
                          - /health/caddy
          # OpenVidu Meet
          - handle:
              - handler: subroute
                routes:
                  - handle:
                      - handler: reverse_proxy
                        health_checks:
                          active:
                            expect_status: 200
                            interval: 5s
                            timeout: 2s
                            uri: /
                        load_balancing:
                          retries: 2
                          selection_policy:
                            policy: ip_hash
                        upstreams:
                          - dial: master-node-1:${openvidu.MEET_INTERNAL_PORT:?mandatory}
                          - dial: master-node-2:${openvidu.MEET_INTERNAL_PORT:?mandatory}
                          - dial: master-node-3:${openvidu.MEET_INTERNAL_PORT:?mandatory}
                          - dial: master-node-4:${openvidu.MEET_INTERNAL_PORT:?mandatory}
                    match:
                      - path:
                          - /*

      minio:
        listen:
          - ":${openvidu.CADDY_MINIO_PUBLIC_PORT:?mandatory}"
        logs:
          default_logger_name: default
        automatic_https:
          disable: true
        routes:
          - handle:
              - handler: reverse_proxy
                health_checks:
                  active:
                    expect_status: 200
                    interval: 5s
                    timeout: 2s
                    uri: /minio/health/live
                load_balancing:
                  retries: 2
                  selection_policy:
                    policy: round_robin
                upstreams:
                  - dial: master-node-1:${openvidu.MINIO_API_INTERNAL_PORT:?mandatory}
                  - dial: master-node-2:${openvidu.MINIO_API_INTERNAL_PORT:?mandatory}
                  - dial: master-node-3:${openvidu.MINIO_API_INTERNAL_PORT:?mandatory}
                  - dial: master-node-4:${openvidu.MINIO_API_INTERNAL_PORT:?mandatory}

logging:
  logs:
    default:
      level: WARN

# Comment the previous logging section and
# uncomment the following lines to enable detailed Caddy logs
# Disabled by default to avoid performance issues
# ----------------------
# logging:
#   logs:
#     default:
#       level: INFO
#       encoder:
#         format: filter
#         wrap:
#           format: json
#         fields:
#           "request>headers":
#             filter: delete
#           "resp_headers":
#             filter: delete
#           "request>uri":
#             filter: query
#             actions:
#               - parameter: "access_token"
#                 type: delete
#       include:
#         - http
#       writer:
#         output: stdout
#     layer4access:
#       level: DEBUG
#       include:
#         - layer4
#       writer:
#         output: stdout

Thank you, Matteo.

Ah, sorry, that /rtc parameter is configured via the operator in PRO versions.

It is not a mandatory change. Is it working now with the DNS?

If not, I will build an operator image with that specific change.

Hi, no, unfortunately it is still not working.

I get the same error shared in the image in the previous messages.

Thank you, Matteo.

PS: It works using the Meet App, but if I want to use my customized OpenVidu platform that uses Angular Components, I still get the error.

I’ll prepare an updated image with the operator fix and let you know here once it’s ready with instructions on how to deploy it.

@Developer3010

I’ve created an image, docker.io/openvidu/openvidu-operator:3.5.1, with the Caddy rules changed.

You need to:

  1. SSH into every Master Node.
  2. Open /opt/openvidu/docker-compose.yaml.
  3. Locate the operator service and change the image to docker.io/openvidu/openvidu-operator:3.5.1:
  operator:
    image: docker.io/openvidu/openvidu-operator:3.5.1
  4. Pull the image:
sudo su
cd /opt/openvidu
docker compose pull
  5. Restart with:
systemctl restart openvidu

You need to change the docker-compose.yaml and restart every Master Node.
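For reference, the steps above could be scripted roughly as follows, run as root on each Master Node (the sed pattern assumes the current operator tag is 3.5.0, matching the deployment version mentioned earlier; check your docker-compose.yaml before running):

```shell
cd /opt/openvidu
# Swap the operator image tag in place (assumes the current tag is 3.5.0).
sed -i 's|openvidu/openvidu-operator:3.5.0|openvidu/openvidu-operator:3.5.1|' docker-compose.yaml
docker compose pull
systemctl restart openvidu
```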

Hi, now it works!

Thank you again for the fix, have a nice day.
