OpenVidu Pro: Waiting for Kibana

Hello,
I have two machines, one for OpenVidu Pro and the second for the Media Node.
Elasticsearch is working, but I'm getting this message:

Waiting for kibana in ‘http://127.0.0.1/kibana’ URL…
Waiting for kibana in ‘http://127.0.0.1/kibana’ URL…
Waiting for kibana in ‘http://127.0.0.1/kibana’ URL…
Waiting for kibana in ‘http://127.0.0.1/kibana’ URL…

I tried to set OPENVIDU_PRO_KIBANA_HOST= to an empty string, but it is not being picked up, as shown in the logs:

Output of cat /opt/openvidu/openvidu-report-01-02-2021-12-23.txt | grep kibana:


PATH=/usr/share/kibana/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
SERVER_BASEPATH="/kibana"
HOME=/usr/share/kibana
OPENVIDU_PRO_KIBANA_HOST=http://127.0.0.1/kibana
WAIT_KIBANA_URL=http://127.0.0.1/kibana


Regards.

Hello, this is probably a misconfiguration in NGINX or something wrong in the .env file.

Can you please share the content of /opt/openvidu/.env?

Also, could you share your nginx logs? You can get them with these commands:

sudo su
cd /opt/openvidu
docker-compose logs nginx
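
If you prefer to watch the logs live while the error is happening, docker-compose can also follow the stream:

docker-compose logs -f nginx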

I tried to set OPENVIDU_PRO_KIBANA_HOST= to an empty string, but it is not being picked up, as shown in the logs…

Kibana is mandatory and must be running at the moment.
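
If you want to reproduce the check by hand: the startup script simply polls that URL with HEAD requests (you can see them as "HEAD /kibana" in your nginx logs below), so a plain curl from the Pro machine shows what it sees. The flags below are just a convenient sketch (-k skips the self-signed certificate, -I sends a HEAD request, -L follows redirects):

curl -kIL http://127.0.0.1/kibana    # a 200 response means Kibana is reachable behind nginx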

.env:
#OpenVidu configuration
#----------------------
#Documentation: https://docs.openvidu.io/en/stable/reference-docs/openvidu-config/

#NOTE: This file doesn't need to quote assignment values, like most shells do.
#All values are stored as-is, even if they contain spaces, so don't quote them.

#Domain name. If you do not have one, the public IP of the machine.
#For example: 198.51.100.1, or openvidu.example.com
DOMAIN_OR_PUBLIC_IP=192.168.115.10
#OpenVidu PRO License
OPENVIDU_PRO_LICENSE=***

#OpenVidu SECRET used for apps to connect to OpenVidu server and users to access the OpenVidu Dashboard
OPENVIDU_SECRET=password123

#Certificate type:
#- selfsigned: Self-signed certificate. Not recommended for production use.
#Users will see an ERROR when connecting to the web page.
#- owncert: Valid certificate purchased from an Internet services company.
#Please put the certificate files inside folder ./owncert
#with names certificate.key and certificate.cert
#- letsencrypt: Generate a new certificate using Let's Encrypt. Please set the
#required contact email for Let's Encrypt in the LETSENCRYPT_EMAIL
#variable.
CERTIFICATE_TYPE=selfsigned

#If CERTIFICATE_TYPE=letsencrypt, you need to configure a valid email for notifications
LETSENCRYPT_EMAIL=

#Proxy configuration
#If you want to change the ports on which openvidu listens, uncomment the following lines

#Allows any request to http://DOMAIN_OR_PUBLIC_IP:HTTP_PORT/ to be automatically
#redirected to https://DOMAIN_OR_PUBLIC_IP:HTTPS_PORT/.
#WARNING: the default port 80 cannot be changed during the first boot
#if you have chosen to deploy with the option CERTIFICATE_TYPE=letsencrypt
#HTTP_PORT=80

#Changes the port of all services exposed by OpenVidu.
#SDKs, REST clients and browsers will have to connect to this port
#HTTPS_PORT=443

#Old paths are considered now deprecated, but still supported by default.
#OpenVidu Server will log a WARN message every time a deprecated path is called, indicating
#the new path that should be used instead. You can set property SUPPORT_DEPRECATED_API=false
#to stop allowing the use of old paths.
#Default value is true
#SUPPORT_DEPRECATED_API=true

#If true, requests with www will be redirected to non-www requests
#Default value is false
#REDIRECT_WWW=false

#How many worker connections to configure in the nginx proxy.
#The more worker connections, the more requests can be handled
#Default value is 10240
#WORKER_CONNECTIONS=10240

#Access restrictions
#In this section you can restrict the IPs from which the OpenVidu API
#and the Administration Panel can be accessed.
#WARNING! If you touch this configuration you can lose access to the platform from some IPs.
#Use it carefully.

#This section limits access to the /dashboard (OpenVidu CE) and /inspector (OpenVidu Pro) pages.
#The form for a single IP or an IP range is:
#ALLOWED_ACCESS_TO_DASHBOARD=198.51.100.1 and ALLOWED_ACCESS_TO_DASHBOARD=198.51.100.0/24
#To limit multiple IPs or IP ranges, separate by commas like this:
#ALLOWED_ACCESS_TO_DASHBOARD=198.51.100.1, 198.51.100.0/24
#ALLOWED_ACCESS_TO_DASHBOARD=

#This section limits access to the Openvidu REST API.
#The form for a single IP or an IP range is:
#ALLOWED_ACCESS_TO_RESTAPI=198.51.100.1 and ALLOWED_ACCESS_TO_RESTAPI=198.51.100.0/24
#To limit multiple IPs or IP ranges, separate by commas like this:
#ALLOWED_ACCESS_TO_RESTAPI=198.51.100.1, 198.51.100.0/24
#ALLOWED_ACCESS_TO_RESTAPI=

#Mode of cluster management. Can be auto (OpenVidu manages Media Nodes on its own.
#Parameter KMS_URIS is ignored) or manual (user must manage Media Nodes. Parameter
#KMS_URIS is used: if any uri is provided it must be valid)
OPENVIDU_PRO_CLUSTER_MODE=manual

#Which environment are you using
#Possibles values: aws, on_premise
OPENVIDU_PRO_CLUSTER_ENVIRONMENT=on_premise

#Unique identifier of your cluster. Each OpenVidu Server Pro instance corresponds to one cluster.
#You can launch as many clusters as you want with your license key.
#Cluster ID will always be stored to disk so restarting OpenVidu Server Pro will keep the same previous cluster ID
#if this configuration parameter is not given a distinct value.
#OPENVIDU_PRO_CLUSTER_ID=

#The desired number of Media Nodes on startup. First the autodiscovery process is performed.
#If there are too many Media Nodes after that, they will be dropped until this number is reached.
#If there are not enough, more will be launched.
#This only takes place if OPENVIDU_PRO_CLUSTER_MODE is set to auto
#If set to zero no media servers will be launched.
#Type: number >= 0
#OPENVIDU_PRO_CLUSTER_MEDIA_NODES=

#How often each running Media Node will send OpenVidu Server Pro Node load metrics, in seconds.
#This property is only used when OPENVIDU_PRO_CLUSTER_LOAD_STRATEGY is 'cpu'. Other load strategies
#gather information synchronously when required
#Type: number >= 0
OPENVIDU_PRO_CLUSTER_LOAD_INTERVAL=

#Whether to enable or disable autoscaling. With autoscaling the number of Media Nodes will
#be automatically adjusted according to existing load
#Values: true | false
OPENVIDU_PRO_CLUSTER_AUTOSCALING=false

#How often the autoscaling algorithm runs, in seconds
#Type number >= 0
#OPENVIDU_PRO_CLUSTER_AUTOSCALING_INTERVAL=

#If autoscaling is enabled, the upper limit of Media Nodes that can be reached.
#Even when the average load exceeds the threshold, no more Media Nodes will be added to cluster
#Type number >= 0
#OPENVIDU_PRO_CLUSTER_AUTOSCALING_MAX_NODES=
#If autoscaling is enabled, the lower limit of Media Nodes that can be reached.
#Even when the average load is inferior to the threshold, no more Media Nodes will
#be removed from the cluster
#OPENVIDU_PRO_CLUSTER_AUTOSCALING_MIN_NODES=

#If autoscaling is enabled, the upper average load threshold that will trigger the addition
#of a new Media Node.
#Percentage value (0 min, 100 max)
#OPENVIDU_PRO_CLUSTER_AUTOSCALING_MAX_LOAD=

#If autoscaling is enabled, the lower average load threshold that will trigger the removal
#of an existing Media Node.
#Percentage value (0 min, 100 max)
#OPENVIDU_PRO_CLUSTER_AUTOSCALING_MIN_LOAD=

#What parameter should be used to distribute the creation of new sessions
#(and therefore distribution of load) among all available Media Nodes
OPENVIDU_PRO_CLUSTER_LOAD_STRATEGY=streams

#Whether to enable or disable Network Quality API. You can monitor and
#warn users about the quality of their networks with this feature
#OPENVIDU_PRO_NETWORK_QUALITY=false

#If OPENVIDU_PRO_NETWORK_QUALITY=true, how often the network quality
#algorithm will be invoked for each user, in seconds
#OPENVIDU_PRO_NETWORK_QUALITY_INTERVAL=5

#Max days until indexes in rollover state are deleted on Elasticsearch
#Type number >= 0
#Default Value is 15
#OPENVIDU_PRO_ELASTICSEARCH_MAX_DAYS_DELETE=

#Private IP of OpenVidu Server Pro
#For example 192.168.1.101
#OPENVIDU_PRO_PRIVATE_IP=

#Where to store recording files. Can be 'local' (local storage) or 's3' (AWS bucket).
#You will need to define an OPENVIDU_PRO_AWS_S3_BUCKET if you use it.
#OPENVIDU_PRO_RECORDING_STORAGE=

#S3 Bucket where to store recording files. May include paths to allow navigating
#folder structures inside the bucket. This property is only taken into account
#if OPENVIDU_PRO_RECORDING_STORAGE=s3
#OPENVIDU_PRO_AWS_S3_BUCKET=

#If your instance has a role which has access to read
#and write into the S3 bucket, you don't need this parameter
#OPENVIDU_PRO_AWS_ACCESS_KEY=

#AWS credentials secret key from OPENVIDU_PRO_AWS_ACCESS_KEY. This property is only
#taken into account if OPENVIDU_PRO_RECORDING_STORAGE=s3
#If your instance has a role which has access to read
#and write into the S3 bucket, you don't need this parameter
#OPENVIDU_PRO_AWS_SECRET_KEY=

#AWS region in which the S3 bucket is located (e.g. eu-west-1). If not provided,
#the region will try to be discovered automatically, although this is not always possible.
#This property is only taken into account if OPENVIDU_PRO_RECORDING_STORAGE=s3
#OPENVIDU_PRO_AWS_REGION=

#Whether to enable recording module or not
OPENVIDU_RECORDING=false

#Use recording module with debug mode.
OPENVIDU_RECORDING_DEBUG=false

#Folder used to save the OpenVidu recording videos. Change it
#to the folder you want to use from your host.
OPENVIDU_RECORDING_PATH=/opt/openvidu/recordings

#System path where OpenVidu Server should look for custom recording layouts
OPENVIDU_RECORDING_CUSTOM_LAYOUT=/opt/openvidu/custom-layout

#if true any client can connect to
#https://OPENVIDU_SERVER_IP:OPENVIDU_PORT/recordings/any_session_file.mp4
#and access any recorded video file. If false this path will be secured with
#OPENVIDU_SECRET param just as OpenVidu Server dashboard at
#https://OPENVIDU_SERVER_IP:OPENVIDU_PORT
#Values: true | false
OPENVIDU_RECORDING_PUBLIC_ACCESS=false
#Which users should receive the recording events in the client side
#(recordingStarted, recordingStopped). Can be all (every user connected to
#the session), publisher_moderator (users with role 'PUBLISHER' or
#'MODERATOR'), moderator (only users with role 'MODERATOR') or none
#(no user will receive these events)
OPENVIDU_RECORDING_NOTIFICATION=publisher_moderator

#Timeout in seconds for recordings to automatically stop (and the session involved to be closed)
#when conditions are met: a session recording is started but no user is publishing to it or a session
#is being recorded and last user disconnects. If a user publishes within the timeout in either case,
#the automatic stop of the recording is cancelled
#0 means no timeout
OPENVIDU_RECORDING_AUTOSTOP_TIMEOUT=120

#Maximum video bandwidth sent from clients to OpenVidu Server, in kbps.
#0 means unconstrained
OPENVIDU_STREAMS_VIDEO_MAX_RECV_BANDWIDTH=1000

#Minimum video bandwidth sent from clients to OpenVidu Server, in kbps.
#0 means unconstrained
OPENVIDU_STREAMS_VIDEO_MIN_RECV_BANDWIDTH=300

#Maximum video bandwidth sent from OpenVidu Server to clients, in kbps.
#0 means unconstrained
OPENVIDU_STREAMS_VIDEO_MAX_SEND_BANDWIDTH=1000

#Minimum video bandwidth sent from OpenVidu Server to clients, in kbps.
#0 means unconstrained
OPENVIDU_STREAMS_VIDEO_MIN_SEND_BANDWIDTH=300

#'true' to enable OpenVidu Webhook service, 'false' otherwise
#Values: true | false
OPENVIDU_WEBHOOK=false

#HTTP endpoint where OpenVidu Server will send Webhook HTTP POST messages
#Must be a valid URL: http(s)://ENDPOINT
#OPENVIDU_WEBHOOK_ENDPOINT=

#List of headers that OpenVidu Webhook service will attach to HTTP POST messages
#OPENVIDU_WEBHOOK_HEADERS=

#List of events that will be sent by OpenVidu Webhook service
#Default value is all available events
OPENVIDU_WEBHOOK_EVENTS=[sessionCreated,sessionDestroyed,participantJoined,participantLeft,webrtcConnectionCreated,webrtcConnectionDestroyed,recordingStatusChanged,filterEventDispatched,mediaNodeStatusChanged]

#How often the garbage collector of non active sessions runs.
#This helps cleaning up sessions that have been initialized through
#REST API (and maybe tokens have been created for them) but have had no users connected.
#Default to 900s (15 mins). 0 to disable non active sessions garbage collector
OPENVIDU_SESSIONS_GARBAGE_INTERVAL=900

#Minimum time in seconds that a non active session must have been in existence
#for the garbage collector of non active sessions to remove it. Default to 3600s (1 hour).
#If non active sessions garbage collector is disabled
#(property 'OPENVIDU_SESSIONS_GARBAGE_INTERVAL' to 0) this property is ignored
OPENVIDU_SESSIONS_GARBAGE_THRESHOLD=3600
#Call Detail Record enabled
#Whether to enable Call Detail Record or not
#Values: true | false
OPENVIDU_CDR=false

#Path where the cdr log files are hosted
OPENVIDU_CDR_PATH=/opt/openvidu/cdr

#Openvidu Server Level logs
#--------------------------
#Uncomment the next line and define this variable to change
#the verbosity level of the logs of Openvidu Service
#RECOMMENDED VALUES: INFO for normal logs, DEBUG for more verbose logs
#OV_CE_DEBUG_LEVEL=INFO
#OpenVidu Java Options
#--------------------------
#Uncomment the next line and define this to add options to java command
#Documentation: Configuring the JVM, Java Options, and Database Cache
#JAVA_OPTIONS=-Xms2048m -Xmx4096m

#ElasticSearch Java Options
#--------------------------
#Uncomment the next line and define this to add options to java command of Elasticsearch
#Documentation: Configuring the JVM, Java Options, and Database Cache
#By default ElasticSearch is configured to use "-Xms2g -Xmx2g" as Java Min and Max memory heap allocation
#ES_JAVA_OPTS=-Xms2048m -Xmx4096m
#Kibana And ElasticSearch Configuration
#--------------------------
#Kibana And ElasticSearch Basic Auth configuration (Credentials)
#These credentials will also be valid for the Kibana dashboard
ELASTICSEARCH_USERNAME=elasticadmin
ELASTICSEARCH_PASSWORD=123456

#Media Node Configuration
#--------------------------
#You can add any KMS environment variable as described in the
#documentation of the docker image: Docker Hub
#If you want to add an environment variable to KMS, you must add a variable using this prefix: 'KMS_DOCKER_ENV_',
#followed by the environment variable you want to setup.
#For example if you want to setup KMS_MIN_PORT to 50000, it would be KMS_DOCKER_ENV_KMS_MIN_PORT=50000

#Docker hub kurento media server: Docker Hub
#Uncomment the next line and define this variable with the KMS image that you want to use
#By default, KMS_IMAGE is defined in media nodes and it does not need to be specified unless
#you want to use a specific version of KMS
#KMS_IMAGE=kurento/kurento-media-server:6.15.0

#Uncomment the next line and define this variable to change
#the verbosity level of the logs of KMS
#Documentation: Debug Logging — Kurento 6.15.0 documentation
#KMS_DOCKER_ENV_GST_DEBUG=

#Cloudformation configuration
#--------------------------

#If you're working outside AWS, ignore this section

#AWS_DEFAULT_REGION=
#AWS_IMAGE_ID=
#AWS_INSTANCE_TYPE=
#AWS_KEY_NAME=
#AWS_SUBNET_ID=
#AWS_SECURITY_GROUP=
#AWS_STACK_ID=
#AWS_STACK_NAME=

OPENVIDU_PRO_KIBANA_HOST=

nginx logs:
nginx_1 |
nginx_1 | =======================================
nginx_1 | = INPUT VARIABLES =
nginx_1 | =======================================
nginx_1 |
nginx_1 | Config NGINX:
nginx_1 | - Http Port: 80
nginx_1 | - Https Port: 443
nginx_1 | - Worker Connections: 10240
nginx_1 | - Allowed Access in Openvidu Dashboard: all
nginx_1 | - Allowed Access in Openvidu API: all
nginx_1 | - Support deprecated API: true
nginx_1 | - Redirect www to non-www: false
nginx_1 |
nginx_1 | Config Openvidu Application:
nginx_1 | - Domain name: 192.168.115.10
nginx_1 | - Certificated: selfsigned
nginx_1 | - Letsencrypt Email:
nginx_1 | - Openvidu Application: true
nginx_1 | - Openvidu Application Type: PRO
nginx_1 |
nginx_1 | =======================================
nginx_1 | = CONFIGURATION NGINX =
nginx_1 | =======================================
nginx_1 |
nginx_1 | Configure 192.168.115.10 domain…
nginx_1 | - New configuration: selfsigned 192.168.115.10
nginx_1 | - Old configuration: selfsigned 192.168.115.10
nginx_1 |
nginx_1 | - Selfsigned certificate already exists, using them…
nginx_1 |
nginx_1 | =======================================
nginx_1 | = ALLOWED ACCESS =
nginx_1 | =======================================
nginx_1 |
nginx_1 | 2021/02/01 13:30:44 [error] 18#18: *1 open() “/etc/nginx/html/kibana” failed (2: No such file or directory), client: 127.0.0.1, server: , request: “HEAD /kibana HTTP/1.1”, host: “127.0.0.1”
nginx_1 | Adding rules…127.0.0.1 - - [01/Feb/2021:13:30:44 +0000] “HEAD /kibana HTTP/1.1” 404 0 “-” “curl/7.47.0” “-”
nginx_1 |
nginx_1 | - Public IPv4 for rules: 92.103.170.138
nginx_1 |
nginx_1 | Finish Rules:
nginx_1 | Openvidu Dashboard:
nginx_1 | - allow all;
nginx_1 | Openvidu API:
nginx_1 | - allow all;
nginx_1 |
nginx_1 | =======================================
nginx_1 | = START OPENVIDU PROXY =
nginx_1 | =======================================
nginx_1 |
nginx_1 | 2021/02/01 13:30:44 [warn] 69#69: “ssl_stapling” ignored, no OCSP responder URL in the certificate “/etc/letsencrypt/live/192.168.115.10/fullchain.pem”
nginx_1 | nginx: [warn] “ssl_stapling” ignored, no OCSP responder URL in the certificate “/etc/letsencrypt/live/192.168.115.10/fullchain.pem”
nginx_1 | 2021/02/01 13:30:44 [notice] 69#69: signal process started
nginx_1 | 2021/02/01 13:30:44 [warn] 16#16: “ssl_stapling” ignored, no OCSP responder URL in the certificate “/etc/letsencrypt/live/192.168.115.10/fullchain.pem”
nginx_1 | 2021/02/01 13:30:45 [error] 71#71: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://[::1]:5601/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:45 [warn] 71#71: *2 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://[::1]:5601/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:45 [error] 71#71: *2 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://127.0.0.1:5601/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:45 [warn] 71#71: *2 upstream server temporarily disabled while reading response header from upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://127.0.0.1:5601/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:46 [error] 71#71: *5 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:47 [error] 71#71: *6 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:48 [error] 71#71: *7 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:49 [error] 71#71: *8 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:50 [error] 71#71: *9 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:51 [error] 71#71: *11 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:52 [error] 71#71: *12 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:53 [error] 71#71: *13 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:54 [error] 71#71: *14 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:55 [error] 71#71: *15 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:56 [error] 71#71: *16 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://127.0.0.1:5601/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:56 [warn] 71#71: *16 upstream server temporarily disabled while reading response header from upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://127.0.0.1:5601/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:56 [error] 71#71: *16 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://[::1]:5601/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:56 [warn] 71#71: *16 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://[::1]:5601/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:57 [error] 71#71: *19 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:58 [error] 71#71: *20 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:30:59 [error] 71#71: *21 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:00 [error] 71#71: *22 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:01 [error] 71#71: *23 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:02 [error] 71#71: *24 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:03 [error] 71#71: *25 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:04 [error] 71#71: *26 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:05 [error] 71#71: *27 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:06 [error] 71#71: *28 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:07 [error] 71#71: *29 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://127.0.0.1:5601/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:07 [warn] 71#71: *29 upstream server temporarily disabled while reading response header from upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://127.0.0.1:5601/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:07 [error] 71#71: *29 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://[::1]:5601/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:07 [warn] 71#71: *29 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://[::1]:5601/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:08 [error] 71#71: *32 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:09 [error] 71#71: *33 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:10 [error] 71#71: *34 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:11 [error] 71#71: *35 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:12 [error] 71#71: *36 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:13 [error] 71#71: *37 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:14 [error] 71#71: *38 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:15 [error] 71#71: *39 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:16 [error] 71#71: *40 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:17 [error] 71#71: *41 no live upstreams while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://kibana/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:18 [error] 71#71: *42 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://127.0.0.1:5601/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:18 [warn] 71#71: *42 upstream server temporarily disabled while reading response header from upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://127.0.0.1:5601/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:18 [error] 71#71: *42 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://[::1]:5601/”, host: “127.0.0.1”
nginx_1 | 2021/02/01 13:31:18 [warn] 71#71: *42 upstream server temporarily disabled while connecting to upstream, client: 127.0.0.1, server: 192.168.115.10, request: “HEAD /kibana HTTP/1.1”, upstream: “http://[::1]:5601/”, host: “127.0.0.1”

From your previous messages, I suppose you're trying to run OpenVidu locally in two different virtual machines and access it locally, right?

If this is the case, please confirm it and follow this:

Instead of using the private IP of your LAN, try using the private IP of your VM in DOMAIN_OR_PUBLIC_IP.

Regards

If you want to make the deployment accessible in your LAN, it will be harder: VirtualBox: How To Access Host Port From Guest - DEV Community

I strongly recommend testing with a fully qualified domain and two VPS instances, one for the Master and another for the Media Node.

No, I'm now deploying the application in production, so there are two different servers with two different IPs: one with xxx.xxx.115.10 and the second with xxx.xxx.115.11.

Are those machines running on a VPS with a public IP, or behind a NAT?
Why are you using 192.168.115.10 then? Use the public IP instead.

The problem here is not making the connection between the two servers; the problem is with the launch of the OpenVidu server, which is stuck on this line:
openvidu-server_1 | Waiting for kibana in ‘http://127.0.0.1/kibana’ URL…

Do your servers have public IPs?

Put the public IP of OpenVidu Server Pro in DOMAIN_OR_PUBLIC_IP.
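
For example, in /opt/openvidu/.env (203.0.113.10 is just a documentation-range placeholder; use your server's real public IP):

DOMAIN_OR_PUBLIC_IP=203.0.113.10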

For now I'm using the local IP for preprod, and then we'll open it to the public.

OK, I've checked the configuration and it seems to be fine.
Just delete the OPENVIDU_PRO_KIBANA_HOST= line; it will not affect anything. Even the nginx setup looks OK with the self-signed configuration.
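
One way to drop that line from a shell, if you prefer not to edit the file by hand (a sketch using standard sed):

sudo sed -i '/^OPENVIDU_PRO_KIBANA_HOST=/d' /opt/openvidu/.env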

I'm thinking that maybe, even if Kibana and Elasticsearch appear to be running in docker ps, they're not running properly.

How much CPU and RAM do you have in your OpenVidu Pro instance?

Could you please show me your Kibana logs?

docker-compose logs kibana
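
To answer the CPU/RAM question, these standard commands give a quick snapshot on the OpenVidu Pro machine:

free -h                    # total and available RAM
nproc                      # number of CPU cores
docker stats --no-stream   # per-container CPU and memory usage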

I found the problem :grin: :grin:

In Kibana's logs there was:
FATAL Error: [config validation of [elasticsearch].password]: expected value of type [string] but got [number]

because I had:
ELASTICSEARCH_PASSWORD=123456

I just changed it to another password containing letters and it works.
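
So it seems Kibana's config validation coerces an all-digit value into a number and then rejects it because it expects a string; any password containing letters avoids this. For example (placeholder value, pick your own):

ELASTICSEARCH_PASSWORD=MyS3curePassword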

Thank you so much
Regards

Oops :).

Well, now, if you're connected via VPN to this network, or you're inside this network, you should be able to access OpenVidu.

Regards

I'm already in; now I have to add the Media Node.

I'm getting Waiting for kibana in ‘http://127.0.0.1/kibana’ URL… repeatedly.
None of the above solutions worked for me.

Same here, I got stuck on Waiting for kibana in ‘http://127.0.0.1/kibana’ URL…

Got it resolved: Elasticsearch was not starting due to insufficient RAM.
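
For anyone hitting the same wall, a couple of quick checks (standard commands, assuming the service is named elasticsearch in your docker-compose as it is by default; the heap values below are just an example for a small machine, using the ES_JAVA_OPTS variable documented in the .env above):

docker-compose logs elasticsearch | tail -n 50   # look for OOM or heap allocation errors
free -h                                          # confirm how much memory is actually free

# In /opt/openvidu/.env, a smaller heap for Elasticsearch:
ES_JAVA_OPTS=-Xms1g -Xmx1g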