I am excited to see OV 2.21.0 released. We are deploying it in our development environment so we can start updating our product to take advantage of the enterprise support it offers.
We have run into a snag: when we deploy (using the very helpful video in the documentation), the master nodes never enter a healthy state.
As a result, the master nodes are regularly created and destroyed.
Where do we go to diagnose why this is occurring?
One thing of note: I don’t see an S3 bucket created like I have seen in the 2.20.x releases. I am not sure whether this is a cause or a symptom of what is occurring, but it is the only thing I have seen that seems out of the ordinary for a high availability deployment.
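In case it is useful, this is roughly what I have been checking from the AWS CLI so far (the stack name below is a placeholder for ours):

```shell
# Look for the S3 bucket the deployment should have created
aws s3 ls | grep -i openvidu

# Check the health and lifecycle state of the master node instances
# in the Auto Scaling group created by the stack
aws autoscaling describe-auto-scaling-groups \
  --query "AutoScalingGroups[?contains(AutoScalingGroupName, 'Master')].Instances[].{Id:InstanceId,Health:HealthStatus,State:LifecycleState}"

# See recent stack events, which show why instances are being cycled
aws cloudformation describe-stack-events \
  --stack-name my-openvidu-stack \
  --max-items 20
```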
Thanks,
Matthew VanDerlofske
What parameters are you using to deploy? If you don’t specify the S3 bucket parameter, a new bucket will be created.
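If it helps, you can list exactly what the stack was deployed with straight from CloudFormation (replace the stack name with yours):

```shell
# Print the parameters the stack was created with
aws cloudformation describe-stacks \
  --stack-name my-openvidu-stack \
  --query "Stacks[0].Parameters[].{Key:ParameterKey,Value:ParameterValue}" \
  --output table
```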
Regards
I sent those details to the support email address.
I can **** out the parts I don’t want to be seen here.
| Key | Value |
| --- | --- |
| AwsInstanceTypeKMS | c5.xlarge |
| AwsInstanceTypeOV | c5.xlarge |
| DesiredMasterNodes | 2 |
| DesiredMediaNodes | 2 |
| DomainName | devvideo.*******.com |
| ElasticsearchPassword | **** |
| ElasticsearchUrl | https://search-openvidusearch-**************.us-east-1.es.amazonaws.com:443 |
| ElasticsearchUser | elasticadmin |
| KeyName | ThriveeFrontendKey |
| KibanaUrl | https://search-openvidusearch-******************.us-east-1.es.amazonaws.com:443/_plugin/kibana/ |
| LoadBalancerCertificateARN | arn:aws:acm:us-east-1:**********:certificate/-e03b-4bb6-87ad-7dfe63f4fbb4 |
| MaxMasterNodes | 4 |
| MaxMediaNodes | 4 |
| MediaServer | mediasoup |
| MinMasterNodes | 2 |
| MinMediaNodes | 2 |
| OpenViduLicense | **** |
| OpenViduProClusterId | openvidu_dev_webrtc |
| OpenViduRecording | TRUE |
| OpenViduS3BucketName | openvidu_webrtc_store |
| OpenViduSecret | **** |
| OpenViduSubnets | subnet-aee423f3,subnet-0972ec6d |
| OpenViduVPC | vpc-038d0f7b |
| ScaleDownMediaNodesAvgCpu | 30 |
| ScaleUpMediaNodesAvgCpu | 70 |
Tips on how to diagnose this would be very helpful as well.
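In the meantime, here is a quick local sanity check I ran over the scaling numbers from the table above (plain shell, values copied from the parameters):

```shell
# Values copied from the parameter table above
MIN_MASTER=2; DESIRED_MASTER=2; MAX_MASTER=4
MIN_MEDIA=2;  DESIRED_MEDIA=2;  MAX_MEDIA=4

# check: MIN <= DESIRED <= MAX
check() { [ "$1" -le "$2" ] && [ "$2" -le "$3" ]; }

check "$MIN_MASTER" "$DESIRED_MASTER" "$MAX_MASTER" && RESULT=ok || RESULT=bad
echo "master node counts: $RESULT"

check "$MIN_MEDIA" "$DESIRED_MEDIA" "$MAX_MEDIA" && MEDIA_RESULT=ok || MEDIA_RESULT=bad
echo "media node counts: $MEDIA_RESULT"
```

Both come back `ok`, so the desired/min/max counts at least are internally consistent.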
@mvander115, to update, follow these steps:
- Redeploy, but without using the `OpenViduS3BucketName` parameter (keep it blank).
- Wait for the master nodes to become healthy.
- After that, change your needed configuration in the new S3 bucket created by the new deployment. Don’t reuse the `.env` file from the previous environment; instead, modify (or add) only the parameters you need.
- After changing the `.env` file in the new bucket, terminate all master nodes and wait for the Auto Scaling group to create new master nodes with the updated configuration.
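The steps above can be sketched with the AWS CLI like this (the bucket name and instance ID are placeholders; check your own deployment for the real values):

```shell
# 1. After redeploying with OpenViduS3BucketName left blank and the
#    master nodes healthy, pull the .env file from the new bucket
aws s3 cp s3://NEW-BUCKET-NAME/.env ./.env

# 2. Edit ./.env locally (modify or add only the parameters you need),
#    then upload it back
aws s3 cp ./.env s3://NEW-BUCKET-NAME/.env

# 3. Terminate the master nodes; the Auto Scaling group will replace
#    them with nodes that pick up the updated configuration
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```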
We need to work a bit on improving the upgrade process so that previous buckets can be reused.
Regards.
That did work. In the future it would be good to be able to diagnose what is causing the failure, even if I have to suspend the Auto Scaling group and SSH in to look at logs.
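For reference, this is the kind of thing I had in mind (the group name and host are placeholders, and I’m assuming the OpenVidu services run under Docker on the master node):

```shell
# Stop the Auto Scaling group from replacing the unhealthy node
aws autoscaling suspend-processes \
  --auto-scaling-group-name my-master-asg \
  --scaling-processes HealthCheck ReplaceUnhealthy Terminate

# SSH in and look at what the containers are doing
# (assumes the services run under Docker on the node)
ssh -i ThriveeFrontendKey.pem ubuntu@MASTER-NODE-IP
docker ps -a
docker logs <container-id>

# Re-enable the suspended processes when done
aws autoscaling resume-processes --auto-scaling-group-name my-master-asg
```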