'Message VPN' showing 'Down' status after docker restart
Hi there,
I am taking over an old Solace VM setup for our proof of concept, with the intent to productionize Solace at a later point. The Solace application was running in Docker on an Azure VM using the script below, and it seemed to be running fine. I recently had to restart the Docker container, and now the 'Message VPN' continues to show a 'Down' status. I've gone into configuration mode (via docker exec) and tried 'no shutdown' on the message VPN in question. I've also disabled and re-enabled the message VPN in the GUI. None of this seems to help, and no errors are shown. Could anyone please point me in the right direction? A screenshot of the issue is also attached. Thanks in advance.
#!/bin/bash
sudo docker run \
  --network=host \
  --uts=host \
  --shm-size=2g \
  --ulimit core=-1 \
  --ulimit memlock=-1 \
  --ulimit nofile=2448:42192 \
  --restart=always \
  --detach=true \
  --memory-swap=-1 \
  --memory-reservation=0 \
  --env 'username_admin_globalaccesslevel=admin' \
  --env 'username_admin_password=XXXXX' \
  --env 'nodetype=message_routing' \
  --env 'routername=primary' \
  --env 'redundancy_matelink_connectvia=10.XXX.X.51' \
  --env 'redundancy_activestandbyrole=primary' \
  --env 'redundancy_group_password=XXXXXXX' \
  --env 'redundancy_enable=yes' \
  --env 'redundancy_group_node_primary_nodetype=message_routing' \
  --env 'redundancy_group_node_primary_connectvia=10.XXX.X.50' \
  --env 'redundancy_group_node_backup_nodetype=message_routing' \
  --env 'redundancy_group_node_backup_connectvia=10.XXX.X.51' \
  --env 'redundancy_group_node_monitor_nodetype=monitoring' \
  --env 'redundancy_group_node_monitor_connectvia=10.XXX.X.8' \
  --env 'configsync_enable=yes' \
  -v /opt/vmr/internalSpool:/usr/sw/internalSpool \
  -v /opt/vmr/diags:/var/lib/solace/diags \
  -v /opt/vmr/jail:/usr/sw/jail \
  -v /opt/vmr/softAdb:/usr/sw/internalSpool/softAdb \
  -v /opt/vmr/var:/usr/sw/var \
  -v /opt/vmr/adb:/usr/sw/adb \
  --name=solacePrimary store/solace/solace-pubsub-standard:9.2.0.14
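For reference, the steps I tried inside the container look roughly like this; the CLI path is the one documented for the container image, and my_vpn is just a placeholder for the real VPN name:

# attach to the Solace CLI inside the container
sudo docker exec -it solacePrimary /usr/sw/loads/currentload/bin/cli -A

# then, inside the CLI: check the VPN state and try to bring it up
solace> show message-vpn *
solace> enable
solace# configure
solace(configure)# message-vpn my_vpn
solace(configure/message-vpn)# no shutdown
solace(configure/message-vpn)# end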
Comments
Hi @anoj, in your Docker command you have these lines:
--env 'redundancy_activestandbyrole=primary' \
--env 'redundancy_group_password=XXXXXXX' \
--env 'redundancy_enable=yes' \
--env 'redundancy_group_node_primary_nodetype=message_routing' \
--env 'redundancy_group_node_primary_connectvia=10.XXX.X.50' \
--env 'redundancy_group_node_backup_nodetype=message_routing' \
--env 'redundancy_group_node_backup_connectvia=10.XXX.X.51' \
--env 'redundancy_group_node_monitor_nodetype=monitoring' \
--env 'redundancy_group_node_monitor_connectvia=10.XXX.X.8'
These lines configure the broker to be part of an HA triplet - see this part of the documentation for details on how this works.
When part of an HA triplet, if the other brokers are not running (i.e. the primary is the only broker running), we purposefully declare the broker out of service. This ensures there is no split-brain operation. So, if you only have the primary broker container running, I'd expect to see something like this.
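If it helps, you can confirm this from the CLI in your existing container (container name taken from your script); the redundancy and group views should show the state of all three nodes:

sudo docker exec -it solacePrimary /usr/sw/loads/currentload/bin/cli -A

# inside the Solace CLI
solace> show redundancy
solace> show redundancy group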
How to fix: start at least one of the other containers in this triplet (backup and/or monitor) and make sure these two or three containers can see each other.
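As a rough sketch only, the backup node's run command would mirror yours with the role and router name flipped. I'm assuming here that the backup runs on the 10.XXX.X.51 host (matching your redundancy_group_node_backup_connectvia), and I've left out the volume mounts:

sudo docker run \
  --network=host \
  --uts=host \
  --shm-size=2g \
  --ulimit core=-1 \
  --ulimit memlock=-1 \
  --ulimit nofile=2448:42192 \
  --restart=always \
  --detach=true \
  --env 'username_admin_globalaccesslevel=admin' \
  --env 'username_admin_password=XXXXX' \
  --env 'nodetype=message_routing' \
  --env 'routername=backup' \
  --env 'redundancy_matelink_connectvia=10.XXX.X.50' \
  --env 'redundancy_activestandbyrole=backup' \
  --env 'redundancy_group_password=XXXXXXX' \
  --env 'redundancy_enable=yes' \
  --env 'redundancy_group_node_primary_nodetype=message_routing' \
  --env 'redundancy_group_node_primary_connectvia=10.XXX.X.50' \
  --env 'redundancy_group_node_backup_nodetype=message_routing' \
  --env 'redundancy_group_node_backup_connectvia=10.XXX.X.51' \
  --env 'redundancy_group_node_monitor_nodetype=monitoring' \
  --env 'redundancy_group_node_monitor_connectvia=10.XXX.X.8' \
  --env 'configsync_enable=yes' \
  --name=solaceBackup store/solace/solace-pubsub-standard:9.2.0.14

The monitor node follows the same pattern, with nodetype=monitoring and routername=monitor.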
As an alternative intermediate step, you might like to start a test broker container without the HA triplet configuration to check that it starts, and then move on to getting your HA configuration connected.
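For that check, a minimal standalone container is enough: same image, none of the redundancy/config-sync variables, and a different name so it doesn't collide with your existing one. Roughly (run it somewhere ports 8080 and 55555 are free, or remap them):

sudo docker run \
  --shm-size=2g \
  --ulimit core=-1 \
  --ulimit memlock=-1 \
  --ulimit nofile=2448:42192 \
  --detach=true \
  -p 8080:8080 \
  -p 55555:55555 \
  --env 'username_admin_globalaccesslevel=admin' \
  --env 'username_admin_password=XXXXX' \
  --name=solaceTest store/solace/solace-pubsub-standard:9.2.0.14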