'Message VPN' showing 'Down' status after docker restart

Anoj Member Posts: 12

Hi there,
I am taking over an old Solace VM setup for our proof of concept, with the intent to productionize Solace at a later point. The Solace application was running in Docker on an Azure VM, started with the script below, and it seemed to be running fine. I recently had to restart the Docker container, and now the 'Message VPN' continues to show a 'Down' status. I've gone into configuration mode (via docker exec) and tried 'no shutdown' on the message VPN in question (roughly the commands I ran are sketched just after the script). I've also disabled and re-enabled the message VPN in the GUI. None of this seems to help, and no errors are shown. Could anyone please point me in the right direction? A screenshot of the issue is also attached. Thanks in advance.

#!/bin/bash
sudo docker run \
--network=host \
--uts=host \
--shm-size=2g \
--ulimit core=-1 \
--ulimit memlock=-1 \
--ulimit nofile=2448:42192 \
--restart=always \
--detach=true \
--memory-swap=-1 \
--memory-reservation=0 \
--env 'username_admin_globalaccesslevel=admin' \
--env 'username_admin_password=XXXXX' \
--env 'nodetype=message_routing' \
--env 'routername=primary' \
--env 'redundancy_matelink_connectvia=10.XXX.X.51' \
--env 'redundancy_activestandbyrole=primary' \
--env 'redundancy_group_password=XXXXXXX' \
--env 'redundancy_enable=yes' \
--env 'redundancy_group_node_primary_nodetype=message_routing' \
--env 'redundancy_group_node_primary_connectvia=10.XXX.X.50' \
--env 'redundancy_group_node_backup_nodetype=message_routing' \
--env 'redundancy_group_node_backup_connectvia=10.XXX.X.51' \
--env 'redundancy_group_node_monitor_nodetype=monitoring' \
--env 'redundancy_group_node_monitor_connectvia=10.XXX.X.8' \
--env 'configsync_enable=yes' \
-v /opt/vmr/internalSpool:/usr/sw/internalSpool \
-v /opt/vmr/diags:/var/lib/solace/diags \
-v /opt/vmr/jail:/usr/sw/jail \
-v /opt/vmr/softAdb:/usr/sw/internalSpool/softAdb \
-v /opt/vmr/var:/usr/sw/var \
-v /opt/vmr/adb:/usr/sw/adb \
--name=solacePrimary store/solace/solace-pubsub-standard:9.2.0.14
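
For reference, this is roughly what I tried inside the container (the VPN name 'my_vpn' is a placeholder for our actual VPN, and the CLI path assumes the standard container layout):

sudo docker exec -it solacePrimary /usr/sw/loads/currentload/bin/cli -A
solace> enable
solace# configure
solace(configure)# message-vpn my_vpn
solace(configure/message-vpn)# no shutdown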

Comments

  • TomF Member, Employee Posts: 406, Solace Employee

    Hi @anoj, in your Docker command you have these lines:

    --env 'redundancy_activestandbyrole=primary' \
    --env 'redundancy_group_password=XXXXXXX' \
    --env 'redundancy_enable=yes' \
    --env 'redundancy_group_node_primary_nodetype=message_routing' \
    --env 'redundancy_group_node_primary_connectvia=10.XXX.X.50' \
    --env 'redundancy_group_node_backup_nodetype=message_routing' \
    --env 'redundancy_group_node_backup_connectvia=10.XXX.X.51' \
    --env 'redundancy_group_node_monitor_nodetype=monitoring' \
    --env 'redundancy_group_node_monitor_connectvia=10.XXX.X.8' 
    

    These lines configure the broker to be part of an HA triplet - see this part of the documentation for details on how this works.

    When a broker is part of an HA triplet and the other brokers are not running (so the primary is the only broker up), we purposefully declare that broker out of service. This is to ensure there is no split-brain operation. So if you only have the primary broker container running, I'd expect to see something like this.
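
    To confirm that this is what is happening, you can check the broker's view of the redundancy group from the CLI inside the container. A quick sketch, assuming the container name from your script and the standard CLI path:

    sudo docker exec -it solacePrimary /usr/sw/loads/currentload/bin/cli -A
    solace> show redundancy
    solace> show config-sync
    solace> show message-vpn *

    If the redundancy output shows the mates as down or unreachable, that matches the behaviour described above.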

    How to fix: start at least one of the other containers in this triplet (backup and/or monitor) and make sure these two or three containers can see each other.
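
    If the backup VM is still around, its start script should mirror yours, with only the role-specific values changed. A sketch, assuming the same image, mounts and redundancy group settings as your primary script (the container name 'solaceBackup' is just an example):

    sudo docker run \
        ... same network, ulimit, volume and redundancy_group_* options as the primary ...
        --env 'routername=backup' \
        --env 'redundancy_activestandbyrole=backup' \
        --env 'redundancy_matelink_connectvia=10.XXX.X.50' \
        --name=solaceBackup store/solace/solace-pubsub-standard:9.2.0.14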

    As an alternative intermediate step, you might like to start a test broker container without the HA triplet configuration to check that it starts, and then move on to getting your HA configuration connected.
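
    For that test, a minimal non-HA container might look something like this (the ports and admin password are just examples; 8080 is the management port and 55555 the default SMF port):

    sudo docker run -d \
        --shm-size=2g \
        --ulimit core=-1 \
        --ulimit nofile=2448:42192 \
        -p 8080:8080 -p 55555:55555 \
        --env 'username_admin_globalaccesslevel=admin' \
        --env 'username_admin_password=admin' \
        --name=solaceTest store/solace/solace-pubsub-standard:9.2.0.14

    With no redundancy settings, the broker comes up standalone and the default message VPN should go to 'Up' on its own, which gives you a quick sanity check before revisiting the HA setup.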