Solace will not work in Docker


Hello!
I've been working with Solace for a while now, and I've always run it locally in Docker.
I switched to Linux recently, and a strange thing happened.
Yesterday Solace was running fine on my machine in Docker, but today I cannot make it work.
Nothing major has changed, but Solace just won't work in Docker.
This is the docker-compose file I use:

version: '3.9'

services:
  mongo:
    image: mongo:latest
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: admin
    ports:
      - "27017:27017"
    volumes:
      - /home/Docker/mongodb/database:/data/db

  solace:
    image: solace/solace-pubsub-standard:latest
    shm_size: 1g
    ulimits:
      core: 1
      nofile:
        soft: 2448
        hard: 6592
    ports:
      #Web transport
      - '8008:8008'
      #Web transport over TLS
      #- '1443:1443'
      #SEMP over TLS
      #- '1943:1943'
      #MQTT Default VPN
      - '1883:1883'
      #AMQP Default VPN over TLS
      #- '5671:5671'
      #AMQP Default VPN
      - '5672:5672'
      #MQTT Default VPN over WebSockets
      - '8000:8000'
      #MQTT Default VPN over WebSockets / TLS
      #- '8443:8443'
      #MQTT Default VPN over TLS
      #- '8883:8883'
      #SEMP / PubSub+ Manager
      - '8080:8080'
      #REST Default VPN
      - '9000:9000'
      #REST Default VPN over TLS
      #- '9443:9443'
      #SMF
      - '55555:55555'
      #SMF Compressed
      #- '55003:55003'
      #SMF over TLS
      #- '55443:55443'
      #SSH connection to CLI
      - '2222:2222'
    environment:
      - username_admin_globalaccesslevel=admin
      - username_admin_password=admin
      - system_scaling_maxconnectioncount=100

I can run and access Mongo, but Solace just won't let me log in; the login times out after a while.
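
(For anyone checking the same thing: the web UI talks to SEMP on port 8080, so a direct request is a quick way to tell whether the management service answers at all. A sketch, assuming the admin/admin credentials configured above:)

curl -u admin:admin http://localhost:8080/SEMP/v2/config/about/api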


Answers

  • uherbst (Member, Employee)

    Hi @PedromDimas,

    Please check the output of

    docker logs solace

    Also check the Solace jail volume (you can find it with docker inspect); there is a file named "event.log", and maybe some errors are visible there.
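
    (For example, a sketch; the --format filter is an assumption, and the file can also be read in place with docker exec:)

    docker inspect solace --format '{{ json .Mounts }}'
    docker exec solace cat /usr/sw/jail/logs/event.log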

  • PedromDimas (Member)

    Hello @uherbst,
    That was the first thing I did, and the logs did not change with the request.
    There were only the boot logs.
    I even put the logs on watch and tried the login, and nothing happened.
    Thanks

  • uherbst (Member, Employee)

    Hi @PedromDimas,

    Would you share this information with us (a way to collect it all is sketched after the list):

    • output of docker ps -a
    • output of docker logs solace
    • output of docker inspect solace
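
    (A sketch, assuming you want everything in one logs.txt:)

    docker ps -a > logs.txt
    docker logs solace >> logs.txt 2>&1
    docker inspect solace >> logs.txt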

    You say: "Solace just wont let me login". How do you login ? ssh ? Browser ? "docker exec ...cli" ? Something else ?

    Uli

  • PedromDimas (Member)

    Hello @uherbst,
    So, I ran everything you asked.
    The problem persists.

    Here are the commands and their output:
    in logs.txt

    Thank you

  • PedromDimas (Member)

    Fun fact: I just reinstalled Linux and the problem persists.
    Which leads me to believe that it is a problem with Docker itself (which is odd, given that Mongo works just fine) or a problem with the image.

  • uherbst (Member, Employee)

    Thanks. I have issues downloading your logs.txt.
    In the meantime, please try

    docker exec -it solace cli

    Does this work?

    Uli

  • uherbst (Member, Employee)

    Hm, that's strange: you have an empty list of Binds and Mounts in your docker inspect output.

    Would you mind deleting your solace container and starting it with a docker run command (instead of docker-compose), just to see the difference?

    https://docs.solace.com/Solace-SW-Broker-Set-Up/Docker-Containers/Set-Up-Single-Linux-Container.htm
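
    (A sketch of a docker run equivalent of the compose service above, with the ports and limits copied from that file; adjust as needed:)

    docker run -d --name=solace \
      --shm-size=1g \
      --ulimit core=1 --ulimit nofile=2448:6592 \
      -p 8008:8008 -p 1883:1883 -p 5672:5672 -p 8000:8000 \
      -p 8080:8080 -p 9000:9000 -p 55555:55555 -p 2222:2222 \
      --env username_admin_globalaccesslevel=admin \
      --env username_admin_password=admin \
      --env system_scaling_maxconnectioncount=100 \
      solace/solace-pubsub-standard:latest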

    Uli

  • PedromDimas (Member)
    edited November 2021

    Hello @uherbst,
    So I did the exec -it and it did not work; it gave me an error after 3 seconds.
    And starting Solace the other way did nothing, as far as I can tell.

    Thank you for your time

  • dcremonini (Member)

    Hello, same issue as @PedromDimas here.

    @uherbst, do you have a chance to have a look?


    You can find my report below.

    Command:

    % docker run -d -p 8080:8080 -p 55554:55555 -p 8008:8008 -p 1883:1883 -p 8000:8000 -p 5672:5672 -p 9000:9000 -p 2222:2222 --shm-size=4g --env username_admin_globalaccesslevel=admin --env username_admin_password=admin --memory="3g" --name=solace solace/solace-pubsub-standard

    macOS 10.15.7 (Intel architecture)

    Docker version 20.10.9, build c2ea9bc

    To run Docker I (have to) use Rancher Desktop. It doesn't cause any particular problems otherwise.
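
    (Note that with Rancher Desktop the engine runs inside a Lima VM, so the resources Docker sees come from that VM rather than from macOS directly. A quick sketch to confirm what the engine reports:)

    docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}}'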

    -----------------------

    % docker info

    Client:
     Context:  default
     Debug Mode: false
     Plugins:
      buildx: Docker Buildx (Docker Inc., v0.7.1)
      compose: Docker Compose (Docker Inc., v2.2.3)

    Server:
     Containers: 18
      Running: 1
      Paused: 0
      Stopped: 17
     Images: 19
     Server Version: 20.10.14
     Storage Driver: overlay2
      Backing Filesystem: extfs
      Supports d_type: true
      Native Overlay Diff: true
      userxattr: false
     Logging Driver: json-file
     Cgroup Driver: cgroupfs
     Cgroup Version: 1
     Plugins:
      Volume: local
      Network: bridge host ipvlan macvlan null overlay
      Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
     Swarm: inactive
     Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
     Default Runtime: runc
     Init Binary: docker-init
     containerd version: 3df54a852345ae127d1fa3092b95168e4a88e2f8
     runc version: 52b36a2dd837e8462de8e01458bf02cf9eea47dd
     init version:
     Security Options:
      seccomp
       Profile: default
     Kernel Version: 5.15.32-0-virt
     Operating System: Alpine Linux v3.15
     OSType: linux
     Architecture: x86_64
     CPUs: 2
     Total Memory: 5.809GiB
     Name: lima-rancher-desktop
     ID: 6BS3:6H5Y:TCXZ:JCRC:3Y4Q:76EZ:XX5X:R4AT:FCSZ:CY34:BXWO:YLYV
     Docker Root Dir: /var/lib/docker
     Debug Mode: false
     Registry: https://index.docker.io/v1/
     Labels:
     Experimental: false
     Insecure Registries:
      127.0.0.0/8
     Live Restore Enabled: false
    -----------------------

    When the container is started, the solace cli has no effect (see below):

    % docker exec -it solace cli
    SolOS startup in progress, status: 'Finishing pre-startup actions'
    Please try again later...

    % docker exec -it solace cli
    %

    -----------------------

    When I try to log in using the web interface I get the timeout message

    Timeout of 30000ms exceeded

    and the container dies. After that I get the logs below:

    % docker logs solace
    Host Boot ID: ffbd1abc-f121-4a1d-846f-eac3e7452658
    Starting PubSub+ Software Event Broker Container: Tue May 24 10:18:40 UTC 2022
    Setting umask to 077
    SolOS Version: soltr_9.13.1.38
    2022-05-24T10:18:41.434+00:00 <syslog.info> 0f7cda9acb01 rsyslogd: [origin software="rsyslogd" swVersion="8.2102.0-5.el8" x-pid="115" x-info="https://www.rsyslog.com"] start
    2022-05-24T10:18:42.433+00:00 <local6.info> 0f7cda9acb01 appuser[113]: rsyslog startup
    2022-05-24T10:18:43.457+00:00 <local0.info> 0f7cda9acb01 appuser: EXTERN_SCRIPT INFO: Log redirection enabled, beginning playback of startup log buffer
    2022-05-24T10:18:43.470+00:00 <local0.info> 0f7cda9acb01 appuser: EXTERN_SCRIPT INFO: /usr/sw/var/soltr_9.13.1.38/db/dbBaseline does not exist, generating from confd template
    2022-05-24T10:18:43.510+00:00 <local0.info> 0f7cda9acb01 appuser: EXTERN_SCRIPT INFO: repairDatabase.py: no database to process
    2022-05-24T10:18:43.534+00:00 <local0.info> 0f7cda9acb01 appuser: EXTERN_SCRIPT INFO: Finished playback of log buffer
    2022-05-24T10:18:43.555+00:00 <local0.info> 0f7cda9acb01 appuser: EXTERN_SCRIPT INFO: Updating dbBaseline with dynamic instance metadata
    2022-05-24T10:18:43.861+00:00 <local0.info> 0f7cda9acb01 appuser: EXTERN_SCRIPT INFO: Generating SSH key
    ssh-keygen: generating new host keys: RSA DSA ECDSA ED25519
    2022-05-24T10:18:44.257+00:00 <local0.info> 0f7cda9acb01 appuser: EXTERN_SCRIPT INFO: Starting solace process
    2022-05-24T10:18:46.490+00:00 <local0.info> 0f7cda9acb01 appuser: EXTERN_SCRIPT INFO: Launching solacedaemon: /usr/sw/loads/soltr_9.13.1.38/bin/solacedaemon --vmr -z -f /var/lib/solace/config/SolaceStartup.txt -r -1
    2022-05-24T10:18:50.584+00:00 <local0.info> 0f7cda9acb01 appuser[200]: unknownDir post:262 (EXTERN_SCRIP - 0x00000000) unknownThread WARN POST Violation [023]:Recommended system resource missing, System CPUs do not support invariant TSC (nonstop_tsc)
      File: /
      Size: 4096      Blocks: 8     IO Block: 4096  directory
    Device: 3bh/59d Inode: 4328695   Links: 1
    Access: (0755/drwxr-xr-x) Uid: (  0/ appuser)  Gid: (  0/  root)
    Access: 2022-05-24 10:18:50.880007748 +0000
    Modify: 2022-05-24 10:18:40.190007682 +0000
    Change: 2022-05-24 10:18:43.460007702 +0000
     Birth: 2022-05-24 10:18:40.190007682 +0000
    2022-05-24T10:18:53.183+00:00 <local0.warning> 0f7cda9acb01 appuser[1]: /usr/sw main.cpp:752 (SOLDAEMON - 0x00000000) main(0)@solacedaemon WARN Determining platform type: [ OK ]
    2022-05-24T10:18:53.221+00:00 <local0.warning> 0f7cda9acb01 appuser[1]: /usr/sw main.cpp:752 (SOLDAEMON - 0x00000000) main(0)@solacedaemon WARN Generating license file: [ OK ]
    2022-05-24T10:18:53.409+00:00 <local0.warning> 0f7cda9acb01 appuser[1]: /usr/sw main.cpp:752 (SOLDAEMON - 0x00000000) main(0)@solacedaemon WARN Running pre-startup checks: [ OK ]
    2022-05-24T10:18:53.459+00:00 <local0.warning> 0f7cda9acb01 appuser[271]: /usr/sw ipcCommon.cpp:447 (BASE_IPC - 0x00000000) main(0)@cli(?) WARN SolOS is not currently up - aborting attempt to start cli process
    Unable to raise event; rc(would block)

    -----------------------

    Obviously this is a show-stopper, because not being able to use PubSub+ in a containerized environment means no automated tests.

    Thank you

    Daniele

  • uherbst (Member, Employee)

    Please identify the root volume for the solace container in your filesystem.

    Inside that volume, please search for /usr/sw/jail/logs/event.log and debug.log.

    If you like, post them here.
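
    (For an overlay2 engine, a sketch like this may reveal the container's root directory on disk; alternatively, the files can be read in place with docker exec:)

    docker inspect solace --format '{{ .GraphDriver.Data.MergedDir }}'
    docker exec solace tail -n 100 /usr/sw/jail/logs/event.log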


    Uli

  • uherbst (Member, Employee)

    In the meantime, I tried your docker command on my Mac (12.3.1):

    docker run -d -p 8080:8080 -p 55554:55555 -p 8008:8008 -p 1883:1883 -p 8000:8000 -p 5672:5672 -p 9000:9000 -p 2222:2222 --shm-size=4g --env username_admin_globalaccesslevel=admin --env username_admin_password=admin --memory="3g" --name=solace solace/solace-pubsub-standard

    It works as expected, so it's not your docker run command.

  • dcremonini (Member)
    edited May 2022

    Hi,

    thank you @uherbst.

    I wasn't able to find any volume for this image on macOS. Instead, I could access those files using a shell in the solace container.

    Please note that the container dies after about 3 minutes, so I had to read event.log and debug.log in two different runs.
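
    (One way to grab both files in a single run, before the container dies, might be docker cp; a sketch:)

    docker cp solace:/usr/sw/jail/logs/event.log .
    docker cp solace:/usr/sw/jail/logs/debug.log .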

    I need to split the message into three parts because of a limitation in this forum.

    ------------

    % docker exec -it solace /bin/sh
    sh-4.4$ less /usr/sw/jail/logs/event.log
    T13:17:56.496+00:00 <local3.notice> ac46edc01f72 event: SYSTEM: SYSTEM_AD_MSG_SPOOL_CHG: - - Message spool on Primary Virtual Router operational state change from AD-Unknown to AD-NotReady
    T13:17:56.496+00:00 <local3.notice> ac46edc01f72 event: SYSTEM: SYSTEM_AD_MSG_SPOOL_CHG: - - Message spool on Primary Virtual Router operational state change from AD-NotReady to AD-Disabled
    T13:17:56.497+00:00 <local3.notice> ac46edc01f72 event: SYSTEM: SYSTEM_AD_MSG_SPOOL_CHG: - - Message spool on Backup Virtual Router operational state change from AD-Unknown to AD-NotReady
    T13:17:56.497+00:00 <local3.notice> ac46edc01f72 event: SYSTEM: SYSTEM_AD_MSG_SPOOL_CHG: - - Message spool on Backup Virtual Router operational state change from AD-NotReady to AD-Disabled
    T13:17:56.497+00:00 <local3.warning> ac46edc01f72 event: SYSTEM: SYSTEM_LINK_ADB_LINK_DOWN: - - ADB Mate link 1/3/1 changed from Unknown to LOS due to N/A
    T13:17:56.497+00:00 <local3.info> ac46edc01f72 event: SYSTEM: SYSTEM_LINK_ADB_LINK_UP: - - ADB Mate link 1/3/2 changed from Unknown to Ok
    T13:17:56.497+00:00 <local3.err> ac46edc01f72 event: SYSTEM: SYSTEM_LINK_ADB_HELLO_PROTOCOL_DOWN: - - ADB Mate link hello protocol is down due to mate-link down
    T13:17:59.247+00:00 <local3.info> ac46edc01f72 event: SYSTEM: SYSTEM_SERVICE_LISTEN_PORT_ENABLE: - - Service SEMP Listen Port 8080 enabled
    T13:17:59.248+00:00 <local3.info> ac46edc01f72 event: SYSTEM: SYSTEM_SERVICE_LISTEN_PORT_ENABLE: - - Service SEMP SSL Listen Port 1943 enabled
    T13:17:59.250+00:00 <local3.info> ac46edc01f72 event: SYSTEM: SYSTEM_SERVICE_LISTEN_PORT_ENABLE: - - Service SMF Listen Port 55555 enabled
    T13:17:59.250+00:00 <local3.info> ac46edc01f72 event: SYSTEM: SYSTEM_SERVICE_LISTEN_PORT_ENABLE: - - Service SMFC Listen Port 55003 enabled
    T13:17:59.250+00:00 <local3.info> ac46edc01f72 event: SYSTEM: SYSTEM_SERVICE_LISTEN_PORT_ENABLE: - - Service SMF Listen Port 55556 enabled
    T13:17:59.251+00:00 <local3.info> ac46edc01f72 event: SYSTEM: SYSTEM_SERVICE_LISTEN_PORT_ENABLE: - - Service SMF-SSL Listen Port 55443 enabled
    T13:17:59.251+00:00 <local3.info> ac46edc01f72 event: SYSTEM: SYSTEM_SERVICE_LISTEN_PORT_ENABLE: - - Service Matelink Listen Port 8741 enabled
    T13:17:59.252+00:00 <local3.info> ac46edc01f72 event: SYSTEM: SYSTEM_SERVICE_LISTEN_PORT_ENABLE: - - Service HealthCheck Listen Port 5550 enabled
    T13:17:59.254+00:00 <local3.info> ac46edc01f72 event: SYSTEM: SYSTEM_SERVICE_LISTEN_PORT_ENABLE: - - Service Web Transport Listen Port 8008 enabled
    T13:17:59.254+00:00 <local3.info> ac46edc01f72 event: SYSTEM: SYSTEM_SERVICE_LISTEN_PORT_ENABLE: - - Service Web Transport SSL Listen Port 1443 enabled
    T13:17:59.368+00:00 <local3.notice> ac46edc01f72 event: SYSTEM: SYSTEM_AD_MSG_SPOOL_CHG: - - Message spool on Primary Virtual Router operational state change from AD-Disabled to AD-Standby
    T13:17:59.590+00:00 <local3.warning> ac46edc01f72 event: SYSTEM: SYSTEM_CFGSYNC_DOWN: - - The Config-Sync feature is operationally down.

    1/3

  • dcremonini (Member)

    % docker exec -it solace /bin/sh
    sh-4.4$ less /usr/sw/jail/logs/debug.log
    .178+...... mgmtplane: /usr/sw moTypeHdlrRouterServiceSmf.cpp:67 (MP - 0x00000000) MplThread(3)@mgmtplane(9) WARN Service SMF listen port 55556 enabled
    .178+...... mgmtplane: /usr/sw moTypeHdlrRouterServiceSmf.cpp:67 (MP - 0x00000000) MplThread(3)@mgmtplane(9) WARN Service SMF-SSL listen port 55443 enabled
    .178+...... mgmtplane: /usr/sw moTypeHdlrRouterServiceMatelink:25 (MP - 0x00000000) MplThread(3)@mgmtplane(9) WARN Service Matelink listen port 8741 enabled
    .178+...... mgmtplane: /usr/sw moTypeHdlrRouterServiceHealthCh:27 (MP - 0x00000000) MplThread(3)@mgmtplane(9) WARN Service HealthCheck listen port 5550 enabled
    .179+...... mgmtplane: /usr/sw moTypeHdlrRouterServiceWebTrans:45 (MP - 0x00000000) MplThread(3)@mgmtplane(9) WARN Service Web Transport listen port 8008 enabled
    .179+...... mgmtplane: /usr/sw moTypeHdlrRouterServiceWebTrans:45 (MP - 0x00000000) MplThread(3)@mgmtplane(9) WARN Service Web Transport SSL listen port 1443 enabled
    .182+...... dataplane-linux[458]: unknownDir ipInterface.cpp:117 (LC_MGMT_THRE - 0x00000000) unknownThread WARN Core 0: IpInterface::ctor intf0:1 IPv6 addr (word1 0 word2 0)
    .187+...... mgmtplane: /usr/sw moTypeHdlrIntfIp.cpp:1007 (MP - 0x00000000) MplThread(3)@mgmtplane(9) WARN Operational IP interface intf0:1(eth0): admin status UP
    .197+...... mgmtplane: /usr/sw moTypeHdlrIntfIp.cpp:1021 (MP - 0x00000000) MplThread(3)@mgmtplane(9) WARN Operational IP interface intf0:1(eth0): got assigned ip addr 172.17.0.3/16
    .229+...... mgmtplane: /usr/sw sshUtils.cpp:54 (MP - 0x00000000) MplThread(3)@mgmtplane(9) WARN Starting SSHD service
    .230+...... appuser[1]: /usr/sw main.cpp:2843 (SOLDAEMON - 0x00000000) main(0)@solacedaemon WARN Starting dynamic child: /usr/sbin/sshd -D &> /var/log/solace/sshd.log
    .230+...... appuser[324]: /usr/sw adCmnOperation.cpp:337 (ADMANAGER - 0x00000000) AdMgmtThread(4)@dataplane(11) WARN Enable health monitoring for operation (SA:JournalDiskLatency)
    .230+...... appuser[324]: /usr/sw adCmnOperation.cpp:337 (ADMANAGER - 0x00000000) AdMgmtThread(4)@dataplane(11) WARN Enable health monitoring for operation (SA:BackingStoreDiskThroughput)
    .230+...... appuser[324]: /usr/sw adCmnOperation.cpp:337 (ADMANAGER - 0x00000000) AdMgmtThread(4)@dataplane(11) WARN Enable health monitoring for operation (SA:NetworkLatency)
    .230+...... appuser[324]: /usr/sw adCmnOperation.cpp:337 (ADMANAGER - 0x00000000) AdMgmtThread(4)@dataplane(11) WARN Enable health monitoring for operation (SA:MateLinkLatency)
    .231+...... appuser[324]: /usr/sw adCmnOperation.cpp:337 (ADMANAGER - 0x00000000) AdFgThread(3)@dataplane(11) WARN Enable health monitoring for operation (FG:ComputeLatency)
    .231+...... dataplane-linux[458]: unknownDir redundancyMsgMoHdlr.cpp:114 (MP - 0x00000000) unknownThread WARN Core 0: Redundancy interface "intf0" configured
    .231+...... appuser[325]: /usr/sw mplRedundancyHdlr.cpp:416 (REDUNDANCY - 0x00000000) RedundancyThread(10)@controlplane(10) WARN Redundancy interface "intf0" configured
    .231+...... appuser[325]: /usr/sw mplRedundancyHdlr.cpp:723 (REDUNDANCY - 0x00000000) RedundancyThread(10)@controlplane(10) WARN Primary Redundancy IpInterface [intf0] admin state changed from invalid to disabled
    .231+...... appuser[325]: /usr/sw mplRedundancyHdlr.cpp:738 (REDUNDANCY - 0x00000000) RedundancyThread(10)@controlplane(10) WARN Backup Redundancy IpInterface [intf0] admin state changed from invalid to disabled
    .232+...... appuser[325]: /usr/sw mplRedundancyHdlr.cpp:749 (REDUNDANCY - 0x00000000) RedundancyThread(10)@controlplane(10) WARN Static Redundancy IpInterface [intf0] admin state changed from invalid to enabled
    .233+...... appuser[325]: /usr/sw redundancyThread.cpp:794 (REDUNDANCY - 0x00000000) RedundancyThread(10)@controlplane(10) WARN Message spool disk is responsive again
    .292+...... mgmtplane: /usr/sw mpliMoMgr.cpp:656 (MP - 0x00000000) MplThread(3)@mgmtplane(9) WARN Restoring all MOs complete
    .294+...... appuser[325]: /usr/sw redundancyFsm.cpp:927 (REDUNDANCY - 0x00000000) RedundancyThread(10)@controlplane(10) WARN PrimaryRedundancyFSM (in Init) LocalStateAggregator::evaluate: new localReady = 1, localPriority = 150
    .294+...... appuser[325]: /usr/sw redundancyFsm.cpp:934 (REDUNDANCY - 0x00000000) RedundancyThread(10)@controlplane(10) WARN isMgmtRelease(false) isAdReady(true) isLinecardReady(true) isVrrpActive(false)
    .294+...... appuser[325]: /usr/sw redundancyFsm.cpp:937 (REDUNDANCY - 0x00000000) RedundancyThread(10)@controlplane(10) WARN isSystemShuttingDown(false)
    .294+...... appuser[325]: /usr/sw redundancyFsm.cpp:942 (REDUNDANCY - 0x00000000) RedundancyThread(10)@controlplane(10) WARN isAaLocalActive(false) isAutoRevert_m(false) revertActivity(false)
    .294+...... appuser[325]: /usr/sw redundancyFsm.cpp:947 (REDUNDANCY - 0x00000000) RedundancyThread(10)@controlplane(10) WARN isSmrpReady(true) isSmrpDbBuildReady(true) isSmrpDbSyncReady(true)
    .294+...... appuser[325]: /usr/sw redundancyFsm.cpp:954 (REDUNDANCY - 0x00000000) RedundancyThread(10)@controlplane(10) WARN isClusterReady(false) haveMsgBB(false) haveRouter(false) haveConsulAD(false)
    .298+...... appuser[325]: /usr/sw adCpRedundancyFsm.cpp:860 (AD_REDUN - 0x00000000) unknownThread(0)@controlplane(10) WARN 123.152.131.35:P:AD-Disabled - State transition to AD-Standby
    .470+...... mgmtplane: /usr/sw nginxUtils.cpp:112 (BASE_UTILS - 0x00000000) XmlSwitchThread(8)@mgmtplane(9) WARN Creating empty nginxSsl.conf
    .483+...... mgmtplane: /usr/sw nginxUtils.cpp:344 (BASE_UTILS - 0x00000000) XmlSwitchThread(8)@mgmtplane(9) WARN Starting uwsgi
    .484+...... appuser[1]: /usr/sw main.cpp:2843 (SOLDAEMON - 0x00000000) main(0)@solacedaemon WARN Starting dynamic child: uwsgi --ini loads/soltr_9.13.1.38/scripts/sempv2/uwsgi.ini --set sempV1Port=1025 2>/usr/sw/jail/diags/uwsgi_startup.log

    2/3

  • dcremonini (Member)


    .485+...... mgmtplane: /usr/sw nginxUtils.cpp:355 (BASE_UTILS - 0x00000000) XmlSwitchThread(8)@mgmtplane(9) WARN Starting nginx
    .486+...... appuser[1]: /usr/sw main.cpp:2843 (SOLDAEMON - 0x00000000) main(0)@solacedaemon WARN Starting dynamic child: nginx -c /var/lib/solace/config/nginx.conf -g "pid /var/run/solace/nginx.pid;" 2>/usr/sw/jail/diags/nginx_startup.log
    .495+...... mgmtplane: /usr/sw logging.cpp:382 (BASE_LOG - 0x00000001) MplThread(3)@mgmtplane(9) WARN Restarting syslog service (15).
    T13:22:56.960+...... appuser[322]: /usr/sw watchdog.cpp:1419 (WATCHDOG - 0x00000000) Watchdog(3)@watchdog(15) WARN Did not get poll response from MplThread(3)@mgmtplane(9), sequence number = 3, attempt = 1
    T13:23:02.036+...... appuser[322]: /usr/sw watchdog.cpp:1419 (WATCHDOG - 0x00000000) Watchdog(3)@watchdog(15) WARN Did not get poll response from MplThread(3)@mgmtplane(9), sequence number = 3, attempt = 2
    T13:23:07.041+...... appuser[322]: /usr/sw watchdog.cpp:1419 (WATCHDOG - 0x00000000) Watchdog(3)@watchdog(15) WARN Did not get poll response from MplThread(3)@mgmtplane(9), sequence number = 3, attempt = 3
    .......
    T13:25:22.228+...... appuser[322]: /usr/sw watchdog.cpp:1419 (WATCHDOG - 0x00000000) Watchdog(3)@watchdog(15) WARN Did not get poll response from MplThread(3)@mgmtplane(9), sequence number = 3, attempt = 30
    T13:25:22.228+00:00 <local0.err> .. appuser[322]: /usr/sw watchdog.cpp:1506 (WATCHDOG - 0x00000000) Watchdog(3)@watchdog(15) ERROR Did not get poll response from MplThread(3)@mgmtplane(9) after 30 attempts @ cycle count 29188297435510 -> checking for other unresponsive threads
    T13:25:22.228+...... appuser[322]: /usr/sw watchdog.cpp:1543 (WATCHDOG - 0x00000000) Watchdog(3)@watchdog(15) WARN Performing broadcast poll attempt 1 of maximum 3 to check for unresponsive threads
    T13:25:22.296+...... appuser[322]: /usr/sw watchdog.cpp:1615 (WATCHDOG - 0x00000000) Watchdog(3)@watchdog(15) WARN Total of 67 polls broadcast on poll attempt 1 - starting retry timer = 5000ms
    T13:25:22.299+...... appuser[322]: /usr/sw watchdog.cpp:1359 (WATCHDOG - 0x00000000) Watchdog(3)@watchdog(15) WARN Marking LINECARD for core due to failure of a different thread/proc
    ....
    T13:25:22.299+...... appuser[322]: /usr/sw watchdog.cpp:1350 (WATCHDOG - 0x00000000) Watchdog(3)@watchdog(15) WARN Marking pid 331 for core due to failure of a different thread/proc
    T13:25:22.299+...... appuser[322]: /usr/sw watchdog.cpp:1350 (WATCHDOG - 0x00000000) Watchdog(3)@watchdog(15) WARN Marking pid 326 for core due to failure of a different thread/proc

    ------------

    Thank you

    Daniele

    3/3

  • uherbst (Member, Employee)

    Hm, at first sight I have no good idea about the issue.

    Do you have a support contract? If yes, please open a support ticket (support@solace.com).

  • dreamoka (Member)

    @dcremonini, did you find the issue? I think I have the same issue too.