Dockerfile custom entry point not working in 9+?
I am trying to create a Dockerfile with my VPN pre-populated through a configuration script called configure.sh.
Dockerfile:
FROM <hidden>/solace-pubsub-enterprise:9.12.1.17
ADD config /config
ADD wait-for-it.sh /wait-for-it.sh
ADD configure.sh /configure.sh
ADD start.sh /start.sh
USER root
RUN chmod +x /wait-for-it.sh
RUN chmod +x /configure.sh
RUN chmod +x /start.sh
HEALTHCHECK --interval=30s --timeout=30s CMD /wait-for-it.sh
ENTRYPOINT ["/start.sh"]
start.sh:
#!/bin/bash
/configure.sh &
/usr/sbin/boot.sh
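For completeness, the same idea written with exec, so that boot.sh replaces the shell as the container's main process, would look like the sketch below. Whether the missing exec matters on 9+ is only a guess on my part.

#!/bin/bash
# Run the configuration script in the background; it waits for SEMP itself.
/configure.sh &
# Replace this shell with the standard entry point so the broker runs as the
# container's main process and receives signals directly.
exec /usr/sbin/boot.sh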
The container launches unhealthy because calls to http://localhost:8080/SEMP/v2 time out, i.e. Solace does not seem to respond. Yet with docker top I can see that solacedaemon is running:
$ docker top be7c8333b7d4
UID   PID   PPID  C   STIME  TTY  TIME      CMD
root  2742  2718  0   07:50  ?    00:00:00  /bin/bash /start.sh
root  2774  2742  0   07:50  ?    00:00:00  bash /configure.sh
root  2775  2742  0   07:50  ?    00:00:00  /usr/sw/loads/soltr_9.12.1.17/bin/solacedaemon --vmr -z -f /var/lib/solace/config/SolaceStartup.txt -r -1
root  2776  2774  0   07:50  ?    00:00:00  bash /wait-for-it.sh localhost:8080 --timeout=60 -- echo Solace is up
root  3062  2742  1   07:50  ?    00:00:03  python3 /usr/sw/loads/soltr_9.12.1.17/scripts/vmr-solaudit -d --daemonize
root  3127  2775  0   07:50  ?    00:00:00  /usr/sw/loads/soltr_9.12.1.17/bin/watchdog -w 0 -R 900
root  3128  2775  0   07:50  ?    00:00:00  /usr/sw/loads/soltr_9.12.1.17/bin/cmdserver
root  3129  2775  11  07:50  ?    00:00:36  /usr/sw/loads/soltr_9.12.1.17/bin/dataplane -h 80
root  3130  2775  0   07:50  ?    00:00:00  /usr/sw/loads/soltr_9.12.1.17/bin/controlplane -h 80
root  3131  2775  0   07:50  ?    00:00:00  /usr/sw/loads/soltr_9.12.1.17/bin/smrp
root  3132  2775  0   07:50  ?    00:00:02  /usr/sw/loads/soltr_9.12.1.17/bin/mgmtplane
root  3133  2775  0   07:50  ?    00:00:02  /usr/sw/loads/soltr_9.12.1.17/bin/xmlmanager
root  3134  2775  0   07:50  ?    00:00:00  /usr/sw/loads/soltr_9.12.1.17/bin/trmmanager
root  3135  2775  0   07:50  ?    00:00:00  /usr/sw/loads/soltr_9.12.1.17/bin/msgbusadapter
root  3136  2775  0   07:50  ?    00:00:00  /usr/sw/loads/soltr_9.12.1.17/bin/solcachemgr
root  3267  2775  1   07:50  ?    00:00:05  /usr/sw/loads/soltr_9.12.1.17/firmware/3206/dataplane-linux
root  3289  2775  0   07:50  ?    00:00:00  /usr/sbin/sshd -D
root  3312  2775  0   07:50  ?    00:00:00  uwsgi --ini loads/soltr_9.12.1.17/scripts/sempv2/uwsgi.ini --set sempV1Port=1025
root  3313  2775  0   07:50  ?    00:00:00  nginx: master process nginx -c /var/lib/solace/config/nginx.conf -g pid /var/run/solace/nginx.pid;
root  3330  3313  0   07:50  ?    00:00:00  nginx: worker process
If I change the Dockerfile ENTRYPOINT as follows:
ENTRYPOINT ["/usr/sbin/boot.sh"]
then the container is healthy, but of course I do not get my pre-populated VPN.
I should add that this Dockerfile used to work with the older version 8.5.0.1008-enterprise.
Thankful for any assistance!
Comments
Hi @jahwag,
you earned an extra point for creativity - I have never seen such an idea for configuring a broker before :-)
To understand your idea:
just assuming that /usr/sbin/boot.sh is the standard entry point and will start the broker:
If you configure BEFORE starting the broker (your own entry point first runs configure.sh in the background and then starts the broker)... how can you ensure that you are configuring something that is actually running?
The most common way to configure a broker nowadays is to use SEMP, or tools like Ansible that use SEMP under the hood.
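For example, a single SEMPv2 call against an already-running broker to create a message VPN looks roughly like this (admin credentials, port and VPN name are just placeholders):

# Create a message VPN on a running broker via the SEMPv2 config API.
# Credentials, host, port and the VPN name are placeholders.
curl -u admin:admin \
     -H "Content-Type: application/json" \
     -X POST \
     -d '{"msgVpnName": "my-vpn", "enabled": true}' \
     http://localhost:8080/SEMP/v2/config/msgVpns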
Uli
Thanks, but I cannot take credit for the idea; it originally came from a coworker of mine who has since left the organisation :)
configure.sh runs in the background and waits for Solace to start before it uses SEMPv2 to configure the VPNs:
#!/usr/bin/env bash
echo "Waiting for Solace to start."
echo ""
/wait-for-it.sh "localhost:8080" --timeout=60 -- echo "Solace is up"
echo "Configuring Solace"
echo ""
curl -u admin:admin -s -H "Content-Type: application/json" -X POST -d @/config/domainevent/vpn.json http://localhost:8080/SEMP/v2/config/msgVpns
echo ""
...
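As an aside, wait-for-it.sh only checks that the TCP port accepts connections, not that SEMP actually answers. A readiness loop that polls SEMP itself would look roughly like this sketch (same admin credentials and port as above; the endpoint is simply the one configure.sh already uses):

# Poll the SEMPv2 config API until it returns HTTP 200, instead of only
# waiting for the TCP port to open. Credentials and port as in configure.sh.
until curl -s -o /dev/null -w "%{http_code}" -u admin:admin \
      http://localhost:8080/SEMP/v2/config/msgVpns | grep -q 200; do
  echo "Waiting for SEMP to respond..."
  sleep 2
done
echo "SEMP is responding"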
The issue is that while this worked in the past with 8.5, with 9.12.1.17 Solace is unresponsive.
The console output implies that the Solace daemon starts successfully, yet it never becomes accessible, so configure.sh waits forever:
2022-04-20T12:41:29.490191200Z Waiting for Solace to start.
2022-04-20T12:41:29.490254200Z
2022-04-20T12:41:29.491354400Z Host Boot ID: 3640e4c3-faf1-49b2-8a77-20f6fefc6110
2022-04-20T12:41:29.492997900Z Starting PubSub+ Software Event Broker Container: Wed Apr 20 12:41:29 UTC 2022
2022-04-20T12:41:29.493102500Z Waiting for Solace to be up
2022-04-20T12:41:29.493721700Z Setting umask to 077
2022-04-20T12:41:29.502613900Z SolOS Version: soltr_9.12.1.17
2022-04-20T12:41:29.504381400Z Waiting...
2022-04-20T12:41:30.212506000Z 2022-04-20T12:41:30.211+00:00 <syslog.info> c2fac169a108 rsyslogd: [origin software="rsyslogd" swVersion="8.2110.0" x-pid="121" x-info="https://www.rsyslog.com"] start
2022-04-20T12:41:31.210417100Z 2022-04-20T12:41:31.210+00:00 <local6.info> c2fac169a108 appuser[119]: rsyslog startup
2022-04-20T12:41:31.524022000Z Waiting...
2022-04-20T12:41:32.227062500Z 2022-04-20T12:41:32.226+00:00 <local0.info> c2fac169a108 appuser: EXTERN_SCRIPT INFO: Log redirection enabled, beginning playback of startup log buffer
2022-04-20T12:41:32.235494600Z 2022-04-20T12:41:32.235+00:00 <local0.info> c2fac169a108 appuser: EXTERN_SCRIPT INFO: /usr/sw/var/soltr_9.12.1.17/db/dbBaseline does not exist, generating from confd template
2022-04-20T12:41:32.248085300Z 2022-04-20T12:41:32.247+00:00 <local0.info> c2fac169a108 appuser: EXTERN_SCRIPT INFO: repairDatabase.py: no database to process
2022-04-20T12:41:32.255896300Z 2022-04-20T12:41:32.255+00:00 <local0.info> c2fac169a108 appuser: EXTERN_SCRIPT INFO: Finished playback of log buffer
2022-04-20T12:41:32.263949100Z 2022-04-20T12:41:32.263+00:00 <local0.info> c2fac169a108 appuser: EXTERN_SCRIPT INFO: Updating dbBaseline with dynamic instance metadata
2022-04-20T12:41:32.412320400Z 2022-04-20T12:41:32.412+00:00 <local0.info> c2fac169a108 appuser: EXTERN_SCRIPT INFO: Generating SSH key
2022-04-20T12:41:32.690091900Z ssh-keygen: generating new host keys: RSA1 RSA DSA ECDSA ED25519
2022-04-20T12:41:32.808605400Z 2022-04-20T12:41:32.808+00:00 <local0.info> c2fac169a108 appuser: EXTERN_SCRIPT INFO: Starting solace process
2022-04-20T12:41:33.535785900Z Waiting...
2022-04-20T12:41:33.671110000Z 2022-04-20T12:41:33.670+00:00 <local0.info> c2fac169a108 appuser: EXTERN_SCRIPT INFO: Launching solacedaemon: /usr/sw/loads/soltr_9.12.1.17/bin/solacedaemon --vmr -z -f /var/lib/solace/config/SolaceStartup.txt -r -1
2022-04-20T12:41:35.553875100Z Waiting...
2022-04-20T12:41:37.567266200Z Waiting...
2022-04-20T12:41:39.579813800Z Waiting...
2022-04-20T12:41:40.038702500Z 2022-04-20T12:41:40.038+00:00 <local0.warning> c2fac169a108 appuser[9]: /usr/sw main.cpp:752 (SOLDAEMON - 0x00000000) main(0)@solacedaemon WARN Determining platform type: [ OK ]
2022-04-20T12:41:40.176412900Z 2022-04-20T12:41:40.176+00:00 <local0.warning> c2fac169a108 appuser[9]: /usr/sw main.cpp:752 (SOLDAEMON - 0x00000000) main(0)@solacedaemon WARN Running pre-startup checks: [ OK ]
2022-04-20T12:41:41.600657000Z Waiting...
2022-04-20T12:41:43.622347100Z Waiting...
2022-04-20T12:41:45.640817400Z Waiting...
2022-04-20T12:41:47.664595000Z Waiting...
2022-04-20T12:41:49.676532300Z Waiting...
This has me stumped!
If I use /usr/sbin/boot.sh as the entry point and then connect to a terminal inside the Docker container, I am able to run configure.sh successfully.
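In other words, this manual workaround does work (the image tag and container name below are made up):

# Build the image with ENTRYPOINT ["/usr/sbin/boot.sh"], then configure it
# manually once it is up. Image tag and container name are hypothetical.
docker run -d --name solace-test my-solace-custom:latest
docker exec -it solace-test /configure.sh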