How to set up public SSH keys on a Solace container?

TD_asilva
TD_asilva Member Posts: 13
edited February 2022 in PubSub+ Event Broker #1

Hello!

I am trying to set up SSL on my Solace PubSub+ instance on AWS. The instructions say you need to use sftp or scp to transfer the certificate(s) from another machine. The only way I seem to be able to authenticate with external machines for sftp/scp is via a public SSH key, but the only ways to get a public key into the container are to copy one in from outside or to generate one inside the container, and I don't seem to be able to do either: the CLI's file-management commands can copy files that are already inside the container or fetch them from outside via sftp/scp, and I could not find one that creates an authorized_keys file, which would be the other option. This feels like a chicken-and-egg problem, but I am confident this feature was not left in such a state, so I am sure I am missing something. It doesn't help that I am fairly green when it comes to dealing with certificates in general. Is there a way to transfer files from the host into the container that doesn't involve sftp/scp, or to generate the public key from inside the container? Can someone point me to what I might be missing?

For what it's worth, I have tried running the copy command as "copy sftp://username@mysftpserver/filepath ." and get the expected "Permission denied" response back. I do have the public key set up on the host machine and am able to fetch the file in question from the host machine using the standard sftp command, so it seems like everything else is set up properly; there just isn't a public key inside the container that the Solace CLI version of the copy sftp command can use.
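
For reference, the standard client transfer that does work from the host machine looks roughly like this (the host and file names are the same placeholders as above):
sftp username@mysftpserver:/filepath .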

Thank you!

Answers

  • uherbst
    uherbst Member, Employee Posts: 121 Solace Employee

    Hi TD_asilva,
    just to understand your question correctly: you're looking for a way to transfer files to a broker that is deployed as a Docker container, right?
    And you have access to the host where the container is deployed?
    Then you have multiple options (rough command sketches follow below):
    1. From the CLI (the old-fashioned command line interface), you can scp to/from any other IP address outside. You have to use user/password, because you can't place your private SSH key anywhere on the broker.
    2. You can scp/sftp from the outside into the CLI (that's the same port as the "ssh to CLI" port). You have to use user/password, because the CLI can't be configured to use private keys.
    3. You can use "docker cp" to copy files into the Docker container.
    4. If you have your volumes mounted both in your container AND on the host, you can just copy files directly there.
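
    Rough sketches of the four options; the host names, user names, port, container name "solace", and file names below are placeholders, and the exact paths depend on your deployment:
    # 1. from the solace cli, pull a file from an external sftp host (it will prompt for the password)
    copy sftp://someuser@sftphost.com/home/someuser/mycert.pem .
    # 2. from outside, push a file to the broker's CLI ssh port (often not 22)
    scp -P 2222 mycert.pem fileuser@broker.host:/certs/
    # 3. copy straight into the running container
    docker cp mycert.pem solace:/usr/sw/jail/certs/
    # 4. if the jail volume is mounted on the host, copy into it directly
    cp mycert.pem /var/lib/docker/volumes/jail/_data/certs/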

    Have fun.
    Uli

  • TD_asilva
    TD_asilva Member Posts: 13
    edited March 2020 #5

    Hi Uli,

    Thank you for the response, but I still have issues/questions. The answer to your initial question is yes, it is deployed as a container; however, I did not do any of the Docker setup myself. I am using one of your pre-built AWS AMIs (version 9.2.0.14) from the marketplace, running on an EC2 instance. To minimize confusion for the rest of this discussion, I would like to use the following terms: the bash shell on my local laptop will be called the "dev cli", the shell of the EC2 instance/Docker host will be called the "ec2 cli", and the Solace-specific CLI that I get when I run solacectl cli on the EC2 instance will be called the "solace cli". There is also a fourth shell you get by running solacectl shell on the EC2 instance; that will be called the "solace shell". Also, when I say "external", I mean external to AWS.

    Regarding your proposed solutions:

    For #1, using a password for an external sftp server is not an option for me. Even if it were, how do I specify a password when I am inside the solace cli and copying from the external server? Everything I've found says you can only supply a password to sftp/scp non-interactively by using the sshpass tool, which is not available in the solace cli.

    For #2, I created a file-transfer user in the solace cli as detailed here: https://docs.solace.com/Configuring-and-Managing/Configuring-File-Transfer-User-Accounts.htm#mc-main-content and then tried to transfer a file into the solace cli file system (i.e. the one shown here: https://docs.solace.com/Configuring-and-Managing/Managing-Event-Broker-Files.htm) using sftp/scp from both the dev cli and the ec2 cli, but got "Permission denied (publickey,gssapi-keyex,gssapi-with-mic)." in both cases.

    For #3, I can transfer a file into the container using docker cp, but when I do, I can only see it via the solace shell, not the solace cli. Is there a path for getting a file from the solace shell file system into the solace cli file system?

    For #4, I wasn't sure which volume maps to the solace cli file system. I see the following volumes when I run docker inspect solace:

    "Mounts": [ { "Type": "volume", "Name": "adbBackup", "Source": "/var/lib/docker/volumes/adbBackup/_data", "Destination": "/usr/sw/adb", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" }, { "Type": "volume", "Name": "internalSpool", "Source": "/var/lib/docker/volumes/internalSpool/_data", "Destination": "/usr/sw/internalSpool", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" }, { "Type": "volume", "Name": "adb", "Source": "/var/lib/docker/volumes/adb/_data", "Destination": "/usr/sw/internalSpool/softAdb", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" }, { "Type": "volume", "Name": "jail", "Source": "/var/lib/docker/volumes/jail/_data", "Destination": "/usr/sw/jail", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" }, { "Type": "volume", "Name": "var", "Source": "/var/lib/docker/volumes/var/_data", "Destination": "/usr/sw/var", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" }, { "Type": "volume", "Name": "diagnostics", "Source": "/var/lib/docker/volumes/diagnostics/_data", "Destination": "/var/lib/solace/diags", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "" } ],

    My guess would be internalSpool, jail, or var, but I am not sure. Thank you in advance for any further insight you can provide.
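
    For completeness, what I tried for #2 from the dev cli and the ec2 cli looked roughly like this (user name, host, and file name are placeholders):
    sshpass -p 'thepassword' sftp filetransferuser@<ec2-public-ip>
    sshpass -p 'thepassword' scp mycert.pem filetransferuser@<ec2-public-ip>:
    # both end with: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).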

  • TD_asilva
    TD_asilva Member Posts: 13

    @uherbst bump :smile:

  • uherbst
    uherbst Member, Employee Posts: 121 Solace Employee
    #7 Answer ✓

    Hi TD_asilva,
    Option #1 (scp/sftp from the solace cli out to an scp/sftp host): please share a copy of what you have tried and what error you have seen.
    A simple
    sftp blakeks@sftphost.com
    should be sufficient. It will ask for a password.

    Option #2 (sftp from outside into the solace cli): please share a copy of what you have tried. Just guessing: which scp/sftp/ssh port have you used? What was your command line? (Just to be sure: to use scp with a different port, you have to use something like
    scp -P 2222 blakeks@solace.ip
    and note that it is a capital "P".)

    Option #3 (docker cp) and option #4 (mapped volumes): in both cases, you may have the same issue :-)
    Inside the Docker container, the path is named "/usr/sw/jail/certs".
    Inside the solace cli, this path is mapped to "/certs".
    The mapping between the Docker container and the Docker host says this path is mapped to "/var/lib/docker/volumes/jail/_data/certs".

    With that in mind, you should do:
    docker cp xxx.cert solace:/usr/sw/jail/certs/
    OR
    (in your Linux shell): cp xxx.cert /var/lib/docker/volumes/jail/_data/certs/
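
    To double-check that the file arrived (a sketch; "solace" is the container name used above, mycert.pem is a placeholder, and dir is the CLI's file-listing command):
    # on the docker host:
    docker cp mycert.pem solace:/usr/sw/jail/certs/
    # then inside the solace cli (solacectl cli):
    dir /certs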

  • TD_asilva
    TD_asilva Member Posts: 13
    edited April 2020 #8

    I was able to make #2 work. It was the -P 2222 part that I was missing. Then I was able to use docker cp to get it from the host into the correct path on the container/solace cli. Thank you!

  • TD_asilva
    TD_asilva Member Posts: 13

    Actually, I have a slight clarification on what worked. #2 worked and put the file straight into the solace cli as you had alluded to. I had previously put the same file on the host in its home directory and forgot so when I ssh'd in and saw the file there, I thought it was the one I had just transferred. One other tip in case someone else reads this - transferring a file more than once gives a Permission denied error after you enter your password. Spent a while trying to figure this one out, but eventually realized what was going on using scp -v to see the full output and it showed that my password was indeed being accepted.