External Linking two standard Event Brokers results in "Neighbor HandShake Failed"
I'm trying to connect two PubSub+ Standard event brokers running in docker containers.
This is the docker run command used:
docker run -d --privileged -p 8080:8080 -p 55555:55555 -p 55003:55003 -p 8008:8008 -p 1883:1883 -p 8000:8000 -p 5672:5672 --shm-size=2g --env username_admin_globalaccesslevel=admin --env 'system_scaling_maxconnectioncount=1000' --env username_admin_password=admin --name=solace solace/solace-pubsub-standard
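Once the container is up, a quick sanity check that SEMP is reachable (assuming the admin/admin credentials and the 8080 mapping from the command above; about/api is a read-only SEMP v2 call):

curl -s -u admin:admin http://localhost:8080/SEMP/v2/config/about/api
# should return a small JSON document (SEMP version / platform) if the broker is up and SEMP is listening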
Broker A is on-premise behind a corporate firewall.
Broker B is in AWS.
I'm following this guide using the SEMP API and Postman.
https://dev.to/solacedevs/pubsub-configuring-dmr-using-sempv2-361d
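For reference, the collection boils down to a handful of SEMP v2 config calls per broker, roughly like the sketch below (cluster name, node name, and host are placeholders, the link authentication settings from the guide are omitted, and the exact attribute names may differ by broker version - the collection itself is the source of truth):

# 1. enable a DMR cluster on the broker
curl -u admin:admin -X POST http://broker-a:8080/SEMP/v2/config/dmrClusters \
  -H 'Content-Type: application/json' \
  -d '{"dmrClusterName":"my-cluster","enabled":true}'

# 2. create the external link toward the other broker (disabled while its address is added);
#    remoteNodeName must match the remote broker's node name
curl -u admin:admin -X POST http://broker-a:8080/SEMP/v2/config/dmrClusters/my-cluster/links \
  -H 'Content-Type: application/json' \
  -d '{"remoteNodeName":"broker-b-node","enabled":false,"initiator":"local","span":"external"}'

# 3. add the remote broker's address (reachable on its plain SMF port), then enable the link
curl -u admin:admin -X POST http://broker-a:8080/SEMP/v2/config/dmrClusters/my-cluster/links/broker-b-node/remoteAddresses \
  -H 'Content-Type: application/json' \
  -d '{"remoteAddress":"broker-b.example.com"}'
curl -u admin:admin -X PATCH http://broker-a:8080/SEMP/v2/config/dmrClusters/my-cluster/links/broker-b-node \
  -H 'Content-Type: application/json' \
  -d '{"enabled":true}'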
All of the commands in the Postman collection executed properly and returned 200, indicating no errors.
However, when looking at the link status on Broker A, the link is down and the reason is:
Failure Reason: Neighbor HandShake Failed
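The same status can also be pulled over SEMP's monitor API - something like this, where "my-cluster" is a placeholder and the exact response fields may vary by broker version:

curl -s -u admin:admin http://broker-a:8080/SEMP/v2/monitor/dmrClusters/my-cluster/links
# the returned link objects show whether each link is up and, if not, a failure reason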
Broker A is set to 'local node initiates'. Broker B is set to 'remote node initiates'.
Any ideas why this is happening? Is it a firewall thing?
Comments
-
You're using the postman collection I put together - I haven't seen this error before.
It looks like you are self-hosting the broker in AWS? Did you enable incoming connections on port 55555? I assume you used the collection as is, i.e. didn't change any TLS related properties - the collection sets up plain SMF protocol?
-
@RichardB, adding to what @swenhelge said, verify that broker A can connect to broker B - from the host broker A is on, can you telnet to broker B on port 55555? If not, you'll probably need to add that port to your security group's incoming ports.
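If it turns out the port isn't open, adding the inbound rule looks roughly like this (placeholder security group ID and source CIDR - use your on-prem egress IP):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 55555 \
  --cidr 203.0.113.10/32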
-
In addition to Tom's answer:
If you want to check reachability from broker A to broker B, you can do:
(running either command in a shell on the same VM as broker A)
telnet brokerb 55555
OR
nc -v brokerb 55555
(Personally I prefer nc to telnet... but often, only one of the two is installed)
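If neither is installed, bash's built-in /dev/tcp redirection can stand in (assuming bash and the coreutils timeout command are available):
timeout 3 bash -c ': < /dev/tcp/brokerb/55555' && echo "55555 reachable" || echo "55555 not reachable"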
-
@swenhelge Yes, I'm hosting both brokers. I substituted the cloud side with my AWS instance. And I'm just using plain SMF no TLS.
@TomF Yes, I've added the ports and protocols to the AWS security group as specified here: https://docs.solace.com/Configuring-and-Managing/Default-Port-Numbers.htm. I'm able to telnet and ncat on port 55555 from my on-prem instance to my AWS instance.
One thing I tried since my last post made things work: when I set up a VPN connection to AWS and used the AWS private IPs, DMR connects and works.
-
@RichardB - I did test the setup with a local broker on my laptop behind my router's firewall which doesn't allow any inbound connections.
Sometimes it seems changes in security groups are not applied immediately, in some cases with a significant delay. Maybe that was occurring here?
The other possibility is that the corporate firewall applies protocol restrictions using packet inspection - that would explain why plain telnet/nc to the port still works, and also why the VPN tunnel works (the firewall only sees the encrypted tunnel traffic, so it can't inspect the SMF protocol inside it).
-
@swenhelge - That's a good thought. I think my corporate firewall may be doing packet inspection, resulting in partial packet loss. Out of curiosity, I might try SMF over TLS to see if that works.
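If you try TLS, note that the secured SMF port (55443 by default) isn't published in the docker run command above, and broker B needs a server certificate configured first. A quick handshake test from the broker A side (placeholder hostname):
openssl s_client -connect broker-b.example.com:55443 < /dev/null
# a completed handshake with the certificate chain printed means TLS traffic makes it through the firewall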