Solace JMS Channel closing on its own when running on a Kubernetes cluster (Spring Boot Autoconfig)

Log snippet:
{"com.solacesystems.jcsmp.protocol.impl.TcpClientChannel","label":"Client-109: Connected to host 'orig=host:55555, scheme=tcp://, host=host, port=55555' (smfclient 109)"}

{"com.solacesystems.jcsmp.protocol.impl.TcpClientChannel","label":"Client-109: Channel Closed (smfclient 109)"}

It basically loops like this all day long and I can't figure out why. When I run this locally on my development machine the connection remains open and does not close on its own.

There is no other log to give me a clue why the channel is closing on its own like this.

Anyone have any ideas what the problem might be?

PS: I am already using CachingConnectionFactory.

Hi @pooja,

You've already covered my first guess - not using the CachingConnectionFactory is usually the culprit here. When you run your app, do the logs say that the CachingConnectionFactory is being instantiated?
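For reference, a minimal sketch of explicitly wrapping the auto-configured connection factory in a CachingConnectionFactory so JmsTemplate reuses connections; the bean name, injection style, and cache size below are assumptions for illustration, not taken from your app:

```java
import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.jms.connection.CachingConnectionFactory;

@Configuration
public class JmsConfig {

    // Wrap the broker-specific (e.g. Solace) ConnectionFactory so that JmsTemplate
    // reuses one underlying connection instead of opening and closing one per operation.
    @Bean
    @Primary
    public CachingConnectionFactory cachingConnectionFactory(ConnectionFactory solaceConnectionFactory) {
        CachingConnectionFactory ccf = new CachingConnectionFactory(solaceConnectionFactory);
        ccf.setSessionCacheSize(10); // number of cached JMS sessions; tune as needed
        return ccf;
    }
}
```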

A few questions:

  1. Are you using the core Spring Framework or Spring Boot?
  2. Is this app a consumer, sender, or both?
  3. Is it disconnecting after each send/receive?
  4. Can you share more of your Spring configuration?

Hey Marc, thanks for the quick response :smile: . Answering your questions inline. Let me know if anything else is required from my side.

[Pooja] No, it doesn't say so.

A few questions:

  1. Are you using the core Spring Framework or Spring Boot?
    [Pooja] Spring Boot.
  2. Is this app a consumer, sender, or both?
    [Pooja] It is both, but even with the sender alone it loops through creating and closing connections.
  3. Is it disconnecting after each send/receive?
    [Pooja] Not exactly. Connection creation and closing happen at a fixed interval, as if some timer were set on the connection.
  4. Can you share more of your Spring configuration?
    [Pooja] The connection is created using Spring autoconfig (a Spring-managed connection) through JmsTemplate (see the sketch below).
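For context, the auto-configured sender Pooja describes is roughly the standard JmsTemplate pattern; a minimal sketch, with a made-up destination name:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Component;

@Component
public class SampleSender {

    // JmsTemplate is auto-configured by Spring Boot from the starter's ConnectionFactory.
    // With a plain (non-caching) factory it opens and closes a connection per operation,
    // which is why the CachingConnectionFactory question above matters.
    @Autowired
    private JmsTemplate jmsTemplate;

    public void send(String payload) {
        // "sample/topic" is a hypothetical destination name
        jmsTemplate.convertAndSend("sample/topic", payload);
    }
}
```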

Interesting. I just re-read your question and noticed this, which makes me think it is a configuration issue on the broker or the network settings in K8s:

It basically loops like this all day long and I can't figure out why. When I run this locally on my development machine the connection remains open and does not close on its own.

@Aaron @TomF - any ideas here?

Just a guess from me, but can you check the client-profile and the JMS JNDI Connection Factory settings for keepalives and see if there is anything disabled or drastically different than what you have locally?

These are the defaults I see on my local Docker broker, but your admin may have changed some of these based on what they expect to see or on network restrictions in your k8s cluster.

On the client-profile (screenshot of the default settings)

On the JMS Connection Factory (screenshot of the default settings)

@pooja Another thing to consider would be the network path that your app is taking to connect to the broker. Is the broker also within K8s? Is it external to K8s? Is there a load balancer in the mix?

@marc These settings are the same as yours, both locally and on the server broker. The broker is external to (outside) the k8s cluster.

@pooja…interesting. Is there a load balancer between K8s and your broker? I wonder if there is some sort of load balancer flapping going on.

@TomF @KenBarr any other ideas?

Might be worth looking at the connect/disconnect events on the event broker: "show log events" and match these to your JMS logs. The broker logs include a disconnect reason.

Hi Team,

I'm also facing the same issue. Solace connections keep dropping every 10s and reconnecting on their own while running the Spring Boot application via a Docker image. I don't see any issue when running the Spring Boot jar independently.

Hi all,
I could be wrong, but I realised that it is caused by the health check, which is probably configured for your pod in the template yaml file.
To disable it you can specify something like this in the application.yaml file of your Spring Boot application:

management:
  health:
    jms:
      enabled: false

You can find the implementation in the JmsHealthIndicator class; all of these default checks are auto-configured, and you can find their implementations in the spring-boot-actuator component.
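To illustrate why this shows up as a connect/close loop: the default JMS health contribution essentially does something like the following on every health probe (a paraphrase of the behaviour, not the exact Spring Boot source):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;

// Paraphrase of what the auto-configured JMS health check does per probe: open a
// brand-new JMS connection, read the provider metadata, and close it again. If
// Kubernetes hits the actuator health endpoint every few seconds, the broker sees
// a matching connect/disconnect cycle every few seconds.
public class JmsHealthCheckSketch {

    private final ConnectionFactory connectionFactory;

    public JmsHealthCheckSketch(ConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    public String check() throws Exception {
        Connection connection = connectionFactory.createConnection();
        try {
            connection.start();
            return connection.getMetaData().getJMSProviderName(); // reported as an UP detail
        } finally {
            connection.close(); // -> "Channel Closed" on the broker side
        }
    }
}
```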

Hope it will help you

Best regards,
Andrei

Thanks for sharing @bkossme - I hadn't thought about that!

I am facing a similar issue: on OpenShift I have deployed a Solace consumer but it keeps disconnecting. This is causing read failures for big messages.

Hi @Solace_chunnu,
Is it also happening to you because of the health check? Is the container getting restarted?
Or, when you say "This is causing read failures for big messages", are you saying that it doesn't work with large messages?

Hi @bkossme / @marc, I was facing high connection churn while consuming messages from Solace via Spring's DMLC (DefaultMessageListenerContainer).

Disabling the health check fixed the issue for me.

Can you elaborate on how enabling the JMS health check endpoint can cause connections to drop?

Thanks

Hi @jangrahul ,

Which health check are you referring to? Actuator?

Also, are you using a CachingConnectionFactory with the DMLC?

Hi @marc.dipasquale, no, I am not using a CachingConnectionFactory.

I was referring to the solution that @bkossme had suggested, viz.

management:
  health:
    jms:
      enabled: false

If the above is not set, the JMS health check endpoint is created by default.

Hi @jangrahul ,

Ah okay, I found out why. It looks like the default JMS health check is just starting a new connection and closing it each time you check. I would recommend not using that and coding your own health check that checks the status of the JMS resources you are actually using instead.

See the code here:
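A rough sketch of what such a custom indicator could look like, reporting on a listener container the app already runs instead of opening a new connection; the class name and the way the container is obtained are assumptions for illustration:

```java
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.stereotype.Component;

// Hypothetical replacement for the default JMS health check: instead of creating a
// new connection on every probe, report the state of the listener container that
// the application is actually using.
@Component
public class JmsListenerHealthIndicator implements HealthIndicator {

    private final DefaultMessageListenerContainer listenerContainer;

    public JmsListenerHealthIndicator(DefaultMessageListenerContainer listenerContainer) {
        this.listenerContainer = listenerContainer;
    }

    @Override
    public Health health() {
        if (listenerContainer.isRunning() && listenerContainer.getActiveConsumerCount() > 0) {
            return Health.up()
                    .withDetail("destination", listenerContainer.getDestinationName())
                    .build();
        }
        return Health.down()
                .withDetail("running", listenerContainer.isRunning())
                .withDetail("activeConsumers", listenerContainer.getActiveConsumerCount())
                .build();
    }
}
```

With @JmsListener-based consumers the container is usually looked up from the JmsListenerEndpointRegistry rather than injected directly, and a sender-only app would check the connection factory it actually holds instead; adjust to whatever JMS resources your app really uses.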

Cool idea! If you create a custom health check, please feel free to share it with the community here.