Solace JMS Channel closing on its own when running on a Kubernetes cluster (Spring Boot Autoconfig)
Log snippet:
{"com.solacesystems.jcsmp.protocol.impl.TcpClientChannel","label":
"Client-109: Connected to host 'orig=host:55555, scheme=tcp://, host=host, port=55555' (smfclient 109)"}
{"com.solacesystems.jcsmp.protocol.impl.TcpClientChannel","label":"Client-109: Channel Closed
(smfclient 109)"}
It basically loops like this all day long and I can't figure out why. When I run this locally on my development machine, the connection remains open and does not close on its own.
There is no other log to give me a clue why the channel is closing on its own like this.
Anyone have any ideas what the problem might be?
PS: I am already using CachingConnectionFactory.
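For reference, a minimal sketch of how the CachingConnectionFactory is typically wired around the Solace connection factory in a Spring Boot app (the bean names and session cache size here are illustrative, not my exact configuration):

import com.solacesystems.jms.SolConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.jms.connection.CachingConnectionFactory;

// Wraps the autoconfigured Solace connection factory so that JmsTemplate reuses
// cached sessions instead of opening and closing a connection per operation.
@Configuration
public class JmsCachingConfig {

    @Bean
    @Primary
    public CachingConnectionFactory cachingConnectionFactory(SolConnectionFactory solConnectionFactory) {
        CachingConnectionFactory cachingFactory = new CachingConnectionFactory(solConnectionFactory);
        cachingFactory.setSessionCacheSize(10); // illustrative cache size
        return cachingFactory;
    }
}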
Comments
-
Hi @pooja,
You've already tried my first guess; not using the CachingConnectionFactory is usually the culprit here. When you run your app, do the logs say that the CachingConnectionFactory is being instantiated?
A few questions:
1. Are you using the core Spring Framework or Spring Boot?
2. Is this app a consumer, sender, or both?
3. Is it disconnecting after each send/receive?
4. Can you share more of your Spring configuration?
-
Hey Marc, thanks for the quick response. Answering your questions inline. Let me know if anything else is required from my side.
[Pooja] No, it doesn't say so.
A few questions:
1. Are you using the core Spring Framework or Spring Boot?
[Pooja] Spring Boot.
2. Is this app a consumer, sender, or both?
[Pooja] It is both, but even with the sender alone it loops through creating and closing the connection.
3. Is it disconnecting after each send/receive?
[Pooja] Not exactly. The connection is created and closed at a fixed interval, as if some timer were set on the connection.
4. Can you share more of your Spring configuration?
[Pooja] The connection is created using Spring autoconfiguration (a Spring-managed connection) through JmsTemplate, roughly like the sketch below.
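A simplified illustration of that setup (not my actual code; the class and destination names are made up):

import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Service;

// Illustrative sender: Spring Boot autoconfiguration supplies the JmsTemplate,
// so there is no connection-handling code of our own in the application.
@Service
public class StatusPublisher {

    private final JmsTemplate jmsTemplate;

    public StatusPublisher(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void publish(String payload) {
        // With a plain (non-caching) factory, each send opens and closes its own connection.
        jmsTemplate.convertAndSend("status/updates", payload);
    }
}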
-
Interesting. I just re-read your question and noticed the following, which makes me think it is a configuration issue on the broker or in the network settings in K8s:
It basically loops like this all day long and I can't figure out why. When I run this locally on my development machine, the connection remains open and does not close on its own.
@Aaron @TomF - any ideas here?
Just a guess from me, but can you check the client-profile and the JMS JNDI Connection Factory settings for keepalives and see if there is anything disabled or drastically different than what you have locally?
By default I see the following in my local Docker broker, but your admin may have changed some of these settings based on what they expect to see or on network restrictions in your K8s cluster.
On the client-profile:
On the JMS Connection Factory:
-
It might be worth looking at the connect/disconnect events on the event broker: run "show log events" and match these to your JMS logs. The broker logs include a disconnect reason.
-
Hi all,
I could be wrong, but I found that it is caused by the health check, which is probably configured for your pod in the template YAML file.
To disable it, you can specify something like this in the application.yaml file of your Spring Boot application:
management:
  health:
    jms:
      enabled: false
You can find the implementation in the JmsHealthIndicator class; all of these default checks are configured automatically, and you can find their implementations in the spring-boot-actuator component.
Hope this helps you.
Best regards,
Andrei
-
I am facing a similar issue. I have deployed a Solace consumer on OpenShift, but it keeps disconnecting. This is causing big message read failures.
-
Hi @Solace chunnu,
Is it also happening to you because of the health check? Is the container getting restarted?
Or, when you say the following, are you saying that it doesn't work with "large messages"?
"This is causing big message read failures"
-
Hi @jangrahul,
Which health check are you referring to? Actuator?
Also, are you using a CachingConnectionFactory with the DMLC? A sketch of what I mean is below.
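A minimal sketch of what I mean by the DMLC, i.e. the DefaultMessageListenerContainer behind @JmsListener (bean names and the concurrency value are illustrative):

import javax.jms.ConnectionFactory; // jakarta.jms on newer Spring Boot versions

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

// The listener container factory decides which ConnectionFactory the DMLC uses,
// which is why the caching question matters for consumers as well as senders.
@Configuration
@EnableJms
public class JmsListenerConfig {

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setConcurrency("1-3"); // illustrative consumer concurrency
        return factory;
    }
}
-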
Hi @jangrahul,
Ah okay, I found out why. It looks like the default JMS health check is just starting a new connection and closing it each time you check. I would recommend not using that and coding your own health check that checks the status of the JMS resources you are actually using instead.
See the code here:
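For example, here is a rough sketch of such a custom health indicator (this is not the code from the link; the class name and details are illustrative). It goes through the connection factory the app actually uses, so with a CachingConnectionFactory the shared connection is reused instead of a brand-new one being opened and closed on every probe:

import javax.jms.Connection;        // jakarta.jms on newer Spring Boot versions
import javax.jms.ConnectionFactory;

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Hypothetical replacement for the default Actuator JMS check. With a
// CachingConnectionFactory, createConnection() hands back the shared connection
// and close() is suppressed, so the probe no longer causes a connect/close loop.
@Component
public class SolaceJmsHealthIndicator implements HealthIndicator {

    private final ConnectionFactory connectionFactory;

    public SolaceJmsHealthIndicator(ConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    @Override
    public Health health() {
        try (Connection connection = connectionFactory.createConnection()) {
            return Health.up()
                    .withDetail("provider", connection.getMetaData().getJMSProviderName())
                    .build();
        } catch (Exception e) {
            return Health.down(e).build();
        }
    }
}

You would still turn off the built-in check (management.health.jms.enabled: false, as Andrei showed above) so the two checks don't both run.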