It basically loops like this all day long and I can’t figure out why. When I run this locally on my development machine the connection remains open and does not close on its own.
There is no other log to give me a clue why the channel is closing on its own like this.
My first guess: not using the CachingConnectionFactory is usually the culprit here. When you run your app, do the logs say that a CachingConnectionFactory is being instantiated? (There is a sketch of wiring one up explicitly at the end of this post.)
A few questions:
Are you using the core Spring Framework or Spring Boot?
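For reference, a minimal sketch of explicitly wrapping the provider’s ConnectionFactory in a CachingConnectionFactory, assuming nothing about your actual setup (bean names and the cache size are illustrative; use javax.jms instead of jakarta.jms on older Spring Boot versions):

import jakarta.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

@Configuration
public class JmsConfig {

    // Wrap the provider's ConnectionFactory so sessions and producers are cached
    // instead of a new connection being opened and torn down for each operation.
    @Bean
    public CachingConnectionFactory cachingConnectionFactory(ConnectionFactory targetConnectionFactory) {
        CachingConnectionFactory caching = new CachingConnectionFactory(targetConnectionFactory);
        caching.setSessionCacheSize(10); // illustrative value
        return caching;
    }

    @Bean
    public JmsTemplate jmsTemplate(CachingConnectionFactory cachingConnectionFactory) {
        return new JmsTemplate(cachingConnectionFactory);
    }
}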
Hey Marc, thanks for the quick response. Answering your questions inline. Let me know if anything else is required from my side.
[Pooja] No, it doesn’t say so.
A few questions:
Are you using the core Spring Framework or Spring Boot?
[Pooja] Spring Boot.
Is this app a consumer, sender, or both?
[Pooja] It is both, but even with the sender alone it loops through creating and closing the connection.
Is it disconnecting after each send/receive?
[Pooja] Not exactly. The connection is created and closed at a fixed interval, as if some timer were set on the connection.
Can you share more of your Spring configuration?
[Pooja] The connection is created using Spring auto-configuration (a Spring-managed connection) through JmsTemplate, roughly like the sketch below.
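A minimal sketch of that sender (the class name and destination here are placeholders, not the real ones):

import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Component;

// Minimal sender: Spring Boot auto-configures the ConnectionFactory and the
// JmsTemplate; only the template is injected and used here.
@Component
public class OrderSender {

    private final JmsTemplate jmsTemplate;

    public OrderSender(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void send(String payload) {
        // "orders/queue" is a placeholder destination name.
        jmsTemplate.convertAndSend("orders/queue", payload);
    }
}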
Interesting. I just re-read your question and noticed this, which makes me think it is a configuration issue on the broker or in the network settings in K8s:
It basically loops like this all day long and I can’t figure out why. When I run this locally on my development machine the connection remains open and does not close on its own.
Just a guess from me, but can you check the client-profile and the JMS JNDI Connection Factory settings for keepalives and see if there is anything disabled or drastically different than what you have locally?
These are the defaults I see in my local Docker broker, but your admin may have changed some of these settings based on what they expect to see or on network restrictions in your K8s cluster.
@pooja Another thing to consider would be the network path that your app is taking to connect to the broker. Is the broker also within K8s? Is it external to K8s? Is there a load balancer in the mix?
It might be worth looking at the connect/disconnect events on the event broker: run “show log events” and match these to your JMS logs. The broker logs include a disconnect reason.
I’m also facing the same issue. Solace connections keep dropping every 10s and reconnecting on their own when running the Spring Boot application via a Docker image. I don’t see any issue when running the Spring Boot jar independently.
Hi all,
I could be wrong, but I realised that it is caused by the health check, which is probably configured for your pod in the template YAML file.
To disable it, you can specify something like this in the application.yaml file of your Spring Boot application:
management:
  health:
    jms:
      enabled: false
You can find the implementation in the JmsHealthIndicator class; all of these default checks are auto-configured, and their implementations live in the spring-boot-actuator component.
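For illustration, the default check behaves roughly like the simplified sketch below (this is a simplification, not the actual spring-boot-actuator source; use javax.jms on older Spring Boot versions):

import jakarta.jms.Connection;
import jakarta.jms.ConnectionFactory;

// Simplified view of what the built-in JMS health check does on each probe:
// open a new connection, start it to verify the broker is reachable, then
// close it, producing a connect/disconnect pair in the broker logs every time.
public class SimplifiedJmsHealthCheck {

    private final ConnectionFactory connectionFactory;

    public SimplifiedJmsHealthCheck(ConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    public boolean isUp() {
        try (Connection connection = connectionFactory.createConnection()) {
            connection.start();
            return true;
        } catch (Exception ex) {
            return false;
        }
    }
}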
Hi @Solace_chunnu,
Is it also happening to you because of the health check? Is the container getting restarted?
Or, when you say this, are you saying that it doesn’t work with “large messages”?
This is causing large message reads to fail.
Ah okay, I found out why. It looks like the default JMS health check just opens a new connection and closes it on every check. I would recommend disabling it and coding your own health check that checks the status of the JMS resources you are actually using instead.
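As a rough sketch (assuming the app uses @JmsListener and the actuator starter; the class and detail names are made up), a custom indicator could report on the listener containers the app already runs instead of opening a throwaway connection:

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.jms.config.JmsListenerEndpointRegistry;
import org.springframework.stereotype.Component;

// Health check based on the JMS resources the app actually uses: it reports
// DOWN if any @JmsListener container has stopped, and never opens a new connection.
@Component
public class JmsListenerHealthIndicator implements HealthIndicator {

    private final JmsListenerEndpointRegistry registry;

    public JmsListenerHealthIndicator(JmsListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @Override
    public Health health() {
        boolean allRunning = registry.getListenerContainers().stream()
                .allMatch(container -> container.isRunning());
        return allRunning
                ? Health.up().withDetail("listenerContainers", registry.getListenerContainers().size()).build()
                : Health.down().withDetail("reason", "a JMS listener container is not running").build();
    }
}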