Solace JMS Spring Boot starter has started failing Eureka health check after April 2024.

Mansi1908 Member Posts: 1

Hi All,

We are using solace-jms-spring-boot-starter version 1.0.0 with Spring Boot 2.7.13. Until April 2024 we didn't face any issue, but yesterday when we deployed our application in production we got an unexpected Connection refused exception; the application was up and running but was deregistered from Eureka. Is there any compatibility-related issue?

After setting the JMS health check to false, the application started running again and was registered in Eureka as well. I have tried all the possible ways. Kindly help in resolving this issue so we don't need to disable the Solace health check.


Flag Used

management.health.jms.enabled=false


Dependency

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.13</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>


    <dependency>
        <groupId>com.solace.spring.boot</groupId>
        <artifactId>solace-jms-spring-boot-starter</artifactId>
        <version>1.0.0</version>
    </dependency>



Exception Message



2024-07-09 09:36:38.660 INFO  [DiscoveryClient-InstanceInfoReplicator-0] NettyTransportExecutorService         - Epoll is enabled; Netty 4.1.94.Final 

2024-07-09 09:36:38.676 INFO  [DiscoveryClient-InstanceInfoReplicator-0] LogWrapper         - Client-1597: Connecting to host 'orig=tcp://localhost, scheme=tcp://, host=localhost' (host 1 of 1, smfclient 1597, attempt 1 of 1, this_host_attempt: 1 of 1) 

2024-07-09 09:36:38.689 INFO  [DiscoveryClient-InstanceInfoReplicator-0] LogWrapper         - Client-1597: Connection attempt failed to host 'localhost' ConnectException com.solacesystems.jcsmp.JCSMPTransportException: (Client name: XXXXXXXXXXXXXXXXXXXXXXX   ) - Error communicating with the router. cause: io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: localhost/127.0.0.1:55555 ((Client name: XXXXXXXXXXXXXXXXXXXXXXX   ) - ) 

2024-07-09 09:36:41.690 INFO  [DiscoveryClient-InstanceInfoReplicator-0] LogWrapper         - Client-1597: Channel Closed (smfclient 1597) 

2024-07-09 09:36:41.690 INFO  [DiscoveryClient-InstanceInfoReplicator-0] LogWrapper         - Client-1597: Channel Closed (smfclient 1597) 

2024-07-09 09:36:41.692 WARN  [DiscoveryClient-InstanceInfoReplicator-0] AbstractHealthIndicator         - JMS health check failed 

javax.jms.JMSException: Error creating connection - transport error ((Client name: XXXXXXXXXXXXXXXXXXXXXXX   ) - Error communicating with the router.) 

        at sun.reflect.GeneratedConstructorAccessor186.newInstance(Unknown Source) 

        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source) 

        at java.lang.reflect.Constructor.newInstance(Unknown Source) 

        at com.solacesystems.jms.impl.JMSExceptionValue.newInstance(JMSExceptionValue.java:36) 

        at com.solacesystems.jms.impl.JCSMPExceptionMapper$ArrayListMapper.get(JCSMPExceptionMapper.java:32) 

        at com.solacesystems.jms.impl.JCSMPExceptionMapper.get(JCSMPExceptionMapper.java:95) 

        at com.solacesystems.jms.impl.Validator.createJMSException(Validator.java:590) 

        at com.solacesystems.jms.SolConnection.<init>(SolConnection.java:180) 

        at com.solacesystems.jms.SolConnection.<init>(SolConnection.java:91) 

        at com.solacesystems.jms.SolConnectionFactoryImpl.createConnection(SolConnectionFactoryImpl.java:112) 

        at brave.jms.TracingConnectionFactory.createConnection(TracingConnectionFactory.java:64) 

        at org.springframework.cloud.sleuth.brave.instrument.messaging.LazyConnectionFactory.createConnection(TracingConnectionFactoryBeanPostProcessor.java:226) 

        at org.springframework.cloud.sleuth.brave.instrument.messaging.LazyTopicConnectionFactory.createConnection(TracingConnectionFactoryBeanPostProcessor.java:171)

Comments

  • marc
    Member, Administrator, Moderator, Employee Posts: 959 admin

    Hi @Mansi1908,
    Interesting. Did the Eureka version change? It would be weird if it started giving a different result without anything changing.

    I don't believe Solace provides a specific HealthIndicator with our JMS starter, so it should just be using whatever Spring provides by default.
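    For reference, the default check is roughly the sketch below (a simplified, unofficial paraphrase of Spring Boot's org.springframework.boot.actuate.jms.JmsHealthIndicator, not the exact source). It just opens a connection from the configured ConnectionFactory, so a "Connection refused" from the broker makes this check, and therefore the overall health status, report DOWN:

        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;

        import org.springframework.boot.actuate.health.AbstractHealthIndicator;
        import org.springframework.boot.actuate.health.Health;

        // Simplified sketch of what the auto-configured JMS health check does.
        public class SimplifiedJmsHealthIndicator extends AbstractHealthIndicator {

            private final ConnectionFactory connectionFactory;

            public SimplifiedJmsHealthIndicator(ConnectionFactory connectionFactory) {
                this.connectionFactory = connectionFactory;
            }

            @Override
            protected void doHealthCheck(Health.Builder builder) throws Exception {
                // createConnection() is where the "Connection refused" from your log
                // surfaces, which is why the indicator reports DOWN.
                Connection connection = this.connectionFactory.createConnection();
                try {
                    connection.start();
                    builder.up().withDetail("provider",
                            connection.getMetaData().getJMSProviderName());
                }
                finally {
                    connection.close();
                }
            }
        }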

    Can you make sure you aren't overriding or excluding the creation of the jmsHealthIndicator or jmsHealthContributor beans? By default Spring auto configures them here if missing: https://github.com/spring-projects/spring-boot/blob/main/spring-boot-project/spring-boot-actuator-autoconfigure/src/main/java/org/springframework/boot/actuate/autoconfigure/jms/JmsHealthContributorAutoConfiguration.java

    Hope that helps!

  • marc
    Member, Administrator, Moderator, Employee Posts: 959 admin
    edited July 11 #3

    Also, from your post it looks like the application is failing to connect to the message broker. That is separate from the health indicators: if your app can't connect to the broker, I would expect the HealthIndicator to report DOWN, which might be what makes it fail on the Eureka side. I'm not familiar with Eureka, so I can't say for certain.

    You might also consider upgrading to a newer version of the Solace JMS starter; v1 is from 2018. Many newer versions are available here: https://mvnrepository.com/artifact/com.solace.spring.boot/solace-jms-spring-boot-starter

  • Travis
    Member Posts: 4
    edited November 14 #4

    Hi there,

    Thank you for sharing your issue. It seems like you're facing a significant challenge with the Connection refused error and JMS health check failures, which are causing unexpected behavior in your application and impacting Eureka registration.

    It's crucial to resolve such technical issues to keep your applications running smoothly. Here are some practical suggestions that could help, especially in a production environment:

    1. Ensure Correct Server Connectivity: Verify that the Solace broker is running and reachable from the application's host on the expected port (the log shows the client trying localhost:55555). A connection refusal usually points to a wrong host/port, a broker that isn't listening, or a network/firewall issue, so confirm that your environment allows the required communication.
    2. Check Compatibility: The issue might stem from an incompatibility between your Spring Boot version (2.7.13) and the Solace JMS starter (1.0.0). Even if it worked fine until April 2024, subtle changes or updates might have affected compatibility, so reviewing the Solace JMS version or upgrading to a later release could resolve hidden bugs.
    3. Review Health Check Configuration: The JMS health check failure and the resulting deregistration from Eureka are a real operational concern. Instead of disabling the health check altogether, look into why the JMS connection isn't being established; you might need to configure retry logic or analyze the error logs more closely.
    4. Consider Application Resilience: If you're using Eureka for service discovery and high availability, consider adding more robust error handling so that even if the JMS connection temporarily fails, your application can stay registered and active in Eureka without impacting end users (see the sketch after this list).
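    As an illustration of point 4, here is a rough, hedged sketch of one way to keep /actuator/health (and therefore Eureka registration) up while still surfacing a broker outage, instead of disabling the JMS check entirely. It registers a custom bean named jmsHealthIndicator, which (as marc pointed out) makes Spring's JMS health auto-configuration back off. The class name is made up for illustration, and whether a broker outage should be reported as UP-with-a-warning rather than DOWN is a design decision for your team, not an official Solace or Spring recommendation:

        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;

        import org.springframework.boot.actuate.health.Health;
        import org.springframework.boot.actuate.health.HealthIndicator;
        import org.springframework.context.annotation.Bean;
        import org.springframework.context.annotation.Configuration;

        @Configuration
        public class TolerantJmsHealthConfig {

            // The bean name "jmsHealthIndicator" matters: the auto-configured JMS health
            // contributor is only created when no such bean already exists.
            @Bean
            public HealthIndicator jmsHealthIndicator(ConnectionFactory connectionFactory) {
                return () -> {
                    Connection connection = null;
                    try {
                        connection = connectionFactory.createConnection();
                        connection.start();
                        return Health.up()
                                .withDetail("provider",
                                        connection.getMetaData().getJMSProviderName())
                                .build();
                    }
                    catch (Exception ex) {
                        // Assumption for illustration: report UP with a warning detail so a
                        // temporary broker outage does not deregister the app from Eureka.
                        return Health.up()
                                .withDetail("jms", "unreachable")
                                .withDetail("error", ex.getMessage())
                                .build();
                    }
                    finally {
                        if (connection != null) {
                            try {
                                connection.close();
                            }
                            catch (Exception ignored) {
                                // best-effort cleanup
                            }
                        }
                    }
                };
            }
        }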

    It’s always a good practice to ensure your core application logic is resilient and doesn’t depend on disabling critical checks. I hope these suggestions help in resolving the issue and ensuring smooth operations for your production environment.

    If you need further assistance or insights into application resilience, feel free to ask! We can discuss best practices around handling service dependencies and ensuring high availability.
