How do I avoid 403: Replication Is Standby?
We are using "spring-cloud-starter-stream-solace" to consume messages from a queue, but we frequently receive the error below. Please refer to the details below and help us avoid the issue.
application.yml configuration:
binders:
  <<bindername>>:
    type:
    environment:
      solace:
        java:
          clientUsername: <<User Name>>
          connectRetries: 3
          connectRetriesPerHost: 0
          reconnectRetries: 3
          host: tcps://<<Host Name 1>>:55443,tcps://<<Host Name 2>>:55443
          msgVpn: <<VPN Name>>
solace:
  bindings:
    <<function name>>-in-0:
      consumer:
        queueAdditionalSubscriptions: <<topic name>>
        provisionDurableQueue: true
        queueNamePrefix: ''
        useFamiliarityInQueueName: false
        useDestinationEncodingInQueueName: false
        useGroupNameInQueueName: false
Error Message
com.solace.spring.cloud.stream.binder.inbound.InboundXMLMessageListener.receive - Received error while trying to read message from endpoint <<Queue Name>>
com.solacesystems.jcsmp.StaleSessionException: Tried to call receive on a stopped message consumer.
at com.solacesystems.jcsmp.impl.flow.FlowHandleImpl.throwClosedException(FlowHandleImpl.java:1957)
at com.solacesystems.jcsmp.impl.flow.FlowHandleImpl.receive(FlowHandleImpl.java:899)
at com.solacesystems.jcsmp.impl.flow.FlowHandleImpl.receive(FlowHandleImpl.java:866)
at com.solace.spring.cloud.stream.binder.util.FlowReceiverContainer.receive(FlowReceiverContainer.java:279)
at com.solace.spring.cloud.stream.binder.util.FlowReceiverContainer.receive(FlowReceiverContainer.java:211)
at com.solace.spring.cloud.stream.binder.inbound.InboundXMLMessageListener.receive(InboundXMLMessageListener.java:93)
at com.solace.spring.cloud.stream.binder.inbound.InboundXMLMessageListener.run(InboundXMLMessageListener.java:73)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: com.solacesystems.jcsmp.JCSMPTransportException: (JCSMPTransportException) Error communicating with the router. (KeepAlive)
at com.solacesystems.jcsmp.protocol.impl.TcpClientChannel$ClientChannelReconnect.call(TcpClientChannel.java:2438)
... 4 common frames omitted
Caused by: com.solacesystems.jcsmp.JCSMPErrorResponseException: 403: Replication Is Standby
at com.solacesystems.jcsmp.protocol.impl.TcpChannel.executePostOnce(TcpChannel.java:232)
at com.solacesystems.jcsmp.protocol.impl.ChannelOpStrategyClient.performOpen(ChannelOpStrategyClient.java:97)
at com.solacesystems.jcsmp.protocol.impl.TcpClientChannel.performOpenSingle(TcpClientChannel.java:418)
at com.solacesystems.jcsmp.protocol.impl.TcpClientChannel.access$800(TcpClientChannel.java:114)
at com.solacesystems.jcsmp.protocol.impl.TcpClientChannel$ClientChannelReconnect.call(TcpClientChannel.java:2274)
... 4 common frames omitted
Answers
Hi @JamilAhmed, we need your support to resolve this issue.
Hi @vadivelan,
I just tested in my environment with some brokers that are in a DR replication setup and have reproduced the error.
There are two parts here:
1. The reconnect parameters you've set, which govern how long the service waits for the broker connection to be restored. (You're essentially giving up too quickly.)
2. Excessive logging when the service gives up connecting but the binder continues to do some work.
For point (1), the settings and the resulting behaviour are the same as what @Kaliappans hit in an earlier issue. @marc recommended adjusting them, or perhaps even taking them out entirely and relying on the defaults instead: https://solace.community/discussion/comment/5306#Comment_5306
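As a minimal sketch of that suggestion, reusing the placeholder names from the configuration above: the retry overrides can simply be dropped so the JCSMP defaults apply, or the client can be given much longer to ride out the switch. The binder type value and the -1 setting shown in the comments are illustrative assumptions, not the exact values recommended in the linked thread.

binders:
  <<bindername>>:
    type: solace            # binder type is an assumption; it was left blank in the original excerpt
    environment:
      solace:
        java:
          clientUsername: <<User Name>>
          host: tcps://<<Host Name 1>>:55443,tcps://<<Host Name 2>>:55443
          msgVpn: <<VPN Name>>
          # connectRetries, connectRetriesPerHost and reconnectRetries are
          # deliberately omitted here so the JCSMP defaults apply.
          # Alternatively, keep reconnectRetries but allow far more time to
          # ride out the replication switch, for example:
          # reconnectRetries: -1   # -1 means retry indefinitely; shown for illustration only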
For point (2), it looks like this has been seen before too and is already tracked in an issue on GitHub: https://github.com/SolaceProducts/solace-spring-cloud/issues/174. While it was previously reported in the context of a High-Availability switch, your case involves a switch between Replication mates. We'll make sure the behaviour improvement covers all of these situations. (fyi: @giri, @amackenzie)
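Separately from that tracked improvement, and purely as an interim sketch, the logger that emits the repeated entries could be turned down in application.yml. The logger name below is taken from the stack trace in the question; the level shown is an assumption and worth checking against the level the binder actually logs those entries at.

logging:
  level:
    # Stopgap only: quieten the listener that produces the repeated
    # "Received error while trying to read message from endpoint" lines.
    com.solace.spring.cloud.stream.binder.inbound.InboundXMLMessageListener: ERROR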
Thanks
Jamil