Durable Queue Inactive Consumer Indication
Hi community, we are exploring the Active/Inactive Consumer Indication feature of Solace queues.
We created a durable queue on Solace and connected two consumers to it.
We have tried both Java and Node.js.
(1) Java JCSMP API
public void handleEvent(Object source, FlowEventArgs event) {
    System.out.println("Flow Event - " + event);
}
(2) Node.js API
messageConsumer.on(solace.MessageConsumerEventName.ACTIVE, function () {
    sample.log('=== ' + messageConsumerName + ': received ACTIVE event - Ready to receive messages');
});
messageConsumer.on(solace.MessageConsumerEventName.INACTIVE, function () {
    sample.log('=== ' + messageConsumerName + ': received INACTIVE event');
});
However, with both Java and Node.js we only ever get a notification for the ACTIVE event; we never get any notification for the INACTIVE event.
We checked the flows with the SEMP monitor REST API, GET "monitor/msgVpns/default/queues/<queue-name>/txFlows", and got the following responses for the two consumers respectively:
"activityState": "active-consumer",
"activityUpdateState": "in-progress",
and
"activityState": "inactive",
"activityUpdateState": "in-progress",
Can anyone please help if we are missing something? Why are we not getting the inactive indication in Java or Node.js?
Thanks,
Triptesh
Best Answer
-
Hey @triptesh, when you first create the flow, it is ALWAYS in the inactive state.
That way, when you get an active indication, you know it has transitioned from inactive to active.
Thinking about your k8s problem, maybe the way to approach this is to have all pods bind to all queues. When a pod gets its first active flow indication from one of the queues, it unbinds from all the other queues. That way, fairly quickly, each pod ends up active on one queue. You'd still need some form of timeout to guard against error conditions.
Example operation:
- Pod 1 starts and binds to all 4 queues. It gets an active flow indication for all 4 queues. It gets the AFI for queue 3 first, so it unbinds from queues 1, 2 and 4.
- Pod 4 starts next and binds to all 4 queues. It gets AFI for queues 1, 2 and 4, with 1 first. It unbinds from 2 & 4.
- Pod 2 starts next, binds to all 4 queues. It gets AFI for queues 2 & 4, and unbinds from 2.
- Pod 3 starts last, binds to all 4 queues. It gets AFI for queue 2.
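That scheme can be sketched without a broker. Below is a toy simulation in plain Java: ExclusiveQueue, claimOneQueue and the rest are made-up names, and the "grant" here is simply bind order, whereas a real Solace queue would deliver the AFI asynchronously through the flow event callback.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy stand-in for an exclusive queue: the oldest bound pod is the
// active consumer. In real Solace the broker makes this choice and
// reports it asynchronously via the Active Flow Indication (AFI).
class ExclusiveQueue {
    final String name;
    final List<String> binders = new ArrayList<>(); // bind order = grant order
    ExclusiveQueue(String name) { this.name = name; }
    void bind(String pod) { binders.add(pod); }
    void unbind(String pod) { binders.remove(pod); }
    String activePod() { return binders.isEmpty() ? null : binders.get(0); }
}

public class BindAllKeepFirst {
    // Bind to every queue, keep the first queue that reports us active,
    // and unbind from all the others.
    static String claimOneQueue(String pod, List<ExclusiveQueue> queues) {
        for (ExclusiveQueue q : queues) q.bind(pod);
        String claimed = null;
        while (claimed == null) { // in real code: wait for the AFI callback
            for (ExclusiveQueue q : queues) {
                if (pod.equals(q.activePod())) { claimed = q.name; break; }
            }
        }
        for (ExclusiveQueue q : queues) {
            if (!q.name.equals(claimed)) q.unbind(pod);
        }
        return claimed;
    }

    public static void main(String[] args) {
        List<ExclusiveQueue> queues = new ArrayList<>();
        for (int i = 1; i <= 4; i++) queues.add(new ExclusiveQueue("q" + i));
        Set<String> taken = new HashSet<>();
        // Pods start one at a time; each ends up sole owner of one queue.
        for (String pod : List.of("pod1", "pod2", "pod3", "pod4")) {
            taken.add(claimOneQueue(pod, queues));
        }
        System.out.println(taken.size() + " pods, each active on its own queue");
        // prints: 4 pods, each active on its own queue
    }
}
```

Because pods claim sequentially in this toy model, the loop always finds a grant immediately; a real implementation would block on the flow event and apply the timeout mentioned above.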
Answers
-
Hi @triptesh, Active Flow Indication is one of my favourite features!
If you're getting flow active indications, your code and configuration are correct (the queue access type has to be set to Exclusive, and Active Flow Indication has to be enabled on the flow).
When are you expecting to see the inactive notification? If your flow starts in the inactive state because another flow is active, I _think_ the flow is marked as inactive but no event is generated,
so isActiveFlowIndication() returns false. Let me check that. Once a flow is active, the only way it will become inactive is if the connection is lost or the flow is closed.
-
Hi @TomF, yes, we are expecting to see the inactive notification.
When we connect a consumer to an exclusive durable queue, we need a notification / indication of whether it is active or inactive. We are getting the ACTIVE event notification when the flow is active,
but we also need the notification for an inactive flow.
Can you please explain how to use isActiveFlowIndication() to get the inactive indication?
-
When you bind to a queue, the flow starts in the inactive state, so you won't see an inactive indication event at that point. That's why you always see the flow entering the active state once bound.
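That behaviour can be modelled in a few lines. This is a sketch in plain Java, and ToyExclusiveQueue is a made-up stand-in, not the Solace API: the second binder's flow is inactive from the start, but only a promotion to active ever produces an event.

```java
import java.util.ArrayList;
import java.util.List;

// Toy exclusive queue (not the Solace API): only the first bound flow
// is granted the active role and gets an ACTIVE event; every later
// flow starts inactive, and no event is fired for it at bind time.
class ToyExclusiveQueue {
    private final List<String> flows = new ArrayList<>();
    final List<String> events = new ArrayList<>(); // what the consumers observe

    void bind(String flow) {
        flows.add(flow);
        if (flows.size() == 1) events.add(flow + ":ACTIVE"); // first binder wins
        // later binders: state is inactive, but no INACTIVE event is emitted
    }

    void close(String flow) {
        boolean wasActive = !flows.isEmpty() && flows.get(0).equals(flow);
        flows.remove(flow);
        if (wasActive && !flows.isEmpty()) {
            events.add(flows.get(0) + ":ACTIVE"); // next flow is promoted
        }
    }
}

public class AfiDemo {
    public static void main(String[] args) {
        ToyExclusiveQueue q = new ToyExclusiveQueue();
        q.bind("consumer1");  // consumer1 gets ACTIVE
        q.bind("consumer2");  // no event: consumer2 starts inactive
        q.close("consumer1"); // consumer2 is promoted and gets ACTIVE
        System.out.println(q.events); // [consumer1:ACTIVE, consumer2:ACTIVE]
    }
}
```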
isActiveFlowIndication() doesn't give you any notifications; it returns the current state of the flow.
-
Just catching up this morning. Like @TomF, I was also tricked by isActiveFlowIndication() and will remove it from my previous post to avoid confusing others. If you can't leverage Kubernetes stateful sets, which feels like the cleanest solution to me, then I like @TomF's suggestion to connect to all the queues, stay connected to the first one that says you're active, and close the flows listening to the others. Sorry for the confusion!