How can we achieve Solace at-least-once message delivery with a variable number of pods?

triptesh Member Posts: 27

We have a Kubernetes deployment with two pods. We want to send messages to these pods in a 1:n way (each message should be received by every pod), and each message should be delivered at least once (QoS 1).

To achieve this, we have a topic and four queues created beforehand. All four queues are subscribed to the topic and hence receive messages from it. We now want to start up the four pods and ensure that each of them consumes from a different queue. However, pods do not have an index, so it is tricky to decide which queue each one should consume from. Is there a way to check whether a queue already has an active consumer before binding, to ensure that a pod does not join a queue that is already being consumed?


Answers

  • uherbst Member, Employee Posts: 129 Solace Employee
    edited March 2022 #2

    Hi @triptesh,

    What do you mean by "each of them consumes from 4 different queues"? Should each pod consume from its own queue?

    Maybe this can help you:

    1. Set your queues to exclusive. This means only one consumer can actively consume from the queue. I'm not sure whether you get an exception from the API if the queue is already being consumed.
    2. Create your queues from the API as soon as the pod starts, so each pod has its own queue and only its own queue. Disadvantage: the queue only starts subscribing to the topic once the pod has created it, so you won't see old messages (older than the lifetime of the pod) in that queue.

    Uli

  • TomF Member, Employee Posts: 412 Solace Employee

    Hi @triptesh, the problem with pre-provisioning queues like this is that you have to know the number of pods in advance. If you need more flexible scaling, you could write your application to:

    1. Dynamically provision its own queue;
    2. Add the relevant subscription(s) to the queue;
    3. Start replay on the queue so the historic messages sent to those topics are added to the queue.

    This means each application instance creates its own queue as and when needed.
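    The startup sequence above can be sketched as follows. This is a minimal illustration of the control flow only; the three helper functions are hypothetical stand-ins for the real messaging-API calls (e.g. endpoint provisioning, topic subscription, and replay start in JCSMP or CCSMP), which differ per API:

```python
import socket

# Hypothetical stand-ins for the real messaging-API calls; the exact
# functions depend on the API you use (JCSMP, CCSMP, etc.).
def provision_queue(name):
    print(f"provisioned {name}")

def add_subscription(queue, topic):
    print(f"subscribed {queue} to {topic}")

def start_replay(queue):
    print(f"replay requested on {queue}")

def startup(topics):
    # Each instance derives a queue name from its own pod hostname,
    # so every pod gets its own queue without central coordination.
    queue = f"q-{socket.gethostname()}"
    provision_queue(queue)          # 1. dynamically provision own queue
    for t in topics:
        add_subscription(queue, t)  # 2. add the relevant subscription(s)
    start_replay(queue)             # 3. replay historic messages
    return queue
```

    Because the queue name is derived from the pod's own identity, no instance needs to know how many siblings exist, which is what makes this scale-friendly.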

  • subhoatg Member Posts: 17

    @uherbst

    You are right, we want each pod to consume from its own queue. We were also leaning towards the first approach, since the second approach has the disadvantage you mentioned. We would like to understand which exceptions/events we could react to on an exclusive queue when an existing consumer is already bound.

  • subhoatg Member Posts: 17

    Hi @TomF

    We are reluctant to give the pods queue lifecycle access because of security concerns. We would like the pods to have only Solace native API access and no SEMP API access. Could you share your opinion on this?

  • uherbst Member, Employee Posts: 129 Solace Employee

    Can you elaborate on which security concerns you have?

    Just to prevent misunderstandings: with SEMP, you can delete, modify and create a queue, but not read its messages.

  • subhoatg Member Posts: 17

    The subscriber pods also run custom customer scripts, so we do not want to give these pods SEMP API access. Instead, we want a separate component to create the queues and assign the topic subscriptions to them. The subscriber pods would then only use Solace native APIs to consume messages from the already-created queues.

  • marc Member, Administrator, Moderator, Employee Posts: 959 admin
    edited March 2022 #8

    Hi @triptesh & @subhoatg,

    Catching up on the thread now. I think there are several different options that can be considered.

    I do have a few more questions that might help myself, @TomF and @uherbst (or others!) provide better guidance.

    1. Is the QoS 1 (delivered at least once) requirement only for when the application is connected, or do you also want to save messages while the application is offline/down/disconnected for an extended period of time?
    2. What API/language is your application using? (JCSMP, JMS, Spring Cloud Stream, .NET, Paho, Qpid, etc.)
    3. Are your pods ephemeral?
    4. When you say multiple pods, are these pods different instances of the same application? And do you always know the exact number ahead of time?

    The answers to the above might change the options, but here are a handful of options I can think of!

    1. Use the StatefulSets feature of Kubernetes to give each pod a unique identifier. You could then use that identifier as part of the queue name. Say your identifiers were 1-4; then each app could know the queue naming convention and connect to "MYQUEUE-1", "MYQUEUE-2", etc.
    2. If you're using an SMF API that supports "Provisioning of durable endpoints", such as JCSMP, CCSMP, etc., the app can actually provision the queue itself without needing SEMP permissions. The client-profile just needs to allow endpoint creation. Note that our Spring Cloud Stream binder actually does this for you by default if you go that route.
    3. You could go the SEMP route (HTTP or via the message bus), as mentioned previously, and have each app check the queues to see if another app is already bound. Note that you can give "read only" permissions to a SEMP user, which may help address some security concerns.
    4. If you are using an SMF API with your exclusive queues, you can check the Active Flow Indication on your flow to see if it is active or standby. It would be a bit tedious, but your app could connect to each queue until it finds one on which it is "active". Here is a tutorial which shows this.
    5. Upon startup, your app could check in with another app that manages the queues/subscriptions and essentially ask "which queue should I connect to?". This internal app could use SEMP to figure it out.
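    For option 1, the mapping from a StatefulSet pod name to a queue name can be a tiny pure function. A sketch, assuming the StatefulSet's stable hostnames end in an ordinal (myapp-0, myapp-1, ...) and a MYQUEUE-&lt;n&gt; naming convention; note StatefulSet ordinals start at 0, so adjust if your queues are numbered from 1:

```python
def queue_for_pod(hostname: str, prefix: str = "MYQUEUE") -> str:
    """Derive a queue name from a StatefulSet pod hostname.

    StatefulSet pods get stable names ending in an ordinal
    (e.g. myapp-0, myapp-1, ...); reusing that ordinal lets each
    pod deterministically bind to its own pre-provisioned queue.
    """
    ordinal = hostname.rsplit("-", 1)[-1]
    if not ordinal.isdigit():
        raise ValueError(f"not a StatefulSet hostname: {hostname!r}")
    return f"{prefix}-{ordinal}"
```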
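    For option 3, a read-only SEMP v2 monitor query (GET /SEMP/v2/monitor/msgVpns/{vpn}/queues/{name}) can tell whether a queue already has a bound consumer. The sketch below only interprets an already-fetched response; it assumes the queue object's bindCount field, which you should verify against your broker's SEMP version:

```python
def queue_is_free(monitor_response: dict) -> bool:
    """Interpret a SEMP v2 monitor response for a single queue.

    The monitor endpoint returns the queue object under "data";
    a bindCount greater than zero means some consumer flow is
    already bound to the queue, so this pod should pick another.
    """
    return monitor_response["data"].get("bindCount", 0) == 0
```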
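    Option 4's "connect until active" approach boils down to the selection loop below. Here try_bind is a hypothetical callback standing in for creating a flow with Active Flow Indication enabled and waiting briefly to see whether this consumer becomes active or stays standby:

```python
def pick_free_queue(queues, try_bind):
    """Bind to each exclusive queue in turn until one reports an
    active flow; return that queue name, or None if all are taken.

    try_bind(queue) is a hypothetical callback: it creates a flow
    on the queue and returns True if this consumer became active,
    or False if it is only standby (in which case the caller is
    expected to close the standby flow before moving on).
    """
    for q in queues:
        if try_bind(q):
            return q
    return None
```

    With only a handful of queues the linear scan is cheap; the tedium marc mentions is mostly in managing the standby flows you open and then discard.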

    Hope that helps!

  • subhoatg Member Posts: 17
    edited March 2022 #9

    Hi @marc

    First of all, thanks for taking time and providing such a nice explanation. Really appreciate it!

    Answering your questions below.

    1. We also want to save messages while the application is offline/down/disconnected for an extended period of time. Our idea is to define a TTL at the queue level for message clean-up, but otherwise the queue should remain.
    2. We actually have multiple deployments implemented in different languages, such as Java, NodeJS and Golang. We would like to use the respective Solace native client libraries to connect to and consume from the Solace queues.
    3. Pods are ephemeral in most cases, but some deployments have state and can be StatefulSets.
    4. These pods are different instances of the same application. I know creating queues in advance has a disadvantage in autoscaling scenarios, as @TomF also mentioned; our idea is to create queues for the maximum possible number of pods. This has the disadvantage of unused queues accumulating messages, but we need to take that hit in order to avoid giving the subscriber pods SEMP API access and create/delete-queue permissions.


    Regarding your options,

    1. This is surely one approach we would like to use for components that need to keep messages even after an instance crashes and restarts.
    2. We are trying to avoid this, as I explained earlier.
    3. I think this is something we can do, thanks for this suggestion.
    4. This suggestion can also be achieved, thanks again. We quickly checked the example at https://tutorials.solace.dev/jcsmp/active-flow-indication/. It seems to only give an event for the active flow. Does it not give one for the inactive flow? The NodeJS example also seems not to work for the inactive case.
    5. We had thought about that. From an architecture perspective, it would make this special app critically important, and any downtime for it would affect all the apps.
