
Spring Cloud Stream Solace - Sticky Load Balancing Implementation

samuel Member Posts: 6
edited June 3 in General Discussions #1

Can anyone please help me with a sample implementation of sticky load balancing with Spring Cloud Stream Solace?
My use case: account transaction information is posted to a Solace topic across 6 different partitions. There will be consumer(s) reading from these partitions. Initially there will be one consumer, hosted as part of a Spring Cloud Stream microservice. The microservice should be horizontally scalable, and when it scales out I want the spawned consumers to share the 6 partitions equally among themselves.
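The keyed/hashed publishing side of this use case can be sketched in plain Java. This is a minimal sketch under assumptions: the topic structure scm/account/&lt;partition&gt;/bal matches the queue subscriptions discussed later in the thread, and a modulo hash of the account ID is one possible key-to-partition mapping, not the only one.

```java
// Hypothetical sketch: map an account ID to one of 6 partition topics so
// that all transactions for the same account land on the same partition
// (sticky/keyed delivery). The hash scheme is an illustrative assumption.
public class PartitionKey {
    static final int PARTITION_COUNT = 6;

    static int partitionFor(String accountId) {
        // floorMod keeps the index non-negative even for negative hashCodes
        return Math.floorMod(accountId.hashCode(), PARTITION_COUNT);
    }

    static String topicFor(String accountId) {
        return "scm/account/" + partitionFor(accountId) + "/bal";
    }

    public static void main(String[] args) {
        // the same account always resolves to the same partition topic
        System.out.println(topicFor("ACCT-12345"));
    }
}
```

Because the mapping is deterministic, every publisher instance computes the same topic for a given account, which is what makes the load balancing "sticky".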

Comments

  • samuel Member Posts: 6
    edited June 4 #3

    Thanks for sharing the links. I have gone through these posts.
    Following is the configuration in my service (Spring Cloud Stream with Solace).
    I have manually created 2 exclusive queues (queue0, queue1) and have subscriptions on these queues.
    queue0 with subscriptions "scm/account/0/bal" and "scm/account/1/bal"
    queue1 with subscriptions "scm/account/2/bal" and "scm/account/3/bal"
    I am running two instances of the service and expecting each instance to bind to one queue. But I am observing that the first instance to come up binds to both queues, and all the messages on both queues are consumed by only that instance. I am still in the process of figuring out what's going wrong here; it would be great if you could help me understand. I also wanted to understand whether this feature can be used in conjunction with Spring Cloud Stream.
    I am using spring-cloud-stream-binder-solace 3.1.0.

    spring:
      cloud:
        stream:
          bindings:
            process-in-0:
              destination: queue0,queue1
              group: my_consumer_group
              consumer:
                partitioned: true
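    For reference, the process-in-0 binding name implies a functional bean named process. Below is a minimal sketch of that consumer using plain java.util.function, with the Spring @Bean/@SpringBootApplication wiring omitted; the String payload type is an assumption and could equally be a POJO or Message&lt;T&gt;.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ProcessFunction {
    // Collected here only so the sketch is observable without a broker.
    static final List<String> received = new ArrayList<>();

    // In the real service this would be exposed as a @Bean named "process",
    // which Spring Cloud Stream binds to the process-in-0 binding above.
    static Consumer<String> process() {
        return payload -> received.add(payload);
    }

    public static void main(String[] args) {
        process().accept("sample balance update");
        System.out.println(received);
    }
}
```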

  • marc Member, Administrator, Moderator, Employee Posts: 463 admin

    Hey @samuel,
    If I understand you correctly, what you are seeing is what I would expect. That said, I agree it may not be ideal for your use case of having 2 apps that are the primary consumer on one queue and the backup on another. To explain what's happening: the Spring Cloud Stream binder leverages the functionality provided by the Solace Exclusive queue to achieve this "Primary", "Secondary", "Tertiary" pattern. Because of this the first Session/Flow to bind to each queue will be the active consumer and receive all messages delivered to that queue. In your case, since both of your apps are listening to both queues the first one that starts up will receive all messages from each queue and neither instance knows if/how many other instances are bound to the same queue. Only the broker has that logic.

    I'll have to think and get back to you on whether there is another option, but you may have to have a separate instance that serves as the backup. A couple of other options come to mind: expiring messages to a DMQ as a workaround (though that sounds nasty), or requesting that the Solace binder be enhanced to add functionality to start/stop individual bindings as defined here.

    Side note - does that configuration of destination: queue0,queue1 actually work for you? I haven't tried to list multiple destinations on a single function.

    Not ideal news, but hopefully that helps and prevents you from wasting time trying to figure out why something is behaving the way it is actually supposed to 😝

  • samuel Member Posts: 6

    Hi Marc, Thanks for the update.
    To your question: having multiple queue destinations as a comma-separated list does work, and the app starts receiving messages from both queues.
    I was thinking that by adding the Spring Cloud Stream consumer property partitioned: true, the binder would ensure that only one instance connects as primary to a particular queue, so that the load can be balanced among the available instances. I believe that's how it works with the Spring Cloud Stream Kafka binder.
    The plan I had was to deploy this microservice behind an auto-scaling group, scaling the instances based on CPU or memory utilisation. So during peak load, if we have 5 queues, we will end up with 5 instances of the microservice, each reading from a single queue. Similar to how consumer groups and partitioning work with Kafka.

  • marc Member, Administrator, Moderator, Employee Posts: 463 admin

    Hi @samuel,

    To your question: having multiple queue destinations as a comma-separated list does work, and the app starts receiving messages from both queues.

    Thanks for that confirmation! I'll have to try that out.

    I was thinking that by adding the Spring Cloud Stream consumer property partitioned: true, the binder would ensure that only one instance connects as primary to a particular queue, so that the load can be balanced among the available instances. I believe that's how it works with the Spring Cloud Stream Kafka binder. The plan I had was to deploy this microservice behind an auto-scaling group, scaling the instances based on CPU or memory utilisation. So during peak load, if we have 5 queues, we will end up with 5 instances of the microservice, each reading from a single queue. Similar to how consumer groups and partitioning work with Kafka.

    The Solace binder supports the publish-subscribe and consumer-group patterns as defined by Spring Cloud Stream, but it does not support the framework's partitioning options, so specifying partitioned: true won't do anything for our binder. That said, you can still do partitioning with Solace topics as described in @Aaron's blog here. See the "Sticky Load-Balancing, or Keyed/Hashed Delivery" section. When doing that with Spring Cloud Stream, I would recommend pre-creating your queues and having a separate app that manages the topic subscriptions on your queues. You can use Solace's "On Behalf Of" functionality to do this (there is some solid content out there if you google it).
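    The "separate app that manages the topic subscriptions" idea above can be sketched as a small planner that decides which partition topics belong on which pre-created queue. This is a sketch under assumptions: the contiguous block assignment mirrors the manual setup earlier in the thread (queue0 gets partitions 0-1, queue1 gets 2-3), and actually applying each mapping on the broker (e.g. via SEMP or an on-behalf-of subscription add) is only stubbed as a comment.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SubscriptionPlanner {
    // Assign partition topics to pre-created queues in contiguous blocks.
    // Topic structure scm/account/<p>/bal comes from the thread; the
    // block-assignment scheme itself is an illustrative assumption.
    static Map<String, List<String>> plan(List<String> queues, int partitions) {
        Map<String, List<String>> result = new LinkedHashMap<>();
        for (String q : queues) result.put(q, new ArrayList<>());
        for (int p = 0; p < partitions; p++) {
            String queue = queues.get(p * queues.size() / partitions);
            result.get(queue).add("scm/account/" + p + "/bal");
        }
        return result;
    }

    public static void main(String[] args) {
        plan(List.of("queue0", "queue1"), 4).forEach((queue, topics) ->
            // A real manager would apply each mapping on the broker here,
            // e.g. via SEMP or an on-behalf-of subscription add for the queue.
            System.out.println(queue + " <- " + topics));
    }
}
```

    Re-running the planner when the queue count changes yields the new topic-to-queue layout, which the manager app would then reconcile against the broker.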

  • samuel Member Posts: 6

    Thanks Marc. I will read through the On Behalf Of functionality.
