Limit Max Connections per Consumer (not client)

Robert

Problem:

  • I have a microservice which offers some REST endpoints and also runs some Solace event listeners.
  • The problem is that the service scales up and down based on incoming REST requests, so the listeners scale with it. Each extra listener then holds a Solace connection even though it is not needed, and we risk running into the maximumClientConnectionsPerClientUsername limit.

What am I looking for?

  • Is there any way to solve this problem? One option would certainly be to separate the listener from the REST implementation, but I would like to keep them together because they belong to the same logical capability and business object.
  • One idea I had: even though you use a connection pool, you could give the connection lookup a kind of tag, e.g. Tag: look-up-item-change, and set constraints on that tag, so that connections with Tag: look-up-item-change can get at most 5 connections from the pool. If no more than 5 instances of the service were created, the pool would simply stop handing out further connections for that tag, and the extra listeners would just stay down instead of blocking connections. A rough sketch of what I mean follows below.
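
Roughly what I mean, as a per-process sketch in Java (the class, tag name, and limit are all made up for illustration):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Semaphore;

    // Hypothetical wrapper around the connection-pool lookup: at most maxPerTag
    // connections may be checked out under a given tag, e.g. "look-up-item-change".
    public class TaggedConnectionLimiter {

        private final Map<String, Semaphore> permitsByTag = new ConcurrentHashMap<>();
        private final int maxPerTag;

        public TaggedConnectionLimiter(int maxPerTag) {
            this.maxPerTag = maxPerTag;
        }

        // Returns true if the caller may open a connection under this tag.
        public boolean tryAcquire(String tag) {
            return permitsByTag
                    .computeIfAbsent(tag, t -> new Semaphore(maxPerTag))
                    .tryAcquire();
        }

        // Must be called again when the tagged connection is closed.
        public void release(String tag) {
            Semaphore permits = permitsByTag.get(tag);
            if (permits != null) {
                permits.release();
            }
        }
    }

The obvious catch is that this only limits connections within one instance; across scaled instances the count would have to live in something shared (see the Redis idea further down).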

I hope I could explain my problem. Any help is highly appreciated.

Answers

  • marc

    Hi @Robert,

    I'm not sure I 100% understand your question, but let me try to repeat it back to you. You have some microservices which can be triggered both by REST and messaging endpoints. You are having to scale them up because of REST traffic, but you don't need to scale the message listeners. You also prefer not to scale the message listeners b/c you don't want to hit your max client connections per client username. Is that correct?

    If yes, then I think you have a few options:

    1. Separate the microservices to have 1 listening for REST requests and another for messages. You said you want to avoid this but it is a good option.
    2. Add some logic into your app where only the first X microservices listen for messaging and any that scale up after that only listen for REST requests. Hopefully a flag could be passed in as an environment variable via Kubernetes to drive this logic, e.g. an env variable like messaging.enabled set to true or false. I think this is possible, but I'm honestly not sure. (A rough sketch of options 2 and 3 follows below this list.)
    3. Trust your broker. Have a client-username that is dedicated to your app and set the max client connections to the max number of microservices you want connected. Then, as your app scales beyond that number, don't have it fail on startup if it can't connect due to max client connections; instead, catch the exception, log it gracefully, and skip listening for messages while continuing to process REST requests.
    4. Something along the lines of what you are suggesting, where you have another app that is a "manager" of this deployment and your apps check in when they start up to see if they should listen for messages or not.
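
    A rough sketch of how options 2 and 3 could look together with the JCSMP API. The MESSAGING_ENABLED flag and the SOLACE_* environment variables are made-up placeholders, and a real implementation would probably inspect the error subcode rather than treating every connect failure the same way:

        import com.solacesystems.jcsmp.JCSMPException;
        import com.solacesystems.jcsmp.JCSMPFactory;
        import com.solacesystems.jcsmp.JCSMPProperties;
        import com.solacesystems.jcsmp.JCSMPSession;

        public class OptionalListenerStartup {

            // Option 2: gate the listener on a flag injected by Kubernetes.
            static boolean messagingEnabled() {
                return Boolean.parseBoolean(
                        System.getenv().getOrDefault("MESSAGING_ENABLED", "false"));
            }

            // Option 3: try to connect, but degrade gracefully instead of failing startup.
            static JCSMPSession tryConnect() {
                JCSMPProperties props = new JCSMPProperties();
                props.setProperty(JCSMPProperties.HOST, System.getenv("SOLACE_HOST"));
                props.setProperty(JCSMPProperties.VPN_NAME, System.getenv("SOLACE_VPN"));
                props.setProperty(JCSMPProperties.USERNAME, System.getenv("SOLACE_USER"));
                props.setProperty(JCSMPProperties.PASSWORD, System.getenv("SOLACE_PASSWORD"));
                try {
                    JCSMPSession session = JCSMPFactory.onlyInstance().createSession(props);
                    session.connect();
                    return session;
                } catch (JCSMPException e) {
                    // If the broker rejects the connection (e.g. the per-client-username
                    // limit is reached), log it and keep serving REST traffic only.
                    System.err.println("Messaging disabled for this instance: " + e.getMessage());
                    return null;
                }
            }

            public static void main(String[] args) {
                JCSMPSession session = messagingEnabled() ? tryConnect() : null;
                if (session == null) {
                    System.out.println("Running in REST-only mode.");
                }
                // ... start the REST server and, if session != null, register the listeners ...
            }
        }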

    Hope that gives you some ideas. Let us know what you end up doing!

  • Tamimi

    Thinking out loud here, would leveraging a Solace RDP (REST Delivery Point) help in architecting a solution for this use case? @Robert do you know (or have you checked out) Solace RDPs?

  • Robert

    @Tamimi yes, I am aware of RDPs and we use them in some cases. We use them for calling serverless components which normally must be pushed to wake up. You can control the level of parallel threads there (max 50). I was just not happy about the fairly low limits on RDPs. But maybe that will change.

    I think it was 200 per broker. That sounds like a lot, but not if you have many microservices and you know that one published message can have 100s of subscribers. ;-)

    @marc thanks for sharing your thoughts. I guess when using a listener, one of these options is the way to go: separation or, as you stated, controlling the start of the listeners, e.g. with a cache count (Redis) so it also works across containers. A rough sketch of that counter idea is below.
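
    Sketching the cache-count idea with Jedis as the Redis client (the key name, the cap of 5 listeners, and the class are all invented for illustration; any Redis client would do):

        import redis.clients.jedis.Jedis;

        // Use an atomic Redis counter so that only the first N instances across
        // all containers start a message listener; the rest run REST-only.
        public class ListenerSlotClaim {

            private static final String COUNTER_KEY = "item-change-listener-count"; // hypothetical key
            private static final int MAX_LISTENERS = 5;                             // hypothetical cap

            // Returns true if this instance may start its listener.
            public static boolean claimSlot(Jedis redis) {
                long count = redis.incr(COUNTER_KEY);
                if (count > MAX_LISTENERS) {
                    redis.decr(COUNTER_KEY); // give the slot back and run REST-only
                    return false;
                }
                return true;
            }

            // Call on graceful shutdown so the slot becomes free again.
            public static void releaseSlot(Jedis redis) {
                redis.decr(COUNTER_KEY);
            }
        }

    One thing to watch out for: an instance that crashes without calling releaseSlot would leak its slot, so some kind of TTL or heartbeat on the counter would be needed in practice.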