.NET best practices for lifetimes/instancing of context/session?
We are looking to add Solace support for publishing/receiving from Solace Topics & Queues to our existing .NET C# codebase that already handles MSMQ and MQ. One thing that I have not come across in the documentation/examples is how we should be instancing the IContext and ISession - the code examples simply create one of every object to send/receive a single message and then exit, which isn't quite a "real world" example.
For instance, with MQ, if we wanted to listen to four queues we would create four threads, each listening to its specific queue and processing its messages. If we were to apply this same pattern to Solace, the question becomes how we should be instancing/sharing across those threads. Should we have a single IContext for our executable process, shared by all listening and publishing threads? Within that, should we have a single ISession shared by all, or should each listening thread create its own ISession? We also have other code doing publishing - should each thread publishing a message create an ISession and dispose of it after sending, or share a single ISession with the listening threads?
Suggestions or pointing me to something I missed would be much appreciated!
Best Answers
-
Hi @Kiwidude, thanks for your question - as a matter of fact @Aaron and I are working on improving the samples to better illustrate best practices. One of those is definitely to keep the session open and not close it after a single message!
In most cases a single context and session suffice. You'll only need more for very high performance use cases. This gives you one context thread - don't block it! You can then structure the application threads however you like. For instance, you could create separate structures to temporarily hold messages read from the context for each broker queue, and have a separate thread read from those structures.
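To make the "interim structures per broker queue, drained by separate threads" idea concrete, here is a minimal sketch in plain .NET. All type names here (`InboundMessage`, `PerQueueDispatcher`) are hypothetical stand-ins, not Solace API types - the only assumption is that your single context thread calls `OnMessage` and returns immediately:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical stand-in for a message received from the context thread.
public record InboundMessage(string QueueName, string Payload);

public sealed class PerQueueDispatcher : IDisposable
{
    // One bounded buffer per broker queue, so a slow consumer for one
    // queue cannot starve the others.
    private readonly Dictionary<string, BlockingCollection<InboundMessage>> _buffers = new();
    private readonly List<Task> _workers = new();

    public PerQueueDispatcher(IEnumerable<string> queueNames, Action<InboundMessage> handler)
    {
        foreach (var name in queueNames)
        {
            var buffer = new BlockingCollection<InboundMessage>(boundedCapacity: 1000);
            _buffers[name] = buffer;
            // One long-running worker per queue drains its own buffer.
            _workers.Add(Task.Factory.StartNew(() =>
            {
                foreach (var msg in buffer.GetConsumingEnumerable())
                    handler(msg);
            }, TaskCreationOptions.LongRunning));
        }
    }

    // Called from the single context thread: enqueue and return
    // immediately - never block the context thread.
    public void OnMessage(InboundMessage msg) => _buffers[msg.QueueName].Add(msg);

    public void Dispose()
    {
        foreach (var b in _buffers.Values) b.CompleteAdding();
        Task.WaitAll(_workers.ToArray());
    }
}
```

The bounded capacity gives you back-pressure: if one queue's worker falls behind, the context thread blocks on that buffer alone rather than accumulating unbounded memory.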
Some useful documentation links:
API Threading
C# Best Practices
A section on the context/session threading model
-
I definitely wouldn't mark it as answered until you have the information you need - otherwise I might lose interest!
As a best practice we definitely recommend the use of asynchronous processing, simply because the performance is so much better.
Are you aware of the concept of flows? These are the API objects that read messages from a queue. Each flow can have its own callback. Et voilà - you have queue-based message dispatch.
How do you deal with a failed database transaction? Well, this is where carefully handling acknowledgements comes into play. Ensure you have client acknowledgement configured (NOT auto-ack). Now you must call acknowledge explicitly for every message, and you should only do so once you've finished with the message - so if the database write hasn't completed yet, don't ack until it has. If the application fails before acknowledge is called, the messages will be re-sent with a redelivery flag set*. Keep the messages in the interim structures, but don't acknowledge them until you get the ack from the database. Remember that you must explicitly acknowledge the messages, or you'll exhaust the flow's window of unacknowledged messages and delivery will stall.
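The ack-only-after-the-database-commits discipline can be sketched independently of the Solace types. `IAckableFlow` below is a hypothetical stand-in for whatever flow object exposes the explicit client acknowledge call; the point is purely the ordering of events:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for a flow in client-ack mode.
public interface IAckableFlow { void Ack(long messageId); }

public sealed class DbAckCoordinator
{
    private readonly IAckableFlow _flow;
    // Messages handed to the database but not yet confirmed.
    private readonly HashSet<long> _pending = new();

    public DbAckCoordinator(IAckableFlow flow) => _flow = flow;

    // Called when a message is read from the flow. In client-ack mode
    // nothing is acknowledged automatically at this point.
    public void OnMessage(long messageId)
    {
        lock (_pending) _pending.Add(messageId);
        // ... hand the payload to the database write here ...
    }

    // Called only when the database transaction commits. Acking any
    // earlier would lose the message if the write failed.
    public void OnDatabaseCommit(long messageId)
    {
        bool known;
        lock (_pending) known = _pending.Remove(messageId);
        if (known) _flow.Ack(messageId);   // explicit client ack
    }

    // On a failed transaction we simply never ack; the broker
    // redelivers after the flow is rebound or the app restarts.
    public void OnDatabaseFailure(long messageId)
    {
        lock (_pending) _pending.Remove(messageId);
    }
}
```

This is what makes the interim-structure design "semi-transactional" again: a message sitting in a buffer when the service dies was never acked, so it is still on the queue from the broker's point of view.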
*Usually. There are certain edge-case conditions where we can't guarantee the redelivery flag is set, so it's best not to rely on it; treat it as a hint to speed up processing.
Answers
-
Sure - if you can tell me how? I'm looking at all the icons and not seeing anything that looks like it would mark that as an answer?
Still working through those documentation links and thinking about our scenarios and the existing code pattern that I am trying to align Solace with. With the current MQ-based design we effectively have x threads doing a synchronous read of their specific transactional queues, and if an error occurs on a thread while processing a message, the message is left on the queue. This offers protection against temporary network blips to the database during processing, for instance.

The Solace API seems a little "unexpected" from what I have seen so far, in that it is the Session that prescribes the event delegate for handling messages - the Topic or Queue subscribe methods would appear just to filter which messages that handler receives. So having only one session but trying to listen to multiple queues asynchronously would appear not to fit very nicely with our existing application design. That would then force us into a completely different design like the one you mention, with structures for each broker queue and threads reading from those - but that sort of design has its own problems and complications, particularly that at first glance we lose the ability to be semi-transactional. What if our service is stopped with messages sitting in the interim structures - they will be lost instead of just remaining on their queue. Likely I am missing something obvious...
-
I have a followup to this question. As far as I understand...
- flows are only available on the subscriber side, not the publisher side.
- publishing guaranteed messages will result in a session-level acknowledgement event
- This event can be used by the client to know that the message has been received by the broker
Suppose I am publishing guaranteed messages. As the publisher, I need to know which messages have been successfully received so I can update some kind of "high water mark" on my outgoing messages. I can evidently do this by processing the acknowledgement events. Now, it's not clear to me what that event quite means if using the Send(IMessage[]...) call, but I'll leave that to another question.
On the subject of this question, however, this seems to imply that publishing guaranteed messages would require one session per publisher. Otherwise, how would I know how to route the acknowledgement event back to the "correct" publisher? But the apparent need for multiple sessions seems to conflict with the best practice advice, and the advice here, of typically using a single session except in unusual circumstances. I wouldn't call having multiple different publications particularly unusual, so the apparent need for multiple sessions for guaranteed messages confuses me. I feel like I must have missed something somewhere.
-
@allmhhuran - it will be interesting to see the experts' advice on this, but here is what I did. My service class for the Solace context/session exposes an event that publishing classes can temporarily subscribe to when needed, and it delegates to any of those when it gets a session event. In the publisher code I set the CorrelationKey property of the IMessage being published. When the session event fires on the service and is passed through to my publisher event handler, the SessionEventArgs includes the same CorrelationKey property, so you can check whether it is the droid you are looking for. Seems like it would do the job for our low volumes, but I have yet to test it in anger.
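As a sketch of that routing idea in plain C# - the types here are stand-ins, assuming only that one session-event handler receives the same correlation key the publisher attached to the outgoing message - each publisher can register its key and await the result:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Routes per-message broker acknowledgements back to the publisher
// that sent the message, keyed by the correlation key set on the
// outgoing message. One router serves all publishers on one session.
public sealed class AckRouter
{
    private readonly ConcurrentDictionary<object, TaskCompletionSource<bool>> _inFlight = new();

    // A publisher registers its key just before sending, then awaits
    // the returned task to learn whether the broker accepted it.
    public Task<bool> Register(object correlationKey)
    {
        var tcs = new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);
        _inFlight[correlationKey] = tcs;
        return tcs.Task;
    }

    // The single session-event handler calls this with the key it
    // found on the event, completing the matching publisher's task.
    public void OnSessionEvent(object correlationKey, bool accepted)
    {
        if (_inFlight.TryRemove(correlationKey, out var tcs))
            tcs.TrySetResult(accepted);
    }
}
```

This keeps the single-session recommendation intact: many publishers share one session, and the correlation key - not a dedicated session - identifies whose message each acknowledgement belongs to.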
-
Ah yes, the CorrelationKey on the EventArgs was the bit I was missing. I have found that in one of the code samples now.
It sounds like you've already finished your build, but for what it's worth, I am using System.Threading.Tasks.Dataflow for my pipelines under the covers. I will probably be asking a question soon enough on this forum as to whether the acknowledgements always come back in the same order the messages were sent. If so, the CorrelationKey could be used to map messages back to a subscriber pipeline, and then a Dataflow JoinBlock could be used to very efficiently pair each ack with its original message, since Dataflow will also guarantee order. This eliminates the need for buffers and lookups when processing the acks.

Having said that, I'm not sure what I would do with a RejectedMessageError response, given a requirement to keep messages in order. If two messages are sent, and the first comes back as rejected while the second comes back as accepted, I suppose all one can do is drop the message - retransmitting at that point would break message ordering. Of course, this only applies if batch sending. If sending single messages you could always wait for the first ack before sending the second, and if the first is rejected, I suppose you just kill the pipeline entirely and raise the fault back to the generating application.
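The ordered-join idea can be sketched without the Dataflow package dependency: if (and only if - that's exactly the open question above) acknowledgements arrive in send order, pairing an ack with its original message needs no lookup at all, just a FIFO of in-flight messages. This is the same join a JoinBlock would perform:

```csharp
using System;
using System.Collections.Generic;

// Pairs each acknowledgement with the message it belongs to, relying
// purely on the assumption that acks arrive in send order. If that
// assumption does not hold, a CorrelationKey lookup is needed instead.
public sealed class OrderedAckJoiner<TMessage>
{
    private readonly Queue<TMessage> _inFlight = new();

    // Record a message at the moment it is handed to Send().
    public void OnSent(TMessage message)
    {
        lock (_inFlight) _inFlight.Enqueue(message);
    }

    // Each ack is matched to the oldest unacknowledged message.
    public TMessage OnAck()
    {
        lock (_inFlight) return _inFlight.Dequeue();
    }
}
```

The rejected-message dilemma shows up here too: `OnAck` dequeues regardless of accept/reject, so a rejection still consumes its slot in the order, which is consistent with dropping rather than retransmitting out of order.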