Are there any Spring samples that deal with the partition, offset and message grouping concepts in Solace?
Hi, I am new to Solace. Kindly let me know: is there a partition concept for when the payload size exceeds the message spool size?
Hi Maniv! These sound very much like Kafka questions. What are you trying to achieve? Are you looking for a way to send very large messages? Or how to load-balance among multiple consumers? Or store a large number of messages..? Let us know, thanks!
@Aaron, yes, I am trying to correlate those activities.
I have a large message which needs to be chunked with sequence numbers, and then regrouped when consuming it.
Let me know if the above statement is not clear. Looking forward to your great help.
Hi @maniv, the first thing I'd say is "how large is large?" If you can get away without chunking, your life will be very much simpler. If you are looking to persist messages (i.e. you can't tolerate any loss) then the largest message PubSub+ will accept is 30MB. For direct messages (you can afford to lose messages, for instance if the subscriber is off-line) then the limit is 64MB.
That said, there might be a case where you can't guarantee what the size of the message will be. In that case the application will need to chunk the message, just as it would with Kafka if it exceeded the Kafka message.max.bytes configuration. Here there are a couple of things to remember about PubSub+ vs. Kafka:
PubSub+ doesn't have the concept of topic partitions. Simply send everything on a (dynamic, hierarchical) topic. Because the topics are hierarchical, you could even create an id for this chunked message and use that as the topic - then there is no way chunked messages from multiple publishers will get interleaved.
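To illustrate, a minimal sketch of such a topic scheme (the `orders/chunked/<id>/<seq>` hierarchy is just an assumed naming convention, not anything mandated by Solace - the actual publish would go through the Solace messaging API):

```java
import java.util.UUID;

public class ChunkTopic {
    // Build a hierarchical topic: one unique id per chunked message,
    // with the chunk sequence number as the last topic level.
    static String chunkTopic(String root, String messageId, int seq) {
        return root + "/chunked/" + messageId + "/" + seq;
    }

    public static void main(String[] args) {
        // One id per logical (pre-chunking) message keeps publishers
        // from interleaving on the same topic.
        String id = UUID.randomUUID().toString();
        for (int seq = 0; seq < 3; seq++) {
            System.out.println(chunkTopic("orders", id, seq));
        }
    }
}
```

A consumer could then subscribe to `orders/chunked/>` (a Solace wildcard subscription) and group chunks by the id level of the topic.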
Solace guarantees the ordering of messages on a topic at ingress to the broker. With the above, this buys you a lot of what you need for free.
That said, if you still need sequence IDs, just add them in a message header.
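For illustration, a sketch of the chunk-and-reassemble logic itself, independent of any messaging API (in a real application the `seq` key would travel as a message header or user property on each chunk - the map here just stands in for what the consumer collects):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class Chunker {
    // Split a payload into fixed-size chunks; the list index is the
    // sequence number the publisher would attach to each chunk.
    static List<byte[]> chunk(byte[] payload, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < payload.length; off += chunkSize) {
            chunks.add(Arrays.copyOfRange(payload, off,
                    Math.min(off + chunkSize, payload.length)));
        }
        return chunks;
    }

    // Reassemble from chunks keyed by sequence number; a TreeMap
    // restores order even if chunks arrived out of sequence.
    static byte[] reassemble(SortedMap<Integer, byte[]> bySeq) {
        int total = bySeq.values().stream().mapToInt(b -> b.length).sum();
        byte[] out = new byte[total];
        int pos = 0;
        for (byte[] c : bySeq.values()) {
            System.arraycopy(c, 0, out, pos, c.length);
            pos += c.length;
        }
        return out;
    }
}
```

The consumer would buffer chunks in the map until it has seen the expected count (which could itself ride on a header of the first chunk), then reassemble.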
If you need load balancing for the subscribers, you can shard messages based on the topic.
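One simple way to shard (again just a sketch under assumed names - the shard number becomes one level of the topic, and each consumer subscribes to its own shard with a wildcard such as `orders/shard/3/>`):

```java
public class TopicShard {
    // Map a message key deterministically to one of N shards.
    // floorMod keeps the result non-negative even for negative hashes.
    static int shard(String key, int numShards) {
        return Math.floorMod(key.hashCode(), numShards);
    }

    // Embed the shard number as a topic level so each consumer
    // can subscribe to exactly one shard.
    static String shardTopic(String root, String key, int numShards) {
        return root + "/shard/" + shard(key, numShards);
    }
}
```

Because the shard is derived from the key, all messages for a given key land on the same consumer, preserving per-key ordering.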
@TomF thanks, correct me if I am wrong.