Spring Cloud Stream Batch Consumer and Batch Publisher using Function
Consumer
spring.cloud.stream.bindings.receiveMessage-in-0.consumer.batch-mode=true
spring.cloud.stream.solace.bindings.receiveMessage-in-0.consumer.batchMaxSize=255
spring.cloud.stream.solace.bindings.receiveMessage-in-0.consumer.batchTimeout=500
### Publish messages to Topic
spring.cloud.stream.bindings.receiveMessage-out-0.destination = <<Topic Name>>
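For reference, the properties above can be collected into one sketch of a properties file. The `spring.cloud.function.definition` line and the destination names are assumptions added for completeness, not from the original post:

```properties
# Expose the function bean as a binding (name must match the bean name)
spring.cloud.function.definition=receiveMessage

# Consume in batch mode (batchMaxSize/batchTimeout are Solace binder properties)
spring.cloud.stream.bindings.receiveMessage-in-0.destination=input/topic
spring.cloud.stream.bindings.receiveMessage-in-0.consumer.batch-mode=true
spring.cloud.stream.solace.bindings.receiveMessage-in-0.consumer.batchMaxSize=255
spring.cloud.stream.solace.bindings.receiveMessage-in-0.consumer.batchTimeout=500

# Publish to the downstream topic
spring.cloud.stream.bindings.receiveMessage-out-0.destination=output/topic
```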
1st Microservice:
Function<Message<List<String>>, List<//USER Defined Object -1//>> receiveMessage()
2nd Microservice:
public Function<List<//USER Defined Object -1//>, List<//USER Defined Object - 2//>> messageReceiver()
This microservice listening to the topic is also a function with batch-mode = true.
The issue: when batch-mode = true, the 2nd microservice receives the payload/input as bytes instead of the POJO.
Any inputs/suggestions please?
Best Answer
-
Hi @kirthi - If you want to receive messages as a batch in the 2nd microservice, you need to construct the messages as a batch (collection) and return them in the 1st microservice.
So the function signature of 1st microservice would be
Function<Message<List<String>>, List<Message<POJO>>> receiveMessage()
And the function signature of 2nd microservice would be
Function<Message<List<POJO>>, List<POJO>> messageReceiver()
This seems to do the job - if you have found anything, please share.
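A minimal, runnable sketch of the two function bodies implied by these signatures. It deliberately leaves Spring out: `Msg` is a stand-in for `org.springframework.messaging.Message`, and `POJO` is a stand-in for the user-defined object, so the batch-shaping logic can be run and checked on its own.

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class BatchFunctions {

    // Stand-in for org.springframework.messaging.Message, so the sketch
    // runs without Spring on the classpath.
    record Msg<T>(T payload) {}

    // Stand-in for the user-defined object passed between the services.
    record POJO(String value) {}

    // 1st microservice: wrap each element of the incoming batch in its own
    // message, so the binder publishes a batch of POJO messages that the
    // 2nd microservice can consume as List<POJO> instead of raw bytes.
    static Function<List<String>, List<Msg<POJO>>> receiveMessage() {
        return batch -> batch.stream()
                .map(s -> new Msg<>(new POJO(s)))
                .collect(Collectors.toList());
    }

    // 2nd microservice: receives the whole batch as one message payload.
    static Function<Msg<List<POJO>>, List<POJO>> messageReceiver() {
        return msg -> msg.payload();
    }

    public static void main(String[] args) {
        List<Msg<POJO>> out = receiveMessage().apply(List.of("a", "b"));
        List<POJO> pojos = out.stream().map(Msg::payload).collect(Collectors.toList());
        System.out.println(messageReceiver().apply(new Msg<>(pojos)).size()); // 2
    }
}
```

In the real services these functions would be registered as `@Bean` definitions and the payloads would arrive wrapped in `Message<...>` as shown in the signatures above.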
Answers
-
Thanks Giri.
In our microservice, we are using the Solace binder with SCS. During non-functional testing we found some performance differences between two variants. Could you please help us understand them?
1. service - Processor is used. Throughput is approx 200 TPS.
Function<Message<List<String>>, Collection<Message<SafeStoreObject>>> receiveMessage()
Output: The individual messages are placed in the destination topic/queue.
2. service - Processor is used. Throughput is approx 2000 TPS.
Function<Message<List<String>>, List<SafeStoreObject>> receiveMessage()
Output: The batch messages are combined into one List and placed in the topic/queue as a single message.
How can we improve the TPS? Only the publishing part takes significant time.
Is it because the acknowledgement of each message is taking a long time?
How can we fine-tune our service to achieve good TPS? Is there any property specifically for reducing the publisher's acknowledgement time?
-
Hi Giri,
I am adding a bit more context to the question above. Kirthi and I work together.
Context:
We are using a processor function which consumes and publishes in batches; the method signature is below. It publishes each message in the batch as an individual message. The TPS observed is around 150.
Function<Message<List<String>>, Collection<Message<POJO>>> receiveMessage()
Issue:
Batch publishing is very slow. For a batch size of 255, it takes more than a second to publish.
Observation:
If the method signature is updated as below, the entire batch is published as a single message. The TPS observed is around 1000. There is a huge performance difference.
Function<Message<List<String>>, Collection<POJO>> receiveMessage()
Questions:
We are building a service to handle around 2000 TPS. The service is very simple: consume, perform a minor transformation, and publish. Can you please suggest config options to optimise publishing? With batch publishing, is there any config to avoid the acknowledgement round trip for each message?
Thanks
-
Hi @kirthi @Tilak - Unfortunately, this is a limitation: each message in the batch is published individually. Based on your need and observation, it looks like the Collection<POJO> approach might help you.
I will check on other possibilities to boost publish performance on the processor; if I come across any suggestions, I will keep you updated.