
Solace Python API for request/reply architecture

prasanna Member Posts: 7

Does Solace support a request/reply architecture? If yes, can I have sample request code (client side) and response code (server side) in Python?

Here is my scenario; I am new to the Solace world.

Right now we are using TIBCO messaging queues, and we need our process running on TIBCO to be converted to Solace.

Our current logic is a request/response (client/server) model. We are on the server side, responding to all the client applications that post requests to the TIBCO queue: all of our client systems (nearly 10 clients) post their requests to the queue, and we have a single piece of code running in a Linux terminal that keeps receiving all the requests, processes each one (some data-ticker logic), and finally sends the results back as responses.

The above needs to be rewritten in Python, accessing a Solace queue.

Could someone guide me here?


If no Python API is available right now for request and response, please suggest how to achieve this in Python using the current PubSub+ API.

Comments

  • Tamimi Member, Administrator, Employee Posts: 541 admin

    Hey @prasanna, request/reply is indeed supported in the new Python API! We are currently putting together a consumable sample to showcase how you can leverage request/reply, and I can get back to you on it this week. The latest Python API documentation can be found on the Solace Docs site, and the sample will be published in the github.com/solaceSamples Python GitHub repo 👍

  • prasanna Member Posts: 7

    @Tamimi

    Thank you Tamimi -

    My need is the two samples below (a rough sketch of the intent follows the list):

    1. sample_client.py <argv[1]> - this client script should submit argv[1] as a request to the queue.
    2. sample_server.py - keeps running in the Linux terminal, watching for requests. Whenever a request arrives on the queue, it should pick it up and send back the response value for the argv[1] passed in the client request, i.e. I have logic to populate dic{argv[1]: value}.
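    Roughly, the non-Solace parts of those two scripts would look like the sketch below (lookup_table and the keys/values are illustrative placeholders; the Solace publish/receive calls are what I still need):

    import sys

    # sample_client.py (sketch): take the request key from the command line
    if len(sys.argv) < 2:
        sys.exit('usage: sample_client.py <request-key>')
    request_key = sys.argv[1]
    # ... submit request_key as the request payload and wait for the response ...

    # sample_server.py (sketch): map each requested key to its response value
    lookup_table = {'KEY_A': 'value-a', 'KEY_B': 'value-b'}  # illustrative data only

    def build_response(request_payload: str) -> str:
        # Return the value for the requested key, or a marker if the key is unknown
        return lookup_table.get(request_payload, f'no value for {request_payload}')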
  • prasanna Member Posts: 7
    edited February 2022

    @Tamimi

    Please suggest what type of queue we should create for the case below.

    We have nearly 10 clients posting requests to the queue. In our case, all 10 clients will be active at the same time, and every response must be sent back correctly to the correct requestor.

    The load level is 1000 messages/minute per client.

    Should it be (1) a point-to-point model (a single queue for both request and response),

    or (2) queue-subscription mapping?

  • prasanna Member Posts: 7

    @Tamimi

    Any updates on the samples you mentioned above?

  • Tamimi Member, Administrator, Employee Posts: 541 admin
    edited February 2022

    Hey @prasanna, while we finish up the request/reply samples, you can take a stab at it as follows.
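    Both snippets below assume a messaging_service that is already built and connected. A minimal setup sketch (the host, message VPN, and credentials here are placeholders for your own broker details):

    from solace.messaging.messaging_service import MessagingService

    # Placeholder broker properties -- replace with your own host, message VPN, and credentials
    broker_props = {
        "solace.messaging.transport.host": "tcp://localhost:55555",
        "solace.messaging.service.vpn-name": "default",
        "solace.messaging.authentication.scheme.basic.username": "default",
        "solace.messaging.authentication.scheme.basic.password": "default",
    }

    messaging_service = MessagingService.builder().from_properties(broker_props).build()
    messaging_service.connect()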

    Client (i.e. requestor)

    from solace.messaging.publisher.request_reply_message_publisher import RequestReplyMessagePublisher
    from solace.messaging.resources.topic import Topic

    # Topic the request is published to; it must match the replier's topic subscription
    destination_topic = Topic.of('the/destination/topic')

    # Create a direct message requestor
    direct_requestor: RequestReplyMessagePublisher = messaging_service.request_reply() \
                            .create_request_reply_message_publisher_builder() \
                            .build()

    direct_requestor.start()

    # Build the request payload (replace with your own message body)
    outbound_msg = messaging_service.message_builder().build('insert_your_message_here')

    # Publish the request and block until the reply arrives or the timeout (in ms) expires
    response = direct_requestor.publish_await_response(request_message=outbound_msg, \
                                                       request_destination=destination_topic, \
                                                       reply_timeout=10000)

    Server (i.e. replier)

    import time

    from solace.messaging.receiver.request_reply_message_receiver import RequestMessageHandler, InboundMessage, Replier, RequestReplyMessageReceiver
    from solace.messaging.resources.topic_subscription import TopicSubscription

    # Handle received requests and send a reply back to the requestor
    class RequestMessageHandlerImpl(RequestMessageHandler):
        def on_message(self, request: InboundMessage, replier: Replier):
            print(f'Received request (body): {request.get_payload_as_string()}')
            if replier is not None:
                # Build the reply and send it back to the requestor
                outbound_msg = messaging_service.message_builder().build('Response message here')
                replier.reply(outbound_msg)
            else:
                print('replier is None')

    request_topic = "the/destination/topic"
    direct_replier: RequestReplyMessageReceiver = messaging_service.request_reply() \
                            .create_request_reply_message_receiver_builder() \
                            .build(TopicSubscription.of(request_topic))

    direct_replier.start()
    try:
        # Register the asynchronous callback for received requests
        direct_replier.receive_async(RequestMessageHandlerImpl())
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            print('\nDisconnecting Messaging Service')
    finally:
        print('\nTerminating receiver')
        direct_replier.terminate()
        print('\nDisconnecting Messaging Service')
        messaging_service.disconnect()
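    One note on how the pieces fit together: publish_await_response blocks until the reply arrives or the reply_timeout (10 seconds here) expires, and Replier.reply() sends the response back on the reply-to destination carried by each individual request, so each requestor should receive only its own response even when several clients publish to the same request topic.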
    
  • prasanna Member Posts: 7

    Thanks @Tamimi for the API code reference.

    Can you suggest which type of queue or topic is suitable for me?

    We have nearly 10 clients posting requests to the queue. In our case, all 10 clients will be active at the same time, and every response must be sent back correctly to the correct requestor.

    The load level is 1000 messages/minute per client.

    Please suggest which type of queue is advisable for me:

    1. Point-to-point (separate queues for request and response; two queues need to be created)

    2. Exclusive queue

    3. Topics - request/reply with guaranteed messages over a topic

  • himanshu Member, Employee Posts: 67 Solace Employee

    Hi @prasanna,

    Looks like there is significant rearchitecture involved in the bigger project you are working on. While the core ask of implementing request/reply is not that complicated, the bigger project of a TIBCO-to-Solace migration requires taking a step back and evaluating the overall target architecture.

    For example, while you might have been doing something a certain way with TIBCO, does it still make sense to do it exactly the same way with Solace? What should the topic structure look like? How should the downstream consumers leverage queues, topic-to-queue mapping, wildcards, etc. to take full advantage of Solace's features?

    If you are an existing customer of Solace, I recommend getting the Solace account team involved to help you tackle this project.

  • prasanna Member Posts: 7

    Thank you @himanshu

    Let me check with our internal Solace admin team as well.

  • prasanna Member Posts: 7

    @Tamimi - thanks for the client and server model above; based on your reference, I have achieved what I needed.

    This discussion has answered my needs.

    Thank you

  • Tamimi Member, Administrator, Employee Posts: 541 admin

    Great to hear that!