Transmission time for messages (Solace C API)

chaudharys (Member, Posts: 25)

Hi,
We are using the Solace C API to benchmark the communication between our applications.
We noticed something very unusual: in some runs the transmission time increased as the amount of data grew, while in other runs it decreased for larger messages.
What is the reason for this unusual behavior?

Comments

  • TomF (Solace Employee, Posts: 412)

    Hi @chaudharys, that's an interesting question. There are a lot of variables here though: throughput (bandwidth), throughput (msgs/sec), and latency. Then there's your network setup, how the hosts are set up, whether there's virtualisation...

    In general, you should expect message latency to increase and "msgs/sec" throughput to decrease with increasing message size, which is common sense: at some point you'll begin to saturate the bandwidth of some component. At smaller message sizes, the message processing capability of the broker dominates.
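
    As a back-of-the-envelope illustration (plain C; the 1 Gbit/s link speed is purely an assumption and protocol overhead is ignored), the time a message spends on the wire grows linearly with its size, so the per-message wire cost quickly overtakes the broker's per-message processing cost:

        #include <stdio.h>

        int main(void)
        {
            /* Assumed link speed: 1 Gbit/s = 125 MB/s; ignores TCP/TLS and protocol overhead. */
            const double link_bytes_per_sec = 125.0e6;
            const double sizes_mb[] = { 0.001, 1.0, 10.0, 64.0 };

            for (int i = 0; i < 4; i++) {
                double bytes = sizes_mb[i] * 1.0e6;
                /* Wire-serialization delay = payload size / link bandwidth. */
                printf("%8.3f MB -> %8.3f ms on the wire\n",
                       sizes_mb[i], 1000.0 * bytes / link_bytes_per_sec);
            }
            return 0;
        }

    With those assumed numbers, a 1 KB message spends about 0.008 ms on the wire while a 64 MB message spends roughly half a second, before any broker or O/S effects are counted.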

    There is a second order effect though, especially on virtualised systems: how "hot" is the application on the host? This applies to both the broker and the sending/receiving application, and refers to whether the application is in memory, in the CPU cache, etc. If the application isn't busy, the operating system or hypervisor will "swap out" your application and "swap in" something busier - even if that is just a system update task.

    The only way to deal with this is to keep your applications busy. Make sure they are sending a constant stream of messages, and only start measuring performance some time, say half a second, after the message stream starts, to allow the O/S scheduling to stabilise.
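
    In code, the pattern looks something like this (a sketch only: it assumes a session_p already created and connected with the usual solClient boilerplate, and the topic name and window lengths are placeholders):

        #include <stdio.h>
        #include <time.h>
        #include "solclient/solClient.h"
        #include "solclient/solClientMsg.h"

        static double now_sec(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec + ts.tv_nsec / 1e9;
        }

        /* Send a constant stream: discard the first ~0.5 s, then measure. */
        void warm_then_measure(solClient_opaqueSession_pt session_p,
                               const char *payload, size_t payloadLen)
        {
            solClient_opaqueMsg_pt msg_p;
            solClient_destination_t dest = { SOLCLIENT_TOPIC_DESTINATION, "bench/topic" };

            solClient_msg_alloc(&msg_p);
            solClient_msg_setDeliveryMode(msg_p, SOLCLIENT_DELIVERY_MODE_DIRECT);
            solClient_msg_setDestination(msg_p, &dest, sizeof(dest));
            solClient_msg_setBinaryAttachment(msg_p, payload, (solClient_uint32_t)payloadLen);

            /* Warm-up: keep the message path hot, but throw these timings away. */
            double start = now_sec();
            while (now_sec() - start < 0.5) {
                solClient_session_sendMsg(session_p, msg_p);
            }

            /* Measured window: same stream, now record the rate. */
            long sent = 0;
            start = now_sec();
            while (now_sec() - start < 2.0) {
                if (solClient_session_sendMsg(session_p, msg_p) == SOLCLIENT_OK) {
                    sent++;
                }
            }
            printf("%zu-byte msgs: %.0f msgs/s in the measured window\n",
                   payloadLen, sent / 2.0);

            solClient_msg_free(&msg_p);
        }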

  • chaudharys (Member, Posts: 25)

    Thanks for your quick response, Tom. A few points for our clarification:
    1. Shouldn't the message latency decrease with increasing message size only if it's being cached inside the broker infrastructure?
    2. We are ensuring that we send unique messages each time, so ideally they shouldn't be cached, as they are not repetitive in nature.
    3. We have the clients (pub and sub) running on laptops and connect to a Solace Cloud broker for transmission. We are sending Direct messages both in a loop and all at once, with variations in the number of topics and number of subscribers (see the sketch after point 4). We were expecting a linear/exponential pattern for this experiment, i.e. a linear increase in transmission time as the message size increases, and that is what we observed in some instances. But as we approach the per-message limit (i.e. 64MB) we see the transmission time dip. The stats are part of the session properties and the message header, which is a Solace feature. The Solace broker's monitoring service should ideally have filled in the TX stats and timestamps as soon as the message was sent/received by it into a topic/queue. So why should the applications be kept active/busy after their jobs are done?
    4. We have some constraints where we need to measure the NFRs without any wait. We are sending messages of 10-70MB for now and see the dip towards 63MB. Is this an expected pattern?
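
    For reference, a minimal sketch of the kind of size sweep in point 3 (it assumes an already-connected session_p and a placeholder topic; the real harness also varies the number of topics and subscribers):

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>
        #include "solclient/solClient.h"
        #include "solclient/solClientMsg.h"

        /* Sweep Direct-message sizes from 10 MB up to the 64 MB limit and time
         * each publish call. Note that for Direct messages, sendMsg() returning
         * only means the API has accepted the data into its buffers, not that
         * the broker or a subscriber has received it. */
        void size_sweep(solClient_opaqueSession_pt session_p)
        {
            solClient_destination_t dest = { SOLCLIENT_TOPIC_DESTINATION, "bench/sweep" };

            for (size_t mb = 10; mb <= 64; mb += 9) {        /* 10, 19, ... 64 MB */
                size_t len = mb * 1024 * 1024;
                char *payload = calloc(1, len);              /* zero-filled placeholder payload */
                solClient_opaqueMsg_pt msg_p;

                solClient_msg_alloc(&msg_p);
                solClient_msg_setDeliveryMode(msg_p, SOLCLIENT_DELIVERY_MODE_DIRECT);
                solClient_msg_setDestination(msg_p, &dest, sizeof(dest));
                solClient_msg_setBinaryAttachment(msg_p, payload, (solClient_uint32_t)len);

                struct timespec t0, t1;
                clock_gettime(CLOCK_MONOTONIC, &t0);
                solClient_session_sendMsg(session_p, msg_p);
                clock_gettime(CLOCK_MONOTONIC, &t1);

                double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                            (t1.tv_nsec - t0.tv_nsec) / 1e6;
                printf("%3zu MB: sendMsg() took %.2f ms\n", mb, ms);

                solClient_msg_free(&msg_p);
                free(payload);
            }
        }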

  • TomF (Solace Employee, Posts: 412)

    Hey @chaudharys, we're getting into the details pretty quickly here!
    1 and 2. When I said "cached" I wasn't referring to the message itself - I was talking about the broker itself being cached in the CPU instruction cache (amongst other effects), along with all the associated VM and/or O/S components. This keeps the message path "hot."
    3. When you say "transmission time dip" for large messages, I assume you mean the transmission time gets longer by more than the linear increase you'd expect given the extra bandwidth consumed.
    4. Most cloud infrastructure has the concept of "burst credits", where you can consume more of a particular resource, say network bandwidth, for a period of time; when the credits expire, the resource is throttled. Is it possible you're seeing this? One thing you could try: run your test with increasing message size, and as soon as you see the message rate decrease, immediately reduce the message size and restart the test. If the message rate is still reduced, you've run out of some resource credit, probably either network or CPU.
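
    In rough outline, that check looks like this (measure_rate() here is a hypothetical stand-in for your existing benchmark loop, and the 20% threshold is arbitrary):

        #include <stddef.h>
        #include <stdio.h>

        double measure_rate(size_t msg_bytes);   /* hypothetical: returns msgs/sec for one run */

        void burst_credit_check(void)
        {
            const size_t small = 10u * 1024 * 1024;
            double baseline = measure_rate(small);

            for (size_t sz = small; sz <= 64u * 1024 * 1024; sz += 6u * 1024 * 1024) {
                double rate = measure_rate(sz);
                /* With a fixed bandwidth budget, msgs/sec should scale as small/sz. */
                double expected = baseline * (double)small / (double)sz;

                if (rate < 0.8 * expected) {     /* rate fell well below the linear expectation */
                    /* Immediately drop back to the small size and re-test. */
                    double retest = measure_rate(small);
                    if (retest < 0.8 * baseline) {
                        printf("Small messages are now slow too: some resource credit "
                               "(network or CPU) has likely been exhausted.\n");
                    } else {
                        printf("Small messages recovered: the slowdown tracks message "
                               "size rather than a depleted credit.\n");
                    }
                    return;
                }
            }
            printf("No rate collapse observed up to 64 MB.\n");
        }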