OperationError: Cannot send message - no space in transport.
Hi guys
We are building a browser-based FX trading platform. The platform is fully event-driven, and all communication goes via a Solace software broker hosted in AWS.
One capability we have in place on the front end is that when your connection drops (while the Solace connection retries), we persist all telemetry data locally (IndexedDB). Once the connection is re-established, we start draining the locally persisted telemetry data, pushing it to a Solace persistent queue. This is key for accuracy, especially when shipping metrics.
Once we start draining the persisted 'offline' data, we start running into errors - 'OperationError: Cannot send message - no space in transport.'
According to the documentation, it seems the internal buffer of the transport layer has filled up and cannot accept more data until some of it is transmitted and acknowledged by the receiver.
What are my options here? What's an elegant way to handle this?
Can or should I increase the internal buffer size? Perhaps I can implement some retry logic? Perhaps I need to batch messages?
Thanks
Shaun
Best Answer
Hello Shaun,
There are a number of approaches to address this. First and foremost, you could increase the windowSize session property; however, that maxes out at 255.

Detect and Respond

One approach is to catch the error when the API can no longer accept data, and then subscribe to the CAN_ACCEPT_DATA session event before continuing:

const doSend = () => {
  try {
    producer.log(`Starting send at: ${sentCount}`);
    while (sentCount < producer.numOfMessages) {
      const sequenceNr = sentCount + 1;
      const message = producer.buildMessage(sequenceNr);
      producer.session.send(message);
      sentCount = sequenceNr; // advance only after a successful send
    }
  } catch (error) {
    producer.log(`Aborting send at: ${sentCount}`);
    producer.log(error.toString());
    producer.session.once(solace.SessionEventCode.CAN_ACCEPT_DATA, () => {
      doSend();
    });
  }
};
doSend();

This was recently discussed on another question, and a PR demonstrating this capability in JavaScript has been made (but not yet merged at the time of this writing) here:
Prevent Buffer Overrun
Another approach, which I recently shared on a similar question, is to use wrappers around the main Solace SDKs to make them async-awaitable, and only send new data when the underlying buffer can accept it.
A full description of this approach can be found here:
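For illustration, here is a minimal sketch of that wrapper pattern. The session below is a mock standing in for a real solclientjs session; only the send/once shape and the CAN_ACCEPT_DATA event code are borrowed from the real API, and the helper names are hypothetical:

```javascript
// Stands in for solace.SessionEventCode.CAN_ACCEPT_DATA.
const CAN_ACCEPT_DATA = 'CAN_ACCEPT_DATA';

// Wrap a session so each send resolves once the transport accepts the
// message, retrying after CAN_ACCEPT_DATA whenever the buffer is full.
function makeAwaitableSender(session) {
  return function sendAsync(message) {
    return new Promise((resolve) => {
      const attempt = () => {
        try {
          session.send(message);
          resolve();
        } catch (err) {
          // Buffer full: wait for the transport to drain, then retry.
          session.once(CAN_ACCEPT_DATA, attempt);
        }
      };
      attempt();
    });
  };
}

// --- tiny mock session, only to demonstrate the pattern ---
function mockSession(capacity) {
  const listeners = [];
  let inFlight = 0;
  return {
    sent: [],
    send(msg) {
      if (inFlight >= capacity) throw new Error('no space in transport');
      inFlight += 1;
      this.sent.push(msg);
      // Simulate a broker ack draining the transport asynchronously.
      setTimeout(() => {
        inFlight -= 1;
        const listener = listeners.shift();
        if (listener) listener();
      }, 0);
    },
    once(event, cb) { listeners.push(cb); },
  };
}

// Drain five "persisted" telemetry records through a capacity-2 transport.
async function drain() {
  const session = mockSession(2);
  const sendAsync = makeAwaitableSender(session);
  for (let i = 1; i <= 5; i++) {
    await sendAsync(`telemetry-${i}`);
  }
  return session.sent;
}
```

With a real session you would swap the mock for your solclientjs session object; the awaiting loop then naturally back-pressures the IndexedDB drain.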
Answers
Thanks Nick, very helpful.
Hi Shaun,
My understanding of your use case is that you have a database of telemetry events that has built up while disconnected from Solace, and you’re using a Solace queue to persist these events in a guaranteed manner on the broker, in order, from the database. You want to make sure none of these events are lost, and you’re seeing the transport fill up as you stream them across to the broker.
It’s worth understanding that in this context the “transport” is the GM (Guaranteed Messaging) transport. It’s also important to understand that there are 2 layers to this transport:
- The “transport” layer of the GM transport (see what I did there?). This is largely hidden from you and is run by the API. As messages are sent from the API to the broker, the broker returns acknowledgements to the API. The API then considers these messages dealt with and finished at the transport layer (please read on).
- The application layer of the GM transport. Once the transport acknowledgement is received from the broker, the acknowledgement is passed to the application layer via the SessionEvent ACKNOWLEDGED_MESSAGE. Only once this is received should the application consider the message sent and free memory etc.
Governing all of this is the GM “Publisher Window Size.” This dictates how many GM messages can be in flight between the API and the broker. Once the API has sent this many messages to the broker without receiving transport acks for them, the transport is considered full. That’s the point at which you get the “no space in transport” error.
As Nick said, you should fill the window and wait for the CAN_ACCEPT_DATA event to start filling the window again.
You can also expand the window to accept more messages, simply by increasing the Publisher Window Size up to 255. This can make the problem seem to go away, since the extra time taken to fill the window can allow the broker acks to start coming back, but I’d recommend still reacting to the no-space error and the CAN_ACCEPT_DATA event, since a change in the processing time of your code or in the network RTT to the broker could mean you still see the effect.
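As a sketch of how the window might be widened at session creation (the connection details below are placeholders; the property names follow the solclientjs documentation):

```javascript
// Sketch: widening the Guaranteed Messaging publish window when creating
// a solclientjs session. All connection details here are placeholders.
const properties = new solace.SessionProperties({
  url: 'wss://broker.example.com:443', // placeholder
  vpnName: 'default',                  // placeholder
  userName: 'user',                    // placeholder
  password: 'secret',                  // placeholder
  publisherProperties: {
    windowSize: 255, // maximum allowed publish window
  },
});
const session = solace.SolclientFactory.createSession(properties);
```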
Another side effect of increasing the window size is dealing with rejected messages (via REJECTED_MESSAGE_ERROR) or the lack of an ack. Let’s say you transmit 100 messages, and the broker queue fills at message 75. Now let’s say you are draining the queue with another app, and the queue becomes empty enough to accept messages 80–100. You now have a gap in the message stream from 75–79, and out-of-order messages from 80–100. As you increase the size of the window, the chance of these kinds of conditions increases. What this means is that you need to handle these cases: if you need strict ordering, increasing the window size may not be the right thing to do.
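To handle those cases, the application layer can track every in-flight message by a correlation key and only treat a record as durable (e.g. safe to delete from IndexedDB) once the broker acks it. A minimal sketch, with a mocked session (the event names mirror solace.SessionEventCode; the tracker helper is hypothetical):

```javascript
// Stand-ins for solace.SessionEventCode values.
const ACKNOWLEDGED_MESSAGE = 'ACKNOWLEDGED_MESSAGE';
const REJECTED_MESSAGE_ERROR = 'REJECTED_MESSAGE_ERROR';

// Track in-flight messages by correlation key; invoke onDurable only
// after the broker ack, onRejected when the broker rejects the message.
function makeAckTracker(session, onDurable, onRejected) {
  const inFlight = new Map();
  session.on(ACKNOWLEDGED_MESSAGE, (event) => {
    const record = inFlight.get(event.correlationKey);
    if (record) { inFlight.delete(event.correlationKey); onDurable(record); }
  });
  session.on(REJECTED_MESSAGE_ERROR, (event) => {
    const record = inFlight.get(event.correlationKey);
    if (record) { inFlight.delete(event.correlationKey); onRejected(record); }
  });
  return {
    track(key, record) { inFlight.set(key, record); },
    pending() { return inFlight.size; },
  };
}

// --- mock session, only to demonstrate the flow ---
function mockAckSession() {
  const handlers = {};
  return {
    on(event, cb) { handlers[event] = cb; },
    emit(event, payload) { handlers[event](payload); },
  };
}

const session = mockAckSession();
const durable = [];
const rejected = [];
const tracker = makeAckTracker(
  session,
  (r) => durable.push(r),   // ack received: safe to delete locally
  (r) => rejected.push(r),  // rejected: keep locally and re-send later
);
tracker.track('k1', 'telemetry-1');
tracker.track('k2', 'telemetry-2');
session.emit(ACKNOWLEDGED_MESSAGE, { correlationKey: 'k1' });
session.emit(REJECTED_MESSAGE_ERROR, { correlationKey: 'k2' });
```

Keeping rejected records in local storage (rather than discarding them) is what lets you close the 75–79 style gap by re-sending in order once the queue drains.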
Thanks for adding more detail @TomF, this makes sense.
Do I follow this pattern (recommended above) with the dotnet messaging client as well? Strangely, I only saw these 'no space in transport' errors in the browser, not in my dotnet application.
Hi Shaun,
Yes, it's a good practice whenever you're using non-blocking sends.
Remember the browser environment is more constrained than your dotnet environment: the browser is doing a lot more and many browsers limit the resources they will grant to application code. That may not be the issue here but it's worth remembering.