Exceeded Spool File Limit - Topic 'my/topic'
2023-03-29 02:44:14,962 [WARNING] solace.messaging.core: [_solace_session.py:942] [[SERVICE: 0x7f78a946fbb0] - [APP ID: app_cfdbe3dc-bd2a-4558-bbe2-3ac086e12db5]] {'caller_description': 'From service event callback', 'return_code': 'Unknown (503)', 'sub_code': 'SOLCLIENT_SUBCODE_DATA_OTHER', 'error_info_sub_code': 28, 'error_info_contents': "Exceeded Spool File Limit - Topic 'my/topic'"}
Getting this error, so nothing is getting published to or consumed from my queue. My messages queued (MB) are well under the quota. In my message VPN I have two queues: Queue 1 has 371.28 MB of messages queued out of a 5,000 MB configured quota, and Queue 2 has 0.1748 MB out of a 1,500 MB configured quota.
I'm more concerned with Queue 2. Does this mean I have to adjust Messages Queued Quota (MB) in the advanced queue settings? And what are the effects of changing maximum message spool usage? I couldn't find it in the documentation.
Thanks in advance!
Comments
-
@sheanne - sharing a note from the docs around this.
Checks performed before a message is spooled: It's possible for a message to not be spooled because of resource or operating limitations. A variety of checks are performed before a message is spooled. These include:
- Would spooling the message exceed the event broker-wide message spool quota?
- Would spooling the message exceed the Message VPN's message spool quota?
- Would spooling the message exceed the endpoint's message spool quota?
- Would spooling the message exceed the endpoint's maximum permitted message size?
- Would the message exceed the endpoint's maximum message size?
Could it be possible that the spool quota at other levels might have maxed out?
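If you want to double-check those levels without clicking through the UI, one option is the SEMP v2 API. Here's a rough sketch using Python's requests - the host, credentials, VPN and queue names are placeholders, and the exact field names are worth confirming against the SEMP reference for your broker version:

```python
# Rough sketch: compare configured spool quotas with current usage via SEMP v2.
# Host, credentials, VPN and queue names below are placeholders, not real values.
import requests

HOST = "http://localhost:8080"   # assumed management host/port
AUTH = ("admin", "admin")        # assumed admin credentials
VPN = "my-vpn"
QUEUE = "queue-2"

def semp_get(api, path):
    resp = requests.get(f"{HOST}/SEMP/v2/{api}{path}", auth=AUTH)
    resp.raise_for_status()
    return resp.json()["data"]

# Configured quotas at the VPN and queue level (MB).
vpn_cfg = semp_get("config", f"/msgVpns/{VPN}")
queue_cfg = semp_get("config", f"/msgVpns/{VPN}/queues/{QUEUE}")
print("VPN maxMsgSpoolUsage:", vpn_cfg["maxMsgSpoolUsage"])
print("Queue maxMsgSpoolUsage:", queue_cfg["maxMsgSpoolUsage"])
print("Queue maxMsgSize:", queue_cfg.get("maxMsgSize"))

# Current usage from the monitor API (field names can vary by broker version).
queue_mon = semp_get("monitor", f"/msgVpns/{VPN}/queues/{QUEUE}")
print("Queue msgSpoolUsage:", queue_mon.get("msgSpoolUsage"))
```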
-
Hey @giri - thanks for this. The Maximum Message Spool Usage (MB) in System -> Guaranteed Messaging is only set at 1500 MB. I've adjusted the message VPN and queue quotas, but not the quota for the whole event broker. Just to be sure, how can I check for the following:
- Would spooling the message exceed the endpoint's message spool quota?
- Would spooling the message exceed the endpoint's maximum permitted message size?
- Would the message exceed the endpoint's maximum message size?
-
At the VPN level, you can use the Broker Manager (UI) to configure it - it's under the Message VPN's settings (advanced) - and I believe you have already taken care of that. For the queue-level configuration, you can set it in the queue's settings in the Broker Manager.
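If you'd rather script that change than do it in the Broker Manager UI, the same queue quota can also be patched over SEMP v2 config. A rough sketch, again with placeholder host, credentials, and names:

```python
# Rough sketch: raise a queue's spool quota via the SEMP v2 config API.
# Host, credentials, VPN and queue names are placeholders for illustration.
import requests

HOST = "http://localhost:8080"   # assumed management host/port
AUTH = ("admin", "admin")        # assumed admin credentials
VPN = "my-vpn"
QUEUE = "queue-2"

resp = requests.patch(
    f"{HOST}/SEMP/v2/config/msgVpns/{VPN}/queues/{QUEUE}",
    auth=AUTH,
    json={"maxMsgSpoolUsage": 5000},  # quota in MB
)
resp.raise_for_status()
print("New quota:", resp.json()["data"]["maxMsgSpoolUsage"])
```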
-
Hey @giri, I tried recreating the error on my local machine by decreasing my maximum spool limit and publishing a huge message to the queue. I get this logging message instead:
{'caller_description': 'From service event callback', 'return_code': 'Unknown (503)', 'sub_code': 'SOLCLIENT_SUBCODE_SPOOL_OVER_QUOTA', 'error_info_sub_code': 71, 'error_info_contents': "Spool Over Quota. Queue or Topic endpoint limit exceeded - Topic 'my/topic'"}
Are these two errors the same? SOLCLIENT_SUBCODE_SPOOL_OVER_QUOTA versus 'SOLCLIENT_SUBCODE_DATA_OTHER', 'error_info_sub_code': 28, 'error_info_contents': "Exceeded Spool File Limit - Topic 'my/topic'"?
Also, I can't find the function that handles these raised errors. Would you happen to know where I might be able to find and override it?
-
Hi @sheanne - You might be hitting the max message size limit or the maximum spool size limit, since you say a 'huge message' triggered this.
I would like to point you to a resource calculator tool that can help with scaling parameters: Resource Calculator for PubSub+ Software Event Brokers. On that page you can determine the resource requirements for a given set of scaling parameters.
Another doc reference around system scaling parameters can be found here.
Broker Manager allows you to set certain VPN- and queue-level configurations - however, a few more system-level scaling parameters can be set at boot time as a config-key=value on startup. You can see them on the resource calculator page under the deployment template section.
Regarding the question on the error codes, they are different.
SOLCLIENT_SUBCODE_DATA_OTHER: The broker rejected a data message for another reason not separately enumerated.
SOLCLIENT_SUBCODE_SPOOL_OVER_QUOTA: Message was not delivered because the Guaranteed Message spool is over its allotted space quota.
I am assuming you are using the Python API - I am not too familiar with it, but I will ask around and come back on that. I just saw that the direct_publisher has a publish failure listener, but persistent_publisher doesn't.
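To illustrate the direct_publisher side, here's a minimal sketch - the connection properties and topic are placeholders, so treat it as an outline rather than a drop-in:

```python
# Minimal sketch: direct publisher with a publish failure listener.
# Connection properties and topic name are placeholders.
from solace.messaging.messaging_service import MessagingService
from solace.messaging.resources.topic import Topic
from solace.messaging.publisher.direct_message_publisher import (
    PublishFailureListener,
    FailedPublishEvent,
)

class MyFailureListener(PublishFailureListener):
    def on_failed_publish(self, failed_publish_event: FailedPublishEvent):
        # Called asynchronously whenever a direct publish fails.
        print("Publish failed:", failed_publish_event)

broker_props = {
    "solace.messaging.transport.host": "tcp://localhost:55555",
    "solace.messaging.service.vpn-name": "default",
    "solace.messaging.authentication.scheme.basic.username": "default",
    "solace.messaging.authentication.scheme.basic.password": "default",
}

service = MessagingService.builder().from_properties(broker_props).build()
service.connect()

publisher = service.create_direct_message_publisher_builder().build()
publisher.start()
publisher.set_publish_failure_listener(MyFailureListener())

message = service.message_builder().build("hello")
publisher.publish(message, Topic.of("my/topic"))
```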
-
Hey @giri - thanks again! Yup, the persistent publisher doesn't have a publish failure listener - at least, I couldn't find one. I used a publish with acknowledgement instead, and failed publish events threw a PubSubPlusClientError exception which can be caught and handled (sketched below).
Thanks for all the help. Still bothered by the ambiguous error info:
SOLCLIENT_SUBCODE_DATA_OTHER: The broker rejected a data message for another reason not separately enumerated.
But I'll try playing around with the scaling parameters and see if it fixes anything. Thanks a lot!
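In case it helps anyone else, here's roughly the shape of the publish-with-acknowledgement handling I mentioned above - a minimal sketch with placeholder connection properties, topic, and timeout:

```python
# Rough sketch: persistent publish with acknowledgement, wrapped in try/except.
# Connection properties, topic, and timeout are placeholders/assumptions.
from solace.messaging.messaging_service import MessagingService
from solace.messaging.resources.topic import Topic
from solace.messaging.errors.pubsubplus_client_error import PubSubPlusClientError

broker_props = {
    "solace.messaging.transport.host": "tcp://localhost:55555",
    "solace.messaging.service.vpn-name": "default",
    "solace.messaging.authentication.scheme.basic.username": "default",
    "solace.messaging.authentication.scheme.basic.password": "default",
}

service = MessagingService.builder().from_properties(broker_props).build()
service.connect()

publisher = service.create_persistent_message_publisher_builder().build()
publisher.start()

message = service.message_builder().build("hello")
try:
    # Blocks until the broker acknowledges (or rejects) the message; timeout in ms.
    publisher.publish_await_acknowledgement(message, Topic.of("my/topic"), 2000)
except PubSubPlusClientError as err:
    # Spool-related rejections (like the sub-codes discussed above) surface here.
    print("Publish failed:", err)
```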
-
That is correct, you can set the PublishFailureListener and use the on_failed_publish method to catch publish failures. If you are wrapping your overall logic in a try block, you can catch the PubSubPlusClientError exception in your except section and handle it the way you want. You can see the list of errors the Python API throws in the errors package.
Regarding the Exceeded Spool File Limit you're getting, it's most likely due to message size... what is your message size?
-
I'm trying to replicate that locally. A couple of clarifying questions:
- Are you using direct or persistent message publishing in your Python API?
- Have you tried publishing using other Solace APIs? Does publishing with the TryMe tab work successfully?
I replicated the same environment on my end and have not received the same error with either direct or persistent message publishing. This is what I have set up:
- Queue 1: 5,000 MB quota filled with 371.28 MB
- Queue 2: 1,500 MB quota filled with 0.1748 MB
Publishing succeeds to these queues...
-
I'm having the same results as well. I am able to publish successfully on my local machine, but not on the remote deployment.
The only difference between my local machine and the container deployment is that Queue 1 has this many messages queued, while my local Queue 1 has none.
- Queue 1: 5,000 MB quota filled with 371.28 MB, with 1,302,026 messages queued
- Queue 2: 5,000 MB quota filled with 0.1748 MB, with 12 messages queued
And yes, I am using persistent message publishing. I will try TryMe once I figure out how to connect to my remote broker URL.
-
Hey all. "Spool file limit" refers to the number of actual spool files stored on the broker. If you're using a software broker, there are different limits depending on your tier.
The broker stores messages in spool files. The number of spool files grows as more data is written to queues. As messages are ACKed out of queues, the messages are removed from the spool files. Once all messages in one spool file are ACKed, the spool file will be deleted.
So if you're hitting this limit, that means you have a lot of messages sitting around in your broker. You should look at the queues on your broker and empty any that don't have any useful data in them.
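If you'd rather script that cleanup, I believe the SEMP v2 action API exposes a delete-messages action on queues - worth confirming against the SEMP reference for your broker version before relying on it. A rough sketch with placeholder host, credentials, and names:

```python
# Rough sketch: purge all messages from a queue via the SEMP v2 action API.
# The endpoint, host, credentials, and names are assumptions - verify against
# your broker's SEMP v2 action reference before using this.
import requests

HOST = "http://localhost:8080"   # assumed management host/port
AUTH = ("admin", "admin")        # assumed admin credentials
VPN = "my-vpn"
QUEUE = "stale-queue"            # a queue you've confirmed holds no useful data

resp = requests.put(
    f"{HOST}/SEMP/v2/action/msgVpns/{VPN}/queues/{QUEUE}/deleteMsgs",
    auth=AUTH,
    json={},
)
resp.raise_for_status()
```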
To see what your current spool file utilization is, you need to use the CLI. Unfortunately this isn't exposed in PubSub+ Manager.
> show message-spool

Config Status:                    Enabled (Primary)
Maximum Spool Usage:              1500 MB
Using Internal Disk:              Yes
Spool Files Utilization:          0.00%    <-------------
Active Disk Partition Usage:      18.76%
Mate Disk Partition Usage:        -%
Next Message Id:                  3388792
Defragmentation Status:           Idle
Number of delete in-progress:     0
You can also do a show message-spool detail to see if you have a lot of fragmentation. That could be part of this as well.