Best Of
Re: Solace Python API on Mac M1 ?
Hi @jrt,
We have had to postpone the release until April to give us some additional time to include another important feature in the next Python release. I will make a post and come back to this thread as well to notify everyone once it is available for download.
Thank you for your patience!
Re: Can we use Solace connection pool for node.js application using solaceclient library
Greetings @Sibendu,
I am not sure exactly which Azure Function template you are using, but in general, objects defined outside of the function handler's scope will persist between invocations of the Azure function. As long as the session instance is hoisted to the top-level scope, it can be re-used across multiple calls.
Note that there are no lifecycle guarantees: keeping the session open forever will keep the function instance alive only up to some maximum timeout, so it is probably better to explicitly close the connection after a defined interval and reconnect if that specific container happens to be reused.
Here is a sample of an HTTP triggered Azure function that publishes a message to Solace, and will re-use the same connection if it is called again within 8 seconds:
const { app } = require('@azure/functions');
const solace = require('solclientjs');

const brokerOptions = {
    url: "wss://HOSTNAME.messaging.solace.cloud:443",
    vpnName: "default",
    userName: "solace-cloud-client",
    password: ". . ."
};

function createSolacePublisher() {
    const factoryProps = new solace.SolclientFactoryProperties();
    factoryProps.profile = solace.SolclientFactoryProfiles.version10;
    solace.SolclientFactory.init(factoryProps);

    let publishCount = 0;
    let session = null;

    function initSession() {
        return session ? Promise.resolve() : new Promise((resolve) => {
            session = solace.SolclientFactory.createSession(brokerOptions);
            session.on(solace.SessionEventCode.UP_NOTICE, resolve);
            session.connect();
        });
    }

    // Close the session 8 seconds after the most recent publish.
    // (setTimeout rather than setInterval, so the timer fires only once
    // and does not try to disconnect an already-closed session.)
    let closeTimer;
    function scheduleCloseSession() {
        if (closeTimer) {
            clearTimeout(closeTimer);
        }
        closeTimer = setTimeout(() => {
            session.disconnect();
            publishCount = 0;
            session = null;
        }, 8000);
    }

    return {
        publishToTopic: async function (topic, payloadText) {
            await initSession();
            const message = solace.SolclientFactory.createMessage();
            const publishDestination = solace.SolclientFactory.createTopicDestination(topic);
            message.setDestination(publishDestination);
            message.setBinaryAttachment(payloadText);
            session.send(message);
            publishCount++;
            scheduleCloseSession();
        },
        getPublishCount: function () {
            return publishCount;
        }
    };
}

const publisher = createSolacePublisher();

app.http('httpTriggerDemo', {
    methods: ['GET', 'POST'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        context.log(`Http function processed request for url "${request.url}"`);
        await publisher.publishToTopic('try-me', 'Azure Function Message');
        return { body: `Published Message to Solace! Publish Count: ${publisher.getPublishCount()}` };
    }
});
Re: Sequence Convoy Pattern with Solace
I have the following questions:
1) Is sequencing supported on topics as well as queues, or do I need to put a queue in front of the topic?
2) Say my topic is /xyz/customers/customerID and I let my client subscribe to /xyz/customers/* (all topics, one per customer). Does that mean the sequence will be automatically maintained on, for example, /xyz/customers/123 separately from /xyz/customers/456? In other words, the client would listen to and process multiple topics in parallel (for all customers) but maintain sequence within each one separately.
3) Say one event fails to be processed and I send a NACK on the topic /xyz/customers/123. What happens then? (Desired behaviour: stop processing this topic until the message is handled.) Also, after manual intervention to fix or push this event, what is the best way to resume the topic?
I am new to Solace and appreciate your guidance on the best practices that simplify the design.
Regards,
Amr
Re: Sequence Convoy Pattern with Solace
Can you clarify your requirement slightly? I am not sure which interpretation is right. I will describe a couple of interpretations using an example scenario with 2 producer applications rather than 20 for simplicity, and I will use T=? to represent some point in time, where T=1 happens before T=2, etc.
In application 1:
T=1, event A11 occurs on customer 123
T=4, event A12 occurs on customer 123
In application 2:
T=2, event A21 occurs on customer 123
T=3, event A22 occurs on customer 123
Interpretation 1:
Subscriber must receive messages in the order A11, A21, A22, A12
Interpretation 2:
Subscriber must receive A11 before A12,
and it must receive A21 before A22,
but not necessarily with a total order of A11, A21, A22, A12
If the requirement matches interpretation 1, then the desired result cannot be achieved unless the events in application 1 and application 2 are causally connected (in which case you can use vector clocks).
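To make the vector-clock idea concrete, here is a minimal, broker-agnostic sketch (illustration only; none of this is part of any Solace API). Each producer keeps a map of per-producer counters, and one event is causally before another only if every counter in its clock is less than or equal to the corresponding counter in the other:

```javascript
// Minimal vector clock sketch. A clock maps a producer id to a counter.
function increment(clock, producerId) {
  return { ...clock, [producerId]: (clock[producerId] || 0) + 1 };
}

// Combine two clocks by taking the per-producer maximum (used when a
// process receives an event and updates its own clock).
function merge(a, b) {
  const out = { ...a };
  for (const [id, n] of Object.entries(b)) {
    out[id] = Math.max(out[id] || 0, n);
  }
  return out;
}

// True if the event with clock `a` causally precedes (or equals) the
// event with clock `b`. If neither direction holds, the events are
// concurrent and no total order between them is defined.
function happenedBefore(a, b) {
  return Object.entries(a).every(([id, n]) => (b[id] || 0) >= n);
}
```

If `happenedBefore` is false in both directions, the two events are concurrent, which is exactly the situation where interpretation 1's total order cannot be recovered.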
Re: Queue with random order?
In principle you can't ever get complete knowledge about whether a younger event is "about to arrive" at the message destination, because "about to arrive" encompasses multiple possibilities:
1. The destination has buffered multiple events associated with the same entity
2. You are using guaranteed messages and the broker infrastructure has multiple events in a subscriber's queue associated with the same entity
3. The publisher is currently in the process of publishing a new message for an entity, while the broker queue, or the subscriber, already has an older event for that entity.
The only one you can control is (1), so controlling this should happen on the subscriber side.
In my opinion the best solution is to have the subscriber receive messages into a buffer. You then release messages in batches. A batch would be released when, say, 500 messages are buffered, or some amount of time elapses, whichever comes first (tweak these numbers for your use case).
You then deduplicate the batch in the subscriber processing code, keeping only the most recent message for each entity (and immediately acknowledging all of the older messages without processing them if you are doing guaranteed delivery). A subset of the batch, containing only the most recent messages, then proceeds through to normal message processing.
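The deduplication step can be sketched as a pure function. This is an illustration only: the message shape, and the `entityId` and `timestamp` field names, are assumptions for the example, not part of the Solace API.

```javascript
// Keep only the newest message per entity. Everything else can be
// acknowledged immediately without processing.
function dedupeBatch(batch) {
  const latest = new Map();
  for (const msg of batch) {
    const current = latest.get(msg.entityId);
    if (!current || msg.timestamp > current.timestamp) {
      latest.set(msg.entityId, msg);
    }
  }
  const toProcess = [...latest.values()];
  // Superseded messages: ack these without processing.
  const toAckOnly = batch.filter((msg) => !toProcess.includes(msg));
  return { toProcess, toAckOnly };
}
```

With guaranteed delivery, `toAckOnly` would be acknowledged right away, while `toProcess` flows through to the normal processing path.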
Re: Sequence Convoy Pattern with Solace
Hi @amrosalah & @allmhhuran,
allmhhuran is correct that you will need to decide which scenario you require. The Solace brokers will maintain order based on when the broker receives the messages; if you can't trust that the messages will be published in the right order and you need to inspect the messages to resequence them, that will have to happen outside of the broker. But if messages are published to the broker in the order they should be processed, then you should be able to use a combination of a well-defined topic hierarchy plus, if needed, partitioned queues to ensure your consumers can process the messages in order.
Hope that helps!
Re: Using an ITransactedSession and handling errors with the .net API
@allmhhuran this is an intriguing question, and I have asked fellow Solaceans for their thoughts on the scenario. The general consensus is this:
Assuming all messages are valid and the queue(s) have sufficient capacity, a pub-pub-pub-ack-nak-ack sequence is something that should never be encountered. An intermediate failed message should only happen if the message itself is invalid (and thus would fail on republish), or the queue has insufficient space to store it (for example, if messages 1 and 3 are small and message 2 is large).
That being said…
Let's assume that the intermittent NAK is just improbable, but not impossible - or that there is some extreme edge case we are overlooking - and that we want something more performant than single-message pub-wait-ack-pub-wait-ack sequencing.
In that case, I find the suggestion to use transacted sessions + an LVQ a very clever and reasonable way to accomplish this. I would, however, make some minor changes to the approach which would hopefully simplify the configuration overhead and make it even more flexible:
- During startup, provision any data queues (if needed) and add relevant topic subscriptions to them
- Create a last-value queue (LVQ) named for the particular workflow or unit of work - but do not add topic subscriptions
- Open a queue browser to the LVQ and read data from it if any (from a previous session) - and perform corrective initialization actions as-needed
- Begin a transacted session and start publishing data messages to the appropriate topic(s)
- After n messages, or after some fixed time interval or timeout (or possibly both), stop publishing data messages
- Send a special "ID" message (this could also be the same as the last sent payload) directly to the LVQ, call commit on the transacted session, and immediately read back from the LVQ browser until you receive back the message just published
- Rinse and repeat…
In this slightly modified scenario, the LVQ doesn't need to track or carry the same data topic subscriptions and only receives updates immediately before a commit operation. It also provides a convenient way to establish a restore point should the session go down and get a more direct acknowledgement that the transaction commit actually succeeded.
Assuming you are only performing this "control message / commit / readback" once every several hundred data messages, the overhead of this extra step will be negligible and very similar to the timing of just calling commit.
What are your thoughts? Can you spot any edge cases here?
If I have time tomorrow I might even draft up a code snippet that does this pattern.
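In the meantime, here is a very rough sketch of the checkpoint/commit/readback shape, using plain in-memory stand-ins for the LVQ and the transacted session. The real Solace .NET API calls would replace these stubs; the class and function names here are purely illustrative.

```javascript
// In-memory stand-in for a last-value queue: it retains only the most
// recently sent message, which is exactly the LVQ property the pattern
// relies on.
class LastValueQueue {
  constructor() { this.message = null; }
  send(msg) { this.message = msg; }
  browse() { return this.message; }   // stand-in for the queue browser
}

function runCheckpointedPublish(lvq, dataMessages, batchSize) {
  // 1. On startup, browse the LVQ for a checkpoint from a previous
  //    session and resume from just after it.
  const restorePoint = lvq.browse();
  const startIndex = restorePoint ? restorePoint.lastIndex + 1 : 0;

  let pending = [];
  for (let i = startIndex; i < dataMessages.length; i++) {
    pending.push(dataMessages[i]); // publish within the open transaction
    if (pending.length === batchSize || i === dataMessages.length - 1) {
      // 2. Send the control "ID" message to the LVQ, then commit.
      lvq.send({ lastIndex: i });
      // 3. Read back from the LVQ browser to confirm the commit landed.
      if (lvq.browse().lastIndex !== i) {
        throw new Error('checkpoint readback mismatch');
      }
      pending = [];
    }
  }
  return lvq.browse(); // final checkpoint
}
```

Because the LVQ only ever holds the latest checkpoint, a restarted publisher browsing it gets a single, unambiguous restore point.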
Re: Intermittent "Assured message delivery is not enabled on this channel" errors
Thanks for the comment, @Aaron. Yes, I am going to have to reach out to our central admin group and have them investigate/contact Solace support. I obviously don't have access to the event.log (wish I did), but those are definitely possibilities we should look into. Thanks again!
Re: Connection pool size for outbound messages
I have just come across this resource. It's well worth a read.
Re: Event Mesh using DMR
You should be able to edit a message by clicking the three dots in the top-right corner; a popup menu should appear.
Anyhow, MNR is for Direct messaging only. So probably not applicable in most (modern) situations… it was primarily built to move pricing data / market data around very quickly through brokers.
Unless someone else can come up with a better solution (???), perhaps A and B could be merged into one cluster, with C as another. You might also want to look at VPN bridges. They don't have the dynamic subscription behaviour of DMR, but you could build bridges between A and B and then from B to C, and as long as you configure the subscriptions on those bridges correctly, messages published on A could make it to C via B.