Dear all,
It’s a long shot, but this community has pulled off some amazing solutions, so it’s worth a try.
We’re positioning Solace as part of a business solution for one of our customers.
The solution as a whole is deployed in a single GCP Kubernetes autopilot cluster, including the Solace PubSub+ Standard workload.
Now, we’re seeing a somewhat odd behavior in the monitoring and metrics console.
When we look at these reports, we see a tremendous difference between the CPU-hour and memory-hour requests of the Solace workload and those of all the other components, and the gap is more or less evident depending on the time range of the analysis.
On an hourly analysis, there are no significant differences between Solace and the other workloads. On a daily analysis, significant differences start to appear, and they grow dramatically with the time range of the analysis (the monthly figures are huge).
My guess is that Solace and GCP Autopilot don’t get along well, and a large amount of CPU-hours and memory-hours is requested but never actually used.
As we’re using GCP Autopilot, we have no control over the number and size of the K8S nodes.
The example shown in the picture is from a development environment with little to no real workload.
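In case it’s useful context: since Autopilot charges per pod based on the declared resource requests rather than actual usage, one way to check the theory above is to compare the two directly. Below is a minimal sketch of that check, assuming the Kubernetes Python client and the metrics API are available; the namespace and label selector for the Solace broker pod are hypothetical placeholders, not the actual values from our deployment.

```python
from kubernetes import client, config

# Hypothetical placeholders for the Solace PubSub+ workload
NAMESPACE = "solace"
LABEL_SELECTOR = "app=pubsubplus"

config.load_kube_config()  # use load_incluster_config() when running in-cluster

# 1) Declared resource requests -- this is what Autopilot bills per pod-hour
v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod(NAMESPACE, label_selector=LABEL_SELECTOR).items:
    for c in pod.spec.containers:
        print(f"{pod.metadata.name}/{c.name} requests: {c.resources.requests}")

# 2) Live usage from the metrics API (metrics-server ships with GKE)
metrics = client.CustomObjectsApi().list_namespaced_custom_object(
    group="metrics.k8s.io", version="v1beta1",
    namespace=NAMESPACE, plural="pods",
    label_selector=LABEL_SELECTOR)
for item in metrics["items"]:
    for c in item["containers"]:
        print(f"{item['metadata']['name']}/{c['name']} usage: {c['usage']}")
```

If the reported usage turns out to be far below the requests, that would at least confirm that the cost reports reflect over-sized requests rather than actual consumption.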
Has anyone experienced something like this before?
Cheers