this post was submitted on 04 Jun 2024
25 points (96.3% liked)

Ask Experienced Devs


I'm just so exhausted these days. We have formal SLAs, but it's not like they're ever followed. After all, Customer X needs to be notified within 5 minutes of any anomalous events in their cluster, and Customer Y is our biggest customer, so we give them the white-glove treatment.

Yadda yadda, blah blah. So on and so forth: almost every customer has some exception or difference in their SLAs.

I was hired on as an SRE, but I'm just a professional dashboard starer at this point. The number of times I've been alerted in the middle of the night because CPU was running high for 5 minutes is too damn high. Just so I can apologize to Mr. Customer that they maybe had a teensy slowdown during that time.

If I try to get us back to fundamentals and suggest we should only alert on impact, not short-lived anomalies, there is some surface-level agreement, but everyone seems to think "well, we might miss something, so we need to keep it."
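
To make "alert on impact" concrete, here's the kind of check I keep pitching, sketched in Python with made-up numbers (the 99.9% target and the thresholds are illustrative, not our actual SLOs): page only when we're burning error budget fast enough to actually threaten the SLO over both a short and a long window, instead of paging on every 5-minute CPU blip.

```python
# Illustrative sketch: page on SLO error-budget burn rate, not on short-lived
# resource anomalies. All numbers here are hypothetical, not our real SLOs.

SLO_TARGET = 0.999                  # e.g. 99.9% of requests succeed
ERROR_BUDGET = 1.0 - SLO_TARGET     # so 0.1% of requests are allowed to fail


def burn_rate(error_ratio: float) -> float:
    """How fast we're consuming error budget (1.0 = exactly on budget)."""
    return error_ratio / ERROR_BUDGET


def should_page(short_window_errors: float, long_window_errors: float,
                threshold: float = 14.4) -> bool:
    """Page only if the burn rate is high in BOTH a short window (say 5 min)
    and a longer one (say 1 h). The long window filters out the blips that
    recover on their own, which is exactly what wakes me up today."""
    return (burn_rate(short_window_errors) > threshold
            and burn_rate(long_window_errors) > threshold)


if __name__ == "__main__":
    # Brief CPU spike, no customer impact: 0.05% errors for a few minutes,
    # clean hour overall -> nobody gets paged.
    print(should_page(0.0005, 0.0001))   # False

    # Real incident: 2% of requests failing over both windows -> page someone.
    print(should_page(0.02, 0.02))       # True
```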

It's like we're trying to prevent outages by monitoring for potential issues rather than actually making our system more robust and automatable.

How do I convince these people that this isn't sustainable? That trying to "catch" incidents before they happen is a fool's errand? It's like that chart about the "war on drugs" that shows exponential cost growth as you try to prevent ALL drug usage (which is impossible). Yet this tech company seems to think we should be trying to prevent all outages with excessive monitoring.

And that doesn't even get into the bonkers agreements we make with customers, like committing to deep-dive research on why two different environments have response times that differ by 1 ms.

Or the agreements that force us to complete customer-provided training without accounting for how much training we've already committed to. It's entirely normal here to do 3-4 sets of HIPAA / PCI / compliance trainings when everyone else in the org only has to do one.

I'm at a point where I'm considering moving on. This job just isn't sustainable, and there's no interest within the org in making it sustainable.

But perhaps one of y'all has managed to fix something similar in your org with a few key conversations and some effort? What other things could I try as a sort of final "Hail Mary" before heading for greener pastures?

[–] [email protected] 0 points 5 months ago (1 children)

I've managed an SRE team. I see three issues in your story.

  1. Upper management and sales need to establish the SLA and the notification policy. Does the customer want to be notified when CPU is high for over 5 minutes, or do they want to be notified of a slowdown? If it's the latter, then you are alerting on the wrong metric. Someone needs to set expectations. If the customer wants and pays for alerts on every anomaly, well, then it's your job to report them.

  2. SRE should make sense of metrics. If all you do is stare at dashboards, then you are ops, not SRE. SRE should set up and gather metrics and present them in ways that are meaningful to dev and ops.

  3. If you are SRE, then man up and tell your manager what SREs should be doing, and show some kind of idea or plan to push your monitoring forward. Be the Lead SRE if no one else is doing it.

[–] th3raid0r 0 points 5 months ago* (last edited 5 months ago)
  1. They want to be notified of anything that could potentially slow down their system, so any anomaly. The catch is that they constantly change patterns because they introduce new workloads weekly, which wouldn't be a problem if they could better communicate their forecasts. And that's just one of a few dozen customers, again all with unique cluster configurations and needs.

  2. Yeah, it sucks. The first year was pretty great and we had a fully integrated and unified managed services team where we were getting some great automation done. Then they split the team in half in order to focus on a different flavor of our product (with an entirely new backend) and left folks who were newer (myself included) with maintaining the old product. We were even told that we should be doing minimal maintenance on the thing as the new product would be the new norm. Then once upper management remembered how contracts work, they decided we needed to support 3 new platforms without growing the team. All while onboarding new customers and growing the environment count. We're now in operational overload after some turnover that was backfilled with offshore support that has a very minimal presence.

  3. I have tried championing this, but I don't expect an ableist, masculinity-shaming person like you to understand a call for social pointers on how to "manage up".

"Man Up" - good lord, way to be an ass.