This is interesting. What I'm hearing is they didn't have proper anti-affinity rules in place, or backups for mission-critical equipment.
The data center did some dumb stuff, but that shouldn't matter if you set up your application failover properly. The architecture, and the failure to test failovers, are the real issues here.
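(Not from the post-mortem, just to illustrate what an anti-affinity rule means: two replicas of the same service should never land in the same failure domain, so losing one domain can't take out every replica at once. A rough Python sketch with made-up names:)

```python
from collections import defaultdict

def place(service: str, domain: str, placements: dict[str, set[str]]) -> bool:
    """Place one replica of `service` in failure domain `domain`,
    refusing placements that violate anti-affinity (two replicas
    of the same service in the same domain)."""
    if domain in placements[service]:
        return False  # rejected: a replica already lives in this domain
    placements[service].add(domain)
    return True

placements: dict[str, set[str]] = defaultdict(set)
print(place("control-plane-db", "PDX-01", placements))  # True
print(place("control-plane-db", "PDX-02", placements))  # True
print(place("control-plane-db", "PDX-01", placements))  # False, rejected
```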
Also sounded like they had apps hosted only there that had to be online for everything else to work.
But the 4-minutes-instead-of-10 batteries certainly didn’t help.
That's exactly it.
https://blog.cloudflare.com/post-mortem-on-cloudflare-control-plane-and-analytics-outage/
Here is a quick summary, but the actual postmortem is worth reading.
Classic example of a cascading failure, or domino effect. Luckily their resilience meant it wasn't a full outage.
Basically, new features get developed fast and are iterated quickly. When they mature, they get integrated into the high availability cluster.
There are also some services that are deliberately not clustered. One of those is logging; when the core logging service is down, logs are supposed to pile up "at the edge" until it comes back.
Unfortunately, some services were too tightly coupled to the logging core: they were supposed to be HA, but they couldn't cope with the core logging service being down.
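(To illustrate the decoupling that was intended, a minimal Python sketch, not Cloudflare's code; `send_to_core` and the spool file are invented. The point is that shipping logs is best-effort, so the calling service never fails just because the logging core is unreachable:)

```python
import json
import time
from pathlib import Path

SPOOL = Path("edge-log-spool.jsonl")  # hypothetical local buffer "at the edge"

def send_to_core(record: dict) -> None:
    """Stand-in for the real network call to the core logging service."""
    raise ConnectionError("logging core is down")  # simulate the outage

def log(record: dict) -> None:
    try:
        send_to_core(record)
    except ConnectionError:
        # Core is down: spool locally and move on. The service itself keeps
        # working; a separate process can drain the spool once the core is back.
        with SPOOL.open("a") as f:
            f.write(json.dumps(record) + "\n")

log({"ts": time.time(), "msg": "request served", "status": 200})
```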
Whilst HA failover had been tested, the core services had never been taken offline, so all of this was missed.
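(This is the kind of failure-injection test that was apparently missing: take the core dependency offline and assert the dependent service still answers. A small Python sketch, all names invented:)

```python
class CoreLoggingService:
    def __init__(self) -> None:
        self.up = True

    def write(self, msg: str) -> None:
        if not self.up:
            raise ConnectionError("core logging offline")

class ApiService:
    """A 'mature' HA service that should tolerate the logging core being down."""
    def __init__(self, logging_core: CoreLoggingService) -> None:
        self.logging_core = logging_core

    def handle_request(self) -> int:
        try:
            self.logging_core.write("handled request")
        except ConnectionError:
            pass  # degrade gracefully instead of failing the request
        return 200

def test_survives_logging_core_outage() -> None:
    core = CoreLoggingService()
    api = ApiService(core)
    core.up = False                      # inject the failure
    assert api.handle_request() == 200   # service must still respond

test_survives_logging_core_outage()
print("ok")
```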
All of which ended up as inconsistent high availability across different services and products. A lot of new features failed as expected, while some mature features that shouldn't have failed did.
When they brought their disaster recovery site up, there were some things that needed manual configuration, and some newer features that hadn't been tested in a disaster recovery scenario.
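(One way to catch that class of gap before an incident, purely hypothetical and not from the post-mortem: a pre-flight check that walks a service inventory and flags anything whose disaster-recovery path has never actually been exercised. The inventory below is invented for illustration:)

```python
# Flag services that are not fully DR-ready (unconfigured or never tested at the DR site).
services = [
    {"name": "control-plane-api", "dr_configured": True,  "dr_tested": True},
    {"name": "analytics",         "dr_configured": True,  "dr_tested": False},
    {"name": "new-feature-x",     "dr_configured": False, "dr_tested": False},
]

gaps = [s["name"] for s in services if not (s["dr_configured"] and s["dr_tested"])]
if gaps:
    print("Not DR-ready:", ", ".join(gaps))
```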
They are now focusing significant resources on: