Comment on 7 Lessons From 10 Outages by Yuval Yogev (23.6.2021 11:36)
Check this out: https://bunny.net/blog/the-stack-overflow-of-death-dns-collapse/ This is ridiculously relevant to lesson #1, and with perfect timing...

7 Lessons From 10 Outages (22.6.2021 00:38)
After 10 post-mortems in their first season, Tom and Jamie reflect on the common issues they’ve seen. Click through for details! Summing Up Downtime: We’re just about through our inaugural season of The Downtime Project podcast, and to celebrate, we’re reflecting back on recurring themes we’ve noticed in many of the ten outages we’ve poured […]

Salesforce Publishes a Controversial Postmortem (and breaks their DNS) (31.5.2021 21:50)
On May 11, 2021, Salesforce had a multi-hour outage that affected numerous services. Their public writeup was somewhat controversial: it’s the first one we’ve done on this show that called out the actions of a single individual in a negative light. The latest SRE Weekly has a good list of some different articles […]

Kinesis Hits the Thread Limit (25.5.2021 02:06)
During a routine addition of some servers to the Kinesis front-end cluster in US-East-1 in November 2020, AWS ran into an OS limit on the maximum number of threads. That resulted in a multi-hour outage that affected a number of other AWS services, including ECS, EKS, Cognito, and CloudWatch. We probably won’t do […]
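The failure mode in this episode is easy to sanity-check ahead of time on a single host. Below is a minimal Python sketch, assuming a Linux machine, that compares a planned "one OS thread per fleet peer" count against the kernel ceilings exposed in /proc; the 80% margin and the fleet size are illustrative assumptions, not AWS's tooling or values.

```python
# Sketch: warn before a "one OS thread per fleet peer" design runs into Linux
# kernel thread ceilings. The /proc paths are standard Linux procfs entries;
# the 80% margin and the planned fleet size are illustrative assumptions.
from pathlib import Path


def read_int(path: str) -> int:
    """Read a single integer from a procfs file."""
    return int(Path(path).read_text().strip())


def check_thread_headroom(planned_threads: int) -> None:
    threads_max = read_int("/proc/sys/kernel/threads-max")  # system-wide thread cap
    pid_max = read_int("/proc/sys/kernel/pid_max")          # every thread needs a TID
    ceiling = min(threads_max, pid_max)
    if planned_threads > 0.8 * ceiling:
        print(f"WARNING: {planned_threads} threads is within 20% of the kernel "
              f"ceiling ({ceiling}); adding more servers may start to fail.")
    else:
        print(f"OK: {planned_threads} planned threads, ceiling {ceiling}.")


if __name__ == "__main__":
    check_thread_headroom(32_000)  # hypothetical fleet-wide thread count
```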
How Coinbase Unleashed a Thundering Herd (17.5.2021 20:00)
In November 2020, Coinbase had a problem while rotating their internal TLS certificates and accidentally unleashed a huge amount of traffic on some internal services. This was a refreshingly non-database-related incident that led to an interesting discussion about the future of infrastructure as code, the limits of human code review, and how many load […]
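A standard mitigation for the retry stampede described here is capped exponential backoff with jitter. The sketch below is a generic Python illustration, not Coinbase's client code; fetch_config and its URL are hypothetical names.

```python
# Sketch: capped exponential backoff with full jitter, a common way to keep a
# fleet of retrying clients from stampeding a recovering service.
# fetch_config() and its endpoint are hypothetical, not Coinbase's code.
import random
import time
import urllib.request


def fetch_config(url: str) -> bytes:
    """One attempt against a (hypothetical) internal endpoint."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read()


def fetch_with_backoff(url: str, max_attempts: int = 6) -> bytes:
    base, cap = 0.5, 30.0  # seconds
    for attempt in range(max_attempts):
        try:
            return fetch_config(url)
        except OSError:
            if attempt == max_attempts - 1:
                raise
            # Full jitter: sleep a random duration up to the exponential bound,
            # so retries from many clients spread out instead of re-aligning.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))


# Usage (hypothetical endpoint):
# config = fetch_with_backoff("https://config.internal.example/service.json")
```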
Auth0’s Seriously Congested Database (10.5.2021 21:06)
Just one day after we released Episode 5 about Auth0’s 2018 outage, Auth0 suffered a 4 hour, 20 minute outage that was caused by a combination of several large queries and a series of database cache misses. This was a very serious outage, as many users were unable to log in to sites across the […]

Comment on GitHub’s 43 Second Network Partition by twk (6.5.2021 22:36)
In reply to The on-call DBA (https://downtimeproject.com/podcast/githubs-43-second-network-partition/#comment-12). Oh, that’s interesting. We had read this blog post from earlier in 2018 (https://github.blog/2018-06-20-mysql-high-availability-at-github/), which mentioned it. Do you mean you weren’t using it cross-DC, or at all?

Talkin’ Testing with Sujay Jayakar (3.5.2021 22:02)
Tom was feeling under the weather after joining Team Pfizer last week, so today we have a special guest episode with Sujay Jayakar, Jamie’s co-founder and engineer extraordinaire. While it’s great to respond well to an outage, it’s even better to design and test systems in such a way that outages don’t happen. As we […]

Comment on GitHub’s 43 Second Network Partition by The on-call DBA (27.4.2021 06:32)
We weren’t using semisync replication.

GitHub’s 43 Second Network Partition (26.4.2021 20:04)
In 2018, after 43 seconds of connectivity issues between their East and West coast datacenters and a rapid promotion of a new primary, GitHub ended up with unique data written to two different databases. As detailed in the postmortem, this resulted in 24 hours of degraded service. This episode spends a lot of time on […]
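The comment exchange above turns on whether semisynchronous replication was in use. As background only, here is a hedged sketch of how semi-sync is typically enabled on MySQL 5.7, using the plugin and variable names from the MySQL documentation; the hosts and credentials are placeholders, and nothing here describes GitHub's actual topology.

```python
# Sketch: enable MySQL 5.7-style semisynchronous replication on a source and a
# replica. Plugin and variable names are from the MySQL docs; the hosts and
# credentials are placeholders, and this says nothing about GitHub's setup.
import mysql.connector  # pip install mysql-connector-python


def run_all(host: str, statements: list[str]) -> None:
    conn = mysql.connector.connect(host=host, user="admin", password="***")
    cur = conn.cursor()
    for stmt in statements:
        cur.execute(stmt)
    cur.close()
    conn.close()


# On the source: load the plugin, require a replica ack before commit returns,
# and fall back to asynchronous replication if no ack arrives within 1 second.
run_all("source.db.example", [
    "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so'",
    "SET GLOBAL rpl_semi_sync_master_enabled = 1",
    "SET GLOBAL rpl_semi_sync_master_timeout = 1000",
])

# On each replica: load the plugin, enable it, and restart the replication IO
# thread so the change takes effect.
run_all("replica.db.example", [
    "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so'",
    "SET GLOBAL rpl_semi_sync_slave_enabled = 1",
    "STOP SLAVE IO_THREAD",
    "START SLAVE IO_THREAD",
])
```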
Auth0 Silently Loses Some Indexes (19.4.2021 18:34)
Auth0 experienced multiple hours of degraded performance and increased error rates in November of 2018 after several unexpected events, including a migration that dropped some indexes from their database. The published post-mortem has a full timeline and a great list of action items, though it is curiously missing a few details, like exactly what database […]

Comment on One Subtle Regex Takes Down Cloudflare by twk (16.4.2021 01:22)
In reply to John Graham-Cumming (https://downtimeproject.com/podcast/one-subtle-regex-takes-down-cloudflare/#comment-4). Thanks John, and thanks for writing such an exceptional post mortem. It was truly educational and I walked away extremely impressed with your engineering team and processes. You guys have earned the right to be such a key part of the internet’s infrastructure.

Comment on One Subtle Regex Takes Down Cloudflare by John Graham-Cumming (14.4.2021 09:09)
Thanks for talking about this. I greatly enjoyed hearing an external person’s view of what happened that day.

One Subtle Regex Takes Down Cloudflare (12.4.2021 21:27)
On July 2, 2019, a subtle issue in a regular expression took down Cloudflare (and with it, a large portion of the internet) for 30 minutes.
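The episode digs into how a backtracking regex engine can burn unbounded CPU on inputs that almost match. As a generic demonstration of that failure class (not the actual WAF rule), the Python sketch below times a pattern with the ".*.*=.*" shape discussed in Cloudflare's public postmortem against progressively longer non-matching inputs.

```python
# Sketch: how a backtracking regex engine degrades on inputs that never match.
# The ".*.*=.*" shape is the core discussed in Cloudflare's public postmortem;
# this is an illustration of the failure class, not the actual WAF rule.
import re
import time

PATTERN = re.compile(r".*.*=.*")

for n in (1_000, 2_000, 4_000, 8_000):
    payload = "x" * n  # no "=", so the match fails only after heavy backtracking
    start = time.perf_counter()
    PATTERN.match(payload)
    elapsed = time.perf_counter() - start
    print(f"n={n:>5}: {elapsed:.3f}s")  # roughly quadratic growth in n
```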
Monzo’s 2019 Cassandra Outage (5.4.2021 20:44)
Monzo experienced some issues while adding servers to their Cassandra cluster on July 29th, 2019. Thanks to some good practices, the team recovered quickly and no data was permanently lost.

Gitlab’s 2017 Postgres Outage (28.3.2021 17:56)
On January 31st, 2017, Gitlab experienced 24 hours of downtime and some data loss. After it was over, the team wrote a fantastic post-mortem about the experience. Listen to Tom and Jamie walk through the outage and opine on the value of having a different color prompt on machines in your production environment. Tom: [00:00:00] […]

Slack vs TGWs (20.3.2021 19:31)
Slack was down for about 1.5 hours on January 4th, 2021, the first day everyone was back in their (virtual) office. Listen to Tom and Jamie walk through the timeline, complain about Linux’s default file descriptor limit, and talk about some lessons learned.
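Since the episode complains about Linux's default file descriptor limit, here is a small sketch using Python's standard resource module (Unix only) that inspects and, within the hard limit, raises the calling process's RLIMIT_NOFILE; the 65536 target is illustrative, not a figure from the show.

```python
# Sketch: inspect and raise this process's open-file-descriptor limit.
# Uses the standard library "resource" module (Unix only); the target value
# is illustrative, not a recommendation from the episode.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"current RLIMIT_NOFILE: soft={soft}, hard={hard}")  # soft is often 1024 by default

TARGET = 65_536
if soft < TARGET:
    # An unprivileged process can raise its soft limit only up to the hard
    # limit; raising the hard limit needs root (or an ulimit/systemd change).
    if hard == resource.RLIM_INFINITY:
        new_soft = TARGET
    else:
        new_soft = min(TARGET, hard)
    resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
    print(f"raised soft limit to {new_soft}")
```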
Introduction (20.3.2021 03:23)
Welcome to The Downtime Project! Here is a quick episode where Tom and Jamie talk about why they created the show and what you can expect.