As AI adoption accelerates across every product category, does overall service reliability degrade? The Death Index tracks this by treating internet service reliability like a market index — a single composite score representing ecosystem health.
We don't care if one particular service is down right now. We care about the trend. Is the internet getting worse?
We pull incident data from publicly available Atlassian Statuspage APIs. Hundreds of major companies use Statuspage to run their public status pages, and each one exposes a consistent JSON API at /api/v2/incidents.json.
We currently track 51 providers across 6 categories: cloud infrastructure, AI, developer tools, productivity, communications, and fintech.
Only real service degradation is counted. Incidents with impact: "none" (informational posts, scheduled maintenance, rollout notices) are filtered out.
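A minimal sketch of the ingestion step, assuming a hypothetical status page host (`status.example.com` is a placeholder, not a real provider). The endpoint path `/api/v2/incidents.json` is Statuspage's standard; the filtering keeps only incidents whose `impact` is `minor`, `major`, or `critical`:

```python
import json
from urllib.request import urlopen

def fetch_incidents(status_domain: str) -> list[dict]:
    """Fetch all incidents from a Statuspage-hosted status page.

    `status_domain` is the provider's status page host,
    e.g. "status.example.com" (hypothetical).
    """
    url = f"https://{status_domain}/api/v2/incidents.json"
    with urlopen(url) as resp:
        payload = json.load(resp)
    return payload.get("incidents", [])

def real_incidents(incidents: list[dict]) -> list[dict]:
    """Keep only real degradation; drop impact "none" posts
    (informational updates, maintenance, rollout notices)."""
    return [i for i in incidents if i.get("impact") in {"minor", "major", "critical"}]
```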
Each provider gets a daily reliability score from 0 to 100; the published score is a 7-day rolling average of those daily scores.
The problem with naive scoring: Not all providers report incidents the same way. Some (like Twilio) log every individual carrier route issue as a separate incident — so "SMS delays to 5 carriers" becomes 5 overlapping minor incidents. Others roll the same situation into one. If we simply sum deductions per incident, granular reporters get unfairly penalized.
Our solution: the severity envelope. Instead of counting incidents, we ask: at any given moment during the day, what was the worst thing happening?
For each day, we build a timeline and find the maximum severity active at each moment.
We then deduct from 100 based on how many hours fell into each severity tier:
| Max severity at that moment | Points deducted per hour |
|---|---|
| Minor — degraded performance | 5 |
| Major — partial outage | 15 |
| Critical — full outage | 30 |
Each day floors at 0 (can't go negative). The provider's score is the average of the 7 daily scores. This means a bad day drags the score down for a week, then naturally rolls off.
Duration caps: Some providers leave incidents open on their status page for weeks — not because there's an active outage, but because it's a known issue being tracked. A "minor" GPU booting issue open for 118 days is not the same as 118 days of downtime. To prevent stale entries from permanently tanking a score, we cap the scoring impact of any single incident:
| Severity | Max scoring duration |
|---|---|
| Critical | 3 days — real critical outages get resolved fast |
| Major | 5 days |
| Minor | 7 days — minor issues can linger longer |
The raw incident data is still stored with real dates — the cap only applies during score computation.
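The cap amounts to clamping each incident's effective end time before scoring. A small sketch, assuming `datetime` start/end values; the stored incident keeps its real dates:

```python
from datetime import datetime, timedelta

# Maximum days an incident may contribute to scoring, per the table above.
MAX_SCORING_DAYS = {"critical": 3, "major": 5, "minor": 7}

def capped_end(start: datetime, end: datetime, severity: str) -> datetime:
    """Clamp an incident's end time for score computation only."""
    cap = timedelta(days=MAX_SCORING_DAYS[severity])
    return min(end, start + cap)
```

So the 118-day "minor" GPU issue from above contributes at most 7 days of deductions.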
Example: A provider has 3 overlapping minor SMS delivery incidents from 10am–2pm (4 hours), plus 1 major API outage from 1pm–2pm (1 hour). The envelope is: minor from 10am–1pm (3 hours), major from 1pm–2pm (1 hour). Score: 100 − (3×5 + 1×15) = 70. Without the envelope model, this would score 100 − (4×5 + 4×5 + 4×5 + 1×15) = 25 — a massive over-penalty.
Providers are grouped into categories. Each category's score is the simple average of its provider scores — every provider within a category has equal weight.
Current categories: cloud infrastructure, AI, developer tools, productivity, communications, and fintech.
The global Death Index is the average of all category scores, with equal weight per category. This prevents any one category with lots of providers (e.g., developer tools) from dominating the index.
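The two-level average can be sketched in a few lines — providers averaged within each category, then categories averaged with equal weight (category names below are illustrative):

```python
def death_index(scores_by_category: dict[str, list[float]]) -> float:
    """Global index: mean of category means, equal weight per category.

    A category with many providers (e.g. developer tools) counts the
    same as a category with few.
    """
    category_means = [sum(scores) / len(scores) for scores in scores_by_category.values()]
    return sum(category_means) / len(category_means)
```

Note how a 10-provider category and a 2-provider category pull on the global index equally.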
A provider is only included in the index for a given date if it has at least one historical incident on or before that date. This avoids inflating the score with providers we have no real data for.
The clock maps the Death Index to a "minutes to midnight" metaphor, inspired by the Bulletin of the Atomic Scientists' Doomsday Clock.
The mapping is linear:
minutes to midnight = 12 × (score / 100)
| Score | Minutes to midnight | Meaning |
|---|---|---|
| 100 | 12.0 | Everything's fine (noon) |
| 75 | 9.0 | Some degradation |
| 50 | 6.0 | Significant issues |
| 25 | 3.0 | Widespread outages |
| 0 | 0.0 | Midnight (total meltdown) |
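The linear mapping above is a one-liner; shown here as a sketch for completeness:

```python
def minutes_to_midnight(score: float) -> float:
    """Map a 0-100 Death Index score to the clock metaphor:
    100 -> noon (12.0 minutes), 0 -> midnight (0.0)."""
    return 12 * (score / 100)
```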
The vertical red line on the chart marks January 1, 2026 — the symbolic start of mainstream AI adoption across product categories. Data before this line is the baseline. Data after is the experiment.
Monthly data points are computed for 2024–2025 (pre-AI baseline). Daily data points from January 2026 onward.
This project is open source. If you want to dig into the code, add providers, or improve the scoring methodology, contributions are welcome.