Updated: The outage, which occurred in AWS's largest region, US-EAST-1, caused delays or interruptions to more than 80 AWS services. The affected services include the following brands:
• Enterprise Tools: Zoom, Slack, Box
• Creative Software: Adobe Generative AI Features
• Game Services: Nintendo Network Services, Fortnite, Fate/Grand Order
• Internet Services: recipe website CookPad, VTuber agency ANYCOLOR's official website
Service restoration status:
Most services had returned to normal by the morning of October 21st, but according to Adobe, some issues had still not been fully resolved as of 10:00 a.m. that day. This outage once again highlights how heavily modern network services rely on cloud infrastructure, and how a failure in a single cloud region can cascade across services worldwide.
As cloud services become the core infrastructure of the digital economy, such large-scale failures have prompted companies to rethink the importance of cloud strategies and disaster recovery plans.
Amazon's cloud service AWS (Amazon Web Services) suffered a severe service interruption early on Monday morning (October 20th) Eastern Time. The outage centered on AWS's most important region, US-EAST-1 (Northern Virginia), the default region for many businesses, and caused countless websites, applications, and gaming services around the world that rely on AWS to go down or respond slowly, as if "half the internet" had gone offline simultaneously.
The incident highlights the potential risks of the current global Internet infrastructure being overly dependent on a few giant cloud providers.
Culprit: DynamoDB DNS resolution anomalies, experts say it's like "network amnesia"
According to AWS's official service health status page, Amazon began investigating "increased error rates and latency across multiple AWS services" in the US-EAST-1 region at around 3:11 a.m. Eastern Time.
By 5:01 a.m., AWS had identified the root cause of the problem: a DNS resolution issue with the API for DynamoDB, its core NoSQL database service, which AWS customers use to store critical information.
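In concrete terms, every SDK call to DynamoDB first has to resolve the service's regional hostname to an IP address before any request can be sent. The following minimal sketch shows that resolution step, the one that failed, from a client's point of view (the hostname is the standard public DynamoDB endpoint for US-EAST-1; everything else is illustrative):

```python
import socket

# The regional API endpoint for DynamoDB in US-EAST-1.
ENDPOINT = "dynamodb.us-east-1.amazonaws.com"

try:
    # An SDK must resolve this name to IP addresses before any API
    # call can be made; during the outage, this step was failing.
    infos = socket.getaddrinfo(ENDPOINT, 443, proto=socket.IPPROTO_TCP)
    for family, _, _, _, sockaddr in infos:
        print(f"{ENDPOINT} -> {sockaddr[0]}")
except socket.gaierror as err:
    # A resolution failure here is the "amnesia": the data still
    # exists, but clients cannot locate the servers holding it.
    print(f"DNS resolution failed for {ENDPOINT}: {err}")
```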
Mike Chapple, professor of IT, analytics and operations at the University of Notre Dame, offered a precise metaphor when asked about the situation in a CNN interview. "Amazon still had the data securely stored, but for hours, no one could find it, temporarily separating the apps from their data," he said. "It was as if much of the internet had suffered a brief bout of amnesia."
Disaster spreads: EC2 instance startup is blocked, AWS initiates "rate limiting"
Although AWS claimed at 6:35 a.m. that the DNS issue had been fully mitigated and that "most AWS service operations have returned to normal," it was clear that a knock-on effect had already been triggered.
The disaster quickly spread to AWS's EC2 (Elastic Compute Cloud) virtual hosting service, which many companies use to build their online applications. At 8:48 a.m., AWS admitted that it still faced problems when launching new EC2 instances in the US-EAST-1 region.
AWS recommended that customers not bind new instances to specific availability zones (AZs) when deploying them, so that EC2 could more flexibly place them in whichever data center was performing best.
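In boto3 terms, that advice roughly amounts to omitting the AZ-pinning parameters when calling run_instances. A minimal sketch, assuming a default VPC and using placeholder AMI and instance-type values:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Pinned launch (what AWS advised against during the incident):
# ec2.run_instances(
#     ImageId="ami-0123456789abcdef0", InstanceType="t3.micro",
#     MinCount=1, MaxCount=1,
#     Placement={"AvailabilityZone": "us-east-1a"},  # hard-binds the AZ
# )

# Unpinned launch: omitting Placement (and any AZ-specific subnet,
# assuming a default VPC) lets EC2 choose an AZ with capacity.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["Placement"]["AvailabilityZone"])
```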
However, at 9:42 a.m., AWS updated its status, noting that despite applying "multiple mitigation measures" across multiple AZs, it continued to see elevated error rates when launching new EC2 instances. As a result, AWS had to apply rate limiting to new instance launches to help the system recover.
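On the client side, the standard way to cope with such throttling is to retry with exponential backoff rather than immediately re-sending failed requests. A minimal sketch using botocore's built-in retry configuration (the specific values are illustrative):

```python
import boto3
from botocore.config import Config

# botocore's "adaptive" retry mode retries throttling errors
# (e.g. RequestLimitExceeded) with exponential backoff and also
# applies a client-side token bucket that slows the request rate.
retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})

ec2 = boto3.client("ec2", region_name="us-east-1", config=retry_config)

# Calls through this client now back off automatically instead of
# failing on the first throttled response.
resp = ec2.describe_instances(MaxResults=5)
print(resp["ResponseMetadata"]["HTTPStatusCode"])
```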
Then at 10:14 a.m., AWS again admitted that it was still seeing significant API errors and connectivity issues in multiple services in the US-EAST-1 region.
Clearly, even with the root cause fixed, AWS still needed to work through a large backlog of requests, and it was expected to take some time for all services to return to normal.
The hidden dangers of a 30% market share: Finance, gaming, and streaming services are all affected
Because so many businesses rely on US-EAST-1 as the core of their AWS service deployments, this outage caused a global disaster.
According to Down Detector, a large number of services saw a surge in outage reports around the same time. Besides Amazon's own services, reports also came from banks, airlines, Disney+, Snapchat, Reddit, Lyft, Apple Music, Pinterest, and even popular games like Fortnite and Roblox, as well as media outlets like The New York Times.
AWS offers highly attractive infrastructure, including elastically scalable computing resources to absorb traffic peaks and a global network of data centers. According to mid-2025 estimates, AWS holds roughly 30% of the global cloud infrastructure market.
This incident once again sounded the alarm: when the backbone of the global internet is overly dependent on a few suppliers (such as AWS, Azure, and GCP), a problem in just one of them, or even in a single core region, can set off a chain reaction that causes incalculable losses.
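One common hedge against single-region dependence is to replicate data across regions and fall back to a replica when the primary region misbehaves. Below is a minimal, hypothetical sketch that assumes an "orders" DynamoDB global table already replicated to a second region; the table name, key schema, and regions are all illustrative:

```python
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

# Hypothetical setup: "orders" is assumed to be a DynamoDB global
# table replicated in both regions listed below.
REGIONS = ["us-east-1", "us-west-2"]

def get_order(order_id: str) -> dict:
    """Read from the primary region, falling back to the replica."""
    last_err = None
    for region in REGIONS:
        table = boto3.resource("dynamodb", region_name=region).Table("orders")
        try:
            item = table.get_item(Key={"order_id": order_id}).get("Item")
            if item is not None:
                return item
        except (ClientError, EndpointConnectionError) as err:
            last_err = err  # this region is unavailable; try the next
    raise RuntimeError(f"all regions failed: {last_err}")

# Example usage (requires the assumed table to exist):
# print(get_order("1234"))
```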