AWS recently unveiled a transformative design at its data center in Oregon, USA: switching from traditional air cooling to a liquid cooling system. The change not only improves data center performance but also offers a new path toward sustainable operation.
Traditional data centers rely on air cooling for heat dissipation. However, with demand for artificial intelligence computing surging, air cooling systems can no longer remove heat effectively at a reasonable cost. Many data centers built for AI workloads have therefore begun introducing liquid cooling, and AWS is no exception.
Air is no longer enough: liquid cooling is the only solution to the AI heat wave
In traditional data center design, "cooling" means blowing large amounts of cold air through servers to remove the heat generated by their operation. However, this model is gradually reaching its limits as the chip density and power consumption required for artificial intelligence computing increase dramatically.
Dave Klusas, senior manager of cooling systems for AWS data centers, stated bluntly: "Our goal isn't to create a comfortable office-like temperature, but to keep the servers from overheating using minimal energy and water resources." For artificial intelligence chips churning through teraflops of computation, however, air convection simply cannot transfer heat fast enough.
In particular, workloads such as training large language models (LLMs) require the centralized deployment of a large number of high-efficiency chips to further increase data exchange speeds and reduce latency. While this arrangement greatly benefits computing performance, it also presents unprecedented heat dissipation challenges.
AWS built a dedicated liquid cooling system, going from whiteboard design to actual deployment in less than a year
AWS chose to design its own solution: a liquid cooling system that contacts the chip directly. A "cold plate" sits on the chip, and liquid circulates through a closed loop, carrying heat away to a backend system where it is rejected before the coolant returns to the chip.
This design not only offers thermal conductivity roughly 900 times that of air, but also operates as a fully closed loop, so it consumes no additional water inside the data center. The coolant can even run at "hot pool" temperatures, avoiding the energy-intensive, fan-driven chilling that air cooling requires.
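To see why a liquid loop removes heat so effectively, the standard sensible-heat relation Q = ṁ·c_p·ΔT can be applied. The sketch below uses illustrative flow-rate and temperature values, not AWS figures; the function name and all numbers are assumptions for demonstration only.

```python
# Sketch: heat carried away by a liquid coolant loop, Q = m_dot * c_p * dT.
# All values are illustrative assumptions, not AWS specifications.

def heat_removed_kw(flow_lpm: float, delta_t_c: float,
                    density_kg_per_l: float = 1.0,
                    cp_j_per_kg_k: float = 4186.0) -> float:
    """Heat removed (kW) by a water-like coolant.

    flow_lpm   -- coolant flow in liters per minute
    delta_t_c  -- temperature rise across the cold plate (deg C)
    """
    mass_flow_kg_s = flow_lpm * density_kg_per_l / 60.0   # L/min -> kg/s
    return mass_flow_kg_s * cp_j_per_kg_k * delta_t_c / 1000.0  # W -> kW

# Example: 10 L/min of water warming 10 deg C across a cold plate
print(round(heat_removed_kw(10, 10), 1))  # ~7.0 kW
```

Water's volumetric heat capacity is several thousand times that of air, which is why even modest flow rates through a cold plate can absorb the full output of a high-power AI accelerator.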
AWS stated that, from conception to implementation, it took only four months to design a prototype and another four months to fully deploy the system, including establishing a supply chain, writing software, and conducting field testing. Its core philosophy is "scalability, adaptability, and flexibility."
Liquid cooling systems are now in mass production and will be gradually introduced into more data centers
This system has now moved from AWS R&D centers to live data center environments. In the coming months, the deployment will be further expanded and flexibly configured to meet the needs of different data centers and applications.
Notably, AWS even developed its own Coolant Distribution Unit (CDU), which it says outperforms comparable products on the market in both performance and efficiency. These efforts are not just technological upgrades; they reflect AWS's commitment to the future of artificial intelligence, efficient computing, and green energy development.
Dave Klusas said, "We've created a liquid cooling system that's precisely deployed, energy-efficient, and cost-effective, and that can be flexibly expanded to meet customer needs in the future."