For over three decades, Arm's core business model has been licensing silicon intellectual property (IP) and compute subsystems (CSS). That tradition officially ended today (March 24th): Arm made a historic announcement with the launch of the "Arm AGI CPU," the first physical silicon product the company has designed and brought to mass production itself. Tailor-made for AI data centers, the processor targets the rapidly growing infrastructure needs of agentic AI, with Meta as its initial co-development partner. Built on TSMC's 3nm process, the Arm AGI CPU claims more than twice the per-rack computing performance of traditional x86 platforms.

Breaking the IP licensing framework: Why did Arm decide to jump in and make chips itself?
Rumors had circulated for some time, and Arm CEO Rene Haas confirmed last year that the company would launch chips under its own brand; today's event made it official. To understand why Arm took this historic step, it helps to first understand the infrastructure changes brought about by agentic AI.
In his statement, Rene Haas said that AI has completely redefined how computing is built and deployed. In the past, AI infrastructure was heavily focused on model training on GPUs; but as AI applications shift toward continuously running AI agents, these systems must constantly perform inference, planning, coordination, and data transfer, driving an exponential increase in the number of tokens AI systems generate.

It is estimated that when enterprises adopt agent-driven applications at scale, the number of CPUs required per gigawatt (GW) of electricity will increase more than fourfold. Under power constraints, however, the complex architecture and high energy consumption of traditional x86 processors are no longer sufficient to meet this demand.
Therefore, to help partners accelerate the deployment of AI agents, Arm decided to break with its long-standing convention of providing only IP or CSS (compute subsystems) and to launch physical chips under its own brand, giving the market a more flexible, direct hardware option and extending Arm's reach into the broader computing market that the agentic AI trend is opening up.

136 cores on TSMC's 3nm process: More than double the per-rack performance of x86
As its debut product, the Arm AGI CPU shows strong ambition in its hardware specifications and energy efficiency:
• Top-tier core count and bandwidth: Each CPU features up to 136 Arm Neoverse V3 cores, providing 6 GB/s of memory bandwidth per core at a latency of under 100 ns.
• Energy efficiency (TDP): Power consumption is held to 300 watts (TDP), and each software thread gets a dedicated core, ensuring consistent performance under sustained high load and eliminating the waste of frequency throttling and idle execution.
• Ultra-high rack density: Supports high-density 1U server racks. In an air-cooled deployment, each rack accommodates up to 8,160 CPU cores; with a liquid-cooling design, that rises to more than 45,000 cores per rack.
This chip is manufactured by TSMC using its advanced 3nm process. Arm emphasizes that the AGI CPU's performance per rack is more than twice that of traditional x86 architecture CPUs, which means that enterprises can save up to $10 billion in capital expenditure per gigawatt of AI data center deployment.
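Taking the quoted figures at face value, the rack-level numbers can be sanity-checked with a little arithmetic. This sketch uses only the article's claimed specs (136 cores, 6 GB/s per core, 300W TDP, 8,160 air-cooled cores per rack, 45,000+ liquid-cooled cores per rack); none of these are independently measured values.

```python
import math

CORES_PER_CPU = 136    # Neoverse V3 cores per Arm AGI CPU (claimed)
BW_PER_CORE_GBS = 6    # memory bandwidth per core, GB/s (claimed)
TDP_W = 300            # thermal design power per CPU, watts (claimed)

# Aggregate memory bandwidth of one socket
socket_bw_gbs = CORES_PER_CPU * BW_PER_CORE_GBS        # 816 GB/s

# Air-cooled rack: 8,160 cores implies this many CPUs per rack
air_cpus = 8160 // CORES_PER_CPU                        # 60 CPUs
air_rack_kw = air_cpus * TDP_W / 1000                   # 18.0 kW of CPU TDP

# Liquid-cooled rack: "more than 45,000 cores" implies at least this many CPUs
liquid_cpus = math.ceil(45000 / CORES_PER_CPU)          # 331 CPUs
liquid_rack_kw = liquid_cpus * TDP_W / 1000             # ~99.3 kW of CPU TDP

print(socket_bw_gbs, air_cpus, air_rack_kw, liquid_cpus, liquid_rack_kw)
```

Notably, the air-cooled figure works out to exactly 60 CPUs per rack, while the liquid-cooled claim implies roughly 330 or more, with a CPU power budget approaching 100 kW per rack.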

Tech giants voice unanimous support; Meta is the launch partner
Arm's move to directly develop chips did not trigger a strong backlash from existing IP customers; instead, it garnered widespread support from the industry.
Meta is the chip's first partner and co-developer. Santosh Janardhan, Meta's Head of Infrastructure, said Meta will use the Arm AGI CPU to optimize the infrastructure behind its family of apps and pair it with MTIA, Meta's in-house AI accelerator chip, to achieve more efficient compute scheduling in large-scale AI systems. The two companies have also committed to deep, multi-generation collaboration on future product roadmaps.

In addition to Meta, several other companies, including OpenAI, Cerebras, Cloudflare, SAP, and SK Telecom, have also confirmed that they will adopt this chip for core tasks such as accelerator management, control plane processing, and cloud API hosting.
On the hardware side, Arm has already partnered with OEMs and ODMs such as ASRock Rack, Lenovo, Quanta Computer, and Supermicro, and more systems are expected to be launched in the second half of this year.
In addition, more than 50 tech giants, including AWS, Google, Microsoft, NVIDIA (whose CEO Jensen Huang also offered his congratulations), as well as Samsung and SK hynix, have expressed strong support for Arm's expansion into the chip product line.

Analysis
Initially, there were concerns that Arm selling its own chips might create a conflict of interest with major customers like AWS, Google, and Microsoft, which were already using the Arm architecture to develop their own custom CPUs.
In practice, however, Arm has positioned the AGI CPU precisely in the emerging and urgently needed field of agentic AI. For companies like Meta or OpenAI, which need massive numbers of CPUs to feed their AI accelerators but don't necessarily want to invest heavy resources in designing a general-purpose CPU from scratch, buying an off-the-shelf Arm AGI CPU that already pushes Neoverse V3 performance to its limits is the most cost-effective route.
At the same time, this is also a direct strike by Arm at the x86 camp (Intel and AMD) in the data center. When a 300W-TDP Arm chip built on TSMC's 3nm process can pack 136 cores under the same rack and power constraints and deliver twice the per-rack performance of x86 CPUs, x86's critical weakness in the AI era, its poor performance per watt, is further amplified. It also signals that the main computing workhorse of the data center is inevitably shifting to the Arm architecture.


