At Snapdragon Summit 2025, held in Hawaii this year, HUMAIN, a subsidiary of Saudi Arabia's Public Investment Fund, and Qualcomm, which had previously announced a partnership to advance agentic AI, unveiled a transformative cooperation agreement to jointly deploy advanced AI infrastructure in Saudi Arabia and provide AI inference services worldwide. Qualcomm also used the occasion to introduce its next-generation solutions for data center AI inference: the Qualcomm AI200 and AI250 accelerator cards and their corresponding rack designs.
The new collaboration agreement between HUMAIN and Qualcomm emphasizes that the two parties will create the world's first fully optimized "edge-to-cloud hybrid AI" service, further positioning Saudi Arabia as a global artificial intelligence hub.
This collaboration was announced on the eve of this year's Future Investment Initiative (FII) conference and builds on the initial agreement the two companies announced at the US-Saudi Investment Forum in May.
Targeting 200 MW of computing power with Qualcomm's next-generation solutions
Under the agreement, HUMAIN aims to deploy a total of 200 MW of Qualcomm AI200 and AI250 rack-scale solutions starting in 2026. This hardware, optimized for AI inference, will deliver high-performance inference services to businesses and government organizations in Saudi Arabia and worldwide, with an emphasis on industry-leading performance per watt and total cost of ownership (TCO).
Strategic Goal: Combine local models with Qualcomm hardware to create a national AI blueprint
This initiative, considered a key step in the development of Saudi Arabia's tech ecosystem, combines HUMAIN's local infrastructure and full AI technology stack expertise (including its proprietary ALLaM large language model) with Qualcomm's global leadership in AI and semiconductors. Together, the two parties aim to create a blueprint for building national AI capabilities, encompassing the entire chain from data center operations to commercial AI services.
“Through Qualcomm’s world-class AI infrastructure solutions, we are shaping the foundation of Saudi Arabia’s AI future,” said Tareq Amin, CEO of HUMAIN. “This collaboration combines HUMAIN’s regional insights and unique full AI stack capabilities with Qualcomm’s unmatched semiconductor technology to jointly lead Saudi Arabia in the next wave of global AI and semiconductor innovation.”
“By establishing a state-of-the-art AI datacenter powered by Qualcomm’s leading inference solutions, we are helping Saudi Arabia build a technology ecosystem that can accelerate its AI ambitions,” said Cristiano Amon, president and CEO of Qualcomm. “We are laying the foundation for transformative AI-driven innovation for businesses, governments, and communities across the region and around the world.”
Qualcomm simultaneously unveils details of its AI200/AI250 inference solutions
In conjunction with this collaboration, Qualcomm also unveiled its next-generation solutions designed for AI inference in data centers – the Qualcomm AI200 and AI250 chip accelerator cards, as well as the corresponding rack design.
Qualcomm AI200 (expected to be commercially available in 2026):
• Designed specifically for rack-scale AI inference, targeting large language models (LLMs) and multimodal models (LMMs).
• Emphasizes low total cost of ownership and optimized performance.
• Each card supports up to 768GB of LPDDR memory, providing high capacity and cost-effectiveness.
Qualcomm AI250 (expected to be commercially available in 2027):
• First to adopt an innovative memory architecture based on “near-memory computing”.
• Claimed to provide more than 10 times the effective memory bandwidth while significantly reducing power consumption.
• Supports disaggregated AI inference to improve hardware utilization efficiency.
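Disaggregated inference generally means splitting the compute-bound "prefill" stage (processing the whole prompt) from the memory-bandwidth-bound "decode" stage (generating tokens one at a time) so that each stage can run on separately sized and scheduled hardware pools. The toy Python sketch below illustrates only the general technique; the worker functions, data structures, and trivial stand-in "model" are invented for illustration and are not Qualcomm APIs.

```python
# Toy sketch of disaggregated inference (illustrative only, not a Qualcomm API).
from dataclasses import dataclass, field

@dataclass
class Request:
    prompt_tokens: list[int]
    max_new_tokens: int
    kv_cache: list[int] = field(default_factory=list)   # stand-in for attention KV state
    output_tokens: list[int] = field(default_factory=list)

def prefill_worker(req: Request) -> Request:
    """Compute-bound stage: process the whole prompt in one pass,
    producing the KV cache that is handed off to a decode worker."""
    req.kv_cache = list(req.prompt_tokens)  # toy: cache simply mirrors the prompt
    return req

def decode_worker(req: Request) -> Request:
    """Memory-bandwidth-bound stage: generate tokens autoregressively,
    reading the entire KV cache on every step."""
    for _ in range(req.max_new_tokens):
        next_tok = (sum(req.kv_cache) + len(req.output_tokens)) % 100  # toy "model"
        req.output_tokens.append(next_tok)
        req.kv_cache.append(next_tok)
    return req

# Disaggregated pipeline: a prefill pool feeds a separate decode pool,
# so each pool can be provisioned for its own bottleneck.
requests = [Request([1, 2, 3], max_new_tokens=2), Request([7], max_new_tokens=3)]
done = [decode_worker(prefill_worker(r)) for r in requests]
```

In a real serving system the hand-off between the two pools carries the KV cache over the interconnect, which is why the rack-level fabric (PCIe for scale-up, Ethernet for scale-out, as described below) matters for this technique.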
Both rack-scale solutions utilize direct liquid cooling, support PCIe scale-up and Ethernet scale-out, and include built-in confidential computing capabilities to ensure the security of AI workloads.
Qualcomm emphasized that it offers a rich software stack that is seamlessly compatible with mainstream AI frameworks and the Hugging Face ecosystem, along with tools such as one-click model deployment, aiming to simplify onboarding for developers and enterprises. Qualcomm also pledged to maintain an annual update cadence for its data center AI inference product line.