AMD announced the fifth-generation EPYC server processor, codenamed Turin, in October this year, alongside the Instinct MI325X artificial intelligence accelerator based on the CDNA 3 architecture to compete with NVIDIA's H200, as well as the new Ryzen AI PRO 300 series processors intended to boost the computing power of workstation models and further strengthen its embedded product lineup. At the AI SOLUTIONS DAY event held in Taiwan, AMD showcased its artificial intelligence solutions and called on market partners to jointly promote the development of artificial intelligence applications.
At the event, AMD and 26 OEMs, ODMs, independent software vendors, and embedded partners showcased how their computing products are driving the application of artificial intelligence technology.
Lin Jiancheng, senior vice president of AMD Taiwan's commercial business unit, said that AMD continues to drive end-to-end artificial intelligence infrastructure innovation, providing a broad solution portfolio for artificial intelligence, cloud, client, and embedded workloads through computing products such as the new EPYC server processors, Instinct accelerators, Pensando DPUs, and Ryzen AI CPUs.
At the same time, AMD continues to advance an open AI industry ecosystem, expanding the ROCm open-source AI software stack with new features, tools, optimizations, and hardware support to help more developers extract maximum performance from Instinct accelerators and meet current AI application development needs.
Rajneesh Gaur, AMD's global vice president and general manager of the Embedded Solutions Group, explained that AMD is driving higher computing performance and execution efficiency in data centers through a broad portfolio of embedded AI solutions and technology architecture.
As edge AI becomes the next wave of innovation, AMD is helping customers bring solutions to market faster with its Versal adaptive SoCs and Alveo accelerator cards, while its EPYC Embedded and Ryzen Embedded processor lines deliver powerful and efficient computing performance for data-intensive workloads.
The fifth-generation EPYC server processor, which officially began shipping this year, has been adopted by OEMs and cloud service providers such as Cisco, Dell, HPE, Lenovo, and Supermicro. It spans a wide range of core counts, from 8 to 192 cores, emphasizing a balance between performance and energy efficiency; AMD claims the top-end 192-core model delivers up to 2.7 times the performance of comparable competitor products.
Taking the 192-core EPYC 9965 processor as an example, compared to Intel's Xeon 8592+ processor, it can increase the speed of commercial applications such as video transcoding by up to 4 times, and shorten the time to insight in scientific and high-performance computing applications by up to 3.9 times. In addition, the per-core performance in virtualized infrastructure is improved by up to 1.6 times.
In end-to-end AI workloads such as TPCx-AI (derivative workloads), the EPYC 9965 processor delivers up to 3.7 times the performance. With small and medium-size enterprise-grade generative AI models such as Meta's Llama 3.1-8B, its data throughput is 1.9 times that of comparable competitor products.
The fifth-generation EPYC 9005 series also includes derivative models built on the Zen 5c architecture. Each CPU supports up to 12 channels of DDR5 memory at speeds up to DDR5-6400 MT/s, supports the AVX-512 instruction set with a full 512-bit data path, and adopts a Trusted I/O design for confidential computing. FIPS certification is currently underway across the series to ensure the security of system operation.
The Instinct MI325X accelerator is expected to enter mass production and shipment in the fourth quarter of 2024, with platform makers such as Dell, HPE, Lenovo, Supermicro, Eviden, and Gigabyte introducing product designs based on it starting in the first quarter of 2025. AMD claims it provides higher computing performance and efficiency in demanding AI tasks such as foundation model training, fine-tuning, and inference, helping customers and partners build more efficient and optimized AI solutions at the system, rack, and data center levels.
AMD also stated that it will boost workstation computing performance through the new Ryzen AI PRO 300 series processors, which provide over 50 TOPS of NPU computing power to drive various commercial workloads and connect cloud and endpoint computing resources for a more complete artificial intelligence application experience.