Artificial intelligence (AI) remains at the center of innovations showcased at CES 2026 in Las Vegas. Among them, chipmaker AMD outlined how its AI products are being used across data centers, personal computers, and embedded systems, as partners apply the technology to research, healthcare, space, and industrial use cases.

Partners including OpenAI, Luma AI, Liquid AI, World Labs, Blue Origin, Generative Bionics, AstraZeneca, Absci, and Illumina shared how they are using AMD processors and accelerators to support AI training, inference, and data analysis in real-world settings.

“At CES, our partners joined us to show what’s possible when the industry comes together to bring AI everywhere, for everyone,” said Dr. Lisa Su, chair and CEO of AMD, during her opening keynote at CES 2026. “As AI adoption accelerates, we are entering the era of yotta-scale computing, driven by unprecedented growth in both training and inference.” She added that the company is building the compute foundation for this next phase of AI through end-to-end technology leadership, open platforms, and deep co-innovation with partners across the ecosystem.

AMD said global AI computing capacity is expected to grow sharply over the next five years as more organizations deploy large AI models. To support this growth, the company is focusing on large-scale systems that combine processors, accelerators, and networking into unified platforms rather than relying only on faster chips.

One of these systems is the AMD “Helios” rack-scale platform, which the company described as a reference design for large AI infrastructure. A single rack can deliver up to 3 AI exaflops of performance and is designed for training very large models. The platform uses AMD Instinct MI455X accelerators, AMD EPYC “Venice” CPUs, and AMD Pensando “Vulcano” network interface cards, connected through the AMD ROCm software platform.

The company also unveiled the full AMD Instinct MI400 Series accelerator portfolio and previewed the next-generation MI500 Series GPUs. A new product in the MI400 line is the AMD Instinct MI440X GPU, designed for on-premises enterprise AI use. It supports training, fine-tuning, and inference in an eight-GPU setup that fits into existing data center infrastructure.

The MI440X builds on the AMD Instinct MI430X GPUs, which target scientific computing, high-performance computing, and sovereign AI workloads. The MI430X GPUs are set to power systems such as the Discovery supercomputer at Oak Ridge National Laboratory and the Alice Recoque system, France’s first exascale supercomputer.

The company said the MI500 Series GPUs, planned for launch in 2027, are designed to deliver a large increase in AI performance compared with earlier generations. The chips are based on AMD’s next-generation CDNA 6 architecture, a 2-nanometer process, and HBM4E memory.

Beyond data centers, AMD highlighted new products for AI-enabled PCs. The Ryzen AI 400 Series and Ryzen AI PRO 400 Series processors include a neural processing unit capable of 60 trillion operations per second and support AMD ROCm for scaling AI workloads from the cloud to local devices. Systems using these chips are expected to ship starting January 2026.

The company also introduced Ryzen AI Embedded processors for edge applications such as automotive systems, healthcare devices, and robotics. The new P100 and X100 Series processors are designed to deliver AI processing in space- and power-constrained environments.
