
LAS VEGAS — Artificial intelligence took center stage at CES 2026 as AMD Chair and CEO Lisa Su delivered the event's first official keynote, outlining the company's vision for an AI-powered future that reaches far beyond data centers and research labs.
"AI is the most important technology of the last 50 years, and I can say it's absolutely our number one priority at AMD," Su said Monday evening on the CES stage. "It's already touching every major industry, whether you're going to talk about health care or science or manufacturing or commerce. And we're just scratching the surface. AI is going to be everywhere over the next few years. And most importantly, AI is for everyone."
Su underscored the rapid pace of adoption with a striking comparison. Since the launch of ChatGPT, AI usage has surged from roughly one million users to more than one billion active users worldwide. According to AMD's projections, that number could exceed five billion active users in the coming years — a growth curve far steeper than the early days of the internet.
While AI adoption is accelerating, Su acknowledged a critical challenge: the world does not yet have enough computing power to support everything AI promises to deliver.
"We don't have nearly enough compute for all the things we want to do with AI," she said, setting the stage for AMD's broader strategy.
Rather than focusing on raw performance alone, Su emphasized the importance of tightly integrated systems — CPUs, GPUs, networking, and software — working together to scale AI infrastructure efficiently. That philosophy is embodied in AMD's Helios rack platform.
First revealed in 2025 and showcased again at CES, Helios is AMD's open, modular rack design built on the OCP open rack-wide standard and developed in collaboration with Meta.
"Helios is a monster of a rack," Su said. "This is no regular rack. It's a double-wide design, and it weighs nearly 7,000 pounds."
To put that into perspective, Su noted that the rack weighs more than two compact cars combined — a vivid illustration of the physical scale behind today's AI workloads. The system is designed to bring together high-performance computing, advanced accelerators, and networking in a single, scalable platform capable of supporting the next generation of AI models.
Su was joined on stage by Greg Brockman, president and co-founder of OpenAI, who reinforced a key theme of the keynote: the growing shortage of compute power.
The two briefly joked about Brockman's constant request for more compute before turning serious. Brockman said compute remains one of the biggest barriers to AI's full potential, noting that truly universal AI would require billions of GPUs — far beyond today's infrastructure.
"The world is going to require far more compute than we have right now," Brockman said, underscoring why scaling AI hardware remains a critical challenge for the industry.
Su was later joined by Amit Jain, CEO of Luma AI, who outlined the company's ambition to build multi-modal general intelligence.
"In short, we are modeling and generating worlds," Jain said, as clips from Luma's Ray 3 model played on screen. While the visuals looked like traditional AI video, Jain said Ray 3 can generate clips in 4K and has already been pushed by some customers to produce content as long as a 90-minute feature film.
A new capability, Ray 3 Modify, allows users to edit generated and live-action clips in real time, blending AI output with human performances. Jain also demonstrated how Luma's multi-modal agent can analyze scripts and develop creative ideas, giving individual creators and small teams tools once limited to large Hollywood productions.
According to Jain, around 60% of Luma's growing inference workloads now run on AMD hardware, a partnership he believes gives the company a competitive edge. Looking ahead, he said scaling video models could help simulate real-world physical processes, even supporting advanced engineering tasks such as rocket design. Su added that AMD's next-generation MI500 accelerators, paired with EPYC CPUs, are expected to deliver a 1,000x increase in AI performance over the next four years.
The keynote also featured Ramin Hasani, co-founder and CEO of Liquid AI, who emphasized efficiency as the next frontier of AI development.
Liquid AI is focused on building foundation models that can run across a wide range of hardware platforms. "The goal is to substantially reduce the computational cost of intelligence from first principles," Hasani said.
On stage, he announced Liquid Foundation Model 2.5, a 1.2-billion-parameter model designed for fast, agentic performance. He also previewed LFM 3, set to launch later this year, which will natively support 10 languages, enable real-time audio and visual interaction, and offer enhanced function-calling capabilities.
Together, the guest appearances reinforced AMD's central CES message: scaling AI for everyone will require not just faster hardware, but more efficient models and closer collaboration across the AI ecosystem.
