AMD to broaden and specialize EPYC CPUs, already working on Zen 7 architecture
15 hours ago / 12-minute read
Source: Tom's Hardware
AMD plans to expand its portfolio of CPUs for data centers that will be targeted at different workloads.

AMD hints at a broader Zen 6, Zen 7 server CPU lineup optimized for a wide range of AI and hyperscale workloads. (Image credit: AMD)


Modern data center workloads are very diverse, and so are the requirements of data center operators for their hardware, which is why virtually all hyperscale cloud service providers nowadays have their own custom silicon programs. In a bid to stay competitive in the coming years, AMD plans to expand its portfolio of CPUs for data centers that will be targeted at different workloads.

With the Zen 4-based 4th Generation EPYC family, AMD offers a variety of processor SKUs aimed at AI, cloud, enterprise, network/edge, and small business/hosted service providers. With Zen 5, however, the family is somewhat narrower. AMD's messaging today suggests that the company is moving toward more segmented EPYC products, including workload-specific SKUs, potentially different core/cache/interconnect configurations, and CPUs tailored for inference clusters, orchestration, low-latency AI tasks, and GPU-heavy deployments. Specifically, AMD CEO Lisa Su hinted that this expansion extends beyond Venice, to the Zen 7 and probably Zen 8 microarchitectures.

"We are working with customers right now on beyond Venice and what we are doing in those architectures," Su said.

"[The industry] is going to need a broad portfolio of CPUs, not all CPUs are the same," Su said during the company's earnings call with financial analysts and investors. "Frankly, you are going to need different CPUs for whether you are talking about general purpose operations or you are talking about head nodes or you are talking about agentic AI tasks."

During the Q&A, Su repeatedly emphasized that AMD no longer sees server CPUs as a single homogeneous category. Instead, the company now views the market as split into multiple workload-specific segments, including general-purpose compute, CPU head nodes for accelerators, and CPUs optimized for agentic AI workloads. However, AMD plans to offer differentiation even within these categories to address the particular needs of its customers more precisely.

"What we have been focused on is building, not just one type, but […] throughput optimized, power optimized, cost optimized, and AI infrastructure optimized [models] as we have done in the Venice family," Su said.

Indeed, for its 6th Generation EPYC processors based on the Zen 6 microarchitecture, AMD plans to offer CPUs codenamed Venice with up to 256 cores for general-purpose servers, as well as processors codenamed Verona for AI infrastructure (previously, AMD had only introduced Verona as the CPU that will power its next-generation rack-scale AI solutions). We have yet to learn whether CPUs aimed at agentic AI workloads will use separate silicon configurations or re-use what was originally intended for general-purpose servers, but with different clock or cache configurations.

"The Venice family spans a broad set of CPUs optimized for throughput, performance per watt, and performance per dollar, including Verona, our first EPYC CPU purpose-built for AI infrastructure," Su said.

Considering that AMD now expects the server CPU total addressable market to grow at a 35% compound annual growth rate and reach $120 billion by 2030, developing specialized models may well be justified, even though CPU development in general, and CPU implementation on leading-edge nodes in particular, has become especially expensive in recent years.
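For scale, those two figures together imply a market of roughly $27 billion today. A quick back-of-the-envelope check, assuming a 2025 baseline year (our assumption; AMD's statement gives only the 2030 endpoint and the growth rate):

```python
# Back-of-the-envelope check of AMD's server CPU TAM projection:
# a $120B market in 2030 growing at a 35% CAGR.
# The 2025 baseline year is an assumption, not part of AMD's statement.
tam_2030 = 120.0      # billions of USD, AMD's stated 2030 TAM
cagr = 0.35           # 35% compound annual growth rate
years = 2030 - 2025   # assumed five-year compounding window

implied_2025_tam = tam_2030 / (1 + cagr) ** years
print(f"Implied 2025 TAM: ${implied_2025_tam:.1f}B")  # → about $26.8B
```

Even under that assumption, the projection implies the market more than quadrupling in five years, which helps explain why AMD considers the cost of extra workload-specific silicon variants worth bearing.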

So, while AMD did not formally announce any new CPU categories, its chief executive clearly signaled an ongoing expansion and specialization of EPYC offerings around AI infrastructure and other segments of the market.
