AMD Venice and MI400 debut at CES 2026 with multi-die Zen 6 packaging, delivering up to 256 cores across eight CCDs and a higher-bandwidth datacenter platform.
Tech News Team

AMD Venice and MI400 Debut at CES 2026 with Multi‑Die Zen 6 Packaging
CES 2026 gave AMD a stage to lift the lid on Venice and the MI400 accelerator family. This is the first public look at silicon for both product lines since AMD laid out the basics at its Advancing AI event in June 2025, and it signals a meaningful shift in how AMD approaches datacenter compute. Venice isn't just a faster chip design. It also introduces a new packaging approach that moves beyond the organic-substrate, single-IO-die layout AMD used in EPYC Rome to a package with two IO dies and eight CCDs. The result is a package that can hold up to 256 Zen 6 cores, with each of the eight CCDs carrying 32 cores.
With eight CCDs of 32 cores each, Venice totals 256 cores per package. Each CCD comes in at around 165 mm^2 of N2 silicon, and AMD's cache organization would yield 4 MB of L3 per core if the ratio holds. That means each CCD would host 32 Zen 6 cores and 128 MB of L3 cache, plus a die-to-die interface for CCD-to-IO-die communication. Across all eight CCDs, the on-package L3 reaches a full gigabyte (8 × 128 MB), a cache strategy built to feed large parallel workloads. This is the first public glimpse of the granular scale AMD is aiming for with Zen 6 in a multi-die Venice setup.
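The arithmetic behind those figures is simple enough to sketch. The constants below come straight from the article's reported numbers (eight CCDs, 32 cores per CCD, 4 MB of L3 per core); everything else is plain multiplication:

```python
# Sketch of Venice's reported core/cache arithmetic (figures as reported at CES 2026).
CCDS_PER_PACKAGE = 8      # eight Zen 6 CCDs per package
CORES_PER_CCD = 32        # 32 cores per CCD
L3_PER_CORE_MB = 4        # 4 MB of L3 per core, if the reported ratio holds

cores_per_package = CCDS_PER_PACKAGE * CORES_PER_CCD       # 256 cores
l3_per_ccd_mb = CORES_PER_CCD * L3_PER_CORE_MB             # 128 MB per CCD
l3_per_package_mb = CCDS_PER_PACKAGE * l3_per_ccd_mb       # 1024 MB = 1 GB on-package L3

print(cores_per_package, l3_per_ccd_mb, l3_per_package_mb)  # 256 128 1024
```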
The packaging change matters because Venice moves away from the single-IO-die approach AMD used with EPYC Rome. The new arrangement puts more emphasis on how the CCDs talk to the IO dies, a strategy that aligns Venice with the high-density, multi-die packaging seen in top-end accelerators. In public presentations, AMD described Venice as moving away from Rome-era organic-substrate interconnects toward a more advanced packaging model, one that echoes the die-to-die connectivity seen in the Strix Halo and MI250X families. The architectural implications are clear: more surface area for interconnects, potentially higher memory bandwidth to feed the cores, and a platform designed for more aggressive parallelism at the socket level.
Alongside Venice, AMD introduced the MI400 family, its datacenter accelerators for the same generation. The combination of Venice CPUs with MI400 accelerators signals AMD's push toward tighter CPU–accelerator collaboration in the same platform, a pattern that OEMs such as HPE and Dell, along with the hyperscalers, will likely embrace for HPC and datacenter workloads that demand sustained compute throughput. For readers tracking the broader market, this lines up with AMD's datacenter strategy of fusing compute engines into cohesive platforms rather than relying on discrete, loosely coupled components. You can read more about AMD's overall server strategy on its official pages for EPYC server processors and Instinct accelerators.
If you're following the coverage scene, Chips and Cheese already laid out the CES 2026 unveiling in detail, including the Venice core organization and the MI400 strategy. For those who want the initial reporting, see the direct write-up in Chips and Cheese's CES 2026 coverage. The reporting highlights the core counts, the multi-die packaging, and the 165 mm^2 CCDs that make up the Venice package. For a broader industry perspective on how these moves fit into the current enterprise server hardware picture, Ars Technica remains a good barometer of server-class hardware shifts.
So what does this mean for developers and system builders? Venice is a platform with a bigger on-package cache and a denser core layout across eight CCDs, but it also introduces a new packaging topology and two IO dies to manage. That translates into potentially higher memory bandwidth and more aggressive parallelism, but also new considerations for motherboard design, socket compatibility, interconnect topology, and thermal/power planning. The MI400 accelerators add another axis to consider: heterogeneous compute is a first-class part of the datacenter stack, not an afterthought. In practice, you'll be watching for platform-level details like PCIe and interconnect bandwidth, memory subsystem integration, and software stacks that can exploit Zen 6 alongside accelerator engines.
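To make the topology-awareness point concrete, here is a minimal, hypothetical sketch of how scheduling code might bin logical CPUs into per-CCD groups for thread pinning on a 256-core part. The 32-cores-per-CCD figure comes from the article; the helper function and the assumption of contiguous CPU IDs per CCD are illustrative only, and real software should query the OS topology (e.g. /sys on Linux, or hwloc) rather than assume a layout:

```python
# Hypothetical sketch: partition logical CPU IDs into per-CCD groups for pinning.
# Assumes CPU IDs are contiguous within a CCD, which real firmware may not guarantee.

def group_cpus_by_ccd(cpu_ids, cores_per_ccd=32):
    """Chunk a sorted list of CPU IDs into groups of cores_per_ccd."""
    cpus = sorted(cpu_ids)
    return [cpus[i:i + cores_per_ccd] for i in range(0, len(cpus), cores_per_ccd)]

# Simulate a 256-core Venice-style package (8 CCDs x 32 cores).
groups = group_cpus_by_ccd(range(256))
print(len(groups), len(groups[0]))  # 8 32
```

A scheduler built this way could keep a worker pool's threads within one CCD to stay inside a single L3 slice, spilling to further CCDs only as the pool grows.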
Looking ahead, Venice and MI400 set the tone for AMD’s next-gen datacenter ambitions. If the platform ships with the expected performance and power characteristics, it could shift how hyperscalers and enterprise customers design their servers and HPC clusters. The real proof will come in quantified benchmarks and real-world workloads, but the CES 2026 reveal shows AMD pushing a tightly coupled CPU and accelerator strategy with packaging and cache designed to maximize parallel throughput. In the near term, expect motherboard vendors and system integrators to start mapping out Venice-compatible platforms and MI400-enabled configurations as we approach the formal product launch window. The question is how quickly software and compilers can fully exploit Zen 6's multi-die topology and the MI400's accelerator on-ramp, and whether rivals will respond with equivalent packaging and integration innovations.