Cerebras Built a Chip the Size of a Dinner Plate

We built a chip the size of a dinner plate while everybody else was building chips the size of a postage stamp.
Silicon hasn’t fundamentally changed shape in decades. Most AI chips are still bound by the limits of traditional packaging, designed to fit inside server racks and built to scale incrementally. Cerebras Systems broke that convention by building a chip the size of a dinner plate. The move was as functional as it was radical.

Cerebras’ Wafer Scale Engine, with 850,000 cores and 2.6 trillion transistors, keeps data on-chip and minimizes memory bottlenecks, delivering a performance advantage the company claims is roughly 50 times faster than Nvidia GPUs for inference, the essential workload of AI deployment. Independent benchmarks show that Cerebras’ WSE-3 outperforms Nvidia’s latest H100 and B200 GPUs in performance per watt and memory scalability.
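To see why keeping data on-chip matters, a back-of-the-envelope roofline estimate is useful: in autoregressive inference, generating each token typically requires streaming the model’s weights from memory once, so achievable tokens per second are capped by memory bandwidth divided by model size. The sketch below illustrates this with assumed, ballpark figures (the bandwidth and model-size numbers are illustrative assumptions, not official Cerebras or Nvidia specifications):

```python
def decode_tokens_per_sec(mem_bandwidth_bytes_per_s: float, model_bytes: float) -> float:
    """Upper bound on single-stream decode speed when every generated
    token must read the full weight set from memory once."""
    return mem_bandwidth_bytes_per_s / model_bytes

# Illustrative assumptions, not vendor specs:
GPU_HBM_BW = 3.35e12    # ~3.35 TB/s, ballpark off-chip HBM bandwidth
WSE_SRAM_BW = 2.1e16    # ~21 PB/s, ballpark on-chip SRAM bandwidth
MODEL_BYTES = 70e9 * 2  # hypothetical 70B-parameter model at FP16

gpu = decode_tokens_per_sec(GPU_HBM_BW, MODEL_BYTES)
wse = decode_tokens_per_sec(WSE_SRAM_BW, MODEL_BYTES)
print(f"GPU memory-bound ceiling: {gpu:,.0f} tokens/s")
print(f"WSE memory-bound ceiling: {wse:,.0f} tokens/s")
```

Real-world speedups are far smaller than this raw bandwidth ratio suggests, since compute throughput, interconnect, and batching all bind too; the estimate only shows why on-chip memory shifts the bottleneck.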
Anshika Mathews
Anshika is the Senior Content Strategist for AIM Research. She holds a keen interest in technology, related policy-making, and their impact on society. She can be reached at anshika.mathews@aimresearch.co