Silicon hasn’t fundamentally changed shape in decades. Most AI chips are still bound by the limits of traditional packaging, designed to fit inside server racks and built to scale incrementally. Cerebras Systems broke that convention by building a chip the size of a dinner plate.
The move was as functional as it was radical. Cerebras' Wafer Scale Engine, with 850,000 cores and 2.6 trillion transistors, keeps data on-chip and minimizes memory bottlenecks, delivering a performance advantage the company claims is roughly 50 times faster than Nvidia GPUs for inference, the essential workload of AI deployment. Independent benchmarks show that Cerebras' WSE-3 outperforms the latest Nvidia H100 and B200 GPUs in performance per watt and memory scalability.
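The on-chip-memory argument can be made concrete with a back-of-envelope calculation. The sketch below uses published vendor bandwidth figures (roughly 20 PB/s of aggregate on-chip SRAM bandwidth for the WSE-2, roughly 3.35 TB/s of HBM3 bandwidth for an H100) and a hypothetical 13B-parameter fp16 model; these numbers are assumptions for illustration, and the result is only a theoretical bandwidth-bound ceiling, not a measured speedup.

```python
# Back-of-envelope sketch of why keeping weights on-chip helps inference.
# Figures are published vendor numbers used as rough assumptions,
# not measured benchmarks; the ratio is a theoretical upper bound.

WSE2_SRAM_BW = 20e15   # ~20 PB/s aggregate on-chip SRAM bandwidth (Cerebras figure)
H100_HBM_BW = 3.35e12  # ~3.35 TB/s off-chip HBM3 bandwidth (Nvidia figure)

def time_per_token(model_bytes: float, bandwidth: float) -> float:
    """Autoregressive generation is often bandwidth-bound: every new
    token requires streaming the model weights through memory once."""
    return model_bytes / bandwidth

model_bytes = 13e9 * 2  # hypothetical 13B-parameter model in fp16 (2 bytes/param)

wse_t = time_per_token(model_bytes, WSE2_SRAM_BW)
gpu_t = time_per_token(model_bytes, H100_HBM_BW)
ratio = gpu_t / wse_t
print(f"raw bandwidth-bound ceiling ≈ {ratio:.0f}x")
```

Realized end-to-end gains are far smaller than this raw ratio, since real workloads also hit compute, batching, and interconnect limits; that gap is why the company's cited advantage is on the order of 50x rather than thousands.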
Cerebras Built a Chip the Size of a Dinner Plate
- By Anshika Mathews
> "We built a chip the size of a dinner plate while everybody else was building chips the size of a postage stamp."
