Silicon hasn’t fundamentally changed shape in decades. Most AI chips are still bound by the limits of traditional packaging, designed to fit inside server racks and built to scale incrementally. Cerebras Systems broke that convention by building a chip the size of a dinner plate.
The move was as functional as it was radical. Cerebras' Wafer Scale Engine (the second-generation WSE-2 packs 850,000 cores and 2.6 trillion transistors) keeps data on-chip and minimizes memory bottlenecks, delivering a performance advantage the company claims is roughly 50 times faster than Nvidia GPUs for inference, the essential workload of AI deployment. Independent benchmarks show the newer WSE-3 outperforming Nvidia's H100 and B200 GPUs on performance per watt and memory scalability.
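Why keeping data on-chip matters comes down to a simple roofline argument: in low-batch inference, generating each token requires streaming every model weight through the compute units once, so token throughput is capped by memory bandwidth divided by model size. The Python sketch below makes the arithmetic concrete; the bandwidth and model-size figures are illustrative assumptions drawn from published spec sheets, not benchmark results, and real systems land well below either ceiling.

```python
# Roofline-style upper bound for batch-1 LLM decoding: each generated token
# must read all model weights once, so throughput <= bandwidth / weight bytes.

def max_tokens_per_second(params_billion: float,
                          bytes_per_param: float,
                          bandwidth_tb_s: float) -> float:
    """Bandwidth-bound ceiling on single-stream decode speed."""
    weight_bytes = params_billion * 1e9 * bytes_per_param
    return bandwidth_tb_s * 1e12 / weight_bytes

# Hypothetical 70B-parameter model stored in 16-bit weights (~140 GB).
PARAMS_B, BYTES_PER_PARAM = 70, 2

hbm_ceiling = max_tokens_per_second(PARAMS_B, BYTES_PER_PARAM, 3.35)
# ~3.35 TB/s: off-chip HBM3 on an H100-class GPU -> roughly 24 tokens/s ceiling

sram_ceiling = max_tokens_per_second(PARAMS_B, BYTES_PER_PARAM, 21_000)
# ~21 PB/s: Cerebras' claimed aggregate on-chip SRAM bandwidth for the WSE-3,
# a ceiling several orders of magnitude higher

print(f"off-chip HBM ceiling : ~{hbm_ceiling:,.0f} tokens/s")
print(f"on-chip SRAM ceiling : ~{sram_ceiling:,.0f} tokens/s")
```

The gap between those two ceilings, not raw FLOPs, is the core of the pitch: when weights never leave the wafer, the memory wall that throttles GPU inference largely disappears.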
