The Future of AI is Wafer Scale, Says Andrew Feldman
- By Anshika Mathews

AI hardware is at an inflection point, and Andrew Feldman sees the shift with absolute clarity. The founder and CEO of Cerebras Systems has long argued that AI's demand for compute cannot be met by conventional GPU-based architectures. As models grow larger and more complex, the industry must move beyond the constraints of traditional chips. His answer? Wafer-scale computing.

GPUs Are Not Built For AI

For years, Nvidia has dominated AI hardware with its GPU-based ecosystem, which thrives on parallel computing. But GPUs were never built for AI; they were adapted for it. The more AI scales, the clearer the inefficiencies become. Moving data on and off the chip, one of the most power-intensive operations in AI processing, creates bottlenecks that hamper both efficiency and performance.

Success in wafer-scale computing, or any transformative technology, demands not just vision and experience, but also the humility to acknowledge when the best path forward wasn't the one you initially chose.
