The Future of AI Is Wafer Scale, Says Andrew Feldman

Success in wafer-scale computing or any transformative technology demands not just vision and experience, but also the humility to acknowledge when the best path forward wasn’t the one you initially chose.
AI hardware is at an inflection point, and Andrew Feldman sees the shift with absolute clarity. The founder and CEO of Cerebras Systems has long argued that AI’s demand for compute cannot be met by conventional GPU-based architectures. As models grow larger and more complex, the industry must move beyond the constraints of traditional chips. His answer? Wafer-scale computing.

GPUs Are Not Built For AI

For years, Nvidia has dominated AI hardware with its GPU-based ecosystem, which thrives on parallel computing. But GPUs were never built for AI; they were adapted for it. The more AI scales, the clearer the inefficiencies become. Moving data on and off the chip is one of the most power-intensive operations in AI processing, creating bottlenecks that hamper both efficiency and …
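The data-movement point is easiest to see with a rough energy budget. The sketch below is a back-of-envelope illustration only, not Cerebras’s analysis or figures: the per-operation and per-byte energy values, the layer width, and the single-token batch are all assumed for the sake of the example. Under those assumptions it shows why, at small batch sizes, streaming weights on and off the chip can cost far more energy than the arithmetic itself, which is the overhead wafer-scale designs aim to remove by keeping weights on the wafer.

```python
# Back-of-envelope sketch (illustrative assumptions, not measured or vendor data):
# compare the energy spent on arithmetic vs. moving weights across the chip boundary
# for one dense layer applied to a single token.

FLOP_ENERGY_PJ = 1.0        # assumed: ~1 pJ per on-chip FP16 operation
DRAM_BYTE_ENERGY_PJ = 20.0  # assumed: ~20 pJ to move one byte to/from off-chip memory

def energy_breakdown(flops, bytes_moved_off_chip):
    """Return (compute energy, data-movement energy) in joules for a workload."""
    compute_j = flops * FLOP_ENERGY_PJ * 1e-12
    movement_j = bytes_moved_off_chip * DRAM_BYTE_ENERGY_PJ * 1e-12
    return compute_j, movement_j

# Hypothetical layer: a 4096x4096 FP16 weight matrix applied to one token,
# with the weights streamed in from off-chip memory.
d_model = 4096
batch_tokens = 1
flops = 2 * d_model * d_model * batch_tokens   # multiply-accumulates for the matmul
weight_bytes = d_model * d_model * 2           # FP16 weights crossing the chip boundary

compute_j, movement_j = energy_breakdown(flops, weight_bytes)
print(f"arithmetic:     {compute_j * 1e3:.3f} mJ")
print(f"weight traffic: {movement_j * 1e3:.3f} mJ  (~{movement_j / compute_j:.0f}x the arithmetic)")
```

With these assumed numbers, fetching the weights costs roughly twenty times the energy of the arithmetic at a batch of one token; the gap shrinks as batch size grows, but any weight that must leave the chip keeps paying that movement tax.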
Anshika Mathews
Anshika is the Senior Content Strategist for AIM Research. She holds a keen interest in technology, related policy-making, and their impact on society. She can be reached at anshika.mathews@aimresearch.co