Mako Secures $8.5M to Speed GPU Programming Across Hardware Platforms

By Mukundan Sivaraj

In AI development, a performance breakthrough often depends less on algorithms than on the code that connects them to the hardware. The specialized “kernel” programs that run on GPUs can determine whether a model operates at full speed or wastes costly computing resources. Writing that code is slow, labor-intensive, and typically done by a small pool of engineers at large tech companies: a constraint that has shaped the economics of AI infrastructure for years.
When Waleed Atallah talks about GPU programming, he frames it as a bottleneck that has both shaped and limited AI’s infrastructure. “GPUs are the workhorses behind modern AI. But writing code for them remains an archaic, manual, and expensive process,” he wrote in announcing his company’s recent seed round. “We’re building an AI system that writes and continuously tunes the low-level GPU code.”
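For readers unfamiliar with the term, a kernel is simply a function written to run in parallel across thousands of GPU threads. The sketch below, a generic CUDA vector-add, is included only to illustrate what that kind of low-level code looks like; it is not Mako’s code or API, and the hard-coded launch parameters are an example of the manual tuning choices Atallah describes.

```cuda
// Illustrative only: a minimal CUDA kernel that adds two vectors.
#include <cuda_runtime.h>
#include <cstdio>

// The "kernel": a function executed in parallel by many GPU threads.
__global__ void vector_add(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // each thread handles one element
    if (i < n) {
        out[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Allocate memory visible to both CPU and GPU, then fill the inputs.
    float *a, *b, *out;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&out, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // The launch configuration (threads per block, number of blocks) is one of
    // many hand-tuned choices that determine how fast a kernel actually runs.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(a, b, out, n);
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```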
