Backed by Nvidia and Snowflake, Reka is scaling its next-gen foundation models with a focus on efficiency over size
In just over a year, Reka has gone from a 20-person team to a billion-dollar valuation, tripling its worth with a new $110 million funding round backed by Nvidia and Snowflake. At a time when investor attention is tilting toward lean, fast-moving AI companies outside the orbit of Big Tech, Reka’s ascent has positioned it as one of the few independent challengers to entrenched players like OpenAI, Meta, and Anthropic.
Founded in 2022 by former researchers from Google Brain and Meta’s AI labs, Reka was built on the premise that smaller, more efficient models, not necessarily the largest or most expensive ones, would define the next phase of AI. “Very few teams in the world have the capability to build what they’ve built,” Vivek Raghunathan, Snowflake’s VP of AI Engineering, told Bloomberg. “Almost everyone at that level of talent is at OpenAI, Meta or Anthropic. Reka is one of the rare independents, and they’ve proven they can compete.”
Efficient Foundations, Multimodal Reach
The cornerstone of Reka’s product line is Reka Flash, a 21-billion-parameter multimodal model trained to process video, images, text, and audio. Reka has prioritized optimization in both training and inference. The company says its quantization library, Reka Quant, enables near-lossless compression of its models to 3.5-bit precision, making them viable in resource-constrained settings such as edge devices.
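Reka has not published the details of Reka Quant, and fractional bit-widths like 3.5 typically come from mixing precisions across weight groups. As a rough illustration of what low-bit weight quantization means in general (not Reka’s actual method; the function names, the 4-bit setting, and the sample weights below are all invented for this sketch), here is the basic round trip of compressing floats to small integers and reconstructing them:

```python
# Illustrative uniform symmetric quantization. This is NOT Reka Quant's
# algorithm (which is unpublished); it only shows the general idea of
# storing weights at low integer precision and reconstructing them.

def quantize(weights, bits=4):
    """Map floats to signed integer codes in [-(2**(bits-1)-1), 2**(bits-1)-1]."""
    qmax = 2 ** (bits - 1) - 1
    # One scale per tensor: largest magnitude maps to the largest code.
    scale = max(abs(w) for w in weights) / qmax or 1.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Reconstruct approximate float weights from integer codes."""
    return [c * scale for c in codes]

# Hypothetical sample weights, purely for demonstration.
weights = [0.12, -0.7, 0.33, 0.08, -0.41]
q, scale = quantize(weights, bits=4)       # codes fit in 4 bits (-7..7)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)
print(round(max_err, 3))
```

The reconstruction error of a uniform scheme like this is bounded by half the quantization step; "near-lossless" methods add refinements (per-group scales, error correction during quantization) to shrink the accuracy gap further at very low bit-widths.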
Reka Flash 3.1, the most recent version, serves as the engine behind Reka’s two key commercial products: Reka Vision and Reka Research. Vision is built for enterprise customers that need scalable video and image reasoning tools. It is already being used by clients such as Shutterstock and Turing Video.
Reka says both platforms rely on highly optimized reinforcement learning workflows to improve reasoning and adaptability. The company also maintains open-source components, including its quantization tools and evaluation framework, reinforcing its technical credibility in a field often dominated by opaque systems.
Strategy Over Scale
Reka has stayed faithful to foundation model development. That approach initially drew attention from Snowflake, which entered acquisition talks with Reka last year. The deal, reportedly priced at over $1 billion, ultimately fell through. Both companies opted to remain independent, although Snowflake continues to integrate Reka’s models into its AI offerings.
The decision to remain autonomous appears deliberate. CEO Dani Yogatama, formerly of DeepMind, and Chief Scientist Yi Tay, based in Singapore, have kept the company small, emphasizing direct contribution from senior technical staff over layers of management. This hands-on model mirrors the setup at other “frontier labs,” such as OpenAI and DeepMind, where senior researchers remain deeply involved in core product development.
“We do more coding, less meetings,” Tay told Tech in Asia, reflecting on the company’s internal culture and his broader view of what it takes to make breakthroughs in the current AI landscape. In his view, AI development requires a level of technical depth that doesn’t scale well with bureaucracy or delegation.
Reka plans to use the new capital to expand the reach of its multimodal platforms, continue technical development, and hire more engineering talent. Its headcount has already grown to 50, and the company says it will invest further in infrastructure to support broader enterprise adoption.
Reka’s sustained focus on efficient multimodal systems sets it apart from both older incumbents and newer startups chasing parameter counts or niche functionality. Whether that will be enough to carve out lasting relevance in a market dominated by hyperscalers remains to be seen.