Yann LeCun Calls LLMs ‘Token Generators’ While Llama Hits a Billion Downloads

He suggests that intelligence is about efficiency, not scale, and that current AI models are bloated and inefficient.
At Nvidia’s GTC 2025, Meta’s chief AI scientist, Yann LeCun, dismissed large language models (LLMs) as mere "token generators," arguing that their reliance on discrete tokenized spaces limits them so fundamentally that they will prove ineffective in the long run. LeCun didn’t just criticize LLMs; he predicted their obsolescence, saying that within five years nobody in their right mind would use them anymore.

LeCun’s skepticism is well documented. He argues that modern AI systems fail in four key areas: they lack awareness of the physical world, have limited memory and no continuous recall, are incapable of reasoning, and struggle with complex planning. For him, these are fundamental weaknesses that cannot be solved by simply increasing scale.
Anshika Mathews
Anshika is the Senior Content Strategist for AIM Research. She holds a keen interest in technology and related policy-making and its impact on society. She can be reached at anshika.mathews@aimresearch.co