Yann LeCun Calls LLMs ‘Token Generators’ While Llama Hits a Billion Downloads
- By Anshika Mathews

At Nvidia’s GTC 2025, Meta’s chief AI scientist, Yann LeCun, dismissed large language models (LLMs) as mere "token generators," suggesting that their reliance on discrete tokenized spaces imposes limitations that will render them ineffective in the long run. LeCun didn’t just criticize LLMs; he predicted their obsolescence within the next five years, saying that nobody in their right mind would use them anymore.
LeCun’s skepticism is well-documented. He argues that modern AI systems fail in four key areas: they lack awareness of the physical world, have limited memory and no continuous recall, are incapable of reasoning, and struggle with complex planning. For him, these are fundamental weaknesses that cannot be solved by simply increasing scale.
He suggests that intelligence is about efficiency, not scale, and that current AI models are bloated and inefficient.
