Has AI research gone on the wrong path?
I'm a Yann LeCun fan, so you know what I'm about to say.
My viewpoint aligns with Yann LeCun's on the limitations of current AI research. At a deep learning conference, LeCun shared a telling example: a human with no prior knowledge of cars can learn to drive in just 20 hours (I spent 7 hours learning how to drive, lol). Meanwhile, AI requires billions of data points and millions of reinforcement learning iterations in simulated environments, and even after all that, it still can't match the reliability of human driving. Why?
LeCun attributes this to the shortcomings of autoregressive models. These systems rely on statistical predictions and lack a fundamental understanding of the physical world. AI doesn’t truly "know" what traffic lights, crosswalks, or obstacles are—it merely models probabilistic relationships and selects the most likely answer based on training data. AI might *appear* to be quite smart, but it’s essentially just "guessing" answers. In contrast, the human brain doesn’t rely on brute force. It "understands" concepts like traffic lights and crosswalks and activates only a small subset of relevant neurons during driving. This makes it energy-efficient, fast, and remarkably accurate.
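To make "guessing" concrete, here's a minimal sketch of autoregressive generation. The probability table and its numbers are purely illustrative stand-ins for a trained network: at each step the model scores candidate next tokens and emits the likeliest one, and nothing in the process requires knowing what a traffic light *is*.

```python
# Toy autoregressive "model": a lookup table of conditional probabilities
# P(next_token | current_token). The numbers are invented for illustration;
# a real LLM computes this distribution with a neural network.
COND_PROBS = {
    "light": {"turns": 0.5, "is": 0.3, "pole": 0.2},
    "turns": {"red": 0.6, "green": 0.3, "blue": 0.1},
    "red":   {".": 0.7, "and": 0.3},
}

def generate(token: str, max_steps: int = 3) -> list[str]:
    """Greedily emit the statistically likeliest continuation, token by token."""
    output = [token]
    for _ in range(max_steps):
        dist = COND_PROBS.get(output[-1])
        if dist is None:  # no statistics for this context, so stop
            break
        # No understanding anywhere: just pick the highest-probability token.
        output.append(max(dist, key=dist.get))
    return output

print(generate("light"))  # ['light', 'turns', 'red', '.']
```

The point of the toy is only that the selection rule is statistical: swap the numbers in the table and the "answer" changes, with no world knowledge ever consulted.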
To address this gap in understanding and reasoning, LeCun proposed world models, which aim to give AI a deeper grasp of its environment. I'm not optimistic, though: symbolic AI pursued a similar goal for decades without a major breakthrough. Our brains were optimized by millions of years of evolution. While artificial general intelligence (AGI) doesn't necessarily have to mimic the brain, it must tackle two challenges: solving known problems (learning) and addressing the unknown (creativity). The latter requires the ability to discern patterns from small amounts of data, because uncharted territory won't always provide vast datasets for training. Current models like GPT are not equipped for this.
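For a flavor of what "world model" means, here's a deliberately tiny sketch, my own illustration using a made-up one-dimensional world rather than anything from LeCun's actual proposals (such as JEPA): the agent maintains an internal model that predicts the next state of the environment and learns from its own prediction error, with no external labels involved.

```python
import random

# Hidden "physics" of a one-dimensional toy world, unknown to the agent:
# an action a shifts the state by 2*a.
def environment(state: float, action: float) -> float:
    return state + 2.0 * action

class WorldModel:
    """Internal model s' ≈ s + w*a, where w is learned from prediction error."""
    def __init__(self) -> None:
        self.w = 0.0  # the agent's initial guess about how actions move the world

    def predict(self, state: float, action: float) -> float:
        return state + self.w * action

    def update(self, state: float, action: float, observed: float, lr: float = 0.1) -> None:
        error = observed - self.predict(state, action)  # the "surprise"
        self.w += lr * error * action                   # adjust to reduce future surprise

model = WorldModel()
state = 0.0
for _ in range(200):
    action = random.uniform(-1.0, 1.0)
    next_state = environment(state, action)  # act, then observe what happened
    model.update(state, action, next_state)  # learn from prediction error alone
    state = next_state

print(f"learned action effect: {model.w:.2f} (true value: 2.0)")
```

The design point is that the learning signal is surprise: the agent improves its model of how the world responds to its actions, which is closer in spirit to world-model learning than to next-token prediction.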
Let's look at another example: a pedestrian carrying a large painting across the street. A human driver instantly recognizes the person and isn't confused by the painting's contents, no matter how realistic they are. This ability, known in psychology as intuitive biology, doesn't take decades to develop. Infants as young as 9–12 months begin to understand that objects in the world are distinct entities whose parts move together; even when part of an object is out of sight, it's still perceived as a whole.
How much data does an infant need to learn this? Let's calculate. A three-year-old child is awake for roughly 4,000 hours a year and processes visual information at roughly 20 megabytes per second, which works out to about 10^14 bytes of raw input per year. In comparison, large AI models are trained on something like 100 trillion tokens, also on the order of 10^14 data points. The volumes are comparable, yet even with this vast amount of data, AI cannot achieve the intuitive understanding of an infant.
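Spelling out the back-of-the-envelope arithmetic (the 20 MB/s visual-bandwidth figure and the token count are the rough estimates used above):

```python
# Raw visual input reaching a child over ~4,000 waking hours
hours = 4_000
bytes_per_sec = 20e6                       # ~20 MB/s of visual input
child_bytes = hours * 3_600 * bytes_per_sec
print(f"child: ~{child_bytes:.0e} bytes")  # ~3e14, on the order of 10^14

# Training data of a large language model, for comparison
llm_tokens = 100e12                        # ~100 trillion tokens, i.e. ~10^14
print(f"LLM:   ~{llm_tokens:.0e} tokens")
```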
This points to a crucial difference: human intelligence relies on efficient algorithms rather than raw computational power or data. The human cerebral cortex has about 14 billion neurons; a chimpanzee's has around 9 billion, an elephant's around 56 billion, and a blue whale's around 146 billion. Humans are superior not in computational capacity but in how they use it. Likewise, an eagle's vision far surpasses ours, taking in over 100 megabytes of visual data per second, enough to locate prey with ease. Humans lack such sensory bandwidth but excel at deriving meaning from limited information.
This is why LeCun argues that scaling GPUs and parameters to build larger models is a dead end. The key to AGI lies in uncovering the algorithms that enable the human brain’s efficiency. As LeCun says, AGI development will depend on scientific breakthroughs, not just more GPUs or data.
Only when AI can solve problems humans cannot, say by proving the Goldbach conjecture, will we be able to say that AGI is truly on its way. So how exactly does the human brain's algorithm work? I have no idea. But uncovering it should be the focus of AI research in the years ahead.