Melanie Mitchell, an AI researcher at the Santa Fe Institute, is also excited about an entirely new approach. “We haven’t seen this come out of the deep learning community very often,” she says. She also agrees with LeCun that large language models cannot be the whole story. “They lack memory and internal models of the world that are really important,” she says.
However, Natasha Jaques, a researcher at Google Brain, believes that language models still have a role to play. It’s strange that language is entirely missing from LeCun’s proposals, she says: “We know that large language models are super effective and contain a lot of human knowledge.”
Jaques, who is working on ways for AIs to share information and skills, points out that people don’t need to have direct experience with something to learn about it. We can change our behavior simply by hearing something, such as not touching a hot pan. “How do I update this world model that Yann is proposing if I don’t have a language?” she asks.
There is another problem. If they worked, LeCun’s ideas would create a powerful technology that could be as transformative as the Internet. And yet his proposal does not discuss how his model’s behaviors and motivations would be controlled, or who would control them. This is a strange omission, says Abhishek Gupta, the founder of the Montreal AI Ethics Institute and a responsible AI expert at Boston Consulting Group.
“We should think more about what it takes for AI to function properly in a society, including thinking about ethical behavior,” Gupta says.
Jaques also notes that LeCun’s proposals are, for now, ideas rather than practical applications. Mitchell says the same thing: “There’s certainly little risk of this becoming a human-level intelligence anytime soon.”
LeCun would agree. His goal is to sow the seeds for a new approach in the hope that others will build on it. “This is something that will take a lot of effort from a lot of people,” he says. “I’m bringing this out because I think it’s the right way to go in the end.” If nothing else, he wants to convince people that big language models and reinforcement learning aren’t the only ways forward.
“I hate to see people wasting their time,” he says.