The world is still a long way off from human-like artificial intelligence (AI), according to experts at a Feb. 24 event held by Facebook parent Meta.
“When you train a system with a data set, the data is collected in a particular way in one place. When you deploy it in a different place and time, [however], it breaks down,” said Yoshua Bengio, a professor at Université de Montréal known for his pioneering work in deep learning, a machine learning technique in which computers learn by example.
In contrast, a human who learns how to drive in North America — where drivers keep to the right side of the road — can adapt to left-hand traffic in London.
“The people are the same, the physics is the same… our brains are structured so that we can separate pieces of knowledge, infer our way around [this one change], and retrain our habits so we can do well in London as well,” Mr. Bengio said.
Yann LeCun, Meta vice president and chief AI scientist, said AI lacks the ability of humans and animals to learn how the world works. Teenagers, he explained, can drive a car with reasonable confidence after only hours of practice. They also know better than to drive off a cliff, something a tabula rasa AI system cannot figure out on its own.
“How do we get AI to accumulate the enormous amounts of background knowledge humans accumulate in the first few months of their lives?” he asked. “In my opinion, this is what constitutes the basis for common sense.”
Deep learning is a possible way out of this conundrum. So is self-supervised learning, in which AI learns from raw, unlabeled data instead of being trained on human-labeled examples.
According to Meta Chief Executive Officer Mark Zuckerberg, self-supervised learning seems closer to how the brain learns, because it compels AI to fill in the missing data and learn abstract representations along the way.
“You don’t need to show a kid thousands of pictures of cats to make a kid learn what a cat is,” he said. “This has become the primary method for AI to learn natural text.”
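The “fill in the missing data” idea can be illustrated with a toy sketch: a model that learns next-word statistics from raw sentences, where the training signal comes from the text itself rather than from human annotation. The corpus and function names below are invented for illustration; this is a minimal sketch of the principle, not Meta’s actual method.

```python
from collections import Counter, defaultdict

def train(sentences):
    # Self-supervised: the "label" for each word is simply the word that
    # follows it in the raw text -- no human-provided labels are needed.
    model = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict(model, prev):
    # "Fill in the missing data": guess the most likely word after `prev`.
    counts = model.get(prev)
    return counts.most_common(1)[0][0] if counts else None

corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the cat sat on the sofa",
]
model = train(corpus)
print(predict(model, "cat"))  # -> sat (seen twice after "cat" vs. once for "chased")
```

Large language models apply the same idea at a vastly larger scale: part of the text is masked or truncated, and the network is trained to predict the missing portion, learning abstract representations of language along the way.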
The key technical question in self-supervised learning, Mr. LeCun added, is how to deal with uncertainty in prediction.
“The type of architecture we have to build is where the prediction doesn’t necessarily have to be at the level [where the system has to predict the next frame by reconstructing all the details], but where useful information is present and irrelevant stuff isn’t,” he said.
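Mr. LeCun’s point about predicting at the right level of abstraction can be sketched numerically: compare predicting every pixel of a noisy “next frame” with predicting a compact feature summary of it. The encoder and the synthetic frames below are invented for illustration only, assuming a copy-the-last-frame predictor and independent per-pixel noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(frame):
    # Hypothetical encoder: average blocks of pixels into coarse features,
    # keeping scene-level information while washing out per-pixel noise.
    return frame.reshape(10, 100).mean(axis=1)

# A synthetic "video": the next frame is the current one plus pixel noise
# that no predictor could ever model exactly.
frame_t = rng.normal(size=1000)
frame_t1 = frame_t + rng.normal(scale=0.5, size=1000)

# Predicting every pixel (using frame_t as the prediction) pays for all
# the unpredictable detail...
pixel_error = np.mean((frame_t1 - frame_t) ** 2)

# ...while predicting in feature space only has to match the abstract,
# predictable summary of the scene.
feature_error = np.mean((encode(frame_t1) - encode(frame_t)) ** 2)

print(pixel_error, feature_error)
```

The feature-space error is orders of magnitude smaller because the irrelevant detail never enters the prediction target, which is the flavor of the architecture Mr. LeCun describes.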
Mr. Bengio said that, architecture-wise, AI can take inspiration from how human brains attend to and reason about new situations, and then integrate this ability into machine learning.
“We can find out how knowledge is represented in a modular way, and how these pieces of knowledge that are reusable can be used on the fly to solve new tasks,” he said.
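Mr. Bengio’s driving example suggests the flavor of such modularity: factor an agent’s knowledge into reusable pieces so that adapting to London means replacing one rule, not relearning everything. The helper and rule names below are purely illustrative, a sketch of the idea rather than any real architecture.

```python
def make_driver(rules):
    # Hypothetical factored agent: shared skills plus swappable local rules.
    def act(situation):
        if situation == "oncoming traffic":
            # Only this decision consults the locale-specific knowledge module.
            return f"keep to the {rules['drive_on']}"
        # Shared skills (physics, braking) are reused unchanged in any country.
        return "brake smoothly"
    return act

north_america = {"drive_on": "right"}
# Adapting to London means swapping one piece of knowledge, not retraining everything.
london = {**north_america, "drive_on": "left"}

print(make_driver(london)("oncoming traffic"))  # -> keep to the left
print(make_driver(london)("obstacle ahead"))    # -> brake smoothly
```

The point of the sketch is the factoring: the locale rule is isolated from the reusable skills, which is what lets the agent “retrain one habit” without disturbing the rest of what it knows.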
Meta is creating what it calls a more immersive version of the Internet through the metaverse. AI has been touted as the most important foundational technology for this ecosystem. — Patricia B. Mirasol