Human babies have enviable abilities. Although they are completely dependent on their parents for years, they can do some amazing things.
Babies have their own understanding of the physical laws of the world, and they can quickly learn new concepts and languages, even though they absorb only limited information.
(Image: AI-generated)
Even the most powerful AI systems we have today lack these capabilities. For example, the large language models that power systems like ChatGPT are very good at predicting the next word in a sentence, but are nowhere near as good as babies when it comes to common sense.
But what if AI could learn like a baby? AI models are typically trained on huge datasets made up of billions of data points.
Researchers at New York University in the United States wanted to see what these models could do when they were trained on a much smaller dataset.
They used the sights and sounds experienced by a child learning to speak as training data. Surprisingly, the AI model learned a lot. And that's thanks to a curious baby named Sam.
When Sam was six months old, researchers began occasionally strapping a camera to his head, and for the next year and a half he wore it during his daily activities.
According to Cassandra Willyard, the footage he collected allowed researchers to teach neural networks to match words to the objects they represent.
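The core idea behind matching words to objects can be sketched as a toy contrastive setup: image frames and the words heard at the same moment form positive pairs, and the model learns embeddings whose similarity is highest for those pairs. The vectors and object names below are hypothetical, for illustration only; this is not the NYU team's actual model.

```python
import numpy as np

def normalize(x):
    """L2-normalize rows so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Hand-made toy embeddings: three "frames" (ball, cup, dog) and the
# words heard while each frame was seen. Purely illustrative values.
image_emb = normalize(np.array([
    [1.0, 0.1, 0.0],   # frame showing a ball
    [0.0, 1.0, 0.1],   # frame showing a cup
    [0.1, 0.0, 1.0],   # frame showing a dog
]))
word_emb = normalize(np.array([
    [0.9, 0.2, 0.1],   # the word "ball"
    [0.1, 0.9, 0.2],   # the word "cup"
    [0.2, 0.1, 0.9],   # the word "dog"
]))

# Similarity matrix: entry [i, j] scores frame i against word j.
sim = image_emb @ word_emb.T

# Softmax over words; a contrastive loss pushes up the diagonal,
# i.e. the probability of the word actually heard with each frame.
probs = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
loss = -np.log(np.diag(probs)).mean()

print(sim.argmax(axis=1))  # -> [0 1 2]: each frame matches its own word
```

Training would adjust the embeddings to reduce `loss`; here the hand-picked vectors are already roughly aligned, so each frame's best-scoring word is the one that accompanied it.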
Babies have brought us one step closer to teaching computers to learn like humans, and this research is just one attempt to build AI systems as intelligent as we are. Babies are keen observers and excellent learners who have inspired researchers for years.
Babies also learn through trial and error, and as they learn more about the world, they become smarter. Developmental psychologists say babies have an instinct for what will happen next.
For example, they know that a ball is still there even when it is hidden. They also know that the ball is solid, does not suddenly change shape, rolls along an unobstructed path, and does not suddenly teleport elsewhere.
Researchers at Google's DeepMind tried to teach AI systems the same "intuitive sense of physics" by training a model that learns how objects move by focusing on objects rather than individual pixels. They used hundreds of thousands of training videos to teach the model how objects behave.
Theoretically, if a baby is surprised by a ball that suddenly flies out of the window, it is because the way the object moves goes against the baby's understanding of physics.
Researchers at DeepMind also managed to get their AI system to show "surprise" when an object moves in a way that violates how it should move.
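The "surprise" idea can be illustrated with a minimal violation-of-expectation sketch: a model predicts where an object should be next, and surprise is the error between prediction and observation. The constant-velocity rule below is a stand-in assumption for illustration, not DeepMind's actual model.

```python
import numpy as np

def predict_next(prev, curr):
    """Constant-velocity expectation: next = curr + (curr - prev)."""
    return curr + (curr - prev)

def surprise(prev, curr, observed_next):
    """Squared error between the expected and the observed position."""
    pred = predict_next(prev, curr)
    return float(np.sum((observed_next - pred) ** 2))

# A ball rolling smoothly: the observation matches the expectation.
plausible = surprise(np.array([0.0, 0.0]),
                     np.array([1.0, 0.0]),
                     np.array([2.0, 0.0]))

# The ball "teleports": the observation violates the expectation.
implausible = surprise(np.array([0.0, 0.0]),
                       np.array([1.0, 0.0]),
                       np.array([7.0, 5.0]))

print(plausible, implausible)  # 0.0 vs a large value (50.0)
```

A high surprise score flags physically implausible events, which is the same behavioral signature developmental psychologists look for in infants.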
Yann LeCun, a Turing Award winner and chief AI scientist at Meta, believes that teaching AI systems to see the world like children could be the path to smarter systems.
He said that the human brain has a world simulation or "world model" that allows us to intuitively know that the world is three-dimensional and that objects don't really disappear when they leave view.
It lets us anticipate where a bouncing ball or a speeding bike will be a few seconds later. LeCun is currently busy building new AI architectures inspired by the way humans learn.
Today's AI systems excel at specific tasks, such as playing chess or generating text that looks like it was written by a human. But compared to the human brain, these systems fall short.
They lack common sense, struggle to function in a messy world, cannot carry out more complex reasoning, and cannot assist humans as well as they might. Studying how babies learn can help us unlock these abilities.
About the author: Melissa Heikkilä is a senior reporter at MIT Technology Review, where she focuses on artificial intelligence and how it is changing our society. Previously, she wrote about AI policy and politics at Politico. She also worked for The Economist and as a news anchor.
Support: ren
Operations and typesetting: He Chenlong.