Using neuroscience to create learning machines

Most AI systems these days have a learning component to them, and I've touched on the ways in which systems learn a few times previously. One of the more interesting approaches aims to mimic the way humans learn.

Such approaches have their roots in a theory first published in 1995, which proposed that learning rests on two complementary systems. The first system acquires knowledge gradually, based on our exposure to new experiences. The second stores each of these experiences so that they can be replayed and effectively integrated into the first. The theory has been a bedrock of subsequent research into neural networks.

One of the authors of that original paper has now teamed up with researchers from Stanford and DeepMind to update the theory based upon the latest thinking on the topic.

“The evidence seems compelling that the brain has these two kinds of learning systems, and the complementary learning systems theory explains how they complement each other to provide a powerful solution to a key learning problem that faces the brain,” they say.

Learning systems

The first system has many similarities with the deep neural networks used in AI today, as both contain multiple layers of neurons between the input and the output of the network. The knowledge in the network is therefore stored in the connections between the nodes within it.

These connections are adjusted gradually over time, based on the experience of the system, allowing it to do things such as recognize speech, understand language and recognize objects.
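To make that concrete, here's a minimal sketch in Python (my own toy example, not the model from the paper) of a small two-layer network whose knowledge lives entirely in its connection weights, nudged a little at a time over repeated exposure to a handful of examples.

```python
import numpy as np

# Minimal sketch (not the model from the paper): a tiny two-layer network
# whose "knowledge" lives entirely in its connection weights, adjusted a
# little at a time over many exposures to the training examples.

rng = np.random.default_rng(0)

# XOR-style toy task
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The connections between layers are where the learned knowledge is stored
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass: input -> hidden layer -> output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagate the error and adjust every connection slightly
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # gradually approaches the targets [0, 1, 1, 0]
```

Nothing is stored anywhere except in those weights, which is exactly why the next problem arises.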

The challenge for such systems comes when new things have to be learned. A large influx of new information can distort the network enough to overwrite the knowledge it already stores.

“That’s where the complementary learning system comes in,” the authors say. “By initially storing information about the new experience in the hippocampus, we make it available for immediate use and we also keep it around so that it can be replayed back to the cortex, interleaving it with ongoing experience and stored information from other relevant experiences.”

The researchers believe the two-system approach overcomes this disruption by supporting both immediate learning and the gradual integration of that learning into the structured knowledge of the system.
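A toy illustration of why that matters (my own sketch, with made-up numbers rather than anything from the paper): a simple learner trained on one task and then retrained only on a second task loses part of what it had learned, while interleaving examples from both tasks preserves it.

```python
import numpy as np

# Two "tasks" share a weight: task A wants input [1, 0] mapped to 1,
# task B wants input [1, 1] mapped to 0. Both can be satisfied at once
# (weights [1, -1]), but training on B alone after A drags the shared
# weight away and damages what was learned for A.

x_a, y_a = np.array([1.0, 0.0]), 1.0   # task A example
x_b, y_b = np.array([1.0, 1.0]), 0.0   # task B example
lr = 0.1

def sgd_step(w, x, y):
    """One least-squares gradient step on a single example."""
    return w - lr * (w @ x - y) * x

# Sequential training: task A first, then only task B (no replay of A)
w = np.zeros(2)
for _ in range(200):
    w = sgd_step(w, x_a, y_a)
for _ in range(200):
    w = sgd_step(w, x_b, y_b)
print("sequential, task A output:", round(w @ x_a, 2))   # drifts to about 0.5

# Interleaved training: task A examples replayed alongside task B
w = np.zeros(2)
for _ in range(200):
    w = sgd_step(w, x_a, y_a)
    w = sgd_step(w, x_b, y_b)
print("interleaved, task A output:", round(w @ x_a, 2))  # stays near 1.0
```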

“Components of the neural network architecture that succeeded in achieving human-level performance in a variety of computer games like Space Invaders and Breakout were inspired by complementary learning systems theory,” the authors say. “As in the theory, these neural networks exploit a memory buffer akin to the hippocampus that stores recent episodes of game play and replays them in interleaved fashion. This greatly amplifies the use of actual game play experience and avoids the tendency for a particular local run of experience to dominate learning in the system.”
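That memory buffer idea is straightforward to sketch in code. The snippet below is my own simplified version with hypothetical names, not DeepMind's actual implementation: it stores recent game transitions and samples random minibatches from them, so each update interleaves old and new experience.

```python
import random
from collections import deque

# Sketch of the replay-buffer idea the authors describe (names are my own):
# recent transitions are kept around and random minibatches are replayed,
# so no single run of experience dominates the updates and each bit of
# game play is reused many times.

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.memory = deque(maxlen=capacity)  # oldest transitions drop out automatically

    def store(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Random sampling mixes old and new experience in every minibatch
        return random.sample(self.memory, min(batch_size, len(self.memory)))

# Usage inside a training loop: after each step of play, store the transition,
# then update the network on a sampled minibatch rather than on the latest step alone.
buffer = ReplayBuffer()
buffer.store(state=0, action=1, reward=0.0, next_state=1, done=False)
batch = buffer.sample()
```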

They believe that this extended version of the learning systems theory will be hugely important in future research in both neuroscience and artificial intelligence.
