Lab-grown human brain cells learn to play video games faster than AI



Scientists have successfully grown human brain cells in a petri dish and taught them to play video games faster than an AI.

Hundreds of thousands of laboratory-grown human brain cells can now play the classic retro game ‘Pong’, firing neurons to move the paddle back and forth depending on where the ball is in the game.

Australian scientists at Cortical Labs have created a system called “DishBrain,” made up of brain cells grown on arrays of microelectrodes that can both stimulate the cells and read out their activity.

The scientists were able to train the brain cells to play the game in just five minutes, significantly faster than an artificial intelligence (AI), which took around 90 minutes to pick up the game.




To teach the cells how to play the game, the team used a solo version of Pong and sent electrical signals to the right or left side of the network to indicate where the ball was.

Brett Kagan, scientific director of Cortical Labs, which is leading the research, told New Scientist: “We think it’s fair to call them cyborg brains.

“We often refer to them as living in the Matrix. When they are in the game, they believe they are the paddle.”



Human brain cells grown in a lab learned to play ‘Pong’ faster than AI

As the game plays, the brain cells’ own patterns of neural activity determine whether the paddle moves left or right.

The virtual world inside the game then reacts to that activity, and electrical feedback delivered back through the electrodes helps the mini-brain learn to control the paddle.
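The study describes this closed loop in far more detail, but the basic idea – encode the ball’s position as stimulation, read the culture’s activity back as a paddle command, then feed the outcome back in – can be sketched as a toy simulation. The short Python sketch below is purely illustrative and assumes nothing about Cortical Labs’ actual hardware or software; the function names (stimulate, read_activity, play_rally) and the simple noise model are invented for the example.

import random

FIELD_WIDTH = 8  # discrete horizontal positions for the ball and the paddle

def stimulate(ball_x, paddle_x):
    # Stand-in for sensory stimulation: tells the "culture" whether the ball
    # is to the left (-1), to the right (+1), or aligned (0) with the paddle.
    return (ball_x > paddle_x) - (ball_x < paddle_x)

def read_activity(stimulus, noise=0.2):
    # Stand-in for decoding neural activity into a paddle move: mostly follows
    # the stimulus, with some random "biological" noise mixed in.
    if random.random() < noise:
        return random.choice([-1, 0, 1])
    return stimulus

def play_rally(steps=20):
    # Run one simulated rally and count how often the paddle meets the ball.
    paddle_x = FIELD_WIDTH // 2
    hits = 0
    for _ in range(steps):
        ball_x = random.randrange(FIELD_WIDTH)   # toy stand-in for the ball's trajectory
        stim = stimulate(ball_x, paddle_x)       # encode the ball's position as a stimulus
        move = read_activity(stim)               # decode "activity" into a paddle command
        paddle_x = max(0, min(FIELD_WIDTH - 1, paddle_x + move))
        if paddle_x == ball_x:
            hits += 1
        # In the real experiments the outcome is fed back through the electrodes
        # (structured feedback on a hit, unpredictable feedback on a miss);
        # this toy loop does not model that learning signal.
    return hits

if __name__ == "__main__":
    print("hits in one rally:", play_rally())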

Kagan notes that while the mini-brains can learn the game faster than an AI, they are not as proficient at actually playing it – the cells would still lose to an AI system such as DeepMind’s.



In the end, the AI turned out to be a better player


While the AI needed 5,000 rallies to learn the game – where a rally is a game session lasting 15 minutes – DishBrain needed only 10 to 15 rallies.

“Using this DishBrain system, we have demonstrated that a single layer of in vitro cortical neurons can self-organize and display intelligent and sentient behavior when embodied in a simulated game-world,” reads the study, published on bioRxiv.

“We showed that even without substantial filtering of cell activity, statistically robust differences over time and compared to controls could be observed in the behavior of neuronal cultures while adapting to targeted tasks.”


