Can the study of eye movements help the development of self-driving cars?

29 Sep 2022

Andrea Benucci, an Italian researcher in the biological sciences, and his research team have developed a method for creating artificial neural networks that recognize objects quickly and accurately. The research was carried out at the RIKEN Center for Brain Science in Japan, where Benucci leads the Neural Circuits and Behavior Laboratory.

The group studies the neural bases of sensory processing, including vision. Its experimental tools are based on optogenetics, optical imaging, and electrode recordings.

The details of the research were published in PLOS Computational Biology, a monthly open-access scientific journal established in 2005 and published by the Public Library of Science in collaboration with the International Society for Computational Biology.

The scientific article explains that the results can be applied to machine “vision” and machine learning: the self-driving vehicle learns to recognize the important features of a scene while it is in traffic. The principle is similar to the way the human eye operates, since objects do not become blurred or washed out when we move our head, even though the physical input reaching the eye is constantly changing.
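
To make the analogy concrete, here is a toy illustration (my own, not from the paper): even a small shift of a camera frame changes every raw pixel value, although the scene and its label are unchanged. This is the stability problem the recognition network has to solve.

```python
import torch

frame = torch.rand(1, 1, 28, 28)                          # stand-in grayscale camera frame
shifted = torch.roll(frame, shifts=(3, -2), dims=(2, 3))  # crude circular stand-in for a gaze shift

print(torch.equal(frame, shifted))    # False: the raw input has changed
print(frame.shape == shifted.shape)   # True: same scene size, same label expected
```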

Initially, the results were not very promising. The test was to classify 60,000 black-and-white images into 10 categories, and the network did not yet handle the eye movements and the resulting shifts of the visual input accurately enough. After training with shifted images, the test was performed again, and this time it ended with a good result.
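
As an illustration of what such a test might look like in code, the sketch below uses MNIST (60,000 grayscale images in 10 classes) as a stand-in dataset and random translations as a stand-in for the image shifts caused by eye movements. The dataset, the shift sizes, and the placeholder classifier are assumptions, not the study's actual setup.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Random translations of up to 20% of the image size stand in for the
# image shifts caused by eye movements.
shift = transforms.Compose([
    transforms.RandomAffine(degrees=0, translate=(0.2, 0.2)),
    transforms.ToTensor(),
])

# MNIST: 60,000 grayscale training images in 10 classes (a stand-in dataset).
train_set = datasets.MNIST("data", train=True, download=True, transform=shift)

# Placeholder linear classifier; the article's CNN is sketched further below.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training pass over the shifted images.
for images, labels in DataLoader(train_set, batch_size=128, shuffle=True):
    optimizer.zero_grad()
    loss_fn(model(images), labels).backward()
    optimizer.step()
```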

Benucci’s latest development is based on a convolutional neural network (CNN) that is meant to help self-driving cars recognize various objects stably. The network optimizes the classification of objects as the eye moves across a visual scene. A CNN can be thought of as a learning algorithm that receives an image as input, assigns importance to the objects in the image through learnable weights and biases, and uses these to distinguish the objects from one another. This is important for minimizing the chance of a self-driving car making a mistake.
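
The following minimal PyTorch sketch shows the general structure of such a network: convolutional layers hold the learnable weights and biases, and a final linear layer maps the extracted features to one of 10 class scores. The layer sizes are illustrative assumptions, not the architecture used in the study.

```python
import torch
from torch import nn

class SmallCNN(nn.Module):
    """Toy convolutional classifier for 28x28 grayscale images."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learnable weights + biases
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                  # extract spatial features
        return self.classifier(x.flatten(1))  # map features to 10 class scores

scores = SmallCNN()(torch.randn(1, 1, 28, 28))  # one 28x28 grayscale image -> 10 scores
```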

According to Benucci, “the benefits of mimicking eye movements and their efferent copies implies that ‘forcing’ a machine-vision sensor to have controlled types of movements, while informing the vision network in charge of processing the associated images about the self-generated movements, would make machine vision more robust, and akin to what is experienced in human vision.”
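
One way to picture the “efferent copy” idea in the quote is a network with two inputs: the (shifted) image from the sensor and the self-generated movement that produced the shift. The two-branch layout below is a hedged illustration of that idea, not the architecture reported in the paper.

```python
import torch
from torch import nn

class EfferenceCopyNet(nn.Module):
    """Classifier that sees both the image and the self-generated shift."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Vision branch: processes the (possibly shifted) image.
        self.vision = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),                                # -> 16 * 4 * 4 features
        )
        # Motor branch: encodes the self-generated movement (dx, dy).
        self.motor = nn.Sequential(nn.Linear(2, 16), nn.ReLU())
        # Classification head combines visual and motor information.
        self.head = nn.Linear(16 * 4 * 4 + 16, num_classes)

    def forward(self, image: torch.Tensor, shift: torch.Tensor) -> torch.Tensor:
        v = self.vision(image)   # what the sensor saw
        m = self.motor(shift)    # which movement the system made itself
        return self.head(torch.cat([v, m], dim=1))

net = EfferenceCopyNet()
logits = net(torch.randn(8, 1, 28, 28), torch.randn(8, 2))  # batch of 8 images + shifts
```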

The research does not end here, however. In the next step, the researchers will continue their collaboration with colleagues working on neuromorphic technologies. The idea is to implement actual silicon-based circuits based on the principles highlighted in the paper and to test whether they can improve machine vision in real-world applications.

Source: Extra “eye” movements are the key to better self-driving cars – AutoTech News
