Researchers at the École Polytechnique Fédérale de Lausanne (EPFL) have developed a machine learning framework that compresses image data more faithfully than traditional, learning-free methods. The framework has potential applications in retinal implants and other sensory prostheses.
Sensory encoding is a central challenge in the development of neural prostheses: information captured by sensors must be converted into neural signals that the nervous system can interpret. Because a prosthesis has only a limited number of electrodes, this environmental input must be drastically reduced while preserving the quality of the data transmitted to the brain.
The team, led by Demetri Psaltis and Christophe Moser at EPFL’s Optics Lab and Laboratory of Applied Photonics Devices, collaborated with Diego Ghezzi from the Hôpital Ophtalmique Jules-Gonin – Fondation Asile des Aveugles to apply machine learning to image data compression, specifically for retinal implants. Currently, downsampling for retinal implants is done through pixel averaging: blocks of neighboring pixels are simply averaged, a fixed mathematical operation with no learning involved.
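To make that baseline concrete, here is a minimal sketch of this kind of learning-free downsampling (the function name, block size, and image size are illustrative, not taken from the study): each non-overlapping pixel block is replaced by its mean intensity, yielding one value per electrode.

```python
import numpy as np

def downsample_by_averaging(image: np.ndarray, block: int) -> np.ndarray:
    """Learning-free downsampling: replace each block x block patch
    of the input image with its mean intensity (one value per electrode)."""
    h, w = image.shape
    # Trim the image so it divides evenly into blocks.
    h, w = h - h % block, w - w % block
    patches = image[:h, :w].reshape(h // block, block, w // block, block)
    return patches.mean(axis=(1, 3))

# Example: reduce a 128x128 image to a 16x16 electrode grid.
high_res = np.random.rand(128, 128)
low_res = downsample_by_averaging(high_res, block=8)
print(low_res.shape)  # (16, 16)
```

No matter the image content, this operation applies the same fixed rule everywhere, which is precisely the limitation the learned approach is meant to overcome.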
By replacing this fixed operation with a learned one, the researchers achieved better sensory encoding. Moreover, their unconstrained neural network autonomously learned to mimic aspects of retinal processing, an unexpected outcome.
The framework, known as an actor-model framework, consists of two complementary neural networks. The model network acts as a digital twin of the retina: it is trained to convert a high-resolution image into a binary neural code similar to the one a biological retina would generate. The actor network is then trained to downsample high-resolution images such that the low-resolution image elicits a neural code closely matching the biological retina’s response to the original image.
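A hedged sketch of how such an actor-model setup could be wired together follows (all architectures, layer sizes, and the loss are assumptions for illustration; the study’s actual networks and training details differ). The idea: the model network is fit beforehand to recorded retinal responses and then frozen, while the actor learns to downsample so that the frozen model produces nearly the same code for the downsampled stimulus as for the original image.

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    """Digital twin of the retina (illustrative): maps an image to a
    soft binary neural code. Assumed pre-trained on recorded responses."""
    def __init__(self, n_pixels=128 * 128, n_units=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(n_pixels, 512), nn.ReLU(),
            nn.Linear(512, n_units), nn.Sigmoid())  # firing probabilities

    def forward(self, x):
        return self.net(x)

class Actor(nn.Module):
    """Learned downsampler (illustrative): high-res image -> low-res
    image with one value per implant electrode."""
    def __init__(self):
        super().__init__()
        # Three strided convolutions reduce 128x128 to 16x16.
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

model, actor = Model(), Actor()
model.eval()                        # the digital twin stays frozen here
for p in model.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
images = torch.rand(32, 1, 128, 128)  # stand-in training batch

for step in range(100):
    low_res = actor(images)
    # Upsample so the frozen twin sees a full-resolution input,
    # standing in for the coarse image a stimulated retina would see.
    stim = nn.functional.interpolate(low_res, size=(128, 128))
    # Match the code elicited by the downsampled stimulus to the
    # code elicited by the original image.
    loss = nn.functional.mse_loss(model(stim), model(images))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the model network is differentiable, the error between the two neural codes can be backpropagated through it to update the actor, which is what makes the downsampling learnable in the first place.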
To validate their approach, the researchers tested the downsampled images on both the retinal digital twin and ex vivo mouse retinas. The results showed that the actor-model approach produced images that elicited neuronal responses more faithful to those evoked by the original images than images generated with learning-free pixel averaging.
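As one illustration of how that response fidelity could be quantified (a hypothetical metric with stand-in data; the paper reports its own validation measures), the binary code elicited by each downsampled image can be correlated with the code elicited by the original:

```python
import numpy as np

def code_similarity(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Pearson correlation between two flattened neural codes
    (e.g., binary spike patterns elicited by two stimuli)."""
    a, b = code_a.ravel().astype(float), code_b.ravel().astype(float)
    return float(np.corrcoef(a, b)[0, 1])

# Stand-in codes: the response to the original image, plus two noisy
# versions mimicking responses to differently downsampled stimuli.
rng = np.random.default_rng(0)
original = rng.integers(0, 2, size=200)
actor_code = np.where(rng.random(200) < 0.9, original, 1 - original)
averaged_code = np.where(rng.random(200) < 0.7, original, 1 - original)
print(code_similarity(original, actor_code))     # higher similarity
print(code_similarity(original, averaged_code))  # lower similarity
```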
The team believes that their framework can be expanded beyond retinal prostheses and used in other areas of sensory restoration. They also plan to investigate how much of their model, which was validated using mouse retinas, can be applied to humans.
Because the framework learns to compress images in a general way rather than along a single fixed dimension, it could potentially handle multiple visual dimensions simultaneously. It may also be applicable to outputs from other areas of the brain, or even be linked to other devices such as auditory or limb prostheses.
The researchers’ innovative use of the actor-model framework, coupled with in-silico and ex-vivo experiments, sets this study apart in the field of sensory encoding research. It paves the way for the development of more advanced neural prostheses that can provide a better sensory experience for users.
The findings of the study have been published in Nature Communications.