New Machine Learning Framework Enhances Sensory Encoding for Neural Prostheses
A team of researchers from the Optics Lab and the Laboratory of Applied Photonics Devices at the École Polytechnique Fédérale de Lausanne (EPFL) has developed a machine learning framework that encodes images for transmission via a retinal prosthesis. The framework aims to improve sensory encoding: the process of transforming environmental information captured by sensors into neural signals that the nervous system can interpret.
Traditionally, downsampling for retinal implants has been done by pixel averaging, a fixed mathematical operation with no learning involved. However, the team led by Demetri Psaltis and Christophe Moser, together with Diego Ghezzi of the Hôpital ophtalmique Jules-Gonin – Fondation Asile des Aveugles, found that a learning-based approach produces better sensory encoding. Moreover, when the downsampling network was left unconstrained, the framework learned to mimic aspects of retinal processing on its own.
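For context, pixel averaging simply replaces each block of neighboring pixels with its mean. Here is a minimal sketch in Python, assuming a grayscale image stored as a NumPy array (the function name and block size are illustrative, not from the study):

```python
import numpy as np

def pixel_average_downsample(image: np.ndarray, block: int) -> np.ndarray:
    """Downsample a 2-D grayscale image by averaging non-overlapping blocks."""
    h, w = image.shape
    h, w = h - h % block, w - w % block           # crop to a multiple of block
    blocks = image[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))               # one output pixel per block

# e.g. a 512x512 image becomes a 32x32 image for a low-resolution implant
lowres = pixel_average_downsample(np.random.rand(512, 512), block=16)
```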
The researchers used an actor-model framework in which two neural networks work in tandem. The model network acts as a digital twin of the retina: it is trained to take a high-resolution image and produce a binary neural code similar to the one a biological retina would generate. The actor network is then trained to downsample the high-resolution image so that the reduced image, when fed to the digital twin, elicits a neural code as close as possible to the one evoked by the original image.
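To make this division of labor concrete, here is a minimal sketch of how such an actor-model pairing might be trained. It assumes the digital twin is already trained, kept frozen, and outputs per-neuron firing probabilities in [0, 1], and that the reduced image is upsampled back to the twin's input size before being fed in. All class names, layer choices, and the loss are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

FACTOR = 8  # illustrative downsampling factor

class RetinaTwin(nn.Module):
    """Stand-in for the pre-trained digital twin: maps an image to
    per-neuron firing probabilities in [0, 1] (illustrative only)."""
    def __init__(self, n_neurons: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 5, stride=4, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 16, n_neurons), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class Actor(nn.Module):
    """Learned downsampler: high-res image -> low-res image for the implant."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
        self.pool = nn.AvgPool2d(FACTOR)  # final resolution reduction

    def forward(self, x):
        return self.pool(self.features(x))

def train_step(actor, twin, optimizer, hires):
    """One step: make the reduced image evoke the same code as the original."""
    with torch.no_grad():                    # twin is frozen
        target_code = twin(hires)            # code evoked by the original image
    lowres = actor(hires)                    # learned downsampling
    restored = F.interpolate(lowres, scale_factor=FACTOR, mode="nearest")
    pred_code = twin(restored)               # code evoked by the reduced image
    loss = F.binary_cross_entropy(pred_code, target_code)
    optimizer.zero_grad()
    loss.backward()                          # gradients flow through the frozen
    optimizer.step()                         # twin back into the actor
    return loss.item()

actor, twin = Actor(), RetinaTwin()
for p in twin.parameters():
    p.requires_grad_(False)                  # only the actor is optimized
opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
loss = train_step(actor, twin, opt, torch.rand(4, 1, 128, 128))
```

The key design point is that the loss never compares images pixel by pixel; it compares the neural codes the two images evoke, which is what lets the actor discover a downsampling scheme tuned to the retina rather than to visual fidelity.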
To validate the framework, the team tested the downsampled images on both the retinal digital twin and on mouse retinas explanted from cadavers and kept in a culture medium. In both settings, images produced by the actor-model approach elicited neuronal responses closer to those evoked by the original image than images produced by traditional pixel averaging.
While the study focused on retinal prostheses, the researchers believe that their actor-model framework has broader applications in sensory encoding beyond vision restoration. The next steps involve exploring how the framework can compress images across multiple visual dimensions simultaneously and potentially applying the retinal model to outputs from other regions of the brain.
Diego Ghezzi emphasizes the importance of the ex-vivo tests on mouse retinas, noting that they provided validation the digital model alone could not. The innovation also opens up possibilities for enhancing other devices, such as auditory or limb prostheses.
With this machine learning framework, the researchers are addressing a major challenge in neural prostheses: improving the quality of the data transmitted to the brain. As the technology evolves, it could have a significant impact on the lives of individuals with sensory impairments, improving their ability to perceive and interpret the world around them.