EPFL Researchers Develop Machine Learning Approach for Accurate Image Compression

Researchers at the École Polytechnique Fédérale de Lausanne (EPFL) have developed a machine learning framework that can accurately compress image data, surpassing traditional learning-free computation methods. The framework has potential applications in retinal implants and other sensory prostheses.

Sensory encoding is a central challenge in developing neural prostheses: information captured by sensors must be converted into neural signals that the nervous system can interpret. Because a prosthesis has only a limited number of electrodes, this environmental input must be reduced without degrading the quality of the data transmitted to the brain.

The team, led by Demetri Psaltis and Christophe Moser at EPFL's Optics Lab and Laboratory of Applied Photonics Devices, collaborated with Diego Ghezzi from the Hôpital Ophtalmique Jules-Gonin – Fondation Asile des Aveugles to apply machine learning to image-data compression, specifically for retinal implants. Currently, downsampling for retinal implants is done through pixel averaging, a fixed mathematical operation with no learning involved.
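The pixel-averaging baseline the article refers to can be sketched in a few lines. This is a generic illustration of learning-free downsampling, not code from the study; the image sizes are arbitrary example values.

```python
import numpy as np

def downsample_by_averaging(image: np.ndarray, block: int) -> np.ndarray:
    """Learning-free downsampling: replace each block x block patch of
    pixels with its mean value, shrinking the image by `block` per axis."""
    h, w = image.shape
    # Trim so the image divides evenly into blocks.
    h, w = h - h % block, w - w % block
    trimmed = image[:h, :w]
    return trimmed.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

# Example: reduce a 256x256 image to 32x32, roughly matching the low
# electrode count of an implant.
high_res = np.random.rand(256, 256)
low_res = downsample_by_averaging(high_res, block=8)
print(low_res.shape)  # (32, 32)
```

Because the averaging weights are fixed, this operation cannot adapt to what the retina actually responds to, which is the gap the learned approach below addresses.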

This learning-based approach produced markedly better-optimized sensory encoding than the learning-free baseline. Unexpectedly, the unconstrained neural network also learned on its own to mimic aspects of retinal processing.

The framework, known as an actor-model framework, consists of two complementary neural networks. The model network acts as a digital twin of the retina, trained to convert a high-resolution image into a binary neural code similar to that generated by a biological retina. The actor network, on the other hand, is trained to downsample high-resolution images while producing a neural code that closely matches the response of the biological retina to the original image.
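The two-network setup described above can be sketched conceptually. Note the heavy caveats: the tiny thresholded linear maps standing in for the model network, the fixed averaging standing in for the actor, and all array sizes are hypothetical placeholders, not the study's actual architectures; only the overall objective (make the code from the downsampled image match the retina's code for the original) reflects the framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_network(image: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Stand-in for the retina digital twin: maps an image to a binary
    neural code by thresholding a linear projection. The real model
    network is a trained deep network; this is purely illustrative."""
    return (image.ravel() @ weights > 0).astype(int)

def actor_network(image: np.ndarray, factor: int) -> np.ndarray:
    """Stand-in for the actor: downsamples the image. In the study this
    network is *learned* so the resulting code matches the retina's
    response; here we just average pixels to keep the sketch runnable."""
    h, w = image.shape
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Hypothetical twin weights for full-resolution and downsampled inputs.
W_hi = rng.standard_normal((64 * 64, 32))
W_lo = rng.standard_normal((16 * 16, 32))

image = rng.random((64, 64))
target_code = model_network(image, W_hi)               # "retina's" response
actor_code = model_network(actor_network(image, 4), W_lo)

# Training would adjust the actor to minimize this code mismatch.
mismatch = np.mean(target_code != actor_code)
print(f"code mismatch rate: {mismatch:.2f}")
```

The key design point is that the actor is never told what a "good" downsampled image looks like; it is supervised only through the model network's code, so it learns whatever compression best preserves the retina's response.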


To validate their approach, the researchers tested the downsampled images both on the retina digital twin (in silico) and on retinas from mouse cadavers (ex vivo). In both cases, the actor-model approach produced images that elicited a more accurate neuronal response than images generated with learning-free computation methods.

The team believes that their framework can be expanded beyond retinal prostheses and used in other areas of sensory restoration. They also plan to investigate how much of their model, which was validated using mouse retinas, can be applied to humans.

Because the framework learns a general form of image compression, it could potentially handle multiple visual dimensions simultaneously. It may also be applicable to outputs from other areas of the brain, or even be linked to other devices such as auditory or limb prostheses.

The researchers’ innovative use of the actor-model framework, coupled with in-silico and ex-vivo experiments, sets this study apart in the field of sensory encoding research. It paves the way for the development of more advanced neural prostheses that can provide a better sensory experience for users.

The findings of the study have been published in Nature Communications.


Kunal Joshi
