A new study has shown that machine-learning models for medical imaging tasks can outperform clinical experts, yet these models perform poorly in settings that differ from their training data. To address these out-of-distribution failures, the researchers introduced a representation-learning strategy called REMEDIS (Robust and Efficient Medical Imaging with Self-supervision), which combines large-scale supervised transfer learning on natural images with intermediate contrastive self-supervised learning on medical images. The strategy improves both model robustness and training efficiency.
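The intermediate step relies on a contrastive objective: two augmented views of the same medical image are pulled together in embedding space while views of different images are pushed apart. As an illustration only, here is a minimal NumPy sketch of a SimCLR-style NT-Xent contrastive loss of the kind such pipelines typically use; the function name, batch layout, and temperature value are assumptions for this example, not details taken from the paper.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.1):
    """Illustrative NT-Xent (contrastive) loss, as used in SimCLR-style
    self-supervised pretraining. Not the paper's exact implementation.

    z: array of shape (2N, d). Rows i and i+N are embeddings of two
    augmented views of the same image (positives); all other rows in
    the batch act as negatives.
    """
    n = z.shape[0] // 2
    # L2-normalize so the dot product is cosine similarity
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature            # pairwise scaled similarities
    np.fill_diagonal(sim, -np.inf)         # exclude self-similarity
    # index of each row's positive partner
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of the positive against all other pairs
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

In a full pipeline, a backbone pretrained with supervision on natural images would be further trained to minimize this loss on unlabeled medical images before task-specific fine-tuning.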
REMEDIS requires minimal task-specific customization and was evaluated on a range of diagnostic imaging tasks spanning six imaging domains and 15 test datasets. The results showed that REMEDIS improved diagnostic accuracy by up to 11.5% in-distribution compared with supervised baseline models.
What is more remarkable is that in out-of-distribution settings, REMEDIS required only 1-33% of the data for retraining to match the performance of supervised models retrained on all available data. The researchers suggest that REMEDIS could accelerate the development lifecycle of machine-learning models for medical imaging.
The study, titled "Robust and Data-Efficient Generalization of Self-Supervised Machine Learning for Diagnostic Imaging", is published in the journal Nature Biomedical Engineering. As machine-learning models become more sophisticated, techniques such as REMEDIS can help improve their applicability and acceptance within the medical community.