Multi-Source Data Integration for Segmentation of Unannotated MRI Images

Document Type

Article

Publication Date

7-2-2024

Publication Title

IEEE J Biomed Health Inform

Abstract

Automatic semantic segmentation of magnetic resonance imaging (MRI) images using deep neural networks greatly assists in evaluating and planning treatments for various clinical applications. However, training these models is conditioned on the availability of abundant annotated data. Even with enough annotated data, MRI images display considerable variability due to factors such as differences among patients, MRI scanners, and imaging protocols. This variability necessitates retraining neural networks for each specific application domain, which, in turn, requires manual annotation by expert radiologists for all new domains. To reduce the need for continual data annotation, we develop a method for unsupervised federated domain adaptation using multiple annotated source domains. Our approach enables the transfer of knowledge from several annotated source domains for use in an unannotated target domain. Initially, we ensure that the target domain data shares similar representations with each source domain in a latent embedding space by minimizing the pairwise distances between the distributions of the target and the source domains. We then employ an ensemble approach to leverage the knowledge obtained from all domains to build an integrated outcome. We perform experiments on two datasets to demonstrate that our method is effective. Our implementation code is publicly available: https://github.com/navapatn/Unsupervised-Federated-Domain-Adaptation-for-Image-Segmentation new.
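The two-stage pipeline the abstract describes (aligning the target domain to each annotated source domain in a shared latent space, then ensembling per-domain predictions into one integrated segmentation) can be sketched as below. This is a minimal illustration only: the linear-kernel MMD distance, the embedding shapes, and the softmax-averaging ensemble are assumptions for the sketch, not the paper's exact formulation.

```python
import numpy as np

def mmd_linear(x, y):
    """Linear-kernel maximum mean discrepancy between two sets of
    embeddings; an illustrative stand-in for the pairwise
    distribution distance minimized during alignment."""
    delta = x.mean(axis=0) - y.mean(axis=0)
    return float(delta @ delta)

def alignment_loss(target_emb, source_embs):
    """Sum of pairwise distances between the target-domain embedding
    distribution and each annotated source-domain distribution."""
    return sum(mmd_linear(target_emb, s) for s in source_embs)

def ensemble_segmentation(prob_maps):
    """Average the per-domain class-probability maps and take the
    argmax to form the integrated segmentation outcome."""
    return np.mean(prob_maps, axis=0).argmax(axis=-1)

rng = np.random.default_rng(0)
target = rng.normal(size=(64, 16))            # target-domain embeddings
sources = [rng.normal(loc=m, size=(64, 16))   # two source domains
           for m in (0.0, 0.5)]
loss = alignment_loss(target, sources)        # drive this toward zero

# two per-domain predictions over a 4x4 image with 3 classes
preds = np.stack([rng.dirichlet(np.ones(3), size=(4, 4))
                  for _ in sources])
seg = ensemble_segmentation(preds)            # shape (4, 4) label map
```

In a full system the embeddings would come from a shared encoder trained per federated client, with `alignment_loss` added to the segmentation objective; here random arrays stand in for those features.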

PubMed ID

38954567

Volume

PP
