Glioblastoma MR Image Synthesis with Generative Adversarial Network

Document Type

Conference Proceeding

Publication Date

10-1-2020

Publication Title

International Journal of Radiation Oncology Biology Physics

Abstract

Background: Automatic delineation of glioblastoma (GBM) plays an important role in radiation therapy. Segmentation algorithms based on supervised deep neural networks (DNNs) have recently shown promising results, but the small volume of annotated data makes them difficult to train. Current dataset collection relies on radiologists' contours as ground truth and is expensive and time-consuming.

Objectives: One possible solution to overcome the limitation of small datasets is to generate synthetic MR images representing different clinical scenarios. The aim of this study is to apply a generative adversarial network (GAN) to synthesize highly realistic MR images from manipulated annotations that can serve as new training samples for DNNs.

Methods: Data were obtained from the BraTS multimodal Brain Tumor Segmentation Challenge 2018, in which 19 different institutions provided a total of 210 patients. T1WI, T1CE, T2WI, and FLAIR images were provided for each patient; 82 patients were used for training and 128 for validation. The network consisted of a generator and two discriminators, trained with an image per-pixel loss, a perceptual loss, and an adversarial loss. By manipulating the annotations from radiologists, the generator was able to output new synthetic MR images and enlarge the dataset. The realism of the synthetic images was evaluated both quantitatively and qualitatively.
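
The abstract does not provide implementation details, but a minimal sketch of the described generator objective (per-pixel loss, perceptual loss, and adversarial loss from two discriminators) could look as follows in PyTorch. The loss weights, the use of VGG-16 features for the perceptual term, and all function names are assumptions for illustration, not the authors' code.

# Sketch of a combined generator loss: per-pixel L1 + VGG perceptual + adversarial
# (weights and VGG choice are assumptions; not the study's actual implementation).
import torch
import torch.nn as nn
import torchvision.models as models

class PerceptualLoss(nn.Module):
    """L1 distance between VGG-16 feature maps of synthetic and real slices."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:16]
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg.eval()
        self.l1 = nn.L1Loss()

    def forward(self, fake, real):
        # MR slices are single-channel; repeat to 3 channels for the VGG input.
        return self.l1(self.vgg(fake.repeat(1, 3, 1, 1)),
                       self.vgg(real.repeat(1, 3, 1, 1)))

def generator_loss(fake, real, disc_logits, perceptual,
                   w_pix=10.0, w_per=1.0, w_adv=1.0):
    """Combine per-pixel, perceptual, and adversarial terms (weights assumed)."""
    pixel = nn.functional.l1_loss(fake, real)
    percep = perceptual(fake, real)
    # disc_logits: list of logits from the two discriminators on the fake image;
    # the generator is rewarded when each discriminator predicts "real".
    adv = sum(nn.functional.binary_cross_entropy_with_logits(
                  d, torch.ones_like(d)) for d in disc_logits)
    return w_pix * pixel + w_per * percep + w_adv * adv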

Results: Synthetic images generated from non-manipulated annotations were compared with their corresponding real images. For T1WI, T1CE, T2WI, and FLAIR, respectively, the Mean Square Error (MSE), Mean Absolute Error (MAE), Peak Signal to Noise Ratio (PSNR), and Structural Similarity Index (SSIM) of the synthetic MR images were 19.246±0.308, 23.375±0.586, 43.068±0.443, and 0.788±0.002; 19.249±0.274, 22.805±0.583, 43.054±0.437, and 0.789±0.004; 19.246±0.290, 23.391±0.400, 43.102±0.45, and 0.784±0.003; and 18.930±0.40, 24.119±1.48, 43.126±0.46, and 0.794±0.005. A subset of 9 real and 10 generated patients was assessed by a physician: 8.3%, 41.7%, and 50% of real images and 22.5%, 47.5%, and 30% of synthetic images were rated as poor, marginal, and good quality, respectively. The misclassification rates were 26.3%, 10.5%, 26.3%, and 26.3% for T1WI, T1CE, T2WI, and FLAIR.
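
For reference, the reported image-quality metrics can be computed as in the sketch below, using scikit-image and NumPy. The exact intensity normalization and data range used in the study are not specified, so those choices here are assumptions.

# Sketch of the quantitative evaluation metrics (MSE, MAE, PSNR, SSIM)
# for a pair of real and synthetic 2-D MR slices; preprocessing is assumed.
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def image_quality(real, synthetic):
    real = real.astype(np.float64)
    synthetic = synthetic.astype(np.float64)
    data_range = real.max() - real.min()  # assumed data range for PSNR/SSIM
    return {
        "MSE":  mean_squared_error(real, synthetic),
        "MAE":  float(np.mean(np.abs(real - synthetic))),
        "PSNR": peak_signal_noise_ratio(real, synthetic, data_range=data_range),
        "SSIM": structural_similarity(real, synthetic, data_range=data_range),
    }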

Conclusions: We proposed applying a GAN to synthesize GBM MR images from manipulated annotations in order to increase the size of the dataset used to train deep learning segmentation models. The evaluation showed that the synthetic MRIs had image quality comparable to real MRIs and the potential to be used for DNN training.

Volume

108

Issue

2

First Page

E28
