Title

A deep dive into understanding tumor foci classification using multiparametric MRI based on convolutional neural network

Document Type

Article

Publication Date

5-25-2020

Publication Title

Medical Physics

Abstract

PURPOSE: Deep learning models have had great success in disease classification using large data pools of skin cancer images or lung X-rays. However, data scarcity has been the roadblock to applying deep learning models directly to prostate multiparametric MRI (mpMRI). Although model interpretation has been heavily studied for natural images over the past few years, interpretation of deep learning models trained on medical images remains scarce. In this paper, an efficient convolutional neural network (CNN) was developed, and the model interpretation at various convolutional layers was systematically analyzed to improve the understanding of how a CNN interprets multimodality medical images and of the predictive power of the features at each layer. The problem of small sample size was addressed by feeding the intermediate features into a traditional classification algorithm known as the weighted extreme learning machine (wELM), with the imbalanced distribution among output categories taken into consideration.
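
The following is a minimal NumPy sketch of a wELM of the kind described above: random, untrained input weights, a closed-form weighted ridge solution for the output weights, and per-sample weights set by inverse class frequency to counter imbalance. The hidden-layer size, activation, and weighting scheme are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def train_welm(X, y, n_hidden=256, C=1.0, rng=None):
    """Train a wELM on features X (n_samples, n_features) with labels y in {0, 1}."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Random, fixed input weights and biases (never trained in an ELM).
    W = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                     # hidden-layer activations
    # Per-sample weights: inverse class frequency to handle imbalance.
    counts = np.bincount(y, minlength=2)
    s = (n / (2.0 * counts))[y]                # weight of each sample
    T = np.where(y == 1, 1.0, -1.0)            # bipolar targets
    # Weighted ridge solution: beta = (H^T S H + I/C)^-1 H^T S T
    HtS = H.T * s
    beta = np.linalg.solve(HtS @ H + np.eye(n_hidden) / C, HtS @ T)
    return W, b, beta

def predict_welm(X, W, b, beta):
    return (np.tanh(X @ W + b) @ beta > 0).astype(int)
```

Because the output weights are solved in closed form rather than by gradient descent, training is fast and well suited to the small feature sets extracted from the CNN's intermediate layers.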

METHODS: The training data were a retrospective set of prostate MR studies from the SPIE-AAPM-NCI PROSTATEx Challenge held in 2017. Three hundred twenty biopsied lesions from 201 prostate cancer patients were diagnosed and labeled as clinically significant (malignant) or not significant (benign). All studies included T2-weighted (T2W), proton density-weighted (PD-W), dynamic contrast-enhanced (DCE), and diffusion-weighted (DW) imaging. After registration and lesion-based normalization, a CNN with four convolutional layers was developed and trained with tenfold cross-validation. The features from intermediate layers were then extracted as input to the wELM to test the discriminative power of each individual layer. The best-performing model across the ten folds was chosen and tested on a holdout cohort from two sources. Feature maps after each convolutional layer were then visualized to monitor the trend as the layers propagated. Scatter plots were used to visualize the transformation of the data distribution. Finally, a class activation map was generated to highlight the region of interest from the model's perspective.
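
A minimal PyTorch sketch of a four-convolutional-layer CNN of the kind described above is shown below. The channel counts, kernel sizes, and the three-channel input (e.g., T2W, ADC, and DWI slices stacked as channels) are assumptions for illustration; the `return_features` hook shows how intermediate feature maps could be pulled out for the wELM stage.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, in_channels=3, n_classes=2):
        super().__init__()
        chans = [in_channels, 16, 32, 64, 128]
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks.append(nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2)))
        self.blocks = nn.ModuleList(blocks)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(chans[-1], n_classes)

    def forward(self, x, return_features=False):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)              # intermediate features, one per layer
        logits = self.fc(self.pool(x).flatten(1))
        if return_features:
            return logits, feats         # feats can feed the wELM classifier
        return logits
```

In a workflow like the one described, each entry of `feats` would be pooled and flattened into a vector per lesion, then passed to `train_welm` to test the discriminative power of that layer.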

RESULTS: Experimental trials indicated that the best input for the CNN was a modality combination of T2W, the apparent diffusion coefficient (ADC), and DWI at b = 50 (DWIb50). The convolutional features from the CNN, paired with a weighted extreme learning classifier, showed substantially better performance than an end-to-end trained CNN. The feature map visualization revealed findings similar to those on natural images: lower layers tend to learn low-level features such as edges and intensity changes, while higher layers learn more abstract, task-related concepts such as the lesion region. The generated saliency map showed that the model was able to focus on the region of interest where the lesion resided and to filter out background information, including the prostate boundary, rectum, etc.
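
As a hedged sketch of how such a saliency visualization could be produced, the function below computes a classic class activation map (CAM), assuming a network that ends in global average pooling followed by a linear layer, as in the `SmallCNN` sketch above. Variable names are illustrative; the paper's exact visualization procedure may differ.

```python
import torch
import torch.nn.functional as F

def class_activation_map(model, x, class_idx):
    """Return a CAM for one input x of shape (1, C, H, W), normalized to [0, 1]."""
    model.eval()
    with torch.no_grad():
        feats = x
        for block in model.blocks:       # feature maps of the last conv layer
            feats = block(feats)
        weights = model.fc.weight[class_idx]          # (n_channels,)
        # Weight each channel by its contribution to the target class logit.
        cam = torch.einsum("c,bchw->bhw", weights, feats)
        cam = F.relu(cam)                # keep only positive evidence
        cam = F.interpolate(cam[:, None], size=x.shape[-2:],
                            mode="bilinear", align_corners=False)[0, 0]
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

Overlaying the upsampled map on the input slice highlights the regions the model relied on, which is how a lesion-focused activation pattern like the one reported above can be verified visually.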

CONCLUSIONS: This work presents a customized workflow for the small and imbalanced prostate mpMRI data set, in which features are extracted from a deep learning model and then analyzed by a traditional machine learning classifier. In addition, this work helps reveal how deep learning models interpret mpMRI for the stratification of prostate cancer patients.

PubMed ID

32449176

ePublication

ePub ahead of print
