Characterization of Local Spatial Information Utilized in Training of Deep Convolutional Neural Networks (DCNN) for Automatic Segmentation of the Prostate on CT Images
Recommended Citation
Liu C, Gardner S, Wen N, Mohamed E, Movsas B, and Chetty I. Characterization of Local Spatial Information Utilized in Training of Deep Convolutional Neural Networks (DCNN) for Automatic Segmentation of the Prostate on CT Images. J Med Phys 2019; 46(6):e430.
Document Type
Conference Proceeding
Publication Date
8-2019
Publication Title
J Med Phys
Abstract
Purpose: The importance of the spatial image information utilized by a DCNN in automatic segmentation is central to understanding how to train the DCNN for optimal performance. Here we evaluate the behavior of a DCNN for prostate segmentation on CT images using the Local Interpretable Model-agnostic Explanations (LIME) technique [1], which enables generation of an interpretable model by characterizing local subgroups/clusters contained within the image.

Methods: Planning-CT (pCT) datasets for 1104 prostate cancer patients were retrospectively selected. Nine hundred sixty-four datasets were used for training/validation and 140 were used for testing. All images were resampled to a spatial resolution of 1 × 1 × 1.5 mm, and a DCNN was trained. The top-performing DCNN was chosen based on validation results and used to auto-segment the prostate on all testing images. Results were compared between DCNN- and physician-generated contours using the Dice similarity coefficient (DSC). The importance of each subregion was evaluated using forward feature selection following the LIME method, and 100 (of 4K) subregions were used for the final classification. Multiple experiments were carried out to select subregions using different numbers of random samples; the optimal parameter was determined when the selected subregions converged. Each of the 140 testing datasets was characterized using the same parameters.

Results: Selected subregions converged at >10K random samples. Altogether, 1058 subregions were selected, of which 775 (73%) appeared more than once and 23 (2%) appeared more than 70 times (i.e., in 50% of the 140 testing datasets). One subregion was selected 91 times. The highest-frequency subregions were observed to be closest to the prostate gland, bladder, and rectum.

Conclusion: The behavior of a DCNN-based prostate segmentation algorithm was characterized/explained using a group of subregions on a per-testing-sample basis. Characterization was consistent across datasets. Results showed that DCNN-based segmentation was associated primarily with image information in the close vicinity of the prostate gland, bladder, and rectum.
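The Methods describe resampling all pCT images to a 1 × 1 × 1.5 mm grid before training. The abstract does not state which software was used; the sketch below shows one common way to do this with SimpleITK, where the identity transform, linear interpolation, and the air fill value are illustrative assumptions.

```python
import SimpleITK as sitk

def resample_to_spacing(image, new_spacing=(1.0, 1.0, 1.5)):
    """Resample a CT volume to a fixed voxel spacing (x, y, z) in mm."""
    old_spacing = image.GetSpacing()
    old_size = image.GetSize()
    # New grid size that covers the same physical extent at the new spacing.
    new_size = [int(round(sz * sp / nsp))
                for sz, sp, nsp in zip(old_size, old_spacing, new_spacing)]
    return sitk.Resample(
        image,
        new_size,
        sitk.Transform(),          # identity transform (assumption)
        sitk.sitkLinear,           # linear interpolation (assumption)
        image.GetOrigin(),
        new_spacing,
        image.GetDirection(),
        -1000,                     # fill value for voxels outside the original grid (air, assumption)
        image.GetPixelID(),
    )
```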
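The DSC used to compare DCNN- and physician-generated contours is the standard overlap measure, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch on binary voxel masks follows; the function name and the empty-mask convention are illustrative, not taken from the abstract.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement (convention, assumption)
    return 2.0 * intersection / total
```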
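The LIME-based analysis perturbs image subregions, observes the effect on the model output, and ranks subregions by forward feature selection on a surrogate linear model. The following is only a conceptual sketch of that idea under stated assumptions: the subregion partition (e.g., a coarse grid or supervoxels) is precomputed, masked voxels are replaced by the image mean, and the scalar model response (e.g., DSC of the segmentation on the perturbed image against the unperturbed prediction) is supplied by the caller. Function and parameter names are hypothetical and this is not the authors' implementation.

```python
import numpy as np

def rank_subregion_importance(image, region_labels, model_score,
                              n_samples=10_000, n_select=100, seed=0):
    """Rank image subregions by LIME-style perturbation + forward selection.

    image         : CT volume as a numpy array
    region_labels : integer array (same shape) assigning each voxel to a subregion
    model_score   : callable mapping a perturbed image to a scalar response
    """
    rng = np.random.default_rng(seed)
    regions = np.unique(region_labels)
    n_regions = regions.size

    # 1. Randomly switch subregions on/off and record the model response.
    Z = rng.integers(0, 2, size=(n_samples, n_regions))   # binary perturbation codes
    baseline = image.mean()                                # fill value for masked voxels (assumption)
    scores = np.empty(n_samples)
    for i in range(n_samples):
        perturbed = image.copy()
        for j, r in enumerate(regions):
            if Z[i, j] == 0:
                perturbed[region_labels == r] = baseline
        scores[i] = model_score(perturbed)

    # 2. Forward feature selection on a linear surrogate: greedily add the
    #    subregion whose inclusion most reduces the residual error.
    #    (Unoptimized; intended to show the logic, not to be efficient.)
    selected, remaining = [], list(range(n_regions))
    for _ in range(min(n_select, n_regions)):
        best_j, best_err = None, np.inf
        for j in remaining:
            cols = selected + [j]
            X = np.column_stack([Z[:, cols], np.ones(n_samples)])
            coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
            err = np.sum((scores - X @ coef) ** 2)
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
        remaining.remove(best_j)
    return [regions[j] for j in selected]
```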
Volume
46
Issue
6
First Page
e430