Pathologist-like explainable AI for interpretable Gleason grading in prostate cancer

Document Type

Article

Publication Date

10-8-2025

Publication Title

Nat Commun

Keywords

Humans, Male, Prostatic Neoplasms, Neoplasm Grading, Artificial Intelligence, Pathologists, Observer Variation, Prostate

Abstract

The aggressiveness of prostate cancer is primarily assessed from histopathological data using the Gleason scoring system. Conventional artificial intelligence (AI) approaches can predict Gleason scores, but often lack explainability, which may limit clinical acceptance. Here, we present an alternative, inherently explainable AI that circumvents the need for post-hoc explainability methods. The model was trained on 1,015 tissue microarray core images, annotated with detailed pattern descriptions by 54 international pathologists following standardized guidelines. It uses pathologist-defined terminology and was trained using soft labels to capture data uncertainty. This approach enables robust Gleason pattern segmentation despite high interobserver variability. The model achieved comparable or superior performance to direct Gleason pattern segmentation (Dice score: 0.713±0.003 vs. 0.691±0.010) while providing interpretable outputs. We release this dataset to encourage further research on segmentation in medical tasks with high subjectivity and to deepen insights into pathologists' reasoning.
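The abstract reports segmentation quality via the Dice score and describes training on soft labels that encode annotation uncertainty across pathologists. As a minimal illustrative sketch (not the authors' implementation), the two quantities can be computed as follows; the function names and NumPy-based formulation here are assumptions for demonstration only:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def soft_label_cross_entropy(probs, soft_targets, eps=1e-12):
    """Per-pixel cross-entropy against soft (probabilistic) labels,
    e.g. per-class annotation frequencies across multiple pathologists.
    probs and soft_targets have shape (..., num_classes)."""
    return -np.mean(np.sum(soft_targets * np.log(probs + eps), axis=-1))

# Toy example: two overlapping 4x4 binary masks.
a = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
b = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(round(dice_score(a, b), 3))  # 2*3 / (4+3) ≈ 0.857
```

In a soft-label setup, `soft_targets` could hold the fraction of annotators assigning each Gleason pattern to a pixel, so the loss rewards predictions that match the annotator distribution rather than a single hard label.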

Medical Subject Headings

Humans; Male; Prostatic Neoplasms; Neoplasm Grading; Artificial Intelligence; Pathologists; Observer Variation; Prostate

PubMed ID

41062516

Volume

16

Issue

1

First Page

8959

Last Page

8959
