Egevad L, Swanberg D, Delahunt B, Ström P, Kartasalo K, Olsson H, Berney DM, Bostwick DG, Evans AJ, Humphrey PA, Iczkowski KA, Kench JG, Kristiansen G, Leite KRM, McKenney JK, Oxley J, Pan C, Samaratunga H, Srigley JR, Takahashi H, Tsuzuki T, van der Kwast T, Varma M, Zhou M, Clements M, Eklund M
Virchows Arch. 477 (6) 777-786 [2020-12-00; online 2020-06-15]
The International Society of Urological Pathology (ISUP) hosts a reference image database supervised by experts with the purpose of establishing an international standard in prostate cancer grading. Here, we aimed to identify areas of grading difficulty and to compare the results with those obtained from an artificial intelligence (AI) system trained in grading. In a series of 87 needle biopsies of cancers selected to include problematic cases, experts failed to reach a 2/3 consensus in 41.4% (36/87). Among consensus and non-consensus cases, the weighted kappa was 0.77 (range 0.68-0.84) and 0.50 (range 0.40-0.57), respectively. Among the non-consensus cases, four main causes of disagreement were identified: the distinction between Gleason score 3 + 3 with tangential cutting artifacts vs. Gleason score 3 + 4 with poorly formed or fused glands (13 cases), Gleason score 3 + 4 vs. 4 + 3 (7 cases), Gleason score 4 + 3 vs. 4 + 4 (8 cases), and the identification of a small component of Gleason pattern 5 (6 cases). The AI system obtained a weighted kappa of 0.53 among the non-consensus cases, placing it sixth in reproducibility among the 24 observers. AI may serve as a decision-support tool and decrease inter-observer variability through its ability to make consistent decisions. How these cancer patterns should be graded to best predict outcome and guide treatment warrants further clinical and genetic studies. Results of such investigations should be used to improve the calibration of AI systems.
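The agreement statistic reported above is Cohen's weighted kappa. As an illustration only, the sketch below shows how such a statistic can be computed with scikit-learn; the grade vectors are hypothetical ISUP grade group assignments and the quadratic weighting scheme is an assumption, since neither is taken from the paper.

```python
# Minimal sketch (not from the paper): weighted Cohen's kappa between two graders.
# The grade lists are illustrative ISUP grade groups (1-5) for the same biopsies
# assigned by two hypothetical observers; the study's raw grading data are not
# reproduced here, and the weighting scheme ("quadratic") is assumed.
from sklearn.metrics import cohen_kappa_score

observer_a = [1, 2, 2, 3, 4, 5, 3, 2, 4, 1]  # hypothetical grades, observer A
observer_b = [1, 2, 3, 3, 4, 4, 3, 2, 5, 1]  # hypothetical grades, observer B

kappa = cohen_kappa_score(observer_a, observer_b, weights="quadratic")
print(f"Quadratic-weighted kappa: {kappa:.2f}")
```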
PubMed 32542445
DOI 10.1007/s00428-020-02858-w
Crossref 10.1007/s00428-020-02858-w
PMC PMC7683442
pii 10.1007/s00428-020-02858-w