
Automatic Assessment of Whole-Slide Image Quality and Its Impact on AI Cancer Diagnosis

Abstract

Background: The growing adoption of digital pathology enables remote image access and consultation as well as the use of powerful image analysis algorithms. Like specimen processing, the slide digitization process can introduce additional artifacts, including out-of-focus (OOF) regions of varying severity and size. OOF regions are often detected only upon attempted high-power review of the affected areas, potentially triggering rescanning and causing workflow delays. Although scan-time operator screening for whole-slide OOF is feasible, manual screening for OOF areas affecting only parts of a slide is intractable.

Methods: In this study, we developed an automated convolutional neural network algorithm (ConvFocus) to exhaustively categorize OOF regions on digitized slides. The algorithm was applied to slides spanning 11 different tissue types and digitized on four different scanners, and its predictions were compared with pathologist-annotated focus quality grades for 500 regions.

Results: When compared to pathologist-graded focus quality, the model achieved correlation coefficients above 0.84 for two different scanners.

Conclusions: The availability of whole-slide OOF categorization could enable on-the-spot rescans prior to pathologist review, potentially reducing the impact of digitization focus issues on the clinical workflow.
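
As a rough illustration of exhaustive, patch-level focus-quality categorization, the sketch below builds a small convolutional classifier that maps an image patch to a discrete focus grade. The abstract does not describe ConvFocus's architecture, so the patch size, number of grades, and layer widths here are illustrative assumptions rather than the authors' design.

    # Minimal sketch of a patch-level focus-quality classifier (assumptions:
    # patch size, grade count, and architecture are NOT from the paper).
    import tensorflow as tf

    PATCH_SIZE = 139   # assumed patch edge length in pixels
    NUM_GRADES = 4     # assumed number of discrete focus-quality grades

    def build_focus_classifier() -> tf.keras.Model:
        """Small CNN mapping an RGB patch to a distribution over focus grades."""
        return tf.keras.Sequential([
            tf.keras.layers.Input(shape=(PATCH_SIZE, PATCH_SIZE, 3)),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(128, 3, activation="relu"),
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(NUM_GRADES, activation="softmax"),
        ])

    model = build_focus_classifier()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # To grade a whole slide exhaustively, the slide would be tiled into
    # patches, each patch scored, and the per-patch grades assembled into a
    # focus-quality map of the slide.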
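
The reported comparison against pathologist grades uses a correlation coefficient, but the abstract does not name the statistic; Spearman's rank correlation is assumed in the sketch below because focus grades are ordinal. The grade arrays are toy placeholders, not the study's data.

    # Illustrative evaluation of predicted focus grades against pathologist
    # grades (toy placeholder values; Spearman correlation is an assumption).
    import numpy as np
    from scipy.stats import spearmanr

    pathologist_grades = np.array([0, 1, 1, 2, 3, 0, 2, 3])  # per-region grades
    model_grades       = np.array([0, 1, 2, 2, 3, 0, 2, 2])  # model predictions

    rho, p_value = spearmanr(pathologist_grades, model_grades)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")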