ECP 2023 Abstracts

S39 Virchows Archiv (2023) 483 (Suppl 1):S1–S391

between fibrosis and myocardium, and using Cellpose to detect single adipocytes.

Results: The accuracy of the pixel classifiers was estimated to be over 96% on a separate set of training images, and the preliminary tests look promising. We will annotate ground-truth regions in five randomly selected samples from different cases and compare them, blinded, with the automated classification. Similarity will be calculated using the Jaccard coefficient, and performance will be measured using the F1-score. The method will be further tested by applying it to five arrhythmogenic cardiomyopathy cases and matched controls in a blinded manner, and the results will be compared with the established diagnostic criteria of arrhythmogenic cardiomyopathy. The results of the study are pending and will be presented at the conference.

Conclusion: The expected outcome is an automated method for estimating fibrosis, fat, and residual myocytes in Picro Sirius Red-stained myocardial tissue. We expect that this method will contribute to a standardized and reproducible tool that can be used to establish a cardiac phenotype in cardiac pathology research projects and, hopefully, in future daily diagnostics.

OFP-10-006
IDH status prediction in gliomas using H&E slides and deep learning
Y. Friedmann*, D. Roitman, N. Shelestovich, I. Barshack, S. Ben Amitay
*Sheba Medical Center, Israel

Background & objectives: Determination of IDH mutation status is essential for glioma diagnosis and management. Yet, the limited availability and time-consuming nature of IDH tests pose significant challenges. We aim to develop a deep learning approach for IDH status prediction using H&E slides.

Methods: Our pipeline comprised tissue detection and tile extraction from 1819 H&E slides from Sheba Medical Center and TCGA.
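A tissue-detection and tiling step like the one described can be illustrated with a toy sketch. The function name, tile size and background threshold below are illustrative assumptions, and the image is a small greyscale nested list; a real pipeline would read pyramidal whole-slide images (e.g. with OpenSlide) rather than in-memory arrays.

```python
# Toy sketch of tissue detection + tile extraction (illustrative only).
# Greyscale convention: 0 = black (dense stain), 255 = white glass background.

def extract_tissue_tiles(image, tile_size=2, background=240):
    """Split an image into square tiles and keep those containing tissue.

    A tile is kept when its mean intensity falls below `background`,
    i.e. it is darker than the near-white glass of the slide.
    Returns the (row, col) origin of each retained tile.
    """
    rows, cols = len(image), len(image[0])
    tiles = []
    for r in range(0, rows - tile_size + 1, tile_size):
        for c in range(0, cols - tile_size + 1, tile_size):
            tile = [row[c:c + tile_size] for row in image[r:r + tile_size]]
            mean = sum(sum(row) for row in tile) / (tile_size * tile_size)
            if mean < background:
                tiles.append((r, c))
    return tiles

# 4x4 toy image: stained tissue in the top-left quadrant only
image = [
    [100, 120, 255, 255],
    [110, 130, 255, 255],
    [255, 255, 255, 255],
    [255, 255, 255, 255],
]
tiles = extract_tissue_tiles(image)  # only the top-left tile is kept
```

Only tiles passing this background filter would then be fed to the feature extractor, which keeps empty glass regions out of the training data.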
We used these tiles to train a self-supervised vision transformer to extract features, which were subsequently used to train a DeepMIL classifier for IDH status prediction. The classifier was trained using 5-fold cross-validation on H&E slides and their corresponding IDH status.

Results: The dataset for IDH classification comprised 323 histologically confirmed glioma cases with known IDH status obtained from Sheba MC, consisting of 378 H&E-stained slides. The dataset included 302 IDH-wildtype and 76 IDH-mutant slides, which were divided into training and testing sets. Our DeepMIL classifier achieved a mean AUC score of 0.88 in cross-validation. Further evaluation on the test set achieved an AUC score of 0.94. Adding TCGA slides with the same mutant/wild-type ratio achieved similar AUC scores. To enhance performance, we incorporated age and sex information into the model using logistic regression, resulting in AUC scores of 0.94 and 0.95 in the cross-validation and test sets, respectively.

Conclusion: Our study demonstrates the potential of deep learning in accurately predicting IDH status from H&E slides, achieving high AUC in cross-validation and test sets, which was further validated using TCGA slides. Incorporating patient information improved the model's performance. This approach could significantly reduce the time and cost associated with traditional molecular testing, particularly in resource-limited settings, thus improving patient outcomes. We further plan to expand the dataset, incorporate stain normalization techniques, predict additional biomarkers and evaluate the model's clinical utility.

OFP-10-007
A look at scanner-introduced variation in contrast, resolution, and colour across 10 different models of whole slide imaging (WSI) scanner
H. Pye*, D. Brettle, D. Kaye, C. Dunn, M. Humphries, D. Treanor
*National Pathology Imaging Co-operative, Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom

Background & objectives: Variation in images is introduced across the digital pathology pathway. The WSI scanner is a source of this variation; for example, variation in illumination, colour space, and/or signal-to-noise ratio will be reflected in image colour, contrast and resolution.

Methods: We measured the range of introduced variation across 10 different makes and models of WSI scanner. Ground-truth slides were prepared representing a range of contrast (neutral density filters, ranging from 100% transmission to 7% transmission), colour in relation to H&E (stained biopolymer; 3 intensities each of haematoxylin and eosin) and a resolution test pattern (line pairs down to 1 µm).

Results: Maximum and minimum contrast were shifted, and the linear relationship with % transmission was stretched differently between resultant images across scanners, in some cases resulting in a logarithmic-style relationship. These contrast patterns were reflected in H&E colour variation across all channels, apart from eosin red, which was saturated. Shifts were also seen within the colour space for some scanners, the extent and direction varying across scanners. No scanner could reproduce all the line pairs perfectly, with variation seen both between and within the images. The measured variabilities shown in this work will be reflected in any image dataset that uses a range of different scanners, impacting the value of that dataset.

Conclusion: Some believe computational techniques may be able to overcome image variability. However, if data is compromised or missing, in many cases it cannot be compensated for afterwards. We believe, as a minimum, the variation in image generation must be described to allow users of image processing and artificial intelligence applications to assess any impact.
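One way to characterise the contrast response described above is to regress measured pixel intensity against nominal % transmission and compare linear and logarithmic fits. The sketch below uses synthetic numbers, not the study's data; the transmission values and intensities are illustrative assumptions.

```python
import math

def r_squared(xs, ys):
    """R^2 of an ordinary least-squares line fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Nominal % transmission of neutral-density targets (illustrative values,
# not the actual filter set) and synthetic mean intensities from one scanner.
transmission = [100, 70, 50, 25, 12, 7]
intensity = [250, 180, 130, 68, 35, 22]

# A scanner with a near-linear response gives a high R^2 for the linear fit;
# a logarithmic-style response would make the log fit the better of the two.
r2_linear = r_squared(transmission, intensity)
r2_log = r_squared([math.log(t) for t in transmission], intensity)
```

Comparing the two coefficients across scanners would give a single per-device summary of how far each response curve departs from linearity.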
The introduction of quality processes and test tools is key to this measurement and assessment and could be used to reduce image variability where needed.

Funding: National Pathology Imaging Co-operative, NPIC (Project no. 104687), supported by a £50m investment from the Data to Early Diagnosis and Precision Medicine challenge, managed and delivered by UK Research and Innovation (UKRI)

OFP-10-008
Towards an open-source Transformer-based multiclass segmentation pipeline for basic kidney histology
J. He, P. Valkema*, J. Long, J. Li, S. Florquin, M. Naesens, T. Nguygen, S. Meziyerh, A. de Vries, O. de Boer, F. Verbeek, Z. Xiong, J. Kers
*Pathology, LUMC, Leiden, and Pathology, Amsterdam UMC, The Netherlands

Background & objectives: Multiclass segmentation of the microanatomy of kidney biopsies is an important and non-trivial task in computational renal pathology, forming the basis for development of more complex tools. In a multicentre study, we tested the performance of a novel Transformer-based workflow.

Methods: We densely annotated basic anatomical objects (glomeruli, tubules and vessels) in 261 regions of interest of kidney biopsies from Amsterdam, Utrecht and Leiden (the Netherlands). Test performance was assessed on 24 annotated biopsies from Leuven (Belgium), with subgroup analysis for the extent of fibrosis and inflammation (<25%, 26–50%, >50%). We compared models trained with the CNN-based U-Net and the Transformer-based Mask2Former.

Results: Mask2Former with the Swin-B encoder (SB-M2F) showed the highest mean (IoU 0.75 vs 0.69) and per-class external test set performance (IoU 0.92 vs 0.88 for glomeruli, 0.89 vs 0.85 for tubules and 0.59 vs 0.48 for vessels) compared to the best-performing U-Net with a ResNet18 encoder (R18-U-Net). SB-M2F compared to R18-U-Net performed particularly better for each class with increasing
