Cytokeratin Immunohistochemistry-Supervised Deep Learning for Detecting Breast Cancer Lymph Node Metastases and Evaluation of its Clinical Utility
Article Information
Yueh-Hung Chou1, Chien-Hui Wu2*, 3, Min-Hsiang Chang1, 3, Hsin-Hsiu Tsai4, and Yi-Ting Peng4
1Department of Anatomical Pathology, Far Eastern Memorial Hospital, No. 21, Section 2, Nanya S. Road, Banqiao District, New Taipei City, 220, Taiwan
2Department of Pathology, Taiwan Adventist Hospital, No.424, Sec. 2, Bade Rd., Songshan District, Taipei City 10556, Taiwan
3Li Jen Pathology Clinic, 1F., No. 16, Jinfeng St., Neihu Dist., Taipei City 114063, Taiwan
4AI Lab, Quanta Computer Inc., No. 211, Wenhua 2nd Rd., Guishan Dist., Taoyuan City 333, Taiwan
*Corresponding author: Chien-Hui Wu, Department of Pathology, Taiwan Adventist Hospital, No. 424, Sec. 2, Bade Rd., Songshan District, Taipei City 10556, Taiwan
Received: January 27, 2025; Accepted: February 03, 2025; Published: February 10, 2025
Citation: Yueh-Hung Chou, Chien-Hui Wu, Min-Hsiang Chang, Hsin-Hsiu Tsai, and Yi-Ting Peng. Cytokeratin Immunohistochemistry-Supervised Deep Learning for Detecting Breast Cancer Lymph Node Metastases and Evaluation of its Clinical Utility. Journal of Cancer Science and Clinical Therapeutics 9 (2025): 01-08.
Abstract
Lymph node status is an indispensable examination for breast cancer therapy. To detect small, inconspicuous metastatic carcinomas, pathologists usually require immunohistochemical (IHC) staining for cytokeratin (CK). Here, we proposed an IHC-supervised algorithm that creates virtual CK masks on lymph node hematoxylin and eosin (H&E) images and evaluated its clinical utility. We enrolled 194 patients with breast cancer surgery-related axillary lymph nodes containing metastases of various sizes. The deep learning network, Unet++ with EfficientNet-B7 as the backbone, was trained with ground truth extracted from consecutive or re-stained CK slides. At the pixel level, the model had high accuracy (0.98 on average) and decent recall (0.64 on average) and performed best for macrometastasis, followed by micrometastasis and isolated tumor cells (ITC). At the whole-slide image (WSI) level, all 25 slides with macrometastases and most slides with micrometastases (15/16) were classified correctly. For ITC, 17/19 patients were identified; however, certain benign cells were misrecognized in 18/19 negative patients. In clinical settings, artificial intelligence can help pathologists detect micrometastatic carcinoma and significantly decrease reading time. IHC-supervised deep learning is robust and efficient, providing substantial, high-quality ground truth. The virtual CK masks and augmented WSI system enhanced pathologists’ ability to search for tumors in lymph nodes.
Keywords
Immunohistochemistry, Artificial intelligence, Breast cancer, Virtual cytokeratin, Lymph node.
Article Details
Introduction
Breast cancer is the most common malignancy among women. According to the World Health Organization, 2.3 million women were diagnosed with breast cancer and 670,000 died from it worldwide in 2022. In evaluating these patients, axillary sentinel lymph node sampling is critical for determining the surgical approach, cancer staging, and treatment strategy [1]. If no macrometastasis (> 2 mm) is observed in the sentinel nodes during surgery, axillary dissection can be waived and postoperative lymphedema prevented. For nodes with small metastatic foci (≤ 2 mm), such as micrometastases or isolated tumor cells (ITC), patients can be treated with radiotherapy or chemotherapy [2]. However, detecting small metastatic lesions in frozen or permanent sections is challenging for pathologists. Immunohistochemical (IHC) staining with a cytokeratin (CK) antibody is commonly used to reveal these areas, helping pathologists discover missed carcinomatous cells in 12%–29% of initially negative patients [3].

With the development of deep learning and computing power in recent decades, studies on digital pathology and image analysis with artificial intelligence (AI) have flourished [4, 5]. Detecting metastatic carcinoma in breast surgery-related lymph nodes is one of the major focuses of this field [6, 7]. In the CAMELYON16 challenge, research teams used the most advanced neural network architectures of the time (such as GoogLeNet, ResNet, and VGG-16) to build algorithms for detecting metastatic carcinoma. The top five models competed with pathologists on the provided datasets, and some performed better than humans for patients with micrometastasis. However, a dataset based on human annotations must first be built to train and test these models. Because only pathologists can identify the lesions, experts must perform tedious labeling, which requires substantial time and money. Moreover, humans cannot precisely delineate tumor areas, so the manual ground truth is imperfect and difficult for AI to learn from.
To solidify the ground truth, researchers have used IHC images to assist pathologists in labeling data [8, 9]. Phosphorylated histone H3 IHC, for example, can help pathologists accurately annotate mitoses and avoid mislabeling mimickers. Additionally, some studies have directly extracted IHC-positive areas using color deconvolution and used them as the ground truth to train algorithms [10, 11]. IHC-based ground truth is more precise and accurate (depending on the correlation between the targets and antibodies) than handcrafted data, which mitigates the demand for training data and the dependence on expertise, allowing valuable expert resources to be reserved for model evaluation and clinical validation. With modern hardware and programming platforms, building deep-learning neural networks is straightforward if datasets are available. However, an algorithm acts much like a car engine: it is only the core part of the application software. To assess the clinical benefits, the digital pathology environment and implementation must also be considered [4]. Researchers have recently deployed models in clinical workflows and observed benefits [12, 13].

In this study, we used CK IHC images as the ground truth to create a virtual CK staining model that predicts the epithelial area in H&E images of breast cancer surgery-related axillary lymph nodes. In addition, we implemented the AI in a user interface and analyzed how it improved diagnostic accuracy and efficiency.
Methods
This study was approved by the Research Ethics Review Committee of the Far Eastern Memorial Hospital, New Taipei City, Taiwan (No. 112230-E/DPAI-009). All slides and blocks in the study were used in a clinical diagnostic setting, retrieved from the repository, and did not contain personal information. All methods were performed in accordance with relevant guidelines and regulations. The requirement for informed patient consent was waived by the Research Ethics Review Committee.
Case collection

In this study, 194 patients with breast cancer surgery-related axillary lymph nodes were enrolled, and blocks were retrieved from Far Eastern Memorial Hospital (New Taipei City, Taiwan) between January 2020 and December 2023. Twenty-eight patients were benign, and 166 contained metastases of various sizes (Table 1). The patients were separated into training sets, comprising a consecutive CK group (Set A) and a re-stained CK group (Set B), and test sets. Test Set B, a subset of Test Set A, was designed for clinical performance evaluation.
Specimen staining and image acquisition

The retrieved formalin-fixed paraffin-embedded blocks were cut into 5-μm-thick sections and mounted on hydrophilic slides. For the consecutive CK group, two adjacent sections were stained with H&E and CK antibodies (AE1/AE3/PCK26, Ventana Medical Systems, Oro Valley, AZ, USA) on a Benchmark Ultra stainer (Ventana Medical Systems). The slides were scanned with a Hamamatsu S210 scanner (Hamamatsu Photonics, Iwata City, Shizuoka, Japan) at 40× magnification (0.23 μm per pixel) and saved in NDPI format. In the re-stained CK group, one section was cut from each block and stained with H&E. The slides were scanned, and the coverslips were then removed. After bleaching with potassium permanganate, the slides were re-stained with CK for IHC and scanned again.
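For readers who wish to handle the scanned files programmatically, the minimal sketch below shows how a region of an NDPI whole-slide image can be read at the stated 0.23 μm-per-pixel resolution. It assumes the openslide-python library is available; the file name and coordinates are purely illustrative and are not part of the study's pipeline.

```python
import openslide

# Open one of the scanned NDPI files (file name is hypothetical).
slide = openslide.OpenSlide("lymph_node_HE.ndpi")
width, height = slide.dimensions             # level-0 size in pixels (0.23 um/px)

# Read a 1024 x 1024 pixel patch at full resolution (level 0) and save it.
patch = slide.read_region((10000, 20000), 0, (1024, 1024)).convert("RGB")
patch.save("patch_he.png")
```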
Table 1: Case enrollment and ground truth diagnosis.

| | Cases | Macro-metastasis | Micro-metastasis | Isolated tumor cells | Negative |
|---|---|---|---|---|---|
| Training Set A (Consecutive) | 108 | 78 | 12 | 11 | 7 |
| Training Set B (Re-stained) | 14 | 6 | 2 | 4 | 2 |
| Test Set A (Re-stained) | 72 | 25 | 16 | 12 | 19 |
| Test Set B (Re-stained) | 51 | 22 | 8 | 5 | 16 |
Model training

The Unet++ neural network with EfficientNet-B7 as the backbone was pre-trained on ImageNet. The first model was trained using Set A, which consisted of 108 H&E-stained whole-slide images (WSIs) and their corresponding CK images (Figure 1). The ground truth was extracted using color deconvolution after image alignment; however, deviations between consecutive slides remained. To enhance the first model, another 14 H&E-stained WSIs and their re-stained CK images (Set B) were added for fine-tuning, resulting in the second model. Because the predictions of this second model on Set A were already better than the ground truths obtained from the consecutive CK images, the second model was used to infer the H&E images of Set A and create pseudo-labels. Finally, the refined Set A and Set B were combined to fine-tune the final model (semi-supervised learning).
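As a rough illustration of the two building blocks described above, the sketch below extracts a binary CK mask from an aligned IHC patch via color deconvolution and instantiates a Unet++ network with an EfficientNet-B7 encoder. It assumes scikit-image and the segmentation_models_pytorch package; the DAB threshold and function name are illustrative rather than the study's exact implementation.

```python
import numpy as np
from skimage.color import rgb2hed
import segmentation_models_pytorch as smp

def ck_mask_from_ihc(ihc_rgb: np.ndarray, dab_threshold: float = 0.05) -> np.ndarray:
    """Binary CK-positive mask from an aligned CK-IHC patch (H, W, 3 RGB array)."""
    hed = rgb2hed(ihc_rgb)              # unmix haematoxylin / eosin / DAB stains
    dab = hed[:, :, 2]                  # the DAB channel carries the CK signal
    return (dab > dab_threshold).astype(np.uint8)  # illustrative threshold

# Unet++ with an EfficientNet-B7 encoder pre-trained on ImageNet, producing a
# single-channel mask (epithelium vs. background); the extracted CK masks serve
# as training targets for patches of the paired H&E image.
model = smp.UnetPlusPlus(
    encoder_name="efficientnet-b7",
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,
)
```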
Clinical performance evaluation of the lymph node model

To evaluate how this AI could aid pathologists, we chose 51 lymph node slides (Test Set B) for two pathologists (Drs. A and B) to read in three different modalities: glass slides, WSI without AI, and WSI with AI. In the first run, the pathologists examined the glass slides using a light microscope. Once they had made their diagnoses, the slides were scanned, bleached, and re-stained with CK to establish the definitive diagnoses. After a washout period (more than 2 weeks) and case shuffling, the pathologists examined and diagnosed WSIs of the same cases. The WSIs were then inferred using our AI within the Smart Pathology System (version 1.6.0; Quanta Computer, Taoyuan, Taiwan), in which suspicious epithelial areas were highlighted with green masks; the software automatically measured mask sizes and classified the tumors into four categories: macrometastasis (> 2 mm), micrometastasis (> 0.2 mm and ≤ 2 mm), ITC (≤ 0.2 mm), and negative [3]. Finally, after another washout period, both doctors read the augmented WSIs with the virtual epithelial masks. Diagnostic accuracy and reading time were recorded for performance evaluation.
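The sketch below illustrates the size-based categorization described above, binning the largest connected predicted focus with the 2 mm and 0.2 mm cut-offs. Because the software's exact measurement rule is not specified in the text, the longest axis of the largest focus is used here as an assumed proxy for the reported mask size; in practice the mask would likely be processed at a downsampled level with the microns-per-pixel value scaled accordingly.

```python
import numpy as np
from skimage.measure import label, regionprops

MPP = 0.23  # microns per pixel of the level-0 scan

def classify_wsi(pred_mask: np.ndarray, mpp: float = MPP) -> str:
    """Bin a binary virtual-CK mask into the four reported categories."""
    labelled = label(pred_mask)              # connected predicted foci
    if labelled.max() == 0:
        return "negative"
    # Longest axis of the largest focus, converted from pixels to millimetres.
    longest_mm = max(r.major_axis_length for r in regionprops(labelled)) * mpp / 1000.0
    if longest_mm > 2.0:
        return "macrometastasis"
    if longest_mm > 0.2:
        return "micrometastasis"
    return "isolated tumor cells"
```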
Statistical analysis

Python (ver. 3.8.13; https://www.python.org/) was used to calculate model performance metrics, including intersection over union (IoU), recall, precision, and area under the curve (AUC). Excel in Office 365 (Microsoft Corp., Redmond, WA, USA) was used for statistical analysis. Reading time differences between modalities were analyzed using a paired t-test, and statistical significance was set at p < 0.05.
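A minimal sketch of these calculations is shown below: pixel-level IoU, recall, precision, accuracy, and AUC for one slide, followed by a paired t-test on per-case reading times. It assumes scikit-learn and SciPy; the array contents are illustrative values, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from scipy.stats import ttest_rel

def pixel_metrics(y_true: np.ndarray, y_prob: np.ndarray, thr: float = 0.5) -> dict:
    """Pixel-level metrics for one WSI.

    y_true: flattened binary ground-truth pixels; y_prob: predicted probabilities.
    """
    y_pred = (y_prob >= thr).astype(np.uint8)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return {
        "iou": tp / (tp + fp + fn),
        "recall": tp / (tp + fn),
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "auc": roc_auc_score(y_true, y_prob),
    }

# Paired t-test on per-case reading times for two modalities (illustrative values).
time_wsi_only = np.array([80, 65, 120, 95, 70], dtype=float)
time_wsi_with_ai = np.array([50, 40, 75, 60, 45], dtype=float)
t_stat, p_value = ttest_rel(time_wsi_only, time_wsi_with_ai)  # significant if p < 0.05
```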
Results
Model performance at pixel-level
The model was evaluated for pixel-level classification using the 72 cases in Test Set A (Figure 2 and Table 2). Performance was highest for macrometastasis in terms of the predicted CK-stained area (average IoU: 0.66; recall: 0.81; AUC: 0.99), followed by micrometastasis (average IoU: 0.48; recall: 0.68; AUC: 0.99). ITC showed lower scores (average IoU: 0.15; recall: 0.24; AUC: 0.96), owing to its small size.
Model performance at WSI-level

In the WSI-level classification (Figure 3), all 25 patients with macrometastasis were categorized correctly; 15 of the 16 patients with micrometastasis were classified correctly, while one was incorrectly identified as ITC. Among the patients with ITC, 17/19 were correctly identified. However, owing to the high sensitivity of our model, scattered benign cell aggregates (macrophages and endothelial cells) were misrecognized as small tumor nests in 18/19 negative patients.
Table 2: Model performance at the pixel level.

| | IoU (95% CI) | Recall (95% CI) | Precision (95% CI) | Accuracy (95% CI) | AUC (95% CI) |
|---|---|---|---|---|---|
| Macro-metastasis | 0.6635 ± 0.051 | 0.8141 ± 0.047 | 0.7739 ± 0.035 | 0.9709 ± 0.007 | 0.9872 ± 0.003 |
| Micro-metastasis | 0.4848 ± 0.121 | 0.6817 ± 0.141 | 0.5888 ± 0.131 | 0.9993 ± 0.001 | 0.9942 ± 0.004 |
| Isolated tumor cells | 0.1475 ± 0.140 | 0.2445 ± 0.213 | 0.2546 ± 0.210 | 0.9998 ± 0.001 | 0.9558 ± 0.031 |
Figure 2: Metastatic carcinoma predicted by the virtual CK. The algorithm inferred H&E images and detected carcinomatous cells. The prediction masks were consistent with CK-positive areas in patients with macro- and micro-metastases. However, some isolated tumor cells (fourth column) were missed.
Figure 3: Confusion matrix of model performance at the whole-slide image (WSI) level. The model was oversensitive to suspicious tumor cells; as a result, most negative cases were classified as isolated tumor cells (ITC). The red line separates macro-metastasis from other lesions. The model displayed perfect accuracy for macro-metastasis.
Using glass slides and microscopy, both pathologists identified all patients with macrometastasis and all negative cases, whereas only half of the micrometastases were diagnosed correctly and all five patients with ITC were missed. When reading the WSIs, they again diagnosed all macrometastases and negative lymph nodes; more micrometastases were diagnosed (six for Dr. A and five for Dr. B), but all patients with ITC were still missed. With AI assistance (Figure 4), even more micrometastatic lesions were recognized correctly (seven for Dr. A and all eight for Dr. B), and one patient with ITC was detected by Dr. B. Although the AI mistook a few benign mimickers, such as macrophages and endothelial cells, in the negative cases, both pathologists excluded these false-positive objects, preventing overdiagnosis. Overall, pathologists achieved better accuracy with AI assistance, particularly in detecting small metastatic foci.
With glass slides, both pathologists required the least time to diagnose macrometastases, followed by micrometastases and ITC (Table 3). The negative cases required the longest time because every corner of the sections had to be checked. With WSIs, reading time followed a similar pattern, although one pathologist (Dr. A) needed more time than with a microscope to check for micrometastases (22 s more on average), ITC (12 s), and negative cases (17 s). However, when the pathologists used pre-inferred virtual CK masks (Figure 4) and automatic measurement, the time required to confirm micrometastasis dropped significantly (39 and 60 s less for Drs. A and B, respectively). The reading times for ITC and negative cases also decreased for Dr. B.
Figure 4: Virtual CK masks for pathologists. The software highlighted the predicted areas in light green, making them easy for users to notice. The patient with ITC in column 2 was missed in the glass-slide and WSI readings, but Dr. B recognized the lesion with the AI-aided system. A few macrophages and endothelial cells might deceive the AI, but pathologists could exclude them correctly.
Table 3: Diagnosis accuracy and average read time (sec) in different modalities.
* WSI compared to glass slides, p < 0.05
** WSI with AI compared to WSI only, p < 0.05
*** All 5 cases were classified as ITC by AI, but tumor cells were correctly located in only 3 cases.
**** Of the 16 negative cases, 14 were classified as ITC and 1 as micro-metastasis.
Discussion
In histopathological examinations, identifying the epithelial component is essential for grading dysplasia in cervical specimens [14], defining cancer areas in gastrointestinal biopsies [15], and screening for metastatic carcinoma in lymph nodes [3]. Years of training are required for pathologists to recognize the various epithelial components; however, even for experienced experts, it is still impossible to identify every type of epithelium on a slide. CK IHC is therefore usually required to reveal these cells. It is an accurate and reliable tool for detecting specific proteins in epithelium-derived cells, but it costs 10–20 US dollars per CK stain, so performing CK IHC on every slide is impractical and uneconomical, increasing the financial and labor burden on the laboratory and delaying the turnaround time of reports. We wished to see whether a virtual CK-staining AI could meet these demands. In this study, instead of manual annotation, we used consecutive and re-stained CK IHC to build a virtual CK algorithm that can predict metastatic carcinoma in lymph node H&E images. The IHC-supervised deep learning model achieved high accuracy in detecting variably sized lesions; more importantly, it improved diagnostic accuracy and efficiency.

Virtual CK algorithms, or AI models that can segment epithelial areas, have been studied for years. Initially, researchers used image textures such as local binary patterns to segment different areas in H&E images [16]. Subsequently, deep learning was introduced and evolved from simple convolutional neural networks to sophisticated transformers [14, 17, 18]. However, regardless of how advanced the neural networks are, human annotations are inevitably required as ground truth to train and evaluate these models. Recently, studies have begun using IHC images as the ground truth to train models [10, 11]. IHC-supervised machine learning offers several advantages. IHC highlights target cells by their nature (proteins) rather than by morphology, on which pathologists rely heavily, making the ground truth extracted from IHC more accurate. Additionally, IHC images can provide extremely precise annotations at the pixel level, far beyond what humans are capable of. Lastly, research teams can harvest ground truth from IHC images regardless of their pathology expertise, which saves time and money. As a result, IHC images can offer economical, substantial, and high-quality datasets from a limited number of patients (122 training cases in our study, fewer than half of the 270 cases in CAMELYON16), allowing AI to learn efficiently.
IHC-supervised learning is efficient and can be applied to other immunostains and study topics [19]. For example, if we replaced CK with CD45 to highlight lymphocytes, we could build a neural network to predict tumor-infiltrating lymphocyte areas and reveal the spatial information of intratumoral immune responses. Another possibility is to use desmin and S-100 antibodies to visualize muscle and neural tissues in gastrointestinal specimens, allowing algorithms to segment the muscle layers and ganglia to facilitate the evaluation of tumor invasion depth and perineural permeation. Using generative AI [15, 19, 20], such as generative adversarial networks, computers can generate vivid images reminiscent of genuine IHC that correspond perfectly to the original H&E stain without additional alignment. These alternative images are cheaper than real IHC, reducing costs to the health insurance system. Instead of performing routine IHC, precious tissues (especially small biopsy specimens) can be preserved for molecular or genetic examination.

IHC-supervised AI or virtual IHC could serve as alternative assessment methods; however, they are not technically perfect. In this study, our model achieved high accuracy (0.98 on average) and decent recall (0.64 on average) at the pixel level. At the WSI level, however, the AI was oversensitive, and most negative cases were misclassified as ITC because it mistook some macrophages and endothelial cells for tumor nests; because these misrecognized foci were minute, the affected cases fell into the ITC category. The model also had difficulty detecting discohesive tumor cells in a few patients with ITC (false negatives). In patients containing only a small number of cancer cells (fewer than 100), the model did not find the actual lesions, and the cases were classified as ITC only because of benign mimickers. We attempted an overfitting test for these difficult cases: even when engineers simplified the dataset and added the target data to the training set to force the model to overfit on difficult ITCs, it still failed to detect the discrete tumor cells, a task that was likewise nearly impossible for pathologists on H&E alone. With the neural network structures used here, we assume that more data might not improve performance, indicating that IHC-supervised AI or virtual IHC cannot replace actual IHC staining.
Although the AI cannot function as a real CK IHC, this study demonstrated that digital pathology combined with AI enhances pathologists’ diagnostic sensitivity and efficiency. Both pathologists had similar accuracy when reading the glass slides and the WSIs. They classified micrometastatic lesions better because measurement is more precise in the WSI viewer; however, they sometimes required extra time (less than 1 min) to navigate the WSI and measure the lesion. Aided by AI, both pathologists detected more micrometastatic cancers in the lymph nodes. Because the software pointed out the suspicious foci in the WSIs, they could check these areas directly. In addition, the software quantified the tumor area and classified it into the four categories, saving time because the pathologists did not have to measure it manually. However, the benefits for ITC were limited: although the model correctly predicted parts of the tumor nests, pathologists could not confirm them by morphology. For the negative cases, the model falsely labeled scattered non-tumor cells, but the pathologists spent less time because they did not need to examine the whole lymph node sections. In summary, neither humans nor standalone AI performed perfectly; however, humans performed better with AI assistance. Rather than obsessing over the accuracy of AI, it is more realistic to focus on how it can support pathologists in clinical work [13].

This study had a few limitations. Most lymph node tissues were formalin-fixed after intraoperative examination (frozen sections), meaning that their morphology differed from that of the frozen sections commonly used for sentinel lymph node diagnosis. Frozen-section images are unstandardized and contain various artifacts, such as nuclear retraction and chatter marks, which hinder model inference [7], and obtaining re-stained CK images for IHC-supervised training is difficult. Additionally, the dataset was obtained from a single medical center. Because the quality of H&E staining differs among laboratories, the model will require fine-tuning and customization before being applied at other institutions. Finally, we analyzed the WSIs on a standalone computer, and the average inference time for one WSI file was approximately 5 min (using an NVIDIA A6000 GPU). To minimize the waiting time (from the doctor’s order to completed inference), it will be necessary to integrate the AI into a digital pathology information system so that candidate cases can be pre-inferred before pathologists read the H&E slides.
In conclusion, we developed a CK IHC-supervised algorithm to detect metastatic carcinoma in H&E-stained lymph node images. The training pipeline was efficient and economical owing to its high-quality ground truth. The virtual CK AI was highly accurate for both macro- and micrometastases but was oversensitive. Despite this, it showed clinical benefits in supporting pathologists to reach better and quicker diagnoses. In the future, it will be necessary to incorporate diverse AI models into the digital workflow of pathologists.
Acknowledgments
The authors thank Dr. Chu Hsiu-Yi for her support.
Author contributions
Study design: MHC, HHT, and CHW. Data acquisition: MHC, CHW, and YHC. Creation of software used in this study: HHT. Data analysis: HHT and MHC. Manuscript writing: YHC, MHC, HHT, and YTP. All authors have read and approved the final manuscript.
Data availability statement
The datasets used and analyzed in the current study are available from the corresponding author upon reasonable request.
Competing interests
The authors declare no competing interests.
Funding
The project was funded by Quanta Medical Technology Foundation (QMTF).
References
- Zhu H, Dogan B E. American Joint Committee on Cancer's Staging System for Breast Cancer, Eighth Edition: Summary for Clinicians. Eur. J. Breast Health 17 (2021): 234-238.
- Luo S, Fu W, Lin J, et al. Prognosis and local treatment strategies of breast cancer patients with different numbers of micrometastatic lymph nodes. World J. Surg. Oncol 21 (2023): 202.
- Apple S K. Sentinel lymph node in breast cancer: review article from a pathologist's point of view. J. Pathol. Transl. Med 50 (2016): 83-95.
- Zarella M D, et al. Artificial intelligence and digital pathology: clinical promise and deployment considerations. J. Med. Imaging 10 (2023): 051802.
- Litjens G, Sánchez C I, Timofeeva N, et al. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Sci. Rep 6 (2016): 26286.
- Ehteshami Bejnordi B, Veta M, Johannes van Diest P, et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 318 (2017): 2199-2210.
- Kim Y G, Song I H, Lee H, et al. Challenge for diagnostic assessment of deep learning algorithm for metastases classification in sentinel lymph nodes on frozen tissue section digital slides in women with breast cancer. Cancer Res. Treat 52 (2020): 1103-1111.
- Turkki R, Linder N, Kovanen P E, et al. Antibody-supervised deep learning for quantification of tumor-infiltrating immune cells in hematoxylin and eosin stained breast cancer samples. J. Pathol. Inform 7 (2016): 38.
- Ibrahim A, Toss M S, Makhlouf S, et al. Improving mitotic cell counting accuracy and efficiency using phosphohistone-H3 (PHH3) antibody counterstained with haematoxylin and eosin as part of breast cancer grading. Histopathology 82 (2023): 393-406.
- Valkonen M, Isola J, Ylinen O, et al. Cytokeratin-supervised deep learning for automatic recognition of epithelial cells in breast cancers stained for ER, PR, and Ki-67. IEEE Trans. Med. Imaging 39 (2020): 534-542.
- Brázdil T, Gallo M, Nenutil R, et al. Automated annotations of epithelial cells and stroma in hematoxylin-eosin-stained whole-slide images using cytokeratin re-staining. J. Pathol. Clin. Res 8 (2022): 129-142.
- Steiner D F, MacDonald R, Liu Y, et al. Impact of deep learning assistance on the histopathologic review of lymph nodes for metastatic breast cancer. Am. J. Surg. Pathol 42 (2018): 1636-1646.
- Retamero J A, Gulturk E, Bozkurt A, et al. Artificial intelligence helps pathologists increase diagnostic accuracy and efficiency in the detection of breast cancer lymph node metastases. Am. J. Surg. Pathol 48 (2024): 846-854.
- Sornapudi S, Hagerty J, Stanley R J, et al. EpithNet: Deep regression for epithelium segmentation in cervical histology images. J. Pathol. Inform 11 (2020): 10.
- Hong Y, Heo Y J, Kim B, et al. Deep learning-based virtual cytokeratin staining of gastric carcinomas to measure tumor-stroma ratio. Sci. Rep 11 (2021): 19255.
- Linder N, Konsti J, Turkki R, et al. Identification of tumor epithelium and stroma in tissue microarrays using texture analysis. Diagn. Pathol 7 (2012): 22.
- Wu Y, Koyuncu C F, Toro P, et al. A machine learning model for separating epithelial and stromal regions in oral cavity squamous cell carcinomas using H&E-stained histology images: A multi-center, retrospective study. Oral Oncol 131 (2022): 105942.
- Chen R J, Ding T, Lu M Y, et al. Towards a general-purpose foundation model for computational pathology. Nat. Med 30 (2024): 850-862.
- de Haan K, Zhang Y, Zuckerman J E, et al. Deep learning-based transformation of H&E stained tissues into special stains. Nat. Commun 12 (2021): 4884.
- Martino F, Ilardi G, Varricchio S, et al. A deep learning model to predict Ki-67 positivity in oral squamous cell carcinoma. J. Pathol. Inform 15 (2024): 100354.