Colon and lung cancer classification from multi-modal images using resilient and efficient neural network architectures

Automatic classification of colon and lung cancer images is crucial for early detection and accurate diagnostics. However, there is room for improvement to enhance accuracy, ensuring better diagnostic precision. This study introduces two novel dense architectures (D1 and D2) and emphasizes their effectiveness in classifying colon and lung cancer from diverse images.


Bibliographic Details
Main Authors: Uddin, A. Hasib, Chen, Yen-Lin, Akter, Miss Rokeya, Ku, Chin Soon, Yang, Jing, Por, Lip Yee
Format: Article
Published: Elsevier 2024
Subjects:
Online Access:http://eprints.um.edu.my/45236/
https://doi.org/10.1016/j.heliyon.2024.e30625
id my.um.eprints.45236
record_format eprints
institution Universiti Malaya
building UM Library
collection Institutional Repository
continent Asia
country Malaysia
content_provider Universiti Malaya
content_source UM Research Repository
url_provider http://eprints.um.edu.my/
topic QA75 Electronic computers. Computer science
R Medicine
spellingShingle QA75 Electronic computers. Computer science
R Medicine
Uddin, A. Hasib
Chen, Yen-Lin
Akter, Miss Rokeya
Ku, Chin Soon
Yang, Jing
Por, Lip Yee
Colon and lung cancer classification from multi-modal images using resilient and efficient neural network architectures
description Automatic classification of colon and lung cancer images is crucial for early detection and accurate diagnostics. However, there is room for improvement to enhance accuracy, ensuring better diagnostic precision. This study introduces two novel dense architectures (D1 and D2) and emphasizes their effectiveness in classifying colon and lung cancer from diverse images. It also highlights their resilience, efficiency, and superior performance across multiple datasets. These architectures were tested on various types of datasets, including NCT-CRC-HE-100K (a set of 100,000 non-overlapping image patches from hematoxylin and eosin (H&E) stained histological images of human colorectal cancer (CRC) and normal tissue), CRC-VAL-HE-7K (a set of 7180 image patches from N = 50 patients with colorectal adenocarcinoma, with no patient overlap with NCT-CRC-HE-100K), LC25000 (Lung and Colon Cancer Histopathological Image), and IQ-OTHNCCD (Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases), showcasing their effectiveness in classifying colon and lung cancers from histopathological and Computed Tomography (CT) scan images. This underscores the multi-modal image classification capability of the proposed models. Moreover, the study addresses imbalanced datasets, particularly CRC-VAL-HE-7K and IQ-OTHNCCD, with a specific focus on model resilience and robustness. To assess overall performance, the study conducted experiments in different scenarios. The D1 model achieved an impressive 99.80% accuracy on the NCT-CRC-HE-100K dataset, with a Jaccard Index (J) of 0.8371, a Matthews Correlation Coefficient (MCC) of 0.9073, a Cohen's Kappa (Kp) of 0.9057, and a Critical Success Index (CSI) of 0.8213. When subjected to 10-fold cross-validation on LC25000, the D1 model averaged (avg) 99.96% accuracy (avg J, MCC, Kp, and CSI of 0.9993, 0.9987, 0.9853, and 0.9990), surpassing recently reported performances.
Furthermore, the ensemble of D1 and D2 reached 93% accuracy (J, MCC, Kp, and CSI of 0.7556, 0.8839, 0.8796, and 0.7140) on the IQ-OTHNCCD dataset, exceeding recent benchmarks and aligning with other reported results. Efficiency evaluations were conducted in various scenarios. For instance, training on only 10% of LC25000 resulted in high accuracy rates of 99.19% (J, MCC, Kp, and CSI of 0.9840, 0.9898, 0.9898, and 0.9837) for D1 and 99.30% (J, MCC, Kp, and CSI of 0.9863, 0.9913, 0.9913, and 0.9861) for D2. On NCT-CRC-HE-100K, D2 achieved an impressive 99.53% accuracy (J, MCC, Kp, and CSI of 0.9906, 0.9946, 0.9946, and 0.9906) when trained on only 30% of the dataset and tested on the remaining 70%. When tested on CRC-VAL-HE-7K, D1 and D2 achieved 95% accuracy (J, MCC, Kp, and CSI of 0.8845, 0.9455, 0.9452, and 0.8745) and 96% accuracy (J, MCC, Kp, and CSI of 0.8926, 0.9504, 0.9503, and 0.8798), respectively, outperforming previously reported results and aligning closely with others. Lastly, training D2 on just 10% of NCT-CRC-HE-100K and testing on CRC-VAL-HE-7K significantly outperformed the InceptionV3, Xception, and DenseNet201 benchmarks, achieving an accuracy of 82.98% (J, MCC, Kp, and CSI of 0.7227, 0.8095, 0.8081, and 0.6671). Finally, using explainable AI algorithms such as Grad-CAM, Grad-CAM++, Score-CAM, and Faster Score-CAM, along with their emphasized versions, we visualized the features from the last layer of DenseNet201 for histopathological as well as CT-scan image samples. The proposed dense models, with their multi-modality, robustness, and efficiency in cancer image classification, hold the promise of significant advancements in medical diagnostics. They have the potential to revolutionize early cancer detection and improve healthcare accessibility worldwide.
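The abstract reports four agreement metrics alongside accuracy: the Jaccard Index (J), Matthews Correlation Coefficient (MCC), Cohen's Kappa (Kp), and Critical Success Index (CSI). As a rough illustration of how these relate, the sketch below computes all four from binary confusion-matrix counts. The paper's multi-class averaging scheme is not specified in the abstract, so this binary reduction is an assumption for illustration, not the authors' evaluation code; note that in the binary case J and CSI coincide (they differ in the abstract's figures because of multi-class averaging).

```python
import math

def metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Binary-case J, MCC, Kp, and CSI from confusion-matrix counts."""
    n = tp + fp + fn + tn
    accuracy = (tp + tn) / n
    # Jaccard index (intersection over union of the positive class);
    # identical to CSI for binary labels.
    jaccard = tp / (tp + fp + fn)
    csi = tp / (tp + fp + fn)
    # Matthews Correlation Coefficient.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    # Cohen's Kappa: observed agreement corrected for chance agreement.
    p_o = accuracy
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (p_o - p_e) / (1 - p_e) if p_e != 1 else 0.0
    return {"acc": accuracy, "J": jaccard, "MCC": mcc, "Kp": kappa, "CSI": csi}

print(metrics(tp=90, fp=5, fn=5, tn=100))
```

The spread between accuracy and the other metrics (e.g. 99.80% accuracy vs. J = 0.8371 for D1 on NCT-CRC-HE-100K) is exactly why the paper reports them: chance-corrected and overlap-based measures penalize errors on minority classes that high accuracy can hide.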
format Article
author Uddin, A. Hasib
Chen, Yen-Lin
Akter, Miss Rokeya
Ku, Chin Soon
Yang, Jing
Por, Lip Yee
author_facet Uddin, A. Hasib
Chen, Yen-Lin
Akter, Miss Rokeya
Ku, Chin Soon
Yang, Jing
Por, Lip Yee
author_sort Uddin, A. Hasib
title Colon and lung cancer classification from multi-modal images using resilient and efficient neural network architectures
title_short Colon and lung cancer classification from multi-modal images using resilient and efficient neural network architectures
title_full Colon and lung cancer classification from multi-modal images using resilient and efficient neural network architectures
title_fullStr Colon and lung cancer classification from multi-modal images using resilient and efficient neural network architectures
title_full_unstemmed Colon and lung cancer classification from multi-modal images using resilient and efficient neural network architectures
title_sort colon and lung cancer classification from multi-modal images using resilient and efficient neural network architectures
publisher Elsevier
publishDate 2024
url http://eprints.um.edu.my/45236/
https://doi.org/10.1016/j.heliyon.2024.e30625
_version_ 1811682106875576320
spelling my.um.eprints.45236 2024-09-30T02:13:27Z http://eprints.um.edu.my/45236/ Colon and lung cancer classification from multi-modal images using resilient and efficient neural network architectures Uddin, A. Hasib Chen, Yen-Lin Akter, Miss Rokeya Ku, Chin Soon Yang, Jing Por, Lip Yee QA75 Electronic computers. Computer science R Medicine Automatic classification of colon and lung cancer images is crucial for early detection and accurate diagnostics. However, there is room for improvement to enhance accuracy, ensuring better diagnostic precision. This study introduces two novel dense architectures (D1 and D2) and emphasizes their effectiveness in classifying colon and lung cancer from diverse images. It also highlights their resilience, efficiency, and superior performance across multiple datasets. These architectures were tested on various types of datasets, including NCT-CRC-HE-100K (a set of 100,000 non-overlapping image patches from hematoxylin and eosin (H&E) stained histological images of human colorectal cancer (CRC) and normal tissue), CRC-VAL-HE-7K (a set of 7180 image patches from N = 50 patients with colorectal adenocarcinoma, with no patient overlap with NCT-CRC-HE-100K), LC25000 (Lung and Colon Cancer Histopathological Image), and IQ-OTHNCCD (Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases), showcasing their effectiveness in classifying colon and lung cancers from histopathological and Computed Tomography (CT) scan images. This underscores the multi-modal image classification capability of the proposed models. Moreover, the study addresses imbalanced datasets, particularly CRC-VAL-HE-7K and IQ-OTHNCCD, with a specific focus on model resilience and robustness. To assess overall performance, the study conducted experiments in different scenarios.
The D1 model achieved an impressive 99.80% accuracy on the NCT-CRC-HE-100K dataset, with a Jaccard Index (J) of 0.8371, a Matthews Correlation Coefficient (MCC) of 0.9073, a Cohen's Kappa (Kp) of 0.9057, and a Critical Success Index (CSI) of 0.8213. When subjected to 10-fold cross-validation on LC25000, the D1 model averaged (avg) 99.96% accuracy (avg J, MCC, Kp, and CSI of 0.9993, 0.9987, 0.9853, and 0.9990), surpassing recently reported performances. Furthermore, the ensemble of D1 and D2 reached 93% accuracy (J, MCC, Kp, and CSI of 0.7556, 0.8839, 0.8796, and 0.7140) on the IQ-OTHNCCD dataset, exceeding recent benchmarks and aligning with other reported results. Efficiency evaluations were conducted in various scenarios. For instance, training on only 10% of LC25000 resulted in high accuracy rates of 99.19% (J, MCC, Kp, and CSI of 0.9840, 0.9898, 0.9898, and 0.9837) for D1 and 99.30% (J, MCC, Kp, and CSI of 0.9863, 0.9913, 0.9913, and 0.9861) for D2. On NCT-CRC-HE-100K, D2 achieved an impressive 99.53% accuracy (J, MCC, Kp, and CSI of 0.9906, 0.9946, 0.9946, and 0.9906) when trained on only 30% of the dataset and tested on the remaining 70%. When tested on CRC-VAL-HE-7K, D1 and D2 achieved 95% accuracy (J, MCC, Kp, and CSI of 0.8845, 0.9455, 0.9452, and 0.8745) and 96% accuracy (J, MCC, Kp, and CSI of 0.8926, 0.9504, 0.9503, and 0.8798), respectively, outperforming previously reported results and aligning closely with others. Lastly, training D2 on just 10% of NCT-CRC-HE-100K and testing on CRC-VAL-HE-7K significantly outperformed the InceptionV3, Xception, and DenseNet201 benchmarks, achieving an accuracy of 82.98% (J, MCC, Kp, and CSI of 0.7227, 0.8095, 0.8081, and 0.6671).
Finally, using explainable AI algorithms such as Grad-CAM, Grad-CAM++, Score-CAM, and Faster Score-CAM, along with their emphasized versions, we visualized the features from the last layer of DenseNet201 for histopathological as well as CT-scan image samples. The proposed dense models, with their multi-modality, robustness, and efficiency in cancer image classification, hold the promise of significant advancements in medical diagnostics. They have the potential to revolutionize early cancer detection and improve healthcare accessibility worldwide. Elsevier 2024-05 Article PeerReviewed Uddin, A. Hasib and Chen, Yen-Lin and Akter, Miss Rokeya and Ku, Chin Soon and Yang, Jing and Por, Lip Yee (2024) Colon and lung cancer classification from multi-modal images using resilient and efficient neural network architectures. Heliyon, 10 (9). e30625. ISSN 2405-8440, DOI https://doi.org/10.1016/j.heliyon.2024.e30625 <https://doi.org/10.1016/j.heliyon.2024.e30625>. https://doi.org/10.1016/j.heliyon.2024.e30625 10.1016/j.heliyon.2024.e30625
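The abstract names Grad-CAM as one of the explainability methods applied to DenseNet201's last layer. The core weighting step of vanilla Grad-CAM can be sketched framework-agnostically in NumPy: given the target conv layer's activations and the gradients of the class score with respect to them, channel weights are the spatially averaged gradients, and the map is the ReLU of the weighted channel sum. The array shapes and normalization here are assumptions for illustration; the paper's exact pipeline (and its Grad-CAM++/Score-CAM variants) is not reproduced.

```python
import numpy as np

def grad_cam_map(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Vanilla Grad-CAM core step.

    activations, gradients: (H, W, K) arrays for the target conv layer,
    where gradients are d(class score)/d(activations).
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Global-average-pool the gradients to get one importance weight per channel.
    weights = gradients.mean(axis=(0, 1))                       # shape (K,)
    # Weighted sum of channels, then ReLU to keep positive evidence only.
    cam = np.tensordot(activations, weights, axes=([2], [0]))   # shape (H, W)
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In practice the (H, W) map is bilinearly upsampled to the input image size and overlaid on the histopathological or CT-scan sample, which is how the visualizations described in the abstract are typically produced.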
score 13.214268