Document Type: Original Article
Authors
1. Information Systems, Faculty of Computers and Information, Kafrelsheikh University, Egypt
2. Information Systems, Faculty of Computers and Information, Menoufia University, Egypt
3. Department of Computer Science, Faculty of Computers and Information, Kafr El-Sheikh University, Kafr El-Sheikh 33511, Egypt
4. Department of Information Systems, Faculty of Computers and Information, Menoufia University, Menoufia, Egypt
Abstract
Glaucoma, in which elevated intraocular pressure (IOP) damages the optic nerve, is one of the leading causes of permanent blindness. Early and accurate diagnosis typically requires specialist examination of optic nerve images, which is expensive and time-consuming. Deep learning has shown significant potential for automating glaucoma detection; however, challenges such as class imbalance, variability in image quality, and efficient feature extraction remain unresolved. To address these issues, we propose a novel method that integrates fundus imaging data with numerical cup-to-disc ratio (CDR) measurements. Our approach employs a feed-forward neural network to process the CDR and a pre-trained EfficientNet-B0 to extract high-quality features from fundus images. Hyperparameters are optimized with the Manta Ray Foraging Optimization (MRFO) algorithm, and dropout regularization is applied to improve model generalization. To ensure balanced learning and improve reliability, class imbalance is addressed through a combination of oversampling and data augmentation, which strengthens the representation of minority classes. Advanced image preprocessing techniques, including gamma correction, CLAHE, and morphological operations, are also applied to mitigate noise and poor lighting conditions, further improving data quality and diagnostic accuracy. The resulting model provides a highly reliable tool for early glaucoma detection, achieving accuracies of 99.13% on the ORIGA dataset, 99.44% on the RIM-ONE DL dataset, and 100% on the DRISHTI-GS dataset.
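To make the described two-branch design concrete, the following PyTorch sketch fuses EfficientNet-B0 image features with a small feed-forward branch for the scalar CDR value. The layer widths, dropout rate, and two-class head are illustrative assumptions, not the paper's tuned configuration (which was selected via MRFO).

```python
import torch
import torch.nn as nn
from torchvision import models

class GlaucomaNet(nn.Module):
    """Sketch of a two-branch model: EfficientNet-B0 features from the
    fundus image fused with a feed-forward branch for the scalar CDR.
    Hidden sizes and dropout are placeholder values, not tuned settings."""
    def __init__(self, dropout=0.3):
        super().__init__()
        # Pre-trained EfficientNet-B0 backbone; strip its classifier so it
        # returns pooled 1280-dimensional image features.
        self.backbone = models.efficientnet_b0(weights="IMAGENET1K_V1")
        feat_dim = self.backbone.classifier[1].in_features  # 1280 for B0
        self.backbone.classifier = nn.Identity()
        # Small feed-forward branch for the numerical CDR input.
        self.cdr_branch = nn.Sequential(
            nn.Linear(1, 16), nn.ReLU(),
            nn.Linear(16, 16), nn.ReLU(),
        )
        # Fusion head with dropout regularization; two classes
        # (glaucoma vs. normal).
        self.head = nn.Sequential(
            nn.Dropout(dropout),
            nn.Linear(feat_dim + 16, 2),
        )

    def forward(self, image, cdr):
        img_feat = self.backbone(image)   # (B, 1280)
        cdr_feat = self.cdr_branch(cdr)   # (B, 16), cdr has shape (B, 1)
        return self.head(torch.cat([img_feat, cdr_feat], dim=1))
```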
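The preprocessing stage can likewise be sketched with standard OpenCV operations. The gamma value, CLAHE clip limit, and kernel size below are illustrative defaults rather than the settings used in the study.

```python
import cv2
import numpy as np

def preprocess_fundus(image_bgr, gamma=1.2, clip_limit=2.0, tile_grid=(8, 8)):
    """Illustrative pipeline: gamma correction, CLAHE on the luminance
    channel, and a light morphological opening to suppress small noise.
    All parameter values are assumptions, not the paper's settings."""
    # Gamma correction via a 256-entry lookup table.
    table = np.array(
        [(i / 255.0) ** (1.0 / gamma) * 255 for i in range(256)]
    ).astype(np.uint8)
    img = cv2.LUT(image_bgr, table)

    # CLAHE applied to the L channel of the LAB color space.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l = clahe.apply(l)
    img = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

    # Morphological opening to remove small bright artifacts.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
```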
Keywords