Turkish Journal of Electrical Engineering and Computer Sciences

DOI

10.3906/elk-2105-36

Abstract

The detection of diabetic retinopathy (DR) in millions of diabetic patients across the globe is a challenging problem. Diagnosis of retinopathy is a lengthy and tedious process, requiring a medical professional to assess the individual fundus images of a patient's retina. This process can be automated by applying deep learning (DL) technology, given a sufficiently large dataset. The problems associated with DL are the unavailability of large datasets and the long training time. A DL model's best performance is achieved with a set of optimal hyperparameters (OHPs) obtained by performing costly iterations of hyperparameter optimization (HPO). These problems can be addressed by using the transfer learning (TL) technique in both DL model training and HPO. TL in hyperparameter tuning is the focus of this work. The authors study the applicability of the EyePACS DR dataset's OHPs to other DR datasets, which forms the basis of the research question addressed in this work. DR classification is performed using a ResNet model trained on the EyePACS (Kaggle) and Indian diabetic retinopathy image dataset (IDRiD) datasets. The hyperparameters tuned in this work are the data augmentation configuration, number of layers, optimizer, data sampler, learning rate, and momentum. The authors demonstrate that the EyePACS dataset's OHPs are suitable for training with the IDRiD dataset, without needing to tune hyperparameters for IDRiD from scratch. The OHPs for a task and their reusability are poorly reported in the literature; therefore, the EyePACS DR dataset's OHPs reported here can be used by other researchers. Moreover, researchers working on other DR datasets can apply the same OHPs, since they are reusable and no iterations of HPO are required. The OHPs, tuned from scratch, are provided for both the EyePACS and IDRiD datasets and can be used as a starting point for HPO by others.
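The sketch below illustrates the kind of hyperparameter search the abstract describes (learning rate, momentum, optimizer, number of ResNet layers, data sampler, and augmentation) using Optuna's Bayesian (TPE) sampler. It is an assumed, illustrative setup rather than the authors' actual code; in particular, `train_and_evaluate` is a hypothetical helper that would train a ResNet on a DR dataset (e.g., EyePACS or IDRiD) and return a validation metric.

```python
# Minimal sketch of Bayesian HPO over the hyperparameters named in the abstract.
# Assumptions: Optuna with a TPE sampler; `train_and_evaluate` is hypothetical.
import optuna


def objective(trial: optuna.Trial) -> float:
    # Search space mirroring the hyperparameters listed in the abstract.
    hparams = {
        "lr": trial.suggest_float("lr", 1e-5, 1e-1, log=True),
        "momentum": trial.suggest_float("momentum", 0.5, 0.99),
        "optimizer": trial.suggest_categorical("optimizer", ["sgd", "adam"]),
        "resnet_depth": trial.suggest_categorical("resnet_depth", [18, 34, 50]),
        "sampler": trial.suggest_categorical("sampler", ["random", "class_balanced"]),
        "augmentation": trial.suggest_categorical(
            "augmentation", ["none", "flip_rotate", "flip_rotate_colorjitter"]
        ),
    }
    # Hypothetical training routine (not shown): trains the model and
    # returns a validation metric to maximize.
    return train_and_evaluate(hparams)


study = optuna.create_study(direction="maximize", sampler=optuna.samplers.TPESampler())
study.optimize(objective, n_trials=50)
print("Best hyperparameters:", study.best_params)
```

Under the paper's premise, the OHPs found on EyePACS could be reused directly on IDRiD, so this search loop would only need to be run once on the larger dataset.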

Keywords

Hyperparameter optimization, transfer learning, diabetic retinopathy, augmentation, ResNet, Bayesian optimization

First Page

2824

Last Page

2839
