The Empirical Comparison of Deep Neural Network Optimizers for Binary Classification of OCT Images


R. Loganathan, S. Latha

Abstract

Optimizers are used in many fields, including statistical analysis, mathematics, and computing, to find the optimal solution to a problem. In recent years, Adaptive Moment Estimation (Adam) has become a popular optimizer for deep learning models: it is computationally efficient, requires little memory, and intuitively combines gradient descent with momentum and Root Mean Square Propagation (RMSProp). The main objective of this work is to determine which optimizer yields the minimum loss and maximum accuracy during the training and testing phases of CNN, DNN, and VGG16 models for binary classification of normal vs. AMD images in an optical coherence tomography (OCT) dataset. The optimizers under investigation are SGD, SGD with momentum, Adagrad, Adam, and RMSProp, each evaluated at several learning rates. The experiments show that tuning the learning rate and other hyperparameters improves the deep learning models' accuracy and reduces their loss. The results demonstrate that the Adam optimizer outperforms all the other optimizers tested, suggesting that Adam is a suitable choice for training binary convolutional neural networks of this kind.
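The following is a minimal sketch, not the authors' code, of how such an optimizer comparison could be set up in TensorFlow/Keras. The small CNN architecture, the 128x128 grayscale input size, the learning-rate grid, and the random placeholder arrays standing in for the OCT images are all illustrative assumptions; the paper itself trains CNN, DNN, and VGG16 models on real normal-vs.-AMD OCT data.

```python
# Hedged sketch: compare SGD, SGD+momentum, Adagrad, RMSprop, and Adam at several
# learning rates on a binary classifier, reporting final training loss/accuracy.
import numpy as np
import tensorflow as tf

def build_cnn(input_shape=(128, 128, 1)):
    # Placeholder binary CNN with a sigmoid output (an assumption, not the paper's model).
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])

# Random placeholder data standing in for OCT scans (normal = 0, AMD = 1).
x_train = np.random.rand(64, 128, 128, 1).astype('float32')
y_train = np.random.randint(0, 2, size=(64,))

for lr in [1e-2, 1e-3, 1e-4]:  # illustrative learning rates
    optimizers = {
        'SGD': tf.keras.optimizers.SGD(learning_rate=lr),
        'SGD+momentum': tf.keras.optimizers.SGD(learning_rate=lr, momentum=0.9),
        'Adagrad': tf.keras.optimizers.Adagrad(learning_rate=lr),
        'RMSprop': tf.keras.optimizers.RMSprop(learning_rate=lr),
        'Adam': tf.keras.optimizers.Adam(learning_rate=lr),
    }
    for name, opt in optimizers.items():
        model = build_cnn()
        model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])
        hist = model.fit(x_train, y_train, epochs=2, batch_size=16, verbose=0)
        print(f"lr={lr} {name}: loss={hist.history['loss'][-1]:.4f}, "
              f"acc={hist.history['accuracy'][-1]:.4f}")
```

In practice the placeholder arrays would be replaced by the actual OCT image dataset, and the same loop could be repeated for the DNN and VGG16 models to reproduce the kind of comparison reported in the paper.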
