
Effect of Local Dynamic Learning Rate Adaptation on Mini-Batch Gradient Descent for Training Deep Learning Models with Back Propagation Algorithm

Journal: International Journal of Science and Research (IJSR) (Vol.9, No. 4)

Publication Date:

Authors :

Pages : 339-342

Keywords : Mini-batch gradient descent; Learning rate; Backpropagation

Source : Download | Find it from : Google Scholar

Abstract

Backpropagation has gained much popularity for training neural network models, including deep learning models. Despite its popularity, ordinary backpropagation suffers from a low convergence rate due to its constant learning rate. Since the error surface is not smooth, the learning rate needs to be adapted dynamically to speed up convergence. Much work has demonstrated the benefits of employing a dynamic learning rate in the backpropagation algorithm to accelerate convergence, focusing on global learning rate adaptation and on local adaptive learning rates with batch gradient descent. In this work we observe the effect of local dynamic learning rate adaptation, using the improved iRprop- algorithm with mini-batch gradient descent, on the convergence rate of backpropagation. The experiment was conducted in Python using the CIFAR-10 dataset. Results show that the proposed algorithm outperforms the ordinary backpropagation algorithm in terms of speed when the batch size ...
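The local adaptation scheme named in the abstract, iRprop-, keeps a separate step size for every weight: the step grows while successive gradients keep the same sign and shrinks when the sign flips. The following is a minimal NumPy sketch of that per-weight update applied to mini-batch gradients; the hyperparameter values, the bounds, and the function name irprop_minus_update are illustrative assumptions, not the authors' implementation.

import numpy as np

ETA_PLUS, ETA_MINUS = 1.2, 0.5   # standard Rprop increase/decrease factors
STEP_MIN, STEP_MAX = 1e-6, 50.0  # bounds on the per-weight step size

def irprop_minus_update(w, grad, prev_grad, step):
    # w and step are modified in place; returns the gradient to remember
    # for the sign comparison on the next mini-batch.
    sign_change = grad * prev_grad
    grew = sign_change > 0
    shrank = sign_change < 0
    # Same sign as the previous batch: enlarge that weight's step size.
    step[grew] = np.minimum(step[grew] * ETA_PLUS, STEP_MAX)
    # Sign flipped: the step overshot a local minimum, so shrink it, and
    # (the iRprop- rule) zero the gradient so no step is taken there.
    step[shrank] = np.maximum(step[shrank] * ETA_MINUS, STEP_MIN)
    grad = np.where(shrank, 0.0, grad)
    # Each weight moves by its own step size in the downhill direction.
    w -= np.sign(grad) * step
    return grad

In a training loop this would be called once per mini-batch for each weight array, with grad computed by ordinary backpropagation, step initialised to a small constant (e.g. 0.01), and prev_grad to zeros.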

Last modified: 2021-06-28 17:03:45