
Adaptation to Best Fit Learning Rate in Batch Gradient Descent

Journal: International Journal of Science and Research (IJSR) (Vol. 3, No. 8)

Publication Date:

Authors :

Page : 16-20

Keywords : Batch training; On-line training; Epoch; Learning Rate; Gradient Descent; Hypothesis; Cost Function

Source : Download | Find it from : Google Scholar

Abstract

The efficient training of a supervised machine learning system is commonly viewed as the minimization of a cost function that depends on the parameters of the proposed hypothesis. This perspective aids the development of better training algorithms, since function minimization is a well-studied problem in many fields. Many learning systems use gradient descent to minimize the cost function, but the method suffers from slow convergence and diverges (explodes) when the learning rate is chosen beyond a threshold. The learning rate is easy to get wrong, as there is no known generic way of choosing it and it must be set manually. In this paper a method for adapting the learning rate is presented, which also addresses the problems of slow convergence and divergence. The main feature of the proposed algorithm is that it speeds up convergence and does not affect the result, or the number of epochs required to converge, when the learning rate is changed.
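
To illustrate the kind of adaptation the abstract describes, below is a minimal sketch of batch gradient descent on a linear least-squares hypothesis with a multiplicative learning-rate rule (the classic "bold driver" heuristic: grow the rate after a successful epoch, shrink it when the cost increases). This is an assumed illustration, not the paper's exact algorithm; the names and constants (adaptive_batch_gd, lr, grow, shrink) are hypothetical.

```python
import numpy as np

def cost(theta, X, y):
    # Mean squared error cost for the linear hypothesis h(x) = X @ theta.
    residual = X @ theta - y
    return 0.5 * np.mean(residual ** 2)

def gradient(theta, X, y):
    # Gradient of the cost computed over the full batch (batch training).
    return X.T @ (X @ theta - y) / len(y)

def adaptive_batch_gd(X, y, lr=0.1, grow=1.05, shrink=0.5, epochs=100):
    # Bold-driver style adaptation (illustrative, not the paper's method):
    # accept a step and grow lr when the cost decreases; reject the step
    # and shrink lr when the cost increases, preventing divergence.
    theta = np.zeros(X.shape[1])
    prev_cost = cost(theta, X, y)
    for _ in range(epochs):
        step = lr * gradient(theta, X, y)
        new_cost = cost(theta - step, X, y)
        if new_cost < prev_cost:
            theta -= step          # cost went down: keep the step
            prev_cost = new_cost
            lr *= grow             # and try a slightly larger rate
        else:
            lr *= shrink           # rate too large: back off, retry next epoch
    return theta, lr

# Usage on a toy regression problem:
rng = np.random.default_rng(0)
X = np.c_[np.ones(50), rng.normal(size=50)]
y = X @ np.array([2.0, -3.0]) + 0.1 * rng.normal(size=50)
theta, final_lr = adaptive_batch_gd(X, y)
print(theta, final_lr)
```

With this rule the initial choice of lr matters much less: an overly large rate is shrunk before it can cause the cost to explode, while a conservative rate grows as long as epochs keep succeeding.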
