Adaptation to Best Fit Learning Rate in Batch Gradient Descent
Journal: International Journal of Science and Research (IJSR), Vol. 3, No. 8
Publication Date: 2014-08-05
Authors: Mohan Pandey; B. L. Pal
Pages: 16-20
Keywords: Batch training; On-line training; Epoch; Learning Rate; Gradient Descent; Hypothesis; Cost Function
Abstract
The efficient training of a supervised machine learning system is commonly viewed as the minimization of a cost function that depends on the parameters of the proposed hypothesis. This perspective aids the development of better training algorithms, since function minimization is a well-studied problem in many fields. Many learning systems typically use gradient descent to minimize the cost function, but gradient descent converges slowly and diverges when the learning rate is chosen beyond a threshold. The learning rate is easy to choose poorly, as there is no known generic way of selecting it and it must be set manually. In this paper, a method for the adaptation of the learning rate is presented that also addresses the problems of slow convergence and divergence. The main feature of the proposed algorithm is that it speeds up convergence, and changing the initial learning rate affects neither the result nor the number of epochs required to converge.
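For context, the sketch below shows batch gradient descent on linear regression with a simple "bold driver"-style learning-rate adaptation: grow the rate while the cost keeps falling, shrink it and reject the step when the cost rises. This is an illustrative stand-in under assumed conventions, not the authors' algorithm from the paper; all names (fit, alpha, grow, shrink) are hypothetical.

```python
# Minimal sketch (not the paper's exact method): batch gradient descent on
# linear regression with a bold-driver-style adaptive learning rate.
import numpy as np

def fit(X, y, alpha=0.1, grow=1.05, shrink=0.5, epochs=100):
    theta = np.zeros(X.shape[1])                     # hypothesis parameters
    m = len(y)
    cost = ((X @ theta - y) ** 2).sum() / (2 * m)    # squared-error cost
    for _ in range(epochs):
        grad = X.T @ (X @ theta - y) / m             # batch gradient over all examples
        candidate = theta - alpha * grad
        new_cost = ((X @ candidate - y) ** 2).sum() / (2 * m)
        if new_cost < cost:                          # step helped: accept it, speed up
            theta, cost = candidate, new_cost
            alpha *= grow
        else:                                        # step overshot: reject it, slow down,
            alpha *= shrink                          # preventing divergence ("exploding")
    return theta

# Usage: recover y = 2x on a toy design matrix with a bias column.
X = np.c_[np.ones(5), np.arange(5.0)]
y = 2 * np.arange(5.0)
print(fit(X, y))
```

Rejecting worsening steps is what makes such schemes insensitive to the initial learning rate: an overly large alpha is shrunk back into a stable range within a few epochs instead of blowing up the parameters.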