
Convergence of batch gradient training-based smoothing L_1 regularization via adaptive momentum for feedforward neural networks

Journal: International Journal of Advanced Technology and Engineering Exploration (IJATEE) (Vol.11, No. 116)

Publication Date:

Authors :

Page : 1005-1019

Keywords : Convergence; Smoothing L_1 regularization; Adaptive momentum; Feedforward neural network; Batch gradient training


Abstract

Momentum is a widely used method for accelerating the convergence of practical training and has been extensively studied in combination with regularization. Unfortunately, no comparably effective acceleration method for L_1 regularization has yet been presented. For training and pruning feedforward neural networks, a batch gradient training method with smoothing L_1 regularization via adaptive momentum (BGSL_1AM) was developed. The usual L_1 regularizer is nonsmooth at the origin, which generates oscillations in the computation and makes convergence analysis difficult. To overcome this issue, a smoothing function that approximates the L_1 regularizer at the origin is proposed. The resulting penalty progressively drives weights toward zero during training, so that they can be removed afterwards. Convergence conditions are provided, together with weak and strong convergence analyses and a discussion of the significance of the suggested approach. Numerical simulation results on a range of function approximation and pattern classification problems demonstrate the effectiveness of the BGSL_1AM algorithm: the suggested learning method shows good convergence qualities and accuracy in function approximation.
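
To make the described update concrete, below is a minimal NumPy sketch of one training step combining a batch gradient, a smoothed L_1 penalty, and an adaptive momentum term. The smoothing function (a piecewise-quadratic approximation of |w| near the origin) and the momentum adaptation rule are illustrative assumptions; the abstract does not state the paper's exact formulas, and names such as `bgsl1am_step` are hypothetical.

```python
import numpy as np

def smooth_l1(w, a=0.1):
    """Smoothed |w|: quadratic inside |w| < a, exactly |w| outside.
    One common choice of smoothing function; assumed, not the paper's."""
    return np.where(np.abs(w) >= a, np.abs(w), w**2 / (2 * a) + a / 2)

def smooth_l1_grad(w, a=0.1):
    """Gradient of the smoothed penalty; continuous at the origin,
    unlike the subgradient of the plain L_1 regularizer."""
    return np.where(np.abs(w) >= a, np.sign(w), w / a)

def bgsl1am_step(w, v, grad_loss, lam=1e-3, eta=0.05, mu=0.9):
    """One batch update: grad_loss is the gradient of the empirical loss
    over the whole batch; lam weights the smoothed L_1 penalty."""
    g = grad_loss + lam * smooth_l1_grad(w)
    # Adaptive momentum (assumed form): cap the contribution of the
    # previous step relative to the current gradient step.
    tau = mu * eta * np.linalg.norm(g) / (np.linalg.norm(v) + 1e-12)
    v = -eta * g + min(tau, mu) * v
    return w + v, v

# Toy usage: weights shrink toward zero under the smoothed penalty.
rng = np.random.default_rng(0)
w = rng.normal(size=5)       # e.g. weights feeding one hidden unit
v = np.zeros_like(w)
for _ in range(100):
    grad_loss = w - 1.0      # gradient of a stand-in quadratic loss
    w, v = bgsl1am_step(w, v, grad_loss)
```

Capping the momentum coefficient this way keeps the combined direction dominated by the current gradient, which is the kind of condition typically needed for the weak and strong convergence guarantees the abstract mentions.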
