
Batch Gradient Method with Smoothing L1/2 Regularization and Momentum for Pi-sigma Networks

Journal: International Journal of Science and Research (IJSR) (Vol.3, No. 9)

Publication Date:

Authors :

Page : 2002-2012

Keywords : Pi-sigma network; batch gradient method; smoothing L1/2 regularization; momentum; boundedness; convergence

Source : Download | Find it from : Google Scholar


Abstract : In this paper, we study the batch gradient method with smoothing L1/2 regularization and momentum for Pi-sigma networks, assuming that the training samples are permuted stochastically in each cycle of iteration. The usual L1/2 regularization term involves the absolute values of the weights and is therefore not differentiable at the origin, which typically causes the gradient method to oscillate during training. This deficiency of the L1/2 regularization term can be addressed by a smoothing approximation technique. Corresponding convergence results for the smoothing L1/2 regularization are proved: in particular, a weak convergence result is established under the assumption that the activation function and its derivatives are uniformly bounded.
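The training scheme described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' exact algorithm: it assumes a Pi-sigma network with one product unit and a sigmoid output, uses (w^2 + eps^2)^(1/4) as one possible smooth approximation of |w|^(1/2), and all function and parameter names (train, eta, mu, lam, eps) are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pisigma_forward(W, x):
    # W: (K, n) weights of K summing units; the product unit multiplies
    # the K sums and feeds the result to a sigmoid output unit.
    s = W @ x
    return sigmoid(np.prod(s)), s

def smooth_l12(W, eps):
    # Smoothed L1/2 penalty: sum_w (w^2 + eps^2)^(1/4) ~ sum_w |w|^(1/2),
    # differentiable everywhere, including at the origin.
    return np.sum((W**2 + eps**2) ** 0.25)

def smooth_l12_grad(W, eps):
    # d/dw (w^2 + eps^2)^(1/4) = 0.5 * w * (w^2 + eps^2)^(-3/4)
    return 0.5 * W * (W**2 + eps**2) ** (-0.75)

def train(W, X, Y, eta=0.1, mu=0.5, lam=1e-3, eps=1e-2, epochs=100, rng=None):
    # Batch gradient descent with momentum (mu) and the smoothed L1/2
    # penalty (weight lam); samples are re-permuted in each cycle.
    rng = rng or np.random.default_rng(0)
    V = np.zeros_like(W)  # momentum buffer
    for _ in range(epochs):
        order = rng.permutation(len(X))  # stochastic permutation per cycle
        G = np.zeros_like(W)
        for i in order:  # accumulate full-batch gradient
            x, y = X[i], Y[i]
            s = W @ x
            out = sigmoid(np.prod(s))
            dp = (out - y) * out * (1.0 - out)  # dE/d(product)
            for k in range(W.shape[0]):
                # chain rule through the product unit: prod of the other sums
                prod_others = np.prod(np.delete(s, k))
                G[k] += dp * prod_others * x
        G += lam * smooth_l12_grad(W, eps)  # regularization gradient
        V = mu * V - eta * G                # momentum update
        W = W + V
    return W
```

Because the gradient is accumulated over the whole batch before each update, the per-cycle permutation does not change the update itself; it mirrors the stochastic-ordering assumption made in the paper's analysis.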

Last modified: 2021-06-30 21:07:44