
Training Data Expansion and Boosting of Convolutional Neural Networks for Reducing the MNIST Dataset Error Rate

Journal: Naukovi Visti NTUU KPI (Vol.19, No. 6)

Publication Date:

Authors :

Pages : 29-34

Keywords : MNIST; convolutional neural network; error rate; training data expansion; boosting

Source : Download | Find it from : Google Scholar

Abstract

Background. Because the preceding approaches to improving the MNIST image dataset error rate lack a clear structure that would allow them to be reproduced and strengthened, a formalization of the performance improvement is considered. Objective. The goal is to strictly formalize a strategy for reducing the MNIST dataset error rate. Methods. An algorithm for achieving better performance by expanding the training data and boosting with ensembles is suggested. The algorithm uses the designed concept of training data expansion. Coordinating the concept with the algorithm defines a strategy for error rate reduction. Results. In relative terms, the performance of a single convolutional neural network on the MNIST dataset has been improved by almost 30 %. With boosting, the performance reaches a 0.21 % error rate, meaning that only 21 of 10,000 handwritten digits are not recognized. Conclusions. Training data expansion is crucial for reducing the MNIST dataset error rate; boosting is ineffective without it. The stated approach substantially reduces the MNIST dataset error rate while using only 5 or 6 convolutional neural networks, compared with the 35 networks in the benchmark work.
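The two ingredients named in the abstract, training data expansion and an ensemble of a few convolutional networks, can be illustrated with the following minimal sketch. It assumes random shifts and rotations as the expansion transforms and simple probability averaging as the ensemble rule; the transform ranges, the number of perturbed copies, and the `predict_proba` interface of the trained models are assumptions for the example, not the paper's exact recipe.

```python
# Sketch of MNIST training data expansion and ensemble prediction.
# Assumes 28x28 grayscale digit images in a NumPy array of shape (N, 28, 28).
import numpy as np
from scipy.ndimage import rotate, shift

def expand_training_data(images, labels, copies=4, max_shift=2, max_angle=15, rng=None):
    """Return the original images plus `copies` randomly perturbed versions of each."""
    rng = np.random.default_rng() if rng is None else rng
    out_imgs, out_labels = [images], [labels]
    for _ in range(copies):
        batch = []
        for img in images:
            angle = rng.uniform(-max_angle, max_angle)          # small random rotation
            dy, dx = rng.integers(-max_shift, max_shift + 1, 2)  # small random translation
            perturbed = rotate(img, angle, reshape=False, order=1)
            perturbed = shift(perturbed, (dy, dx), order=1)
            batch.append(perturbed)
        out_imgs.append(np.stack(batch))
        out_labels.append(labels)
    return np.concatenate(out_imgs), np.concatenate(out_labels)

def ensemble_predict(models, images):
    """Average the per-class probabilities of several trained CNNs and take the argmax.
    `models` are assumed to expose a predict_proba(images) -> (N, 10) method."""
    probs = np.mean([m.predict_proba(images) for m in models], axis=0)
    return probs.argmax(axis=1)
```

In the boosting setting the abstract describes, each of the 5 or 6 networks would be trained on its own expanded (or reweighted) version of the training set before the outputs are combined.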

Last modified: 2017-01-23 17:48:31