Training Data Expansion and Boosting of Convolutional Neural Networks for Reducing the MNIST Dataset Error Rate
Journal: Naukovi Visti NTUU KPI (Vol. 19, No. 6)
Publication Date: 2016-12-27
Authors: Vadim V. Romanuke
Pages: 29-34
Keywords: MNIST; convolutional neural network; error rate; training data expansion; boosting
Background. The preceding approaches to improving the MNIST image dataset error rate lack a clear structure that would allow them to be reproduced and strengthened, so a formalization of the performance improvement is considered.
Objective. The goal is to strictly formalize a strategy for reducing the MNIST dataset error rate.
Methods. An algorithm for achieving better performance by expanding the training data and boosting with ensembles is suggested. The algorithm uses the designed concept of training data expansion. Coordinating the concept with the algorithm defines a strategy for error rate reduction.
Results. In relative comparison, the performance of a single convolutional neural network on the MNIST dataset has been improved by almost 30 %. With boosting, the performance reaches a 0.21 % error rate, meaning that only 21 of 10,000 handwritten digits are not recognized.
Conclusions. Training data expansion is crucial for reducing the MNIST dataset error rate; boosting is ineffective without it. The stated approach has an impressive impact on reducing the MNIST dataset error rate, using only 5 or 6 convolutional neural networks against the 35 in the benchmark work.
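The two ingredients named in the abstract, training data expansion and an ensemble (boosting) of several networks, can be illustrated with a minimal sketch. The exact transformations and combination rule used in the paper are not given here, so the one-pixel shifts and the majority vote below are assumptions chosen for illustration only.

```python
import numpy as np

def expand_by_shifts(images, labels,
                     shifts=((1, 0), (-1, 0), (0, 1), (0, -1))):
    """Expand a training set by shifting every image one pixel in four
    directions (a common MNIST expansion; the paper's actual
    transformations are an assumption here). images: (n, h, w)."""
    expanded_imgs = [images]
    expanded_lbls = [labels]
    for dy, dx in shifts:
        shifted = np.roll(np.roll(images, dy, axis=1), dx, axis=2)
        expanded_imgs.append(shifted)
        expanded_lbls.append(labels)  # labels are unchanged by shifting
    return np.concatenate(expanded_imgs), np.concatenate(expanded_lbls)

def ensemble_vote(member_predictions, n_classes=10):
    """Combine per-network digit predictions, shape (n_networks,
    n_samples), by majority vote (one plausible ensemble rule)."""
    member_predictions = np.asarray(member_predictions)
    return np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes).argmax(),
        0, member_predictions)
```

With five or six trained networks, `ensemble_vote` would combine their per-digit predictions into the final answer; the expansion multiplies the 60,000 MNIST training images fivefold before any network is trained.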