
Using topological data analysis for building Bayesian neural networks

Journal: Scientific and Technical Journal of Information Technologies, Mechanics and Optics (Vol.23, No. 6)

Publication Date:

Authors : ;

Pages : 1187-1197

Keywords : Bayesian neural networks; persistent homology; normalized persistent entropy; embedding; barcode;

Source : Download | Find it from : Google Scholar

Abstract

For the first time, a simplified approach to constructing Bayesian neural networks is proposed, combining computational efficiency with the ability to analyze the learning process. The approach is based on Bayesianization of a deterministic neural network by randomizing its parameters only at the interface level: a Bayesian neural network is formed from a given network by replacing its parameters with probability distributions whose mean values are the parameters of the original model. The efficiency metrics of the neural network obtained within the proposed approach and of a Bayesian neural network constructed through variational inference were evaluated using topological data analysis methods. The Bayesianization procedure is implemented through graded variation of the randomization intensity. As reference points, two neural networks with identical structure were used: a deterministic network and a classical Bayesian network. The networks were fed data from two datasets, both in their original form and with added Gaussian noise. The zeroth and first persistent homologies were calculated for the embeddings of each layer of the resulting neural networks. To assess the quality of classification, the accuracy metric was used. It is shown that, in all four scenarios, the barcodes for the embeddings of each layer of the Bayesianized neural network lie between the corresponding barcodes of the deterministic and Bayesian neural networks for both the zeroth and first persistent homologies, with the deterministic network providing the lower bound and the Bayesian network the upper bound. It is also shown that the structure of data associations within a Bayesianized neural network is inherited from the deterministic model but acquires the properties of a Bayesian one. It was established experimentally that the normalized persistent entropy calculated on the network embeddings is related to the accuracy of the network. For predicting accuracy, the topology of the embeddings on the middle layer of the model turned out to be the most revealing. The proposed approach can be used to simplify the construction of a Bayesian neural network from an already trained deterministic neural network, which opens up the possibility of increasing the accuracy of an existing network without ensembling it with additional classifiers. It also becomes possible to proactively evaluate the effectiveness of the generated neural network on simplified data without running it on a real dataset, which reduces the resource intensity of its development.
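
The interface-level Bayesianization described in the abstract admits a compact illustration. The sketch below is a minimal, assumption-laden example in PyTorch (the abstract does not specify a framework): each trained weight w is replaced by samples from a Gaussian centered at w, with a scale factor alpha playing the role of the graded randomization intensity. The names bayesianize, alpha, and n_samples are illustrative placeholders, not the authors' API.

```python
# Hypothetical sketch of interface-level Bayesianization: each weight w of a
# trained deterministic network is treated as the mean of a Gaussian
# N(w, (alpha * |w|)^2). Predictions are averaged over sampled networks.
# All names here are illustrative assumptions, not the authors' implementation.
import copy
import torch
import torch.nn as nn


def bayesianize(model: nn.Module, alpha: float = 0.1, n_samples: int = 10):
    """Return n_samples weight-perturbed copies of `model`.

    The ensemble approximates a Bayesian neural network whose weight
    distributions are centered at the deterministic parameters (the mean),
    with standard deviation alpha * |w| (the randomization intensity).
    """
    samples = []
    for _ in range(n_samples):
        sampled = copy.deepcopy(model)
        with torch.no_grad():
            for p in sampled.parameters():
                std = alpha * p.abs()                  # graded randomization intensity
                p.add_(torch.randn_like(p) * std)      # draw one weight sample
        samples.append(sampled)
    return samples


def predict(samples, x):
    """Monte Carlo estimate: average the softmax outputs of the sampled networks."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in samples])
    return probs.mean(dim=0)
```

Varying alpha from zero upward reproduces, in spirit, the graded transition from the deterministic network (alpha = 0) toward fully Bayesian behavior discussed in the abstract.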
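
The topological measurements mentioned above (zeroth and first persistent homologies of layer embeddings and their normalized persistent entropy) can be sketched as follows. This is a hypothetical illustration assuming the ripser package and normalization of persistent entropy by the logarithm of the number of bars; the paper may use a different TDA toolkit or normalization, and the function names are placeholders.

```python
# Hypothetical sketch: H0/H1 barcodes of a layer's embeddings via Vietoris-Rips
# persistence (ripser), plus the normalized persistent entropy of each barcode.
# The choice of library and the log(n) normalization are assumptions.
import numpy as np
from ripser import ripser


def normalized_persistent_entropy(diagram: np.ndarray) -> float:
    """Entropy of bar lifetimes, normalized to [0, 1] by log(number of bars)."""
    finite = diagram[np.isfinite(diagram[:, 1])]   # drop infinite bars (e.g. one in H0)
    lifetimes = finite[:, 1] - finite[:, 0]
    lifetimes = lifetimes[lifetimes > 0]
    if len(lifetimes) < 2:
        return 0.0
    p = lifetimes / lifetimes.sum()
    return float(-(p * np.log(p)).sum() / np.log(len(p)))


def layer_topology(embeddings: np.ndarray) -> dict:
    """H0 and H1 normalized persistent entropies for a (n_samples, n_features)
    embedding matrix taken from one layer of a network."""
    dgms = ripser(embeddings, maxdim=1)['dgms']    # [H0 diagram, H1 diagram]
    return {f"H{k}": normalized_persistent_entropy(d) for k, d in enumerate(dgms)}
```

Comparing these per-layer entropies across the deterministic, Bayesianized, and classical Bayesian networks is one way to reproduce the kind of barcode and entropy comparison reported in the abstract, with the middle-layer embeddings singled out there as most predictive of accuracy.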

Last modified: 2023-12-20 18:14:36