
Adaptive neural network method for multidimensional integration in arbitrary subdomains

Journal: Discrete and Continuous Models and Applied Computational Science (Vol.33, No. 4)


Page : 374-388

Keywords : neural network integration; adaptive data generation; Levenberg–Marquardt optimization; multidimensional integrals


Abstract

Multidimensional integration is a fundamental problem in computational mathematics with numerous applications in physics, engineering, and data science. Traditional numerical methods such as Gauss–Legendre quadrature [1] and Monte Carlo techniques face significant challenges in high-dimensional spaces due to the curse of dimensionality, often requiring substantial computational resources and suffering from accuracy degradation. This study proposes an adaptive neural network-based method for efficient multidimensional integration over arbitrary subdomains. The approach optimizes the composition of the training sample through a balancing parameter $\rho$, which controls the proportion of points generated by a Metropolis–Hastings-inspired method versus uniform sampling. This enables the neural network to capture complex integrand behavior effectively, particularly in regions with sharp variations. A key innovation of the method is its "train once, integrate anywhere" capability: a single neural network trained on a large domain can subsequently compute integrals over any arbitrary subdomain without retraining, significantly reducing computational overhead. Experiments were conducted on three function types (quadratic, Corner Peak, and sine of sum of squares) across dimensions 2D to 6D. Integration accuracy was evaluated using the Correct Digits (CD) metric. Results show that the neural network method achieves accuracy comparable or superior to traditional methods (Gauss–Legendre, Monte Carlo, Halton) for complex functions, while substantially reducing computation time. Optimal $\rho$ ranges were identified: 0.0–0.2 for smooth functions, and 0.3–0.5 for functions with sharp features. In multidimensional scenarios (4D, 6D), the method demonstrates stability at $\rho = 0.2$–$0.6$, outperforming stochastic methods though slightly less accurate than Latin hypercube sampling [2].
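The ρ-balanced sample generation described above can be sketched as follows. This is an illustrative Python sketch only: the paper's exact proposal distribution, target density, and step-size schedule are not given in the abstract, so the random-walk proposal, the `step` parameter, and the use of $|f|$ as the (unnormalized) target are assumptions.

```python
import numpy as np

def mixed_samples(f, bounds, n, rho, step=0.1, rng=None):
    """Generate n training points in the box `bounds` (list of (lo, hi) pairs):
    a fraction rho via a Metropolis-Hastings random walk targeting |f|,
    the remainder by uniform sampling. Illustrative sketch only."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = map(np.asarray, zip(*bounds))
    d = len(lo)
    n_mh = int(round(rho * n))
    # Uniform part: covers the whole domain evenly.
    uniform = rng.uniform(lo, hi, size=(n - n_mh, d))
    # MH part: accepts moves toward larger |f|, so points cluster
    # near sharp features of the integrand.
    x, chain = rng.uniform(lo, hi), []
    for _ in range(n_mh):
        prop = np.clip(x + step * (hi - lo) * rng.standard_normal(d), lo, hi)
        if rng.uniform() < min(1.0, (abs(f(prop)) + 1e-12) / (abs(f(x)) + 1e-12)):
            x = prop
        chain.append(x.copy())
    mh = np.array(chain).reshape(n_mh, d)
    return np.vstack([uniform, mh])
```

With $\rho = 0$ the sample is purely uniform (appropriate for smooth integrands); with $\rho$ near 0.5 the chain concentrates training points where the integrand varies sharply, matching the optimal ranges reported in the abstract.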
The proposed method offers a scalable, efficient alternative to classical integration techniques, particularly beneficial in high-dimensional settings and applications requiring repeated integration over varying subdomains.
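The Correct Digits (CD) metric used to compare methods can be sketched as below. The abstract does not spell out the formula; the sketch assumes the common convention CD = −log₁₀ of the relative error, which the paper may refine.

```python
import math

def correct_digits(approx, exact):
    """Correct Digits (CD) metric: roughly the number of matching
    significant digits, taken here as -log10 of the relative error.
    (Assumed common definition; the paper's convention may differ.)"""
    rel_err = abs(approx - exact) / abs(exact)
    return float('inf') if rel_err == 0 else -math.log10(rel_err)
```

For example, approximating π by 3.14159 yields a CD of about 6, i.e. roughly six correct digits.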

Last modified: 2025-12-07 19:31:16