TY - JOUR
T1 - The Neural Network shifted-proper orthogonal decomposition: A machine learning approach for non-linear reduction of hyperbolic equations
JF - Computer Methods in Applied Mechanics and Engineering
Y1 - 2022
A1 - Davide Papapicco
A1 - Nicola Demo
A1 - Michele Girfoglio
A1 - Giovanni Stabile
A1 - Gianluigi Rozza
KW - Advection
KW - Computational complexity
KW - Deep neural network
KW - Deep neural networks
KW - Linear subspace
KW - Multiphase simulations
KW - Non linear
KW - Nonlinear hyperbolic equation
KW - Partial differential equations
KW - Phase space methods
KW - Pre-processing
KW - Principal component analysis
KW - reduced order modeling
KW - Reduced order modelling
KW - Reduced-order model
KW - Shifted-POD
AB -

Models with dominant advection have always posed a difficult challenge for projection-based reduced order modelling. Many methodologies that have recently been proposed are based on the pre-processing of the full-order solutions to accelerate the decay of the Kolmogorov N-width, thereby obtaining smaller linear subspaces with improved accuracy. These methods, however, must rely on knowledge of the characteristic speeds in phase space of the solution, limiting their range of applicability to problems with an explicit functional form for the advection field. In this work we approach the problem of automatically detecting the correct pre-processing transformation in a statistical learning framework by implementing a deep-learning architecture. The purely data-driven method allowed us to generalise the existing approaches of linear subspace manipulation to non-linear hyperbolic problems with unknown advection fields. The proposed algorithm has been validated against simple test cases to benchmark its performance and was later successfully applied to a multiphase simulation. © 2022 Elsevier B.V.

VL - 392
UR - https://www.scopus.com/inward/record.uri?eid=2-s2.0-85124488633&doi=10.1016%2fj.cma.2022.114687&partnerID=40&md5=12f82dcaba04c4a7c44f8e5b20101997
ER -
TY - JOUR
T1 - A combination between the reduced basis method and the ANOVA expansion: On the computation of sensitivity indices
JF - Comptes Rendus Mathematique
VL - 351
IS - 15-16
SP - 593
EP - 598
Y1 - 2013/08//
A1 - Denis Devaud
A1 - Andrea Manzoni
A1 - Gianluigi Rozza
KW - Partial differential equations
AB -

We consider a method to efficiently evaluate, in a real-time context, an output based on the numerical solution of a partial differential equation depending on a large number of parameters. We state a result that allows us to improve the computational performance of a three-step RB-ANOVA-RB method. This is a combination of the reduced basis (RB) method and the analysis of variance (ANOVA) expansion, aiming at compressing the parameter space without affecting the accuracy of the output. The idea of this method is to compute a first (coarse) RB approximation of the output of interest involving all the parameter components, but with a large tolerance on the a posteriori error estimate; then, we evaluate the ANOVA expansion of the output and freeze the least important parameter components; finally, considering a restricted model involving just the retained parameter components, we compute a second (fine) RB approximation with a smaller tolerance on the a posteriori error estimate. The fine RB approximation entails lower computational costs than the coarse one, because of the reduction in parameter dimensionality. Our result provides a criterion to avoid the computation of those terms in the ANOVA expansion that are related to the interaction between parameters in the bilinear form, thus making the RB-ANOVA-RB procedure computationally more feasible.

PB - Elsevier
UR - http://hdl.handle.net/1963/7389
U1 - 7434
U2 - Mathematics
U4 - 1
U5 - MAT/05 ANALISI MATEMATICA
ER -