Abstract: Physics-Informed Machine Learning with Smoothed Particle Hydrodynamics: Machine Learning Lagrangian Turbulence
Turbulence in fluids is a ubiquitous phenomenon, and obtaining efficient and accurate reduced-order models remains an active research topic due to its many potential impacts on science and technology. Fluid turbulence is characterized by strong coupling across a broad range of scales, and explicitly resolving the full set of scales, both spatial and temporal, with Direct Numerical Simulation (DNS) of the high-Reynolds-number Navier-Stokes equations is extremely expensive and often prohibitive in applications. This motivates us to build and explore a learnable hierarchy of parameterized, "physics-explainable" models in search of optimal reduced Lagrangian models of turbulence (a direction less explored than its Eulerian counterpart). Starting from more flexible and expressive Neural Network (NN) based models, this hierarchy gradually incorporates (along with NNs) the structure of Smoothed Particle Hydrodynamics (SPH), a mesh-free Lagrangian method for obtaining approximate numerical solutions of the equations of fluid dynamics, which has been widely applied to weakly and strongly compressible flows in astrophysics and engineering applications. We train this hierarchy on large-scale DNS data and explore the effects of incorporating more SPH structure, which constrains the model but increases its interpretability. A priori, it is not known what effect constraining the models within the SPH framework will have on their ability to fit Eulerian DNS turbulence data and to generalize to different flows, since including more SPH structure reduces the flexibility and expressiveness of the model compared to purely NN-based parameterizations. We show that adding SPH structure improves generalizability (over larger ranges of time scales and Mach numbers), preserves physical symmetries (corresponding to conservation of linear and angular momentum), and reduces the amount of training data required.
Furthermore, we develop a learning algorithm that incorporates a mixed-mode approach, combining forward- and reverse-mode automatic differentiation with local sensitivity analyses to perform gradient-based optimization efficiently. We train this hierarchy on both weakly compressible SPH and DNS data, and show that this learning method is capable of: (a) solving inverse problems over the physically interpretable parameter space, as well as over the space of NN parameters; (b) learning Lagrangian statistics of turbulence (interpolation); (c) combining Lagrangian trajectory-based, probabilistic, and Eulerian field-based loss functions; (d) extrapolating beyond training sets into more complex regimes of interest; (e) learning new parameterized smoothing kernels better suited to weakly compressible DNS turbulence data.