Boersma Abstract: Learning closure models for turbulence

We present a new data-driven methodology, based on machine-learning (ML) techniques, to develop, test, and optimize turbulence closure models. The proposed methodology is validated by using it to automatically tune and calibrate the set of parameter coefficients in the BHR 3.1 [1] turbulence closure model against reference statistics from direct numerical simulation (DNS) data of canonical turbulent flows, including homogeneous variable-density turbulence (HVDT) [2] and turbulent Rayleigh-Taylor (RT) flows. Los Alamos National Laboratory has championed a family of Reynolds-averaged Navier-Stokes (RANS) closure models, collectively referred to as BHR after the original version proposed by Bernard, Harlow, and Rauenzahn [3]. Like most other closure models, BHR consists of a system of (nonlinear) differential equations of prescribed form that relies on a set of parameter coefficients. Historically, these coefficients have been calibrated by hand against available DNS data of canonical turbulent flows.
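To make this structure concrete, the sketch below shows a hypothetical two-equation decay model in the same mold: a system of ODEs of prescribed form whose right-hand side depends on a small set of tunable coefficients. The variables (k, eps) and coefficients (c1, c2) are illustrative stand-ins, not the actual BHR 3.1 equations.

```python
import jax.numpy as jnp

def closure_rhs(y, c):
    """Right-hand side dy/dt = f(y; c) of a model system of prescribed form.

    y = (k, eps): hypothetical turbulence variables (stand-ins for the BHR set)
    c = (c1, c2): tunable calibration coefficients
    """
    k, eps = y
    dk_dt = -c[0] * eps            # decay of turbulent kinetic energy
    deps_dt = -c[1] * eps**2 / k   # prescribed nonlinear form
    return jnp.array([dk_dt, deps_dt])
```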

Two approaches are discussed in this work: i) a static approach, and ii) a dynamic approach. The static approach minimizes the instantaneous rate of deviation of the model from the DNS data, while the dynamic approach considers the deviation over a finite time interval. The static approach reduces to a large overdetermined system of (typically linear) equations that is solved using least squares. In this approach, for the BHR models we observe a decoupling of the equations, which further simplifies the practical implementation of the method and the analysis of the results. In contrast, in the dynamic approach we express the BHR model in the form of a residual neural network which, starting from a set of initial conditions at a time t0, predicts the evolution of the model's dynamical variables up to a later time t1. An objective loss function representing the difference between the network prediction and the DNS statistics is minimized by adjusting the BHR coefficients in an optimization problem. To this end, the gradients of the loss function with respect to the model coefficients are computed using automatic differentiation of the computational graph of the neural network, and are then used to adjust the model coefficients in the next iteration.
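A minimal sketch of the static approach, under the hypothetical two-equation model above (not the actual BHR system): because the right-hand side is linear in the coefficients, each DNS snapshot contributes one linear equation per model equation, and stacking all snapshots yields an overdetermined system solved by least squares. Note that the rows of the two equations involve disjoint coefficients, so the system block-decouples, mirroring the decoupling observed for BHR.

```python
import jax.numpy as jnp

def static_fit(k, eps, dk_dt, deps_dt):
    """Least-squares fit of (c1, c2) so the model RHS matches DNS time derivatives.

    k, eps         : DNS statistics sampled at n snapshot times, shape (n,)
    dk_dt, deps_dt : DNS time derivatives at the same snapshots, shape (n,)
    """
    zeros = jnp.zeros_like(k)
    # One row per snapshot and per model equation: A @ (c1, c2) ~ b.
    A = jnp.concatenate([jnp.stack([-eps, zeros], axis=1),          # k-equation rows
                         jnp.stack([zeros, -eps**2 / k], axis=1)])  # eps-equation rows
    b = jnp.concatenate([dk_dt, deps_dt])
    coeffs, *_ = jnp.linalg.lstsq(A, b)
    return coeffs
```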

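A corresponding sketch of the dynamic approach, under the same assumptions: the time integrator is unrolled so that each explicit step acts as one residual block, y_{i+1} = y_i + dt * f(y_i; c), and automatic differentiation through the unrolled computational graph supplies the gradient of the trajectory-mismatch loss with respect to the coefficients. Forward Euler, the mean-squared loss, and the plain gradient step are illustrative choices, not necessarily those used in this work.

```python
import jax
import jax.numpy as jnp

def closure_rhs(y, c):  # same hypothetical two-equation model as above
    k, eps = y
    return jnp.array([-c[0] * eps, -c[1] * eps**2 / k])

def rollout(c, y0, dt, n_steps):
    """Unrolled forward-Euler integration: each step is one residual block."""
    def step(y, _):
        y_next = y + dt * closure_rhs(y, c)
        return y_next, y_next
    _, traj = jax.lax.scan(step, y0, None, length=n_steps)
    return traj

def loss(c, y0, dt, y_dns):
    """Mean-squared deviation of the predicted trajectory from DNS statistics."""
    return jnp.mean((rollout(c, y0, dt, y_dns.shape[0]) - y_dns) ** 2)

grad_loss = jax.jit(jax.grad(loss))  # gradients via automatic differentiation

# Synthetic stand-in for DNS statistics, generated with "true" coefficients.
y0, dt = jnp.array([1.0, 1.0]), 1e-2
y_dns = rollout(jnp.array([1.0, 1.92]), y0, dt, 100)

c = jnp.array([0.5, 1.0])   # initial guess for the coefficients
for _ in range(500):        # plain gradient descent on the loss
    c = c - 0.1 * grad_loss(c, y0, dt, y_dns)
```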
The static method was found to approximate the reference DNS data reasonably well only in the short-time (instantaneous) limit of the dynamics, while the dynamic approach was found to approximate the reference well over longer times. We will contrast results obtained using the static approach with those based on the dynamic approach, discuss the merits and limitations of each approach, and suggest possible remedies. We will also discuss the challenges encountered and the decisions made to mitigate problems associated with network depth due to CFL restrictions, as well as the different training strategies adopted to enable joint training on heterogeneous models.