Shape and topology optimization via a level-set based mesh evolution method
The purpose of this presentation is to introduce a robust front-tracking method for dealing with arbitrary motions of shapes, even dramatic ones (e.g. featuring topological changes); although this method is illustrated in the particular context of shape optimization, it naturally applies to a wide range of inverse problems and reconstruction algorithms. The presented method combines two different means of representing shapes: on the one hand, they are meshed explicitly, which allows for efficient mechanical calculations by means of any standard Finite Element solver; on the other hand, they are represented by means of the level set method, a format under which it is easy to track their evolution. The cornerstone of our method is a pair of efficient algorithms for switching from either of these representations to the other. Several numerical examples are discussed in two and three space dimensions, in the ‘classical’ physical setting of linear elastic structures, but also in more involved situations involving e.g. fluid-structure interactions. This is joint work with G. Allaire, F. Feppon and P. Frey.
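As a minimal illustration (not the speakers' implementation) of the level-set representation mentioned above, a shape can be encoded implicitly as the region where a signed distance function is negative; topological changes such as two shapes merging are then handled automatically by pointwise operations on the level-set functions:

```python
import numpy as np

# Hypothetical sketch: a shape represented as the zero level set of a
# signed distance function phi (phi < 0 inside, phi > 0 outside).
def circle_phi(X, Y, cx=0.0, cy=0.0, r=0.5):
    """Signed distance to a circle of radius r centered at (cx, cy)."""
    return np.sqrt((X - cx) ** 2 + (Y - cy) ** 2) - r

n = 64
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)

phi = circle_phi(X, Y)
inside = phi < 0  # grid points belonging to the shape

# The union of two overlapping circles (a topological change relative to
# either circle alone) is simply the pointwise minimum of their level sets:
phi_union = np.minimum(circle_phi(X, Y, cx=-0.4, r=0.3),
                       circle_phi(X, Y, cx=0.4, r=0.3))
```

The explicit mesh used for the Finite Element computations would be extracted from the zero isocontour of `phi`, which is the role of the representation-switching algorithms described in the abstract.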
Opportunities for science graduates at national laboratories: an informal discussion
Please join us for an informal discussion with Srideep Musuvathy and Frances Chance of the Department of Emerging and Cognitive Computing at Sandia National Laboratories about what it’s like to work at a national laboratory, student internships, and career opportunities for current students as well as new graduates. Drs. Musuvathy and Chance will also discuss research opportunities in machine learning and neural-inspired computing.
Frances will present in person and Srideep will be online.
Modeling Coordinate Transformations in Neural and Neuromorphic Systems
Animals excel at a wide range behaviors, many of which are essential for survival. For example, dragonflies are aerial predators, known for both their speed and high success rate, that must perform fast, accurate, and efficient calculations to survive. I will present a neural network model, inspired by the dragonfly nervous system, that calculates turning for successful prey interception. The model relies upon a coordinate transformation from eye-coordinates to body-coordinates, an operation that must be performed by almost any animal nervous system relying upon sensory information to interact with the external world. I will discuss how I and collaborators are combining neuroscience experiments, modeling studies, and exploration of neuromorphic architectures to understand how the biological dragonfly nervous system performs coordinate transformations and to develop novel approaches for efficient neural-inspired computation.
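A schematic sketch (not the talk's neural model) of the kind of coordinate transformation involved: a prey position sensed in eye-centered coordinates can be mapped into body-centered coordinates by a rotation through the angle between the eye/head axis and the body axis. Here the planar case and the angle `theta` are illustrative assumptions:

```python
import numpy as np

def eye_to_body(p_eye, theta):
    """Rotate a 2-D point from eye coordinates into body coordinates.

    theta is the (assumed known) angle of the eye/head axis relative
    to the body axis.
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])  # standard 2-D rotation matrix
    return R @ p_eye

p_eye = np.array([1.0, 0.0])        # prey straight ahead in eye coordinates
theta = np.pi / 2                   # head turned 90 degrees from body axis
p_body = eye_to_body(p_eye, theta)  # prey is off to the side in body coordinates
```

In a nervous system this linear map would be computed implicitly by populations of neurons rather than an explicit matrix product; the matrix form simply makes the underlying operation concrete.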
Recently, there has been a growing interest in approximating nonlinear functions and PDEs on tensor manifolds. The reason is simple: tensors can drastically reduce the computational cost of high-dimensional problems when the solution has a low-rank structure. In this talk, I will review recent developments on rank-adaptive algorithms for temporal integration of PDEs on tensor manifolds. Such algorithms combine functional tensor train (FTT) series expansions, operator splitting time integration, and an appropriate criterion to adaptively add or remove tensor modes from the FTT representation of the PDE solution as time integration proceeds. I will also present a new tensor rank reduction method that leverages coordinate flows. The idea is very simple: given a multivariate function, determine a coordinate transformation so that the function in the new coordinate system has a smaller tensor rank. I will restrict the analysis to linear coordinate transformations, which give rise to a new class of functions that we refer to as tensor ridge functions. Numerical applications are presented and discussed for linear and nonlinear advection equations, and for the Fokker-Planck equation.
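The rank-reduction-by-coordinate-flow idea can be illustrated with a textbook example (my own, not from the talk): the function f(x, y) = sin(x + y) has rank 2 in the original coordinates, since sin(x + y) = sin x cos y + cos x sin y, but rank 1 in the rotated coordinates u = x + y, v = x − y, where it becomes sin(u). Sampling both forms on a grid and checking the numerical rank via the SVD makes the drop visible:

```python
import numpy as np

n = 100
x = np.linspace(0.0, np.pi, n)
X, Y = np.meshgrid(x, x, indexing="ij")

F = np.sin(X + Y)  # f in the original coordinates (x, y)
G = np.sin(X)      # the same f in rotated coordinates, f = sin(u)

def numerical_rank(A, tol=1e-10):
    """Number of singular values above tol relative to the largest one."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

rank_xy = numerical_rank(F)  # rank 2: sin x cos y + cos x sin y
rank_uv = numerical_rank(G)  # rank 1: sin(u) times a constant in v
```

For a two-variable function the matrix rank above coincides with the functional tensor rank; in higher dimensions the same principle applies to the FTT ranks, which is what makes linear coordinate transformations (tensor ridge functions) attractive.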
SHORT BIO: Dr. Venturi is a professor of Applied Mathematics in the Baskin School of Engineering at UC Santa Cruz. He received his MS in Mechanical Engineering in 2002 and his PhD in Applied Physics in 2006 from the University of Bologna (Italy). His research has recently focused on the numerical approximation of PDEs on tensor manifolds, including high-dimensional PDEs arising from the discretization of functional differential equations (infinite-dimensional PDEs).
Scalable reinforcement learning for multi-agent networked systems
We study reinforcement learning (RL) in a setting with a network of agents whose states and actions interact in a local manner, and where the objective is to find policies that maximize the (discounted) global reward. A fundamental challenge in this setting is that the state-action space size scales exponentially in the number of agents, rendering the problem intractable for large networks. In this work, we present a framework that exploits the network structure to conduct reinforcement learning in a scalable manner. The key feature of our framework is that we prove spatial decay properties for the Q function and the policy, meaning their dependence on faraway agents decays as the distance increases. These spatial decay properties enable approximations that truncate the Q functions and policies to local neighborhoods, drastically reducing the dimension and avoiding the exponential blow-up in the number of agents.
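A back-of-the-envelope sketch of the truncation idea (not the paper's algorithm): on a line graph of N agents, a truncated Q function for agent i depends only on the states of agents within kappa hops, so each local table has size S^(2*kappa+1) instead of the S^N size of the full joint table. The line-graph topology and the table sizes here are illustrative assumptions:

```python
def kappa_neighborhood(i, kappa, N):
    """Indices of agents within kappa hops of agent i on a line graph."""
    return list(range(max(0, i - kappa), min(N, i + kappa + 1)))

N = 20       # number of agents
S = 4        # local state-space size per agent
kappa = 1    # truncation radius

full_table_size = S ** N                     # exponential in N: intractable
local_table_size = N * S ** (2 * kappa + 1)  # one small table per agent: linear in N
```

The spatial decay result is what justifies this truncation: because the Q function's dependence on agents outside the kappa-hop neighborhood decays with distance, the local tables incur only a small, controlled approximation error.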