AI Foundation Models for Science

Abstract: Can artificial intelligence (AI) accelerate scientific discovery? One of the most promising AI approaches for scientific analysis and prediction is the foundation model. Foundation models are data-driven models, capable of ingesting massive amounts of information, that are trained on a general task so that they can be quickly adapted to new tasks with far less data and computation. While foundation models have shown incredible success on image and text problems, where they have exhibited capabilities beyond anything anticipated in their training, they present challenges for use in science, where accuracy and uncertainty quantification are extremely important. I will give an overview of the foundation model research being done as part of the ArtIMis (Artificial Intelligence for Mission) project at LANL to adapt and develop foundation model methods for scientific applications, where they hold promise for predicting the behavior of new systems and materials and for accelerating scientific simulations.

Bio: Diane Oyen is a Scientist and the Artificial Intelligence Team Leader at Los Alamos National Laboratory. She received her B.S. in Electrical Engineering from Carnegie Mellon University and her Ph.D. in Computer Science from the University of New Mexico. Diane develops machine learning methods for scientific analysis, with particular focus on quantifying uncertainty, transfer learning, and robust machine learning. She leads projects that extend the latest machine learning methods, including foundation models and generative models, for use in novel applications such as pattern recognition and scientific discovery in ChemCam observations on Mars, accelerating physics simulations, and computer vision for technical images.