Events

Computational and Applied Mathematics Seminar: Ke Chen, Maryland, Towards efficient deep operator learning for forward and inverse PDEs: theory and algorithms

Zoom

Deep neural networks (DNNs) have been successful across diverse machine learning tasks and are increasingly attracting interest for their potential in engineering problems where PDEs have long been the dominant model. This talk delves into efficient training for PDE operator learning in both the forward and the inverse problem settings. Firstly, we address the curse…

Computational and Applied Mathematics Seminar: Gregory Ongie, Marquette University, A function space view of infinite-width neural networks

Zoom

It is well-known that nearly any function can be approximated arbitrarily-well by a neural network with non-linear activations. However, one cannot guarantee that the weights in the neural network remain bounded in norm as the approximation error goes to zero, which is an important consideration when practically training neural networks. This raises the question: What…

Computational and Applied Mathematics Seminar: Antoine Blanchard, Verisk, A Multi-Scale Deep Learning Framework for Projecting Weather Extremes

Zoom

Extreme weather events are of growing concern for societies because under climate change their frequency and intensity are expected to increase significantly. Unfortunately, general circulation models (GCMs), currently the primary tool for climate projections, cannot characterize weather extremes accurately. Here, we report on advances in the application of a multi-scale deep learning framework, trained on reanalysis data,…

Computational and Applied Mathematics Seminar: Gabriel P. Langlois, Courant Institute, An exact and efficient algorithm for the Lasso regression problem based on a Hamilton-Jacobi PDE formulation

Zoom

The Basis Pursuit Denoising problem, also known as the least absolute shrinkage and selection operator (Lasso) problem, is a cornerstone of compressive sensing, statistics and machine learning. In high-dimensional problems, recovering an exact sparse solution requires robust and efficient optimization algorithms. State-of-the-art algorithms for the Basis Pursuit Denoising problem, however, were not traditionally designed to…
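The talk's Hamilton-Jacobi-based algorithm is not reproduced here; as a point of reference, a minimal Lasso solver by cyclic coordinate descent (a standard baseline, not the speaker's method) can be sketched as:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Minimize 0.5*||Xw - y||^2 + lam*||w||_1 by cyclic coordinate descent."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)          # precomputed squared column norms
    for _ in range(n_iter):
        for j in range(d):
            # correlation of column j with the residual, with coordinate j removed
            rho = X[:, j] @ (y - X @ w) + col_sq[j] * w[j]
            # soft-thresholding update: the proximal step for the l1 penalty
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w
```

On noise-free data with a sparse ground truth, the recovered coefficients match the true support up to the usual small shrinkage bias proportional to `lam`.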

Computational and Applied Mathematics Seminar: Jingwei Hu, University of Washington, Structure-Preserving Particle Method for the Vlasov-Maxwell-Landau Equation

SAS 4201

The Vlasov-Maxwell-Landau equation is often regarded as the first-principle physics model for plasmas. We introduce a novel particle method for this equation that collectively models particle transport, electromagnetic field effects, and particle collisions. The method arises from a regularization of the variational formulation of the Landau collision operator, leading to a discretization of the operator…

Computational and Applied Mathematics Seminar: Alexander Kurganov, SUSTech, Central-Upwind Schemes with Reduced Numerical Dissipation

SAS 4201

Central-upwind schemes are Riemann-problem-solver-free Godunov-type finite-volume schemes, which are, in fact, non-oscillatory central schemes with a certain upwind flavor: derivation of the central-upwind numerical fluxes is based on the one-sided local speeds of propagation, which can be estimated using the largest and smallest eigenvalues of the Jacobian. I will introduce two new classes of central-upwind…
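As a first-order illustration of the one-sided-speed idea, a minimal semi-discrete central-upwind flux for Burgers' equation might look like the sketch below (piecewise-constant reconstruction and forward Euler only, far simpler than the reduced-dissipation schemes discussed in the talk):

```python
import numpy as np

def cu_flux(ul, ur, f, df):
    """Central-upwind numerical flux built from one-sided local speeds."""
    ap = max(df(ul), df(ur), 0.0)   # rightmost local speed of propagation
    am = min(df(ul), df(ur), 0.0)   # leftmost local speed of propagation
    if ap - am < 1e-14:             # no waves: fall back to the average flux
        return 0.5 * (f(ul) + f(ur))
    return (ap * f(ul) - am * f(ur)) / (ap - am) + ap * am / (ap - am) * (ur - ul)

def burgers_step(u, dx, dt):
    """One forward-Euler step for u_t + (u^2/2)_x = 0 on a periodic grid."""
    f = lambda v: 0.5 * v * v
    df = lambda v: v
    ul, ur = u, np.roll(u, -1)      # left/right states at interface j+1/2
    F = np.array([cu_flux(a, b, f, df) for a, b in zip(ul, ur)])
    return u - dt / dx * (F - np.roll(F, 1))
```

Being in conservation form, the update conserves total mass exactly, and under a CFL restriction the first-order scheme respects a discrete maximum principle even after shock formation.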

Computational and Applied Mathematics Seminar: Maria Lukacova, University of Mainz, Uncertainty Quantification for Low Mach Number Flows

SAS 4201

We consider weakly compressible flows coupled with a cloud system that models the dynamics of warm clouds. Our goal is to explicitly describe the evolution of uncertainties that arise due to unknown input data, such as model parameters and initial or boundary conditions. The developed stochastic Galerkin method combines the space-time approximation obtained by a…

Computational and Applied Mathematics Seminar: Hongkai Zhao, Duke University, Numerical understanding of neural networks: from representation to learning dynamics

SAS 4201

In this talk we present both numerical analysis and experiments to study a few basic computational issues in practice: (1) the numerical error one can achieve given a finite machine precision, (2) the learning dynamics and computation cost to achieve a given accuracy, and (3) stability with respect to perturbations. These issues are addressed for…

Computational and Applied Mathematics Seminar: Vakhtang Putkaradze, University of Alberta, Lie-Poisson Neural Networks (LPNets): Data-Based Computing of Hamiltonian Systems with Symmetries

SAS 4201

Physics-Informed Neural Networks (PINNs) have received much attention recently due to their potential for high-performance computations for complex physical systems, including data-based computing, systems with unknown parameters, and others. The idea of PINNs is to encode the governing equations and the boundary and initial conditions in a loss function for a neural network. PINNs combine the efficiency…
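The residual-loss idea can be illustrated in miniature. The sketch below is a deliberately simplified toy, not a neural network: the model is a polynomial with the boundary conditions built in, so it is linear in its parameters and the collocation residual loss is minimized in closed form by least squares rather than by gradient descent. It fits -u'' = π² sin(πx) on [0, 1] with u(0) = u(1) = 0, whose exact solution is sin(πx):

```python
import numpy as np

x = np.linspace(0.01, 0.99, 200)          # collocation points in (0, 1)
K = 6                                     # number of basis functions

def phi(x, k):                            # basis: x^(k+1) - x^(k+2), zero at 0 and 1
    return x**(k + 1) - x**(k + 2)

def phi_xx(x, k):                         # analytic second derivative of the basis
    # (the first term vanishes for k = 0; x > 0 keeps x**(k-1) finite)
    return (k + 1) * k * x**(k - 1) - (k + 2) * (k + 1) * x**k

A = np.column_stack([phi_xx(x, k) for k in range(K)])   # u'' of each basis function
b = -np.pi**2 * np.sin(np.pi * x)                       # PDE right-hand side
c, *_ = np.linalg.lstsq(A, b, rcond=None)               # minimize the PDE residual
u = sum(c[k] * phi(x, k) for k in range(K))
err = np.max(np.abs(u - np.sin(np.pi * x)))             # distance to exact solution
```

A true PINN replaces the polynomial by a network and the closed-form solve by stochastic gradient descent on the same residual loss, typically with additional penalty terms for the boundary and initial conditions.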

Computational and Applied Mathematics – Differential Equations and Nonlinear Analysis Seminar: Alexey Miroshnikov, Discover Financial Services, Stability theory of game-theoretic group feature explanations for machine learning models

SAS 4201

In this work, we study feature attributions of Machine Learning (ML) models originating from linear game values and coalitional values defined as operators on appropriate functional spaces. The main focus is on random games based on the conditional and marginal expectations. The first part of our work formulates a stability theory for these explanation operators…
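For readers unfamiliar with the marginal-expectation games mentioned above, a brute-force reference implementation of marginal (interventional) Shapley attributions, feasible only for a handful of features and offered only as a baseline rather than the operators analyzed in the talk, could look like:

```python
import itertools
import math
import numpy as np

def shapley_marginal(f, x, background):
    """Exact marginal (interventional) Shapley attributions by subset enumeration.

    f: model mapping an (n, d) array to (n,) predictions
    x: the point (d,) being explained
    background: (n, d) reference sample used for the marginal expectation
    """
    d = len(x)

    def v(S):
        # value of coalition S: fix features in S at x, average the rest out
        Z = background.copy()
        if S:
            Z[:, list(S)] = x[list(S)]
        return f(Z).mean()

    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for r in range(d):
            for S in itertools.combinations(others, r):
                # Shapley weight |S|! (d - |S| - 1)! / d!
                w = math.factorial(r) * math.factorial(d - r - 1) / math.factorial(d)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi
```

For a linear model with this marginal game, the attribution of feature i reduces to w_i (x_i - E[x_i]), and the attributions sum to f(x) minus the average background prediction (efficiency).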

Computational and Applied Mathematics Seminar: Wenjing Liao, Georgia Institute of Technology, Exploiting Low-Dimensional Data Structures in Deep Learning

SAS 4201

In the past decade, deep learning has made astonishing breakthroughs in various real-world applications. It is a common belief that deep neural networks are good at learning various geometric structures hidden in data sets. One of the central interests in deep learning theory is to understand why deep neural networks are successful, and how they…

Computational and Applied Mathematics Seminar: Saviz Mowlavi, MERL, Model-based and data-driven prediction and control of spatio-temporal systems

Zoom

Spatio-temporal dynamical systems, such as fluid flows or vibrating structures, are prevalent across various applications, from enhancing user comfort and reducing noise in HVAC systems to improving cooling efficiency in electronic devices. However, these systems are notoriously hard to optimize and control due to the infinite dimensionality and nonlinearity of their governing partial differential equations…

Computational and Applied Mathematics Seminar: Jian-Guo Liu, Duke University, Optimal Control for Transition Path Problems in Markov Jump Processes

SAS 4201

Transition paths connecting metastable states are significant in science and engineering, such as in biochemical reactions. In this talk, I will present a stochastic optimal control formulation for transition path problems over an infinite time horizon, modeled by Markov jump processes on Polish spaces. An unbounded terminal cost at a stopping time and a running…

Computational and Applied Mathematics Seminar: Daniel Serino, Los Alamos National Lab, Structure-Preserving Machine Learning for Dynamical Systems

Zoom

Developing robust and accurate data-based models for dynamical systems originating from plasma physics and hydrodynamics is of paramount importance. These applications pose several challenges, including the presence of multiple scales in time and space and limited data, which are often noisy or inconsistent. The aim of structure-preserving ML is to strongly enforce…

Computational and Applied Mathematics Seminar: Wei Zhu, Georgia Institute of Technology, Symmetry-Preserving Machine Learning: Theory and Applications

SAS 4201

Symmetry is prevalent in a variety of machine learning and scientific computing tasks, including computer vision and computational modeling of physical and engineering systems. Empirical studies have demonstrated that machine learning models designed to integrate the intrinsic symmetry of their tasks often exhibit substantially improved performance. Despite extensive theoretical and engineering advancements in symmetry-preserving machine…

Computational and Applied Mathematics Seminar: Erik Bollt, Clarkson University, Next-Generation Reservoir Computing, and On Explaining the Surprising Success of a Random Neural Network for Forecasting Chaos

SAS 4201

Machine learning has become a widely popular and successful paradigm, including for data-driven science. A major application is forecasting complex dynamical systems. Artificial neural networks (ANN) have evolved as a clear leading approach, and recurrent neural networks (RNN) are considered to be especially well suited. Reservoir computers (RC) have emerged for their simplicity and computational advantages.…
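Next-generation reservoir computing replaces the random recurrent reservoir with explicit time-delay and polynomial features plus a linear ridge readout. A minimal sketch of that construction for one-step-ahead forecasting (illustrative only, demonstrated here on the logistic map rather than a continuous chaotic flow) is:

```python
import numpy as np
from itertools import combinations_with_replacement

def ngrc_features(window):
    """NG-RC feature vector: constant, linear delay taps, quadratic monomials."""
    window = np.asarray(window, dtype=float)
    quad = [a * b for a, b in combinations_with_replacement(window, 2)]
    return np.concatenate(([1.0], window, quad))

def ngrc_train(series, k=2, ridge=1e-8):
    """Ridge-regress the next value onto features of the last k values."""
    X = np.array([ngrc_features(series[i:i + k]) for i in range(len(series) - k)])
    y = series[k:]
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)

def ngrc_predict(W, window):
    """One-step-ahead prediction from the most recent k values."""
    return ngrc_features(window) @ W
```

Because the logistic map x_{t+1} = r x_t (1 - x_t) lies exactly in the span of the quadratic features, the trained readout recovers the map to high accuracy; for real chaotic flows one adds more delay taps and forecasts iteratively.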

Computational and Applied Mathematics Seminar: Isaac Harris, Purdue University, Transmission Eigenvalue Problems for a Scatterer with a Conductive Boundary

SAS 4201

In this talk, we will investigate the acoustic transmission eigenvalue problem associated with an inhomogeneous media with a conductive boundary. These are a new class of eigenvalue problems that are not elliptic, not self-adjoint, and nonlinear, which gives the possibility of complex eigenvalues. The talk will consider the cases of an isotropic and an anisotropic scatterer.…

Computational and Applied Mathematics Seminar: Shixu Meng, Virginia Tech, Exploring low rank structures for inverse scattering problems

SAS 4201

Inverse problems are pivotal in a variety of applications, such as target identification, non-destructive testing, and parameter estimation. Among these, the inverse scattering problem in inhomogeneous media poses significant challenges, as it seeks to estimate unknown parameters from available measurement data. To understand the mathematics of machine learning approaches for inverse scattering, we develop a…
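The simplest low-rank tool underlying such approaches is the truncated SVD, whose approximation error is governed by the first neglected singular value (Eckart-Young). A sketch:

```python
import numpy as np

def truncated_svd(A, r):
    """Best rank-r approximation of A in spectral/Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]
```

For discretized scattering operators the singular values often decay rapidly, so a small rank r captures most of the measurement data; that decay is what low-rank methods for inverse scattering exploit.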

Computational and Applied Mathematics Seminar: Jaeyong Lee, Chung-Ang University, Real-Time Solutions to PDEs with Neural Operators in SciML

SAS 4201

Recent advancements in deep learning have led to a surge in research focused on solving scientific problems under the banner of "AI for Science." Among these efforts, Scientific Machine Learning (SciML) aims to address domain-specific data challenges and extract insights from scientific datasets through innovative methodological solutions. A particularly active area within SciML involves using neural operators…

Computational and Applied Mathematics Seminar: Shriram Srinivasan, Los Alamos National Laboratory, Hierarchical Network Partitioning for Efficient Solution of Steady-State Nonlinear Network Flow Equations

SAS 4201

Natural gas production and distribution in the US is interconnected continent-wide, and hence the simulation of fluid flow in pipeline networks is a problem of scientific interest. While the problem of steady, unidirectional flow of fluid in a single pipeline is simple, it ceases to be so when we consider fluid flow in a large…