
Events

Numerical Analysis Seminar: Pejman Sanaei, Georgia State University, On mathematical modeling and simulation of flight stability of objects, tissue engineering and droplets

SAS 4201

In this talk, I will present three problems in fluid-structure interaction: 1) Flight stability of wedges: Recent experiments have shown that cones of intermediate apex angles display orientational stability with apex leading in flight. Here we show in experiments and simulations that analogous results hold in the two-dimensional context of solid wedges or triangular prisms in planar…

Numerical Analysis Seminar: Koffi Enakoutsa and Xinghao Dong, UCLA, The Morrey Conjecture: Insights from Numerical Simulations on Quasi-Convexity and Rank-One Convexity

Zoom

The Morrey Conjecture concerns quasi-convexity and rank-one convexity of functions. While the former implies the latter, it is unclear whether the converse holds. Sverak proved the conjecture in 3D, but it remains unresolved in the planar case. Analyzing these properties analytically is difficult, especially for vector-valued functions, so we perform numerical simulations using example functions…
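
For context, here is a minimal statement of the two convexity notions named in the abstract, in generic notation of our own (for a continuous function f on m x n matrices); these are the standard textbook definitions, not material taken from the talk.

\[
\text{quasi-convexity: } f(A) \le \frac{1}{|\Omega|}\int_{\Omega} f\bigl(A + \nabla\varphi(x)\bigr)\,dx \quad \text{for all } \varphi \in C_c^{\infty}(\Omega;\mathbb{R}^m),
\]
\[
\text{rank-one convexity: } t \mapsto f(A + t\,a\otimes b) \text{ is convex for all } a \in \mathbb{R}^m,\ b \in \mathbb{R}^n.
\]

Quasi-convexity always implies rank-one convexity; the conjecture asks whether the reverse implication can fail, which Sverak's counterexample confirms when the target dimension is at least three, leaving the planar (2x2) case open.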

Numerical Analysis Seminar: Elizabeth Newman, Emory University, How to Train Better: Exploiting the Separability of Deep Neural Networks

SAS 4201

Deep neural networks (DNNs) have gained undeniable success as high-dimensional function approximators in countless applications. However, there is a significant hidden cost behind these triumphs: the cost of training. Typically, DNN training is posed as a stochastic optimization problem with respect to the learnable DNN weights. With millions of weights, a non-convex and non-smooth objective…
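
As a generic illustration of the training setup sketched above (notation ours, not the speaker's), the weights \theta of a network F are typically fit by approximately solving an empirical-risk minimization problem with stochastic gradient steps over minibatches:

\[
\min_{\theta}\ \frac{1}{N}\sum_{i=1}^{N} \ell\bigl(F(x_i;\theta),\,y_i\bigr),
\qquad
\theta \leftarrow \theta - \eta\,\nabla_{\theta}\,\frac{1}{|B|}\sum_{i\in B} \ell\bigl(F(x_i;\theta),\,y_i\bigr),
\]

where B is a random minibatch and \eta the learning rate; each step is cheap, but with millions of weights and a non-convex, non-smooth objective, many steps are needed.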

Numerical Analysis Seminar: Elizabeth Newman, Emory University, Diving Deep Learning I

SAS 4201

Deep learning is one of the most universal techniques in the modern big data era, achieving remarkable success across imaging, healthcare, natural language processing, and more.  As applications begin to rely more heavily on deep learning, it is crucial that we understand how these algorithms make predictions and how we can make them better (e.g.,…

Numerical Analysis Seminar: Elizabeth Newman, Emory University, Diving Deep Learning II

SAS 4201

Deep learning is one of the most universal techniques in the modern big data era, achieving remarkable success across imaging, healthcare, natural language processing, and more.  As applications begin to rely more heavily on deep learning, it is crucial that we understand how these algorithms make predictions and how we can make them better (e.g.,…

Computational and Applied Mathematics Seminar: Shira Faigenbaum-Golovin, Duke University, Reconstruction, denoising, and studying the geometry of the base manifold in high-dimensional space

SAS 4201

It is common to assume that the data was sampled from a low-dimensional manifold in a high-dimensional space. In real life, neither the dimension of this manifold nor its geometry is known, and the data is often contaminated with noise and outliers. In this talk, we first present a method for denoising and reconstructing a…

Computational and Applied Mathematics Seminar: Dimitris Giannakis, Dartmouth, Quantum Information Science for Modeling Classical Dynamics

Zoom

Over the past three decades, a fruitful approach for analysis and data-driven modeling of dynamical systems has been to consider the action of (nonlinear) dynamics in state space on linear spaces of observables. These methods leverage the linearity of the associated evolution operators, namely the Koopman and transfer operators, to carry out tasks such as…
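
For reference, the evolution operators mentioned above can be stated in a minimal generic form (our notation). For a dynamical system with flow map \Phi on a state space X, the Koopman operator U acts linearly on observables f : X -> C by composition,

\[
(Uf)(x) = f(\Phi(x)),
\]

while the transfer (Perron-Frobenius) operator is its adjoint and evolves densities on X forward in time; both are linear even when \Phi itself is nonlinear, which is the leverage point described in the abstract.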

Computational and Applied Mathematics Seminar: Tibor Illés, Corvinus University, Budapest, Hungary, Sufficient linear complementarity problems: pivot versus interior point algorithms

SAS 4201

The linear complementarity problem (LCP) generalizes some fundamental problems of mathematical optimization, such as the linear programming (LP) problem and the linearly constrained quadratic programming (LQP) problem. It admits an enormous number of applications in economics, engineering, science, and many other fields. Given all this, it is not surprising that LCPs are NP-complete in general (S.J. Chung,…
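
For reference, the standard formulation of an LCP in generic notation (not taken from the talk): given M in R^{n x n} and q in R^n,

\[
\text{find } x \ge 0,\ s \ge 0 \quad \text{such that} \quad s = Mx + q, \qquad x^{\top}s = 0.
\]

Choosing M and q appropriately recovers the optimality conditions of LP and convex LQP as special cases, which is the sense in which the LCP generalizes them.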

Computational and Applied Mathematics Seminar: Ke Chen, Maryland, Towards efficient deep operator learning for forward and inverse PDEs: theory and algorithms

Zoom

Deep neural networks (DNNs) have been a successful model across diverse machine learning tasks, and they are increasingly capturing interest for their potential in engineering problems where PDEs have long been the dominant model. This talk delves into efficient training for PDE operator learning in both the forward and inverse problem settings. Firstly, we address the curse…

Computational and Applied Mathematics Seminar: Gregory Ongie, Marquette University, A function space view of infinite-width neural networks

Zoom

It is well known that nearly any function can be approximated arbitrarily well by a neural network with non-linear activations. However, one cannot guarantee that the weights in the neural network remain bounded in norm as the approximation error goes to zero, which is an important consideration when practically training neural networks. This raises the question: What…

Computational and Applied Mathematics Seminar: Antoine Blanchard, Verisk, A Multi-Scale Deep Learning Framework for Projecting Weather Extremes

Zoom

Extreme weather events are of growing concern for societies because under climate change their frequency and intensity are expected to increase significantly. Unfortunately, general circulation models (GCMs), currently the primary tool for climate projections, cannot characterize weather extremes accurately. Here, we report on advances in the application of a multi-scale deep learning framework, trained on reanalysis data,…

Computational and Applied Mathematics Seminar: Gabriel P. Langlois, Courant Institute, An exact and efficient algorithm for the Lasso regression problem based on a Hamilton-Jacobi PDE formulation

Zoom

The Basis Pursuit Denoising problem, also known as the least absolute shrinkage and selection operator (Lasso) problem, is a cornerstone of compressive sensing, statistics and machine learning. In high-dimensional problems, recovering an exact sparse solution requires robust and efficient optimization algorithms. State-of-the-art algorithms for the Basis Pursuit Denoising problem, however, were not traditionally designed to…
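
For reference, the Lasso / Basis Pursuit Denoising problem in its standard generic form (notation ours, not taken from the talk):

\[
\min_{x \in \mathbb{R}^n}\ \tfrac{1}{2}\,\|Ax - b\|_2^2 + \lambda\,\|x\|_1, \qquad \lambda > 0,
\]

where the \ell_1 penalty promotes sparse solutions.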

Computational and Applied Mathematics Seminar: Alexander Kurganov, SUSTech, Central-Upwind Schemes with Reduced Numerical Dissipation

SAS 4201

Central-upwind schemes are Riemann-problem-solver-free Godunov-type finite-volume schemes; they are, in fact, non-oscillatory central schemes with a certain upwind flavor: the derivation of the central-upwind numerical fluxes is based on one-sided local speeds of propagation, which can be estimated using the largest and smallest eigenvalues of the Jacobian. I will introduce two new classes of central-upwind…
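
One standard form of the central-upwind numerical flux referred to above, written for a one-dimensional conservation law u_t + f(u)_x = 0 in notation of our own (the reduced-dissipation variants of the talk modify this construction):

\[
F_{j+1/2} \;=\; \frac{a^{+}_{j+1/2}\,f\bigl(u^{-}_{j+1/2}\bigr) - a^{-}_{j+1/2}\,f\bigl(u^{+}_{j+1/2}\bigr)}{a^{+}_{j+1/2} - a^{-}_{j+1/2}}
\;+\; \frac{a^{+}_{j+1/2}\,a^{-}_{j+1/2}}{a^{+}_{j+1/2} - a^{-}_{j+1/2}}\,\bigl(u^{+}_{j+1/2} - u^{-}_{j+1/2}\bigr),
\]

where u^{\pm}_{j+1/2} are the reconstructed point values on either side of the cell interface and a^{+}_{j+1/2} \ge 0 \ge a^{-}_{j+1/2} are the one-sided local speeds of propagation, estimated from the largest and smallest eigenvalues of the flux Jacobian.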

Computational and Applied Mathematics Seminar: Maria Lukacova, University of Mainz, Uncertainty Quantification for Low Mach Number Flows

SAS 4201

We consider weakly compressible flows coupled with a cloud system that models the dynamics of warm clouds. Our goal is to explicitly describe the evolution of uncertainties that arise due to unknown input data, such as model parameters and initial or boundary conditions. The developed stochastic Galerkin method combines the space-time approximation obtained by a…
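
As a generic sketch of the stochastic Galerkin idea (not the specific discretization of the talk), the uncertain inputs are parametrized by random variables \xi and the solution is expanded in orthogonal polynomials (generalized polynomial chaos),

\[
u(x,t,\xi) \;\approx\; \sum_{k=0}^{K} \hat{u}_k(x,t)\,\Phi_k(\xi),
\]

and Galerkin projection onto span\{\Phi_k\} turns the stochastic problem into a coupled deterministic system for the coefficients \hat{u}_k(x,t), which is then discretized in space and time.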

Computational and Applied Mathematics Seminar: Hongkai Zhao, Duke University, Numerical understanding of neural networks: from representation to learning dynamics

SAS 4201

In this talk we present both numerical analysis and experiments to study a few basic computational issues in practice: (1) the numerical error one can achieve given a finite machine precision, (2) the learning dynamics and computation cost to achieve a given accuracy, and (3) stability with respect to perturbations. These issues are addressed for…

Computational and Applied Mathematics Seminar: Vakhtang Putkaradze, University of Alberta, Lie-Poisson Neural Networks (LPNets): Data-Based Computing of Hamiltonian Systems with Symmetries

SAS 4201

Physics-Informed Neural Networks (PINNs) have received much attention recently due to their potential for high-performance computations for complex physical systems, including data-based computing, systems with unknown parameters, and others. The idea of PINNs is to encode the governing equations and the boundary and initial conditions in the loss function of a neural network. PINNs combine the efficiency…
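
A generic form of the loss function described above (the weights and sampling are illustrative, not taken from the talk): for a network u_\theta approximating the solution of a PDE \mathcal{N}[u] = 0 with boundary operator \mathcal{B} and initial data u_0,

\[
\mathcal{L}(\theta) \;=\; \frac{1}{N_r}\sum_{i=1}^{N_r}\bigl\|\mathcal{N}[u_\theta](x_i,t_i)\bigr\|^2
\;+\; \frac{\lambda_b}{N_b}\sum_{j=1}^{N_b}\bigl\|\mathcal{B}[u_\theta](x_j,t_j)\bigr\|^2
\;+\; \frac{\lambda_0}{N_0}\sum_{k=1}^{N_0}\bigl\|u_\theta(x_k,0) - u_0(x_k)\bigr\|^2,
\]

minimized over the network parameters \theta at sampled collocation, boundary, and initial points.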

Computational and Applied Mathematics – Differential Equations/Nonlinear Analysis Seminar: Alexey Miroshnikov, Discover Financial Services, Stability theory of game-theoretic group feature explanations for machine learning models

SAS 4201

In this article, we study feature attributions of Machine Learning (ML) models originating from linear game values and coalitional values defined as operators on appropriate functional spaces. The main focus is on random games based on the conditional and marginal expectations. The first part of our work formulates a stability theory for these explanation operators…
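
As a concrete example of the game values referred to above (notation ours), the Shapley value of feature i for a feature set N and a game v is

\[
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,\bigl(v(S\cup\{i\}) - v(S)\bigr),
\]

with, for a model f and input X, the conditional game v(S) = E[f(X) \mid X_S = x_S] or the marginal game v(S) = E[f(x_S, X_{\bar{S}})].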

Computational and Applied Mathematics Seminar: Wenjing Liao, Georgia Institute of Technology, Exploiting Low-Dimensional Data Structures in Deep Learning

SAS 4201

In the past decade, deep learning has made astonishing breakthroughs in various real-world applications. It is a common belief that deep neural networks are good at learning various geometric structures hidden in data sets. One of the central interests in deep learning theory is to understand why deep neural networks are successful, and how they…

Computational and Applied Mathematics Seminar: Saviz Mowlavi, MERL, Model-based and data-driven prediction and control of spatio-temporal systems

Zoom

Spatio-temporal dynamical systems, such as fluid flows or vibrating structures, are prevalent across various applications, from enhancing user comfort and reducing noise in HVAC systems to improving cooling efficiency in electronic devices. However, these systems are notoriously hard to optimize and control due to the infinite dimensionality and nonlinearity of their governing partial differential equations…

Computational and Applied Mathematics Seminar: Jian-Guo Liu, Duke University, Optimal Control for Transition Path Problems in Markov Jump Processes

SAS 4201

Transition paths connecting metastable states are significant in science and engineering, such as in biochemical reactions. In this talk, I will present a stochastic optimal control formulation for transition path problems over an infinite time horizon, modeled by Markov jump processes on Polish spaces. An unbounded terminal cost at a stopping time and a running…