Andrew Papanicolaou
Bio
I am an associate professor in the Department of Mathematics at North Carolina State University. My PhD is in applied mathematics from Brown University. I also hold an MS in Financial Mathematics from the University of Southern California, and a BS in Mathematical Sciences from the University of California at Santa Barbara.
Education
PhD Applied Mathematics Brown University 2010
MS Financial Mathematics University of Southern California 2007
BS Mathematical Sciences UC Santa Barbara 2003
Area(s) of Expertise
My research interests are computational finance and stochastic systems for control and optimization. Currently I am working on problems involving non-Markovian and high-dimensional optimization. These problems were previously unsolvable because of the immensity of their computational demands. Applications of this work include financial data analysis and the challenges associated with these highly complex data sets. My background is in probability theory and nonlinear filtering. Among the newer problems I am considering are questions about financial data and how machine learning methods can be applied to it.
Grants
Machine learning (ML) methods have recently gained considerable attention as a set of tools that are very effective for solving large-scale optimization problems in artificial intelligence and data science. The neural network architecture that is present in many of these methods has some universal approximation properties that allow users to apply software tools with minimal preprocessing of data or tailoring of algorithms to the specifications of the problem. However, in most instances where ML works well, mathematical analysis does not yet offer a satisfactory answer to the fundamental question: Why is a machine learning method so effective for solving this problem?

The aim of this project is to investigate how ML methods can be applied to solving non-Markov dynamic programs (DPs), and to answer this fundamental question for some specific problems in this area. Graduate students participate in the research of the project.

The investigator analyzes a new method for solving non-Markov DPs, wherein a policy-approximation function is obtained by training a system of neural networks. The main idea is similar to recently developed methods for solving high-dimensional backward stochastic differential equations (BSDEs), wherein the so-called Deep BSDE Solver learns a function of a high-dimensional input to approximate the optimal control for a Markovian DP. The aims of this project differ from those of other work because the focus is non-Markov DPs and the hurdles that come from path dependence.

Issues that are explored in depth include the level of accuracy needed in the training set generated by a Monte Carlo particle method, and the role that implicit regularization plays in the TensorFlow algorithms. The project also considers more conventional theoretical concepts that may provide proof of this method's general effectiveness, such as the Kolmogorov-Arnold representation for continuous functions and the types of sigmoidal functions used in neural networks.
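The idea of training a sigmoidal neural network on Monte Carlo path data to learn a path-dependent function can be illustrated at toy scale. The sketch below is illustrative only, not the project's actual method: it uses NumPy rather than TensorFlow, and all parameters (network width, learning rate, the lookback-style target) are assumptions chosen for the example. It fits a one-hidden-layer network with sigmoidal units, the kind of approximator covered by the universal-approximation results mentioned above, to a payoff that depends on a path's running maximum.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Monte Carlo paths of a random walk, with a path-dependent label:
# a lookback-style payoff on the running maximum (illustrative choice).
n_paths, n_steps = 2000, 50
paths = np.cumsum(rng.normal(size=(n_paths, n_steps)), axis=1)
X = np.column_stack([paths[:, -1], paths.max(axis=1)])  # terminal value, running max
y = np.maximum(paths.max(axis=1) - 2.0, 0.0)

# Standardize features so training is well scaled.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# One hidden layer of sigmoidal units, linear output.
n_hidden = 16
W1 = rng.normal(0.0, 0.5, size=(2, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, size=n_hidden)
b2 = 0.0

lr, mse0 = 0.05, None
for epoch in range(500):
    H = sigmoid(X @ W1 + b1)       # hidden activations
    pred = H @ W2 + b2
    err = pred - y
    mse = np.mean(err ** 2)
    if mse0 is None:
        mse0 = mse
    # Backpropagation of the mean-squared-error gradient.
    g_pred = 2.0 * err / n_paths
    gW2 = H.T @ g_pred
    gb2 = g_pred.sum()
    gZ = np.outer(g_pred, W2) * H * (1.0 - H)
    gW1 = X.T @ gZ
    gb1 = gZ.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"MSE: initial {mse0:.3f} -> final {mse:.3f}")
```

In the actual research the inputs would be path summaries produced by a particle filter and the output an approximate optimal control; here the two hand-picked path features stand in for that machinery so the fitting step itself is visible.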
The results contribute to an improved theoretical understanding of the mathematics behind the Deep BSDE method when applied to DPs with nonlinear filtering, and help answer important questions regarding the method's effectiveness.