Prof. Lars Grüne


Lars Grüne has been Professor of Applied Mathematics at the University of Bayreuth, Germany, since 2002. He received his Diploma and Ph.D. in Mathematics in 1994 and 1996, respectively, from the University of Augsburg, and his habilitation from the J.W. Goethe University in Frankfurt am Main in 2001. He has held visiting positions at the Universities of Rome Sapienza (Italy), Padova (Italy), Melbourne (Australia), Paris IX – Dauphine (France), Newcastle (Australia) and IIT Bombay (India). Prof. Grüne was General Chair of the 25th International Symposium on Mathematical Theory of Networks and Systems (MTNS 2022). He is Editor-in-Chief of the journal Mathematics of Control, Signals and Systems (MCSS) and is or was Associate Editor of various other journals, including the Journal of Optimization Theory and Applications (JOTA), Mathematical Control and Related Fields (MCRF) and the IEEE Control Systems Letters (L-CSS). His research interests lie in the area of mathematical systems and control theory, with a focus on numerical and optimization-based methods for nonlinear systems.

Abstract

Title: Optimization-based control for large-scale or complex systems: When and why does it work?

Model Predictive Control (MPC) and Reinforcement Learning (RL) are two of the most prominent methods for computing control laws based on optimization. In both cases, the resulting controllers approximate infinite-horizon optimal controllers, where the objective of the optimization may range from stabilization of a set-point to energy efficiency to yield maximization. However, for both methods the computational effort may make their application infeasible for large-scale or complex problems. In this talk we explain the basic functioning of both methods and then present situations in which the methods provably work well, by identifying beneficial structures of the solutions of optimal control problems. In the case of MPC we focus on the so-called turnpike property of optimal trajectories, while for Deep RL (i.e., RL with deep neural networks as approximators) we look at the compositional structure of optimal value functions. Examples from academia and from industry illustrate the theoretical findings.
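The receding-horizon idea underlying MPC can be sketched as follows: at each time step, a finite-horizon optimal control problem is solved, the first input of the optimal sequence is applied, and the procedure repeats at the next state. The sketch below is a minimal illustration for an unconstrained linear-quadratic problem, where the finite-horizon subproblem can be solved by a backward Riccati recursion; the system matrices, horizon, and cost weights are illustrative assumptions, not taken from the talk.

```python
import numpy as np

# Illustrative example system and costs (assumed, not from the talk):
# a discretized double integrator with quadratic stage cost.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2)            # state weight
R = np.array([[0.1]])    # input weight
N = 20                   # prediction horizon

def mpc_first_step_gain(A, B, Q, R, N):
    """Feedback gain for the first step of the finite-horizon LQ problem,
    obtained by iterating the Riccati recursion backward from P_N = Q."""
    P = Q.copy()
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

x = np.array([[1.0], [0.0]])   # initial state
for _ in range(200):
    # Receding horizon: re-solve the finite-horizon problem at the current
    # state and apply only the first input. (For this unconstrained LTI case
    # the gain is constant; in general MPC re-solves because of constraints
    # or nonlinearity.)
    K = mpc_first_step_gain(A, B, Q, R, N)
    u = -K @ x
    x = A @ x + B @ u

print(np.linalg.norm(x))   # closed loop drives the state toward the origin
```

For a sufficiently long horizon, the closed-loop behavior approximates the infinite-horizon optimal (LQR) controller, which is the approximation property the abstract refers to.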