[Dottorcomp] Fwd: [sciences-ljll-seminaire] Leçons Jacques-Louis Lions 2023: Andrew Stuart (12--15 December 2023); 2nd announcement

Lorenzo Tamellini tamellini a imati.cnr.it
Thu 16 Nov 2023 08:03:12 CET


Good morning everyone,

I have received and am forwarding the following announcement.

Best regards,
Lorenzo Tamellini

---------- Forwarded message ---------
From: francois murat <francois.murat a sorbonne-universite.fr>
Date: Mon 13 Nov 2023 at 23:38
Subject: [sciences-ljll-seminaire] Leçons Jacques-Louis Lions 2023: Andrew
Stuart (12--15 December 2023); 2nd announcement
To: <sciences-ljll-seminaire a listes.sorbonne-universite.fr>


Given by *Andrew Stuart* (California Institute of Technology (Caltech)),
the *Leçons Jacques-Louis Lions 2023* will consist of:

-- a *mini-course* entitled:
* Ensemble Kalman filter: Algorithms, analysis and applications *
*3 sessions, on Tuesday 12, Wednesday 13, and Thursday 14 December 2023,
from 11:00 to 12:30,*
Seminar room of the Laboratoire Jacques-Louis Lions,
corridor 15-16, 3rd floor, room 09 (15-16-3-09),
Sorbonne Université, 4 place Jussieu, Paris 5e,

-- and a *colloquium* entitled:
*Operator learning: Acceleration and discovery of computational models*
*on Friday 15 December 2023 from 14:00 to 15:00,*
Amphithéâtre 25,
entrance facing tower 25, Jussieu esplanade level,
Sorbonne Université, 4 place Jussieu, Paris 5e.

*Web page*
https://www.ljll.math.upmc.fr/lecons-jacques-louis-lions-2023-andrew-stuart

*All lectures will be given in person and streamed live via Zoom.*

The Zoom link for following each day's Leçon remotely will be sent out every
morning by email to the mailing list of the Séminaire du Laboratoire
Jacques-Louis Lions.
The link will also be posted every morning on the web pages
https://www.ljll.math.upmc.fr/lecons-jacques-louis-lions-2023-andrew-stuart
https://www.ljll.math.upmc.fr/seminaire-du-laboratoire
https://www.ljll.math.upmc.fr/seminaire-du-laboratoire/seminaires-de-l-annee-2023


*Please note*: this Zoom link will be different each day.


*Abstract of the mini-course*
*Ensemble Kalman filter: Algorithms, analysis and applications *
In 1960 Rudolf Kalman [1] published what is arguably the first paper to
develop a systematic, principled approach to the use of data to improve the
predictive capability of dynamical systems. As our ability to gather data
grows at an enormous rate, the importance of this work continues to grow
too. Kalman's paper is confined to linear dynamical systems subject to
Gaussian noise; the work of Geir Evensen [2] in 1994 opened up far wider
applicability of Kalman's ideas by introducing the ensemble Kalman filter.
The ensemble Kalman filter applies to the setting in which nonlinear and
noisy observations are used to make improved predictions of the state of a
Markov chain. The algorithm results in an interacting particle system
combining elements of the Markov chain and the observation process. In
these lectures I will introduce a unifying mean-field perspective on the
algorithm, derived in the limit of an infinite number of interacting
particles. I will then describe how the methodology can be used to study
inverse problems, opening up diverse applications beyond prediction in
dynamical systems. Finally, I will describe analysis of the methodology, in
terms of both accuracy and uncertainty quantification;
despite its widespread adoption in applications, a complete mathematical
theory is lacking and there are many opportunities for analysis in this
area.

Lecture 1: The algorithm
Lecture 2: Inverse problems and applications
Lecture 3: Analysis of accuracy and uncertainty quantification

[1] R. Kalman, *A new approach to linear filtering and prediction problems.*
Journal of Basic Engineering, 82:35–45, 1960.
[2] G. Evensen, *Sequential data assimilation with a nonlinear
quasi-geostrophic model using Monte Carlo methods to forecast error
statistics.*
Journal of Geophysical Research: Oceans, 99(C5):10143–10162, 1994.
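
To make the interacting-particle structure described in the abstract concrete,
here is a minimal Python/NumPy sketch of one forecast/analysis cycle of a
perturbed-observation ensemble Kalman filter. The toy dynamics, the linear
observation operator H, and all dimensions and noise levels below are
illustrative assumptions and are not taken from the lectures.

import numpy as np

rng = np.random.default_rng(0)

d, k, J = 2, 1, 50            # state dim, observation dim, ensemble size (assumed)
H = np.array([[1.0, 0.0]])    # assumed linear observation operator
Gamma = 0.1 * np.eye(k)       # observation noise covariance (assumed)
Sigma = 0.01 * np.eye(d)      # model noise covariance (assumed)

def model(v):
    # Toy nonlinear Markov-chain dynamics: a rotation plus a mild nonlinearity.
    theta = 0.3
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ v + 0.05 * np.tanh(v)

def enkf_step(ensemble, y):
    # One forecast/analysis cycle for an ensemble of shape (J, d).
    # Forecast: propagate every particle through the noisy dynamics.
    forecast = np.array([model(v) for v in ensemble])
    forecast += rng.multivariate_normal(np.zeros(d), Sigma, size=J)

    # Empirical mean and covariance of the forecast ensemble.
    m = forecast.mean(axis=0)
    C = (forecast - m).T @ (forecast - m) / (J - 1)

    # Kalman gain built from the empirical covariance.
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + Gamma)

    # Analysis: nudge each particle toward its own perturbed observation;
    # the shared gain K couples the particles into an interacting system.
    y_pert = y + rng.multivariate_normal(np.zeros(k), Gamma, size=J)
    return forecast + (y_pert - forecast @ H.T) @ K.T

# Usage: assimilate one observation y = 1.2 of the first state component.
ensemble = rng.normal(size=(J, d))
ensemble = enkf_step(ensemble, np.array([1.2]))
print(ensemble.mean(axis=0))   # ensemble mean state estimate

Each particle is propagated through the dynamics and then pulled toward its own
perturbed observation through the empirical Kalman gain; it is this shared
gain, built from the ensemble covariance, that couples the particles and
motivates the mean-field limit mentioned in the abstract.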


*Abstract of the colloquium*
*Operator learning: Acceleration and discovery of computational models *
Neural networks have shown great success at learning function approximators
between spaces X and Y, in the setting where X is a finite dimensional
Euclidean space and where Y is either a finite dimensional Euclidean space
(regression) or a set of finite cardinality (classification); the neural
networks learn the approximator from N data pairs (x_n, y_n). In many
problems arising in physics it is desirable to learn maps between spaces of
functions X and Y; this may be either for the purposes of scientific
discovery, or to provide cheap surrogate models which accelerate
computations. New ideas are needed to successfully address this learning
problem in a scalable, efficient manner.
In this talk I will give an overview of the methods that have been introduced in this
area and describe theoretical results underpinning the emerging
methodologies. Illustrations will be given from a variety of PDE-based
problems including learning the solution operator for dissipative PDEs,
learning the homogenization operator in various settings, and learning the
smoothing operator in data assimilation.
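
As a deliberately simple illustration of learning a map between function spaces
from N data pairs, the following Python/NumPy sketch fits a linear surrogate for
the solution operator of a one-dimensional Poisson problem discretised on a
grid. The choice of problem, grid, and plain least-squares surrogate are
assumptions made purely for illustration; the neural-operator methods discussed
in the colloquium go well beyond this baseline.

import numpy as np

rng = np.random.default_rng(0)
m = 64                                   # grid resolution on (0, 1) (assumed)
x = np.linspace(0.0, 1.0, m + 2)[1:-1]   # interior grid points
h = x[1] - x[0]

# Ground-truth operator G: f -> u solving -u'' = f with u(0) = u(1) = 0,
# via the standard three-point Laplacian (used only to manufacture data).
L = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
G = np.linalg.inv(L)

# N data pairs (f_n, u_n): random smooth inputs and their images under G.
N = 200
freqs = np.arange(1, 6)
coeffs = rng.normal(size=(N, freqs.size)) / freqs          # decaying spectrum
F = coeffs @ np.sin(np.pi * np.outer(freqs, x))            # inputs, shape (N, m)
U = F @ G.T                                                # outputs, shape (N, m)

# "Learn" the discretised operator from the data pairs alone by least squares.
X, *_ = np.linalg.lstsq(F, U, rcond=None)
G_hat = X.T

# Test on an input lying in the span of the training inputs, where this
# crude surrogate can be accurate.
f_test = np.sin(3.0 * np.pi * x)
u_pred, u_true = G_hat @ f_test, G @ f_test
print("relative error:", np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true))

With only five Fourier modes in the training inputs, the least-squares fit
recovers the operator only on that low-dimensional subspace; handling genuinely
new inputs, nonlinear operators, and changes of discretisation is the kind of
problem the methods surveyed in the colloquium address.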

