No subject


Tue 26 May 2020 10:47:54 CEST


Dear PhD students,

For those interested, I am forwarding the programme of the intensive course
on ML for Non-matrix data given by Prof. Giacomo Boracchi of PoliMI.

Kind regards,
    Luca Pavarino

---------- Forwarded message ---------
From: Giacomo Boracchi <giacomo.boracchi@polimi.it>
Date: Mon, 1 Jun 2020 at 01:51
Subject: programme of the course ML for Non-matrix data



PhD Course on Machine Learning for Non-Matrix Data

Politecnico di Milano

Organizers: Giacomo Boracchi, Cesare Alippi, Matteo Matteucci



Overview:

Deep learning models have proven to be very successful in multiple fields of
science and engineering, ranging from autonomous driving to human-machine
interaction. Deep networks and data-driven models have often outperformed
traditional hand-crafted algorithms and achieved super-human performance in
solving many complex tasks, such as image recognition.

The vast majority of these methods, however, are still meant for numerical
input data represented as vectors or matrices, like images. More recently,
the deep-learning paradigm has been successfully extended to cover
non-matrix data, which are challenging due to their sparse and scattered
nature (e.g., point clouds or 3D meshes) or the presence of relational
information (e.g., graphs). Neural-based architectures have been proposed to
process input data such as graphs and point clouds: such extensions were not
straightforward, and they indicate one of the most interesting research
directions in computer vision and pattern recognition.



Mission and goal:

This course aims at presenting data-driven methods for handling non-matrix
data, i.e., data that are not represented as arrays. The course will give an
overview of machine learning and deep learning models for handling graphs,
point clouds, texts and data in bioinformatics. Moreover, the most relevant
approaches in reinforcement learning and self-supervised learning will be
presented.



Dates:

From June 23rd to June 26th, 6 seminars of 4 hours each.


Teaching Modality:

Online lectures in an MS Teams session. Links will be shared with the
registered participants.



Registration:

Students interested in attending the PhD Course are welcome to register at

https://docs.google.com/forms/d/1BUCCmjAWDwDO5WCiQEBcDNGiSlcLrx2TeRhz-YhLVt4/viewform?edit_requested=true



There are no registration fees (free attendance, compulsory registration).



Exam:

PhD students from Politecnico di Milano and affiliated PhD programs are
entitled to take an oral exam for ECTS accrual. Attendance certificates
will be provided upon request.



Contacts:

For any questions or enquiries, contact Giacomo Boracchi at
giacomo.boracchi@polimi.it or refer to the course webpage
https://boracchi.faculty.polimi.it/teaching/Non-Matrix.htm





Invited Speakers:



Alessandro Giusti

Senior Researcher at IDSIA, Lugano

Title: Self-supervised Learning and Domain Adaptation

Tuesday June 23rd 14:30 - 18:30



Alessandro Lazaric

Facebook Paris

Title: Reinforcement Learning and Application of Deep-Learning Models in RL

Wednesday June 24th 9:00 - 13:00



Mark Carman

Politecnico di Milano

Title: Deep Learning Models for Text Mining and Analysis

Wednesday June 24th 14:30 - 18:30



Jonathan Masci

NNAISENSE SA

Title: Machine Learning and Deep Learning Models for Handling Graphs

Thursday June 25th 9:00 - 13:00



Maks Ovsjanikov

Laboratoire d'Informatique (LIX), École Polytechnique, France

Title: 3D Shape Matching and Registration

Thursday June 25th 14:30 - 18:30



Gianluca Bontempi

Université Libre de Bruxelles, Belgium

Title: Machine Learning to Infer Causality and Its Application in
Bioinformatics

Friday June 26th 9:00 - 13:00



Schedule and Abstracts



Tuesday, June 23rd, 14:30 - 18:30

Title: Self-supervised Learning and Domain Adaptation

Alessandro Giusti

Senior Researcher at Dalle Molle Institute for Artificial Intelligence
(IDSIA, USI-SUPSI), Switzerland



Abstract:

Real-world applications of machine learning often face challenges due to
two main issues which recur in many application scenarios: the cost of
acquiring reliable, large, labeled training datasets; and the difficulty in
generalizing trained models to the deployment domain. The tutorial will
cover a set of state-of-the-art techniques to overcome these issues.

First, we discuss several successful examples of self-supervised learning,
a classic approach in robotics which consists in the automated acquisition
of ground truth labels by exploiting multiple sensors during the robot's
operation; more recently, a related but broader line of research has grown
in the field of deep learning, which aims to use the data itself as a
supervisory signal, based on simple, intuitive ideas with compelling
results.

Then, we delve into domain adaptation techniques, which tackle the issue of
handling differences between the training and the deployment domains; this
is a key challenge in many practical applications, where large datasets are
available (or cheap to acquire) in some domain (e.g. in simulations), but
models must be deployed in a different domain (e.g. the real world) where
labeled training data is expensive.  This section of the tutorial will
feature hands-on experiments by implementing state-of-the-art techniques.
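
For illustration only (a minimal sketch, not course material): one widely
used ingredient of domain adaptation is the gradient reversal layer of
DANN-style training, which lets a feature extractor be trained to fool a
domain classifier. A possible PyTorch version, with all names chosen here
for the example:

import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips and scales gradients backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient for x, no gradient for the scalar lam.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Toy usage: features flow normally to the label head, but reach the domain
# head through the reversal layer, pushing them to be domain-invariant.
feat = torch.randn(8, 32, requires_grad=True)   # placeholder feature batch
domain_head = torch.nn.Linear(32, 2)            # placeholder domain classifier
domain_logits = domain_head(grad_reverse(feat, lam=0.5))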





Wednesday, June 24th, 9:00 - 13:00

Title: Reinforcement Learning and application of deep-learning models in RL

Alessandro Lazaric

Facebook, France



Abstract:

Reinforcement learning (RL) focuses on designing agents that are able to
learn how to maximize reward in unknown dynamic environments. This very
general framework is motivated by a wide variety of applications, ranging
from recommendation systems to robotics, from treatment optimization to
computer games. Unlike in other fields of machine learning, an RL agent
needs to learn without direct supervision of the best actions to take; it
relies solely on interaction with the environment and on a (possibly
sparse and sporadic) reward signal that implicitly defines the task to
solve. Solving this problem poses several challenges, such as credit
assignment (understanding which actions performed in the past are
responsible for achieving high reward in the future), efficient
exploration of the environment (to discover how the environment behaves
and where most of the reward is), and approximation and generalization
(to generalize the experience collected in some parts of the environment
to the rest of it). In the lecture, we will mostly focus on the first and
the last of these challenges. In particular, we will study how deep
learning techniques can be effectively integrated into "standard" RL
algorithms to learn representations of the state of the environment that
allow for generalization. Some of these techniques, such as DQN and TRPO,
are nowadays at the core of the major successes of RL, such as achieving
super-human performance in games (e.g., Atari, StarCraft, Dota, and Go) as
well as in simulated and real robotic tasks.
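
For illustration (not part of the lecture material): the credit-assignment
idea behind value-based methods such as DQN is already visible in the
tabular Q-learning update, Q(s,a) <- Q(s,a) + alpha * (r + gamma *
max_a' Q(s',a') - Q(s,a)); DQN replaces the table with a neural network.
A minimal sketch, assuming a gym-like environment interface:

import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning; env is assumed to expose reset() -> state and
    step(action) -> (next_state, reward, done)."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy exploration.
            if np.random.rand() < eps:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)
            # Temporal-difference update with bootstrapped target.
            target = r + gamma * np.max(Q[s_next]) * (not done)
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q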





Wednesday, June 24th, 14:30 - 18:30

Title: Deep Learning Models for Text Mining

Mark Carman

Politecnico di Milano, Italy



Abstract:

Deep Learning has recently revolutionised the area of text processing. Up
until a few years ago, it was inconceivable that one might try to train
a classifier on text in one language and then apply it directly to text in
another language (without any form of training on the latter). Now it is
commonplace to do so. This is possible through the use of powerful language
models that have been pre-trained on large multilingual corpora. The
application of sophisticated unsupervised pre-training thus provides the
ability to easily transfer knowledge from one domain (or natural language)
to another.

In this talk I'll run through a brief history of language and sequence
modelling techniques. I'll describe the state-of-the-art transformer
architectures that are used to build famous models like GPT-2 and BERT.
We'll discuss how these models can be used for various types of prediction
problems, and describe some interesting applications to problems in
multilingual classification, image question answering, data integration,
and bioinformatics.
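
For illustration, a minimal sketch of this cross-lingual transfer idea,
assuming the Hugging Face transformers and scikit-learn libraries, a public
multilingual BERT checkpoint, and toy placeholder data:

import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

name = "bert-base-multilingual-cased"   # any similar multilingual model works
tok = AutoTokenizer.from_pretrained(name)
enc = AutoModel.from_pretrained(name)

def embed(texts):
    """Mean-pooled hidden states as fixed-size sentence features."""
    with torch.no_grad():
        batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
        hidden = enc(**batch).last_hidden_state          # (batch, tokens, dim)
        mask = batch["attention_mask"].unsqueeze(-1)
        return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

# Train a classifier on English examples, apply it to Italian text: the
# shared multilingual embedding space is what makes the transfer possible.
X_en = embed(["great movie", "terrible movie"])
clf = LogisticRegression().fit(X_en, [1, 0])
print(clf.predict(embed(["film bellissimo"])))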





Thursday, June 25th, 9:00 - 13:00

Title: Deep Learning on Graphs and Structured Computation Models

Jonathan Masci

NNAISENSE SA, Switzerland



Abstract:

Deep learning methods have achieved unprecedented performance in computer
vision, natural language processing and speech analysis, enabling many
industry-first applications. Autonomous driving, image synthesis and deep
reinforcement learning are just a few examples of what is now possible on
grid-structured data with deep learning at scale on GPUs and dedicated
hardware.



However, tasks for which data comes arranged on grids and sequences cover
only a small fraction of the fundamental problems of interest. Most of the
interesting problems have, in fact, to deal with data that lie on
non-Euclidean domains for which deep learning methods were not originally
designed. The need to operate powerful non-linear data-driven models on
such data led to the creation of Geometric Deep Learning, a new and rapidly
growing area of research that focuses on methods and applications for graph
and manifold structured data.



Although the field is still in its infancy, it can already list numerous
breakthroughs on classic graph theory problems such as graph matching, 3D
shape analysis and registration, fMRI and structural connectivity networks,
scene reconstruction and parsing, drug design and protein synthesis. At the
core of this new wave of deep learning successes is the ability of models
to directly deal with non-Euclidean data through generalizations of
convolution and subsampling operators and, more generally, thanks to
models that can use structure to induce computation in what I call the
Structured Computation Model.



The lecture will start with a pragmatic introduction to graph convolutions,
in both the spectral and the spatial domain, and to the message passing
framework. Applications and recent achievements in the field will then
follow, starting with node and graph classification in the inductive and
transductive settings, and progressing to finally realize that popular
methods in meta-learning, one-shot and few-shot learning, and structured
latent-space models are particular cases of the Structured Computation
Model. I will show how such a general and unified framework can help the
cross-fertilization of different disciplines to achieve better results,
faster.
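
For illustration (a sketch, not the lecture's code): a single spatial graph
convolution step in the popular GCN form H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)
can be written in a few lines of numpy:

import numpy as np

def gcn_layer(A, H, W):
    """One propagation step: normalise the adjacency (with self-loops),
    mix neighbour features, then apply a linear map and a ReLU."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalisation
    return np.maximum(A_norm @ H @ W, 0.0)      # message passing + ReLU

# Toy graph: 4 nodes, 3-dimensional features, 2 output channels.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.randn(4, 3)
W = np.random.randn(3, 2)
print(gcn_layer(A, H, W).shape)                 # (4, 2)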

The lecture will finally give an outlook on where the field is going and on
new and exciting research directions and industrial applications that are
waiting to be revolutionized.





Thursday, June 25th, 14:30 - 18:30

Title: Deep learning-based approaches for 3D shape matching and comparison

Maks Ovsjanikov

Laboratoire d'Informatique (LIX), École Polytechnique, France



Abstract: In this talk, I will describe several learning-based techniques
for shape comparison and for computing correspondences between non-rigid 3D
shapes. This problem arises in many areas of shape processing, from
statistical shape analysis to anomaly detection and deformation transfer,
among others. Traditionally this problem has been tackled with purely
axiomatic methods, but with the recent availability of large-scale
datasets, new approaches have been proposed that exploit learning for
finding dense maps (correspondences) between 3D shapes. In this talk, I
will give an overview of recent successful methods in this area and will
especially highlight how geometric information and principles can be
injected into the learning pipeline, resulting in robust and effective
matching methods (both supervised and unsupervised). Finally, time
permitting, I will also describe a link between matching and 3D shape
synthesis, pointing out how similar methods can be used to achieve both
tasks.
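
For illustration, the functional map formulation that underlies much of this
line of work reduces correspondence estimation to a small linear-algebra
problem: given matrices A and B of corresponding descriptor coefficients in
the spectral bases of the two shapes, one seeks C with C A ~ B. A minimal
numpy sketch with placeholder data:

import numpy as np

def fit_functional_map(A, B):
    """Least-squares C such that C @ A is close to B; A and B are (k x q)
    coefficient matrices of q corresponding descriptor functions."""
    # min_C ||C A - B||_F  <=>  solve A^T C^T = B^T in the least-squares sense.
    C_T, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
    return C_T.T

k, q = 30, 50                                   # basis size, number of descriptors
A = np.random.randn(k, q)
C_true = np.random.randn(k, k)
B = C_true @ A + 0.01 * np.random.randn(k, q)
C_est = fit_functional_map(A, B)
print(np.linalg.norm(C_est - C_true) / np.linalg.norm(C_true))  # small error

Learning-based variants of this pipeline typically replace hand-crafted
descriptors with learned ones and add structural penalties on C, which is
one way the geometric priors mentioned above can enter the training.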





Friday, June 26th, 9:00 - 13:00

Title: From supervised learning to causal inference in large dimensional
settings

Gianluca Bontempi

Université Libre de Bruxelles, Belgium



Abstract:

"We are drowning in data and starving for knowledge" is an old adage of
data scientists that nowadays should be rephrased into "we are drowning in
associations and starving for causality". The democratization of machine
learning software and big data platforms is increasing the risk of
ascribing causal meaning to simple and sometimes brittle associations. This
risk is particularly evident in settings (like bioinformatics, social
sciences, economics) characterised by high dimensionality, multivariate
interactions and dynamic behaviour, where direct manipulation is not only
unethical but also impractical. The conventional ways to recover a causal
structure from observational data are score-based and constraint-based
algorithms. Their limitations, mainly in high dimension, opened the way to
alternative learning algorithms which pose the problem of causal inference
as the classification of probability distributions. The rationale of those
algorithms is that the existence of a causal relationship induces a
constraint on the observational multivariate distribution. In other words,
causality leaves footprints in the data distribution that can hopefully be
used to reduce the uncertainty about the causal structure. The first part
of the presentation will introduce some basics of causal inference and will
discuss the state of the art on machine learning for causality (notably
causal feature selection) and some applications to bioinformatics. The
second part of the talk will focus on the D2C approach, which featurizes
observed data by means of asymmetric information-theoretic measures to
extract meaningful hints about the causal structure. The D2C algorithm
performs three steps to predict the existence of a directed causal link
between two variables in a multivariate setting: (i) it estimates the
Markov Blankets of the two variables of interest and ranks their components
in terms of their causal nature, (ii) it computes a number of asymmetric
descriptors, and (iii) it learns a classifier (e.g. a Random Forest)
returning the probability of a causal link given the descriptor values. The
final part of the presentation is more prospective and will introduce some
recent work to implement counterfactual prediction in a data-driven
setting.


-- 
Giacomo Boracchi, PhD
Dipartimento Elettronica, Informazione e Bioingegneria
DEIB, Politecnico di Milano
Via Ponzio, 34/5 20133 Milano, Italy.

http://home.dei.polimi.it/boracchi/
