[Dottorcomp] Fwd: course "Learning Sparse Representations for Image And Signal Modeling"

Luca Franco Pavarino luca.pavarino at unipv.it
Sat 30 Jan 2021 10:48:20 CET


Dear PhD students,

We would like to point you to this PhD course, which can also be attended
online.

---------- Forwarded message ---------
From: Giacomo Boracchi <giacomo.boracchi at polimi.it>
Date: Tue, 26 Jan 2021 at 16:17
Subject: course "Learning Sparse Representations for Image And Signal
Modeling"
To: Stefano Gualandi <stefano.gualandi at unipv.it>

This year I will run the third edition of the PhD course "Learning Sparse
Representations for Image and Signal Modeling"; we had talked about it, and
it might interest some of your students.
It will also be streamed in my Cisco Webex room, which may make it even
easier to attend.

The course consists of 6 lectures of 4 hours each, combining "traditional"
lectures with "laboratory" sessions in Matlab, for which I will provide
short code skeletons to be filled in where they implement the algorithms
presented in class.

Website:
http://home.deib.polimi.it/boracchi/teaching/LearningSparse.htm
e
https://www11.ceda.polimi.it/schedaincarico/schedaincarico/controller/scheda_pubblica/SchedaPublic.do?&evn_default=evento&c_classe=746348&polij_device_category=DESKTOP&__pj0=0&__pj1=8be18d617f9bbeab6cad76de3aed4947

*Calendar*:
- 09/02/2021 from 14:00 to 18:00, Sala Conferenze Emilio Gatti, Building 20, Ground Floor
- 12/02/2021 from 14:00 to 18:00, Sala Conferenze Emilio Gatti, Building 20, Ground Floor
- 16/02/2021 from 14:00 to 18:00, Sala Conferenze Emilio Gatti, Building 20, Ground Floor
- 24/02/2021 from 14:00 to 18:00, Sala Conferenze Emilio Gatti, Building 20, Ground Floor
- 26/02/2021 from 14:00 to 18:00, Sala Conferenze Emilio Gatti, Building 20, Ground Floor
- 03/03/2021 from 14:00 to 19:00, Sala Conferenze Emilio Gatti, Building 20, Ground Floor

*Subject of the course*
A large amount of multidisciplinary research has been conducted on sparse
representations, i.e., models that describe signals/images as a linear
combination of a few atoms belonging to a dictionary. Sparse representation
theory builds on solid mathematical ground and has been applied in very
diverse domains. Most importantly, these techniques nowadays represent one
of the leading tools for solving ill-posed inverse problems in many areas,
including image/signal processing, statistics, machine learning, computer
vision, and pattern recognition. In many cases, sparsity is the key to
achieving state-of-the-art performance in applications such as denoising,
deconvolution, inpainting, superresolution, and some image classification
problems.
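As a concrete illustration of the sparse model just described, here is a
minimal NumPy sketch (purely illustrative, not course material; all names
and sizes are my own): a signal y built as a linear combination of k = 3
atoms of a dictionary D.

import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 256, 3                  # signal length, dictionary size, sparsity

D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)        # dictionary with unit-norm atoms (columns)

x = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
x[support] = rng.standard_normal(k)   # only k of the m coefficients are nonzero

y = D @ x                             # y is exactly k-sparse w.r.t. D
print(f"nonzero coefficients: {np.count_nonzero(x)} out of {m}")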

The course presents the basic theory of sparse representations and
dictionary learning, and illustrates the most successful applications of
sparse models in imaging/signal processing. Particular emphasis will be
given to modern proximal methods for solving convex optimization problems
such as ℓ1 regularization (e.g., BPDN, LASSO). Recent developments in
sparse models will also be overviewed.

*Mission and Goals*
The main goal of this course is to provide students with an understanding
of the most important aspects of the theory underlying sparse
representations and, more generally, of sparsity as a form of
regularization in learning problems. Students will have the opportunity to
develop and understand the main algorithms for:
1) learning sparse models,
2) computing sparse representations w.r.t. the learned model,
3) solving optimization problems involving sparsity as a regularization prior.

These methods are widely applicable in computer science and will provide a
useful background for the students' research.
In particular, this course aims at:
- Presenting the most important aspects of the theory underlying sparse
representations, and in particular the sparse-coding and
dictionary-learning problems.
- Illustrating the main algorithms for sparse coding and dictionary
learning, with a particular emphasis on solutions of convex optimization
problems that are widely encountered in engineering.
- Providing a broad overview of the applications involving sparse
representations, in particular those in the imaging field.
- Providing students with a solid understanding of sparse representations
by means of guided computer laboratory hours, where the presented
algorithms will be implemented and tested in imaging applications.
- Providing an overview of sparsity as a general regularization prior and
introducing recent developments such as convolutional sparsity.

*Teaching Organization*
The course consists of traditional classes and computer-laboratory hours.
During the computer labs, students will be guided through the
implementation of simplified versions of the presented algorithms. Imaging
problems, including denoising, inpainting, and anomaly detection, will
serve as running examples throughout the labs. Matlab is the preferred
programming language (though you are of course welcome to use Python), and
students are invited to work on their own laptops.

*Detailed course program*

Basics of linear representations and vector-space modeling
- Introduction: getting familiar with transform-domain representations and
sparsity.
- Basic notions: linear representations, norms, orthonormal
bases/projections, sparsity in the strict ℓ0 sense.
- Computer Lab: projection w.r.t. an orthonormal basis, computing sparse
solutions; denoising introduced as a running example (see the sketch after
this list).
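To give a flavor of this first lab, here is an illustrative NumPy sketch
(not the actual lab code; I use a random orthonormal basis as a stand-in
for whatever concrete transform the lab adopts): denoising by projecting
onto an orthonormal basis and hard-thresholding the coefficients.

import numpy as np

rng = np.random.default_rng(0)
n = 128

# A random orthonormal basis via QR (stand-in for a concrete transform).
U, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Synthetic clean signal, sparse in the basis U, plus Gaussian noise.
coeffs = np.zeros(n)
coeffs[rng.choice(n, 5, replace=False)] = 5.0
clean = U @ coeffs
y = clean + 0.5 * rng.standard_normal(n)

c = U.T @ y                      # analysis: project onto the basis
c_hat = c * (np.abs(c) > 1.5)    # hard thresholding enforces l0 sparsity
y_hat = U @ c_hat                # synthesis: denoised reconstruction

print("noisy MSE:   ", np.mean((y - clean) ** 2))
print("denoised MSE:", np.mean((y_hat - clean) ** 2))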

Redundant representations and Sparse Coding (Minimum ℓ0 norm)
- Sparse Coding w.r.t. redundant dictionaries. Greedy Algorithms: Matching
Pursuit, Orthogonal Matching Pursuit.
- Computer Lab: Greedy Sparse-Coding Algorithms (see the sketch after this
list).
- An overview of theoretical guarantees and convergence results.
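A minimal NumPy sketch of Orthogonal Matching Pursuit, one of the greedy
algorithms listed above (illustrative only; the variable names and the
small test at the end are my own, not the lab's):

import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of D to fit y."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Re-fit all selected coefficients by least squares (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Tiny check: recover a 3-sparse code w.r.t. a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(256)
x_true[[5, 80, 200]] = [1.0, -2.0, 0.5]
print(np.nonzero(omp(D, D @ x_true, k=3))[0])   # [5, 80, 200], with high probability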

Convex Relaxation of the Sparse Coding Problem (Minimum ℓ1 norm)
- Sparse coding as a convex optimization problem, and its connections with
ℓ0 solutions.
- BPDN and the LASSO, the two formulations, and ℓ1 sparsity as a
regularization prior in linear regression.
- Other norms promoting sparsity: visual intuition. Theoretical guarantees.
- Minimum ℓ1 Sparse Coding Algorithms: Iterative Reweighted Least Squares,
Proximal Methods, Iterative Soft Thresholding, ADMM.
- Computer Lab: Minimum ℓ1 Sparse Coding Algorithms (see the sketch after
this list).
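Again purely as a preview (a minimal NumPy sketch, not the course code):
Iterative Soft Thresholding (ISTA), a proximal method for the LASSO
formulation min_x 0.5*||y - Dx||_2^2 + lam*||x||_1.

import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam, n_iter=500):
    """Iterative Soft Thresholding for min_x 0.5*||y - D@x||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)         # gradient of the smooth data-fit term
        x = soft_threshold(x - grad / L, lam / L)   # gradient step + prox
    return x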

Dictionary Learning
- Dictionary Learning Algorithms: Gradient Descent, MOD, KSVD
- Task-Driven Dictionary Learning for supervised learning tasks
- Computer Lab: Dictionary Learning (see the sketch after this list)
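A sketch of one of the dictionary-learning algorithms listed above, the
Method of Optimal Directions (MOD). Illustrative only, and it assumes the
omp() function from the OMP sketch earlier in this message.

import numpy as np
# Assumes the omp() function defined in the OMP sketch above.

def mod_dictionary_learning(Y, m, k, n_iter=10, seed=0):
    """Method of Optimal Directions: alternate sparse coding of the training
    signals (columns of Y) with a least-squares dictionary update D = Y X^+."""
    rng = np.random.default_rng(seed)
    n, N = Y.shape
    D = rng.standard_normal((n, m))
    D /= np.linalg.norm(D, axis=0)              # random unit-norm initialization
    for _ in range(n_iter):
        # Sparse-coding step: a k-sparse code for each training signal.
        X = np.column_stack([omp(D, Y[:, i], k) for i in range(N)])
        # Dictionary update: least-squares fit via the pseudoinverse.
        D = Y @ np.linalg.pinv(X)
        D /= np.linalg.norm(D, axis=0) + 1e-12  # renormalize the atoms
    return D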

Structured Sparsity
- Joint Sparsity, Group Sparsity, Sparse Coding Algorithms, Group LASSO
- Extended Problems: Matrix Completion / Robust PCA
- Computer Lab: Structured Sparsity Algorithms (see the sketch after this list)
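As a preview of the structured-sparsity topic (an illustrative NumPy
sketch; the grouping below is my own toy example): the proximal operator of
the group-LASSO penalty, which shrinks whole groups of coefficients to zero
at once.

import numpy as np

def group_soft_threshold(x, groups, t):
    """Proximal operator of the group-LASSO penalty t * sum_g ||x_g||_2:
    each group is shrunk by its Euclidean norm; small groups vanish entirely."""
    out = np.zeros_like(x)
    for g in groups:                     # each g is an index array for one group
        norm = np.linalg.norm(x[g])
        if norm > t:
            out[g] = (1.0 - t / norm) * x[g]
    return out

# Toy example: 12 coefficients in 4 contiguous groups of 3.
groups = [np.arange(i, i + 3) for i in range(0, 12, 3)]
x = np.array([3., 2., 1., .1, .1, .1, 0., 0., 4., -.2, .3, .1])
print(group_soft_threshold(x, groups, 1.0))   # groups 2 and 4 are zeroed out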

Extended sparse models
- Convolutional Sparsity and its connections with Convolutional Networks.
- Double Sparsity

Sparsity in engineering applications
- Relevant unsupervised learning problems involving sparsity as a
regularization prior: Image Denoising, Inpainting, Superresolution,
Deblurring.
- Supervised learning problems addressed via Dictionary Learning and Sparse
Coding: Classification and Anomaly Detection.
- Computer Lab: Classification and Anomaly Detection in images

-- 
Giacomo Boracchi, PhD
Dipartimento Elettronica, Informazione e Bioingegneria
DEIB, Politecnico di Milano
Via Ponzio, 34/5 20133 Milano, Italy.

https://boracchi.faculty.polimi.it/