Guidelines: The lectures are designed for a four-unit, one-semester, fourteen-week course; each lecture lasts one and a half hours. The lectures generally follow
the order of the book chapters. They are intentionally organized into several modules, so that future instructors can easily customize them for
courses with a different focus, length, or depth. Instructors who would like to obtain editable LaTeX files of the slides for their own lectures
should contact Yi Ma.
Now, be prepared for a journey from high-dimensional to low-dimensional, from linear to nonlinear, from convex to nonconvex, from shallow to deep, and from theoretical to practical.
Or, at a more technical and computational level, this is also a journey from L0, to L1, and to L4; from rank, to nuclear norm, and to coding rate; and from
one subspace, to multiple subspaces, and to multiple submanifolds.
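The L0-to-L1 step of this journey can be made concrete: basis pursuit replaces the intractable L0 sparsity objective with an L1 norm, turning sparse recovery into a linear program. Below is a minimal sketch in Python; the dimensions, the random Gaussian measurements, and the use of SciPy's linprog are illustrative choices, not taken from the course materials.

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit: min ||x||_1  subject to  A x = b,
# the convex (L1) relaxation of the NP-hard L0 sparse recovery problem.
rng = np.random.default_rng(0)
m, n, k = 20, 50, 3                      # measurements, ambient dimension, sparsity
A = rng.standard_normal((m, n))          # random Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true                           # underdetermined observations (m < n)

# Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u) + sum(v);
# this makes the L1 objective linear in the stacked variable [u; v].
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])                # enforce A(u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With these dimensions the L1 program typically recovers the sparse vector exactly even though the system is heavily underdetermined, which is the phenomenon the convex-recovery lectures (04 through 08) analyze.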
Chapters 1 and 2:
- Lecture 01: Introduction: Background, History, and Overview.
- Lecture 02: Sparse Models and L0 Minimization.
- Lecture 03: Relaxing the Sparse Recovery Problem via L1 Minimization.
Chapters 3 and 6:
- Lecture 04: Convex Methods for Correct Sparse Recovery.
- Lecture 05: Convex Sparse Recovery: Towards Stronger Correctness Results.
- Lecture 06: Convex Sparse Recovery: Matrices with Restricted Isometry Property.
- Lecture 07: Convex Sparse Recovery: Noisy Observations and Approximate Sparsity.
- Lecture 08: Convex Sparse Recovery: Phase Transition in Sparse Recovery.
Chapters 4 and 5:
- Lecture 09: Convex Low-Rank Matrix Recovery: Random Measurements.
- Lecture 10: Convex Low-Rank Matrix Recovery: Matrix Completion.
- Lecture 11: Convex Low-Rank and Sparse Decomposition: Algorithms.
- Lecture 12A: Convex Low-Rank and Sparse Decomposition: Analysis (Jamboard notes by Jiantao Jiao).
- Lecture 12B: Convex Low-Rank and Sparse Decomposition: Extensions.
- Lecture 13: Convex Optimization for Structured Data: Unconstrained.
- Lecture 14: Convex Optimization for Structured Data: Constrained & Scalable.
Chapters 7 and 9:
- Lecture 15: Nonconvex Formulations: Sparsifying Dictionary Learning.
- Lecture 16: Nonconvex Methods: Dictionary Learning via L4 Maximization.
- Lecture 17: Nonconvex Optimization: First Order Methods.
- Lecture 18: Nonconvex Optimization: Power Iteration and Fixed Point.
Chapters 12 and 15:
- Lecture 19: Nonlinear Structured Models: Sparsity in Convolution and Deconvolution.
- Lecture 20: Nonlinear Structured Models: Transform Invariant Low-Rank Texture.
Chapter 16 and beyond:
- Lecture 21: Deep Discriminative Models: The Principle of Maximal Coding Rate Reduction.
- Lecture 22: Deep Discriminative Models: White-Box Deep Convolution Networks from Rate Reduction.
- Lecture 23: Deep Generative Models: Closed-Loop Data Transcription via Minimaxing Rate Reduction.
Notes: Slides for Lecture 12A, on the analysis and correctness proof of Principal Component Pursuit,
are based on Jamboard notes from Professor Jiantao Jiao. Lecture 12B covers extensions and generalizations of PCP. Lectures 21 and 22 share the same (long) deck of slides.
(This version was last updated on 2021-12-28.)
Additional notes for discussion sessions of the course (administered by teaching assistant Simon Zhai):
Additional presentations and guest lectures (by the teaching staff or colleagues):
©2021 Yi Ma
Last modified: Thu December 16 11:15:08 PST 2021