SC-2: Learning Nonlinear and Deep Low-Dimensional Representations from High-Dimensional Data: from Theory to Practice (Day 2)
Qing Qu, University of Michigan; Sam Buchanan, Toyota Technological Institute at Chicago; Yi Ma, UC Berkeley; Zhangyang "Atlas" Wang, University of Texas at Austin; John Wright, Columbia University; Yuqian Zhang, Rutgers University; Zhihui Zhu, Ohio State University
IEEE Members: $11.00
Non-members: $15.00
Over the past two decades, our signal processing community has witnessed the explosive development and power of low-dimensional models for high-dimensional data, which have revolutionized many applications across engineering and science. At the same time, the community is transitioning to embrace the power of modern machine learning, especially deep learning, which brings unprecedented new challenges in modeling and interpretability. This short course provides a timely tutorial that uniquely bridges fundamental mathematical models from signal processing to contemporary topics in nonconvex optimization and deep learning through low-dimensional models.
This short course will show (i) how these low-dimensional models and principles provide a valuable lens for formulating problems and understanding the behavior of methods, and (ii) how ideas from nonconvex optimization and deep learning make these core models practical for real-world problems with nonlinear data and observation models, measurement nonidealities, and related complications. The course will begin with fundamental linear low-dimensional models (e.g., basic sparse and low-rank models) and their motivating engineering applications, followed by a suite of scalable and efficient optimization methods. Building on these developments, we will introduce nonlinear low-dimensional models for several fundamental learning and inverse problems, together with correctness guarantees and efficient nonconvex optimization algorithms. Finally, we will discuss strong conceptual, algorithmic, and theoretical connections between low-dimensional structures and deep models, providing new perspectives for understanding state-of-the-art deep networks, as well as new principles for designing deep networks that learn low-dimensional structures, with clear interpretability and practical benefits.
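As a rough, illustrative sketch of the kind of basic sparse model and scalable first-order method the course description refers to (not actual course material), the Python snippet below recovers a sparse vector from compressed linear measurements by solving a LASSO problem with the iterative soft-thresholding algorithm (ISTA); the problem sizes, regularization weight, and iteration count are arbitrary placeholders chosen for this toy example.

    import numpy as np

    def soft_threshold(z, tau):
        # Proximal operator of tau * ||.||_1 (elementwise soft-thresholding).
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    def ista(A, y, lam, n_iters=500):
        # Solve min_x 0.5 * ||A x - y||_2^2 + lam * ||x||_1 by proximal gradient descent.
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the smooth term
        x = np.zeros(A.shape[1])
        for _ in range(n_iters):
            grad = A.T @ (A @ x - y)                # gradient of the least-squares data fit
            x = soft_threshold(x - step * grad, step * lam)
        return x

    # Toy example: recover a 5-sparse vector in 200 dimensions from 80 random measurements.
    rng = np.random.default_rng(0)
    n, d, k = 80, 200, 5
    A = rng.standard_normal((n, d)) / np.sqrt(n)
    x_true = np.zeros(d)
    x_true[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true
    x_hat = ista(A, y, lam=0.01)
    print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

This simple linear setting is only a starting point; the nonlinear models, nonconvex optimization guarantees, and deep-network connections described above build on it.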