ELEC 570

Syllabus – FALL 2021

DISTRIBUTED METHODS FOR OPTIMIZATION AND MACHINE LEARNING

PRELIMINARY VERSION (08/06/21), SUBJECT TO CHANGE

Instructor:                   César A. Uribe – cauribe@rice.edu  (DH2048)

                                    Office Hours: By appointment

Course Website:          https://cauribe.rice.edu/ece570/

Prerequisites:               Basic linear algebra, probability theory, and graph theory. Knowledge of control theory, optimization theory, signal processing, and machine learning is helpful.

Time and Place:           4:00 PM – 5:15 PM TR, Place TBD

Course Rationale:        Graduate-level course in the ECE department that introduces modern design and analysis methods for distributed and decentralized algorithms, with applications to signal processing, optimization, control, and machine learning. STATS, CAAM, CS, MECH, and other students are welcome.

Summary:                   

This course will provide a comprehensive presentation of modern design and analysis methods for distributed and decentralized algorithms for signal processing, optimization, control, and machine learning applications. The course will focus on mathematical analysis techniques for the iteration, computational, and communication complexity of distributed data processing methods over networks, where data is generated, stored, or processed by groups of computational units or agents connected via communication channels. The aim is to introduce modern approaches for distributed information processing with a deep understanding of the effects of communication constraints, network topology, computational resources, and robustness. The contents of this course lie at the intersection of network science, optimization, and machine learning. Topics will range from the classical literature on distributed decision making, opinion dynamics, distributed optimization, and decentralized control to more recent topics in distributed machine learning, federated learning, and social learning.
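
To make the setting concrete, a prototypical problem studied throughout the course is the distributed optimization problem. A standard (and deliberately simplified) formulation, written here in LaTeX notation, is

    \min_{x \in \mathbb{R}^d} \; f(x) := \frac{1}{n} \sum_{i=1}^{n} f_i(x),

where each local objective f_i is held privately by agent i in a network of n agents, and agents may exchange information only with their neighbors in a communication graph. Many of the course topics (consensus, DGD, dual-based methods, federated learning) can be read as different algorithmic answers to this problem under different assumptions.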

Learning Outcomes:

At the conclusion of this course, students will be able to:

  • Identify the algorithmic constraints imposed by engineering problems based on the way data is generated, stored and processed.
  • Design appropriate distributed information aggregation and processing algorithms for the solution of optimization and machine learning problems over distributed networks.
  • Estimate the price of decentralization in algorithmic design, the robustness trade-offs introduced by distributed computations, and the computational and communication complexities of distributed algorithms executed over networks.
  • Simulate distributed algorithms for applications to machine learning, signal processing, and control on computational platforms (a minimal illustrative sketch follows this list).
  • Critically analyze recent literature in distributed optimization and machine learning to identify the underlying assumptions, and place the obtained results in context with the state-of-the-art.
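
As a preview of the simulation outcome above, the following is a minimal, illustrative Python/NumPy sketch of Distributed Gradient Descent (DGD), one of the methods covered in the course. The quadratic local objectives, the ring topology, the mixing matrix, and the step size are all assumptions chosen purely for illustration.

    # Minimal DGD sketch over a ring network (illustrative assumptions only).
    import numpy as np

    n, d, iters, alpha = 10, 3, 500, 0.05   # agents, dimension, iterations, step size
    rng = np.random.default_rng(0)

    # Local objectives f_i(x) = 0.5 * ||x - b_i||^2, so grad f_i(x) = x - b_i;
    # the average objective is minimized at the mean of the b_i.
    b = rng.normal(size=(n, d))

    # Doubly stochastic mixing matrix for a ring graph (lazy Metropolis-style weights).
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25

    x = np.zeros((n, d))                    # row i is agent i's local iterate
    for _ in range(iters):
        # DGD update: average with neighbors, then take a local gradient step.
        x = W @ x - alpha * (x - b)

    x_star = b.mean(axis=0)                 # minimizer of the average objective
    print("max deviation from the optimum:", np.abs(x - x_star).max())

With a constant step size, plain DGD reaches only a neighborhood of the optimum (the deviation printed above scales with the step size); diminishing step sizes or gradient-tracking corrections recover exact convergence, a trade-off examined in the DGD lectures.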

Grading Policy:

  • Collaborative Project: 40%
  • Individual Project (60% total): Proposal 10%, Midway Check 15%, Presentation 10%, Report 25%

Collaborative Project:  

We will develop a collaborative course project called “The Decentropedia.” The objective will be to map, as much as we can, the large number of distributed and decentralized algorithms into a coherent reference manual. A plethora of distributed and decentralized methods are available in the literature, and several properties can be assumed for specific problems, e.g., function classes (smoothness, (non)-convexity, stochasticity, etc.), network model (static, time-varying, directedness, connectivity, randomness, etc.), oracle model (zeroth-, first-, or second-order methods), communication constraints (quantization, compression, channel capacity, noisy channels, etc.), optimization constraints (linear, non-linear, convex, set constraints, etc.), and robustness (adversarial agents, asynchrony, message losses, delays), just to name a few. Application-oriented papers are welcome as well, e.g., robotics, computer vision, logistics, biosciences, physics.

Homework for the Decentropedia will consist of weekly summaries of two to three reference papers of your choosing, with a concise description of the mathematical problem being solved, the assumptions and models, the main result (theorem), and the application. Think of each summary as an infographic for the paper that will serve as a reference for future researchers looking for a quick statement of the assumptions and results. Every week, an assigned scribe will compile the previous week's chapters into a common document: “The Decentropedia.” Scribes will not be required to deliver their own chapters during the week they serve.

A total of nine (9) Decentropedia deliverables are scheduled. However, you will have two weeks off (of your choosing) where no homework needs to be delivered by you, no questions asked. The first homework after a week off will require three paper summaries. A third week off can be arranged in special circumstances (e.g., personal calamity or health issues), subject to approval by the instructor. Course attendees must serve as scribe at least once. Homework is due at midnight (anywhere on Earth) on Fridays.

Individual Project:

Each course attendee will develop an individual project for the class. The objective will be to study a distributed/decentralized version of one research topic in detail; a great example would be a distributed problem formulation related to your PhD thesis topic. The project can be application-oriented, theory-oriented, or both. Application-oriented projects use existing distributed algorithms to solve a specific problem under a new distributed/decentralized formulation; results for this class of projects will usually be simulations or practical implementations. Theory-oriented projects study a distributed/decentralized problem class where a clear theoretical gap has been identified; results for this class of projects will usually be theorems establishing convergence or convergence rates. Any convex combination of the two classes above is welcome, as are alternative projects based on attendees' individual interests. Each student is expected to give a conference-style presentation at the end of the semester (15 min).

Tentative Course Schedule

Week | Date       | Contents                                                  | Notes
 1   | 8/24/2021  | Course Overview                                           |
     | 8/26/2021  | Refresher on convex analysis                              |
 2   | 8/31/2021  | Refresher on optimization methods                         |
     | 9/2/2021   | Refresher on graph theory                                 |
     | 9/3/2021   |                                                           | Registration Deadline
 3   | 9/7/2021   | The distributed optimization problem: Part I              |
     | 9/9/2021   | The distributed optimization problem: Part II             |
     | 9/10/2021  |                                                           | Proposal Deadline
 4   | 9/14/2021  | Consensus, Gossip and Random Walks: Part I                |
     | 9/16/2021  | Consensus, Gossip and Random Walks: Part II               |
     | 9/17/2021  |                                                           | Decentropedia #1
 5   | 9/21/2021  | Consensus, Gossip and Random Walks: Part III              |
     | 9/23/2021  | Epidemics and Opinion Dynamics                            |
     | 9/24/2021  |                                                           | Decentropedia #2
 6   | 9/28/2021  | Distributed Gradient Descent (DGD) and friends: Part I    |
     | 9/30/2021  | Distributed Gradient Descent (DGD) and friends: Part II   |
     | 10/1/2021  |                                                           | Decentropedia #3
 7   | 10/5/2021  | Distributed Gradient Descent (DGD) and friends: Part III  |
     | 10/7/2021  | Distributed Gradient Descent (DGD) and friends: Part IV   |
     | 10/8/2021  |                                                           | Midway Check Deadline
 8   | 10/12/2021 |                                                           | Midterm Recess
     | 10/14/2021 | Near-optimal Dual-based Algorithms: Part I                |
     | 10/15/2021 |                                                           | Decentropedia #4
 9   | 10/19/2021 | Near-optimal Dual-based Algorithms: Part II               |
     | 10/21/2021 | Application: Distributed Solution of a Linear System      |
     | 10/22/2021 |                                                           | Decentropedia #5
 10  | 10/26/2021 | Application: Distributed Optimal Transport                |
     | 10/28/2021 | Application: Distributed Inference and Estimation         |
     | 10/29/2021 |                                                           | Decentropedia #6
 11  | 11/2/2021  | Application: Distributed Control and Multiagent Robotics  |
     | 11/4/2021  | Robustness and Adversarial Agents: Part I                 |
     | 11/5/2021  |                                                           | Decentropedia #7
 12  | 11/9/2021  | Robustness and Adversarial Agents: Part II                |
     | 11/11/2021 | Federated Learning (FL) and friends: Part I               |
     | 11/12/2021 |                                                           | Decentropedia #8
 13  | 11/16/2021 | Federated Learning (FL) and friends: Part II              |
     | 11/18/2021 | Federated Learning (FL) and friends: Part III             |
     | 11/19/2021 |                                                           | Decentropedia #9
 14  | 11/23/2021 | Federated Learning (FL) and friends: Part IV              |
     | 11/25/2021 |                                                           | Thanksgiving
 15  | 11/30/2021 | Project Presentations                                     |
     | 12/2/2021  | Project Presentations                                     |
     | 12/3/2021  |                                                           | Last Day of Classes
 16  | 12/7/2021  | Project Presentations                                     |
     | 12/9/2021  | Project Presentations                                     |
     | 12/10/2021 |                                                           | Final Report Deadline
     | 12/14/2021 |                                                           | End of Semester
     | 12/22/2021 |                                                           | Deadline for Grades

Recommended Materials: No textbook is required. Below is a list of suggested reading materials.

  1. D. Bertsekas, Nonlinear Programming, 2nd Edition, Athena Scientific.
  2. Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, Springer.
  3. D. Bertsekas and J. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods, Athena Scientific.
  4. J. Nocedal and S. Wright, Numerical Optimization, 2nd Edition, Springer.
  5. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press.
  6. G. Lan, Lectures on Optimization Methods for Machine Learning, 2019.
  7. M. Zhu and S. Martínez, Distributed Optimization-Based Control of Multi-Agent Networks in Complex Environments, Springer, 2019.
  8. P. Giselsson and A. Rantzer (Eds.), Large-Scale and Distributed Optimization (Vol. 2227), Springer, 2018.
  9. S. Boyd, N. Parikh, and E. Chu, Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers, Now Publishers, 2011.
  10. A. Nedić, J. S. Pang, G. Scutari, and Y. Sun, Multi-agent Optimization: Cetraro, Italy 2014 (Vol. 2224), Springer, 2018.
  11. A. Nedić, Convergence Rate of Distributed Averaging Dynamics and Optimization in Networks, Foundations and Trends® in Systems and Control, 2(1), 1–100, 2015.

Honor System:

In this course, all students will be held to the standards of the Rice Honor Code, a code that you pledged to honor when you matriculated at this institution. If you are unfamiliar with the details of this code and how it is administered, you should consult the Honor System Handbook at http://honor.rice.edu/honor-system-handbook/. This handbook outlines the University’s expectations for the integrity of your academic work, the procedures for resolving alleged violations of those expectations, and the rights and responsibilities of students and faculty members throughout the process.

Attendance Policy:

While class attendance is not mandatory, regular attendance is important for maximum learning effectiveness and significantly improves performance on graded course deliverables.

Disability Support Services:

If you have a documented disability or other condition that may affect academic performance, you should: 1) make sure that this documentation is on file with Disability Support Services (111 Allen Center, adarice@rice.edu, x5841) to determine the accommodations you need; and 2) speak with the instructor to discuss your accommodation needs.

Syllabus Change Policy:

This syllabus is only a guide for the course and is subject to change with advance notice.