  • SPS Members: $49.00
  • IEEE Members: $59.00
  • Non-members: $69.00

About this Bundle

Tutorial Bundle: ICASSP 2024 Tutorial 6: Safe and Trustworthy Large Language Models (Parts 1-2), April 2024.

One of the defining characteristics of the 21st century has been the proliferation and dispersion of data and computational resources. These trends have enabled a paradigm shift in many areas of engineering, signal processing, and beyond, allowing us to design intelligent systems that build models and perform inference directly from data. These techniques have led to tremendous progress across a number of signal processing areas, ranging from speech and image processing to recommender systems, forecasting, communication systems, and power allocation.

At the same time, data is increasingly available in dispersed locations rather than in powerful central data centers. Data is generated and processed at the edge: on our mobile and IoT devices, in sensors scattered throughout "smart cities" and "smart grids", in robotic swarms, and in vehicles on the road. In order to benefit from these vast and distributed data sets while preserving communication efficiency, privacy, and robustness, we need to employ distributed learning algorithms that rely on local processing and interactions. When properly designed, these algorithms exhibit globally optimal behavior and match the performance of benchmarks that rely on central aggregation of raw data. The development of algorithms for distributed signal and information processing has been an active area of research for the past 25 years [1, 2] and has now reached a level of maturity that allows a cohesive overview to be presented in a classroom. At the same time, the recent emergence of federated learning [3] has galvanized interest in distributed learning beyond the signal processing community and led to rapid adoption by major industry players, including Google and Apple.

This short course on "Multi-Agent Optimization and Learning" provides attendees with tools for distributed optimization and learning that allow them to design intelligent distributed systems. Emphasis is placed on why algorithms work, how we can systematically develop them, and how we can quantify their performance trade-offs. We also show how to use this information to drive design decisions. This course will bring students and researchers in the signal processing community up to speed with an active area of research with solid foundations but many open questions. Practitioners will gain a fundamental understanding of distributed multi-agent systems, allowing them to identify, evaluate, and exploit their value in their respective applications.
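To make the idea of "local processing and interactions" concrete, the following is a minimal illustrative sketch (not taken from the course material) of decentralized gradient descent for a shared least-squares problem: each agent updates a local estimate using only its own data, then averages with its immediate neighbors over a ring network, and the local estimates approach the solution a central aggregator of all raw data would compute. The agent count, ring topology, step size, and iteration budget are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, samples_per_agent = 5, 3, 20

# Ground-truth model and per-agent local datasets (a shared least-squares problem).
w_true = rng.normal(size=dim)
X = [rng.normal(size=(samples_per_agent, dim)) for _ in range(n_agents)]
y = [Xk @ w_true + 0.1 * rng.normal(size=samples_per_agent) for Xk in X]

# Doubly stochastic mixing matrix for a ring network: each agent averages
# its estimate with its two immediate neighbors (illustrative topology).
W = np.zeros((n_agents, n_agents))
for k in range(n_agents):
    W[k, k] = 1 / 3
    W[k, (k - 1) % n_agents] = 1 / 3
    W[k, (k + 1) % n_agents] = 1 / 3

w = np.zeros((n_agents, dim))  # row k holds agent k's local estimate
step = 0.05                    # illustrative step size

for _ in range(1000):
    # Local processing: each agent takes a gradient step on its own data only.
    grads = np.stack([Xk.T @ (Xk @ wk - yk) / samples_per_agent
                      for Xk, yk, wk in zip(X, y, w)])
    w = w - step * grads
    # Interaction: agents exchange and average estimates with neighbors only.
    w = W @ w

# Benchmark that aggregates all raw data centrally.
w_central = np.linalg.lstsq(np.vstack(X), np.concatenate(y), rcond=None)[0]
print("max gap between local estimates and the centralized solution:",
      np.abs(w - w_central).max())
```

In this sketch, no agent ever shares its raw data; only model estimates travel between neighbors, which is the communication and privacy motivation described above. The same pattern underlies diffusion, consensus, and federated-averaging style algorithms of the kind the course abstract refers to.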

16 Oct 2024