SPS
IEEE Members: $11.00
Non-members: $15.00
Length: 01:56
Mobile phones, wearable devices, autonomous vehicles, smart homes, and hospitals are examples of modern distributed networks generating massive amounts of data each day. Due to the growing computational power of these devices and the increasing size of the datasets, coupled with concerns over sharing private data, federated and decentralized training of statistical models has become desirable and often necessary. In these approaches, each participating device (referred to as an agent or node) has a local training dataset that is never uploaded to a server. Training data is kept locally on users' devices, and the devices act as agents performing computation on their local data to update global models of interest. Today, many industries and major companies (such as Google and Apple) are beginning to incorporate such technologies into their own products.
In applications where communication to a server becomes a bottleneck, decentralized topologies (where agents only communicate with their neighboring devices) are potential alternatives to federated topologies (where a central server connects with all remote devices). This talk falls into the broad theme of decentralized learning over graphs. Recognizing the increasing ability of many emerging technologies to collect data in a distributed and streamed manner, the talk will focus on presenting decentralized approaches where devices collect data continuously and where the underlying data-generation models can change over time. Moreover, recognizing that modern machine learning and signal processing applications (where tremendous volumes of training data are generated continuously by a massive number of heterogeneous devices) have several key properties that differentiate them from standard distributed inference applications, the talk will also present decentralized multitask approaches for learning in statistically heterogeneous settings. Multitask learning is an approach to inductive transfer learning (using what is learned for one problem to assist in another problem). It helps improve network performance relative to learning each task separately by using the domain information contained in the training signals of related tasks as an inductive bias. The working hypothesis for decentralized multitask learning is that agents are allowed to cooperate with each other to learn distinct, though related, tasks. In this talk, special emphasis will be placed on illustrating the benefit of cooperation by showing (i) how it steers the network limiting point, (ii) how different cooperation rules promote different task-relatedness models, and (iii) how and when cooperation over multitask networks outperforms non-cooperative strategies.
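To make the setting concrete, the decentralized, neighbors-only cooperation described above can be sketched with an adapt-then-combine diffusion strategy, one standard algorithm in this literature (not necessarily the specific method covered in the talk). In this illustrative example, four agents on a hypothetical ring network estimate distinct but related linear models from streaming data: each agent first takes a local stochastic-gradient (LMS) step on its own data, then averages intermediate estimates with its neighbors only. The network size, combination matrix `A`, step size `mu`, and task-perturbation level are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-agent ring network; A[k, l] > 0 only when l is a
# neighbor of k (or k itself). A is doubly stochastic.
A = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

N, M = 4, 5      # number of agents, model dimension
mu = 0.02        # step size (illustrative choice)

# Distinct but related tasks: small perturbations of a common model.
w_common = rng.standard_normal(M)
w_true = w_common + 0.05 * rng.standard_normal((N, M))

w = np.zeros((N, M))          # current estimates, one row per agent
for _ in range(2000):
    # Adapt: each agent processes one streaming sample of its own data.
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)                        # regressor
        d = u @ w_true[k] + 0.01 * rng.standard_normal()  # noisy observation
        psi[k] = w[k] + mu * (d - u @ w[k]) * u           # LMS step
    # Combine: each agent averages estimates from its neighbors only.
    # This cooperation step is what steers the network limiting point.
    w = A @ psi

mse = np.mean((w - w_true) ** 2)
```

Because the tasks are close to a common model, the averaging step pulls every agent toward a useful shared limit point; when the tasks were unrelated, this same cooperation rule would bias each agent away from its own task, which is why the choice of cooperation rule and task-relatedness model matters.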