Courses: Machine Learning
A Machine Learning Lecture Series
The IEEE Signal Processing Society (SPS) is proud to offer this free course bundle, "A Machine Learning Lecture Series," presented by Prof. Sergios Theodoridis, with course material prepared by Konstantinos Koutroumbas.
The goal of this lecture series is to introduce newcomers to the “secrets” of the machine learning (ML) discipline. At the dawn of the fourth industrial revolution, machine learning is one of the key technologies driving the advances and rapid evolution of this new historical period.
The series is intended to cover a major part of what is considered basic knowledge in machine learning. The lectures start from the definitions of regression and classification and move from the classics to the most recent advances in the field. The online lectures have been developed for those who wish to grasp the basic notions behind the methods and algorithms, not just for black-box users of ML algorithms.
The series of lectures comprises five parts.
- Part 1 deals with the basic definitions as well as the fundamentals related to regression and classification.
- Part 2 deals with the classics of classification, starting with the Bayes classifier rule and ending with classification trees and the "boosting" concept.
- Part 3 presents the notion of kernels and support vector machines.
- Part 4 focuses on deep learning, following its historical development from the classical perceptron to convolutional neural networks, recurrent neural networks, adversarial examples and GANs.
- Part 5 presents Bayesian learning, latent variables, the expectation-maximization algorithm and the variational approximation concept, with applications to Gaussian mixtures and regression.
Parts 1, 2 and 4 are a must.
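As a small taste of the material covered in Part 1, the following is a minimal sketch of least-squares linear regression on one-dimensional data. It is an illustrative example only, not course material: it fits y ≈ a·x + b via the closed-form solution (slope = covariance/variance), the simplest instance of the least-squares method introduced in lecture P1.3.

```python
# Minimal least-squares linear regression on 1-D data (illustrative sketch).
# Fits y = a*x + b by the closed-form normal-equation solution.

def fit_least_squares(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance(x, y) / variance(x); intercept from the sample means.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Noise-free data lying exactly on y = 2x + 1; the fit recovers a=2, b=1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
a, b = fit_least_squares(xs, ys)
```

With noisy data the same formula gives the line minimizing the sum of squared residuals; the lectures build from this starting point toward ridge regression and the bias-variance tradeoff.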
Earn Educational Credits for Short Courses
Access all of the Machine Learning courses below and earn educational credits and continuing education units by taking a quiz. Or, visit the Machine Learning Course page directly.
Note: Look for Professional Development Hours (PDH) and Continuing Education Units (CEU) notations on these courses in our Resource Center to acquire educational credits. Our webinars also offer PDHs.
Biography
Sergios Theodoridis is Professor Emeritus of Signal Processing and Machine Learning with the Department of Informatics and Telecommunications of the National and Kapodistrian University of Athens, Greece. He is also an adjunct Professor at Aalborg University, Denmark, where he has also served as Distinguished Professor, and he was a visiting Professor with the Shenzhen Research Institute of Big Data (SRIBD), the Chinese University of Hong Kong, Shenzhen, China (2018-2020). His research interests lie in the areas of Machine Learning and Signal Processing.
He is the author of the book “Machine Learning: From the Classics, to Deep Learning, Transformers and Diffusion Models”, Academic Press, 3rd Ed., 2024, the co-author of the best-selling book “Pattern Recognition”, Academic Press, 4th Ed., 2009, the co-author of the book “Introduction to Pattern Recognition: A MATLAB Approach”, Academic Press, 2010, the co-editor of the book “Efficient Algorithms for Signal Processing and System Identification”, Prentice Hall, 1993, and the co-author of three books in Greek, two of them for the Greek Open University. His books have been translated into Chinese, Korean, Japanese and Greek.
He is the co-author of seven papers that have received Best Paper Awards including the 2014 IEEE Signal Processing Magazine Best Paper Award and the 2009 IEEE Computational Intelligence Society Transactions on Neural Networks Outstanding Paper Award.
He has received an honorary doctoral degree (D.Sc.) from the University of Edinburgh, Scotland, UK, 2023. He is the recipient of the 2021 Institute of Electrical and Electronics Engineers (IEEE) Signal Processing Society (SPS) Norbert Wiener Award, which is the IEEE SP Society’s highest honor, the 2017 European Association for Signal and Image Processing (EURASIP) Athanasios Papoulis Award, the 2014 IEEE SPS Carl Friedrich Gauss Education Award and the 2014 EURASIP Meritorious Service Award. He has served as a Distinguished Lecturer for the IEEE SP as well as the Circuits and Systems Societies. He was Otto Mønsted Guest Professor, Technical University of Denmark, 2012, and holder of the Excellence Chair in the Department of Signal Processing and Communications, University Carlos III, Madrid, Spain, 2011.
He currently serves as Chairman of the IEEE SP Society Awards Board. He has served as Vice President of the IEEE Signal Processing Society, as President of the European Association for Signal Processing (EURASIP), as a member of the Board of Governors of the IEEE Circuits and Systems (CAS) Society, as a member of the Board of Governors (Member-at-Large) of the IEEE SP Society and as Chair of the Signal Processing Theory and Methods (SPTM) technical committee of IEEE SPS. He has served as Editor-in-Chief of the IEEE Transactions on Signal Processing, the world’s flagship journal in Signal Processing.
He is a Fellow of IET, a Corresponding Fellow of the Royal Society of Edinburgh (RSE), a Fellow of EURASIP and a Life Fellow of IEEE.
Links to Machine Learning Lecture Series on the IEEE SPS Resource Center
Part 1
- P0 - Machine Learning - An Overview
- P1.1 - Introduction I
- P1.2 - Introduction II
- P1.3 - Least Squares Linear Regression
- P1.4 - Classification
- P1.5 - Biased vs. Unbiased Estimators
- P1.6 - Cramér-Rao Bound
- P1.7 - Ridge Regression
- P1.8 - Inverse Problems and Overfitting
- P1.9 - MSE Optimal Estimator
- P1.10 - Bias-Variance Tradeoff
- P1.11 - Maximum Likelihood Method
- P1.12 - Curse of Dimensionality
- P1.13 - Cross Validation
Part 2
- P2.1 - Bayesian Rule
- P2.2 - Average Risk
- P2.3 - The Gaussian Case
- P2.4 - Minimum Distance and Naive Bayes Classifiers
- P2.5 - k-Nearest Neighbor Rule
- P2.6 - Logistic Regression
- P2.7 - Scatter Matrices
- P2.8 - Fisher's Linear Discriminant
- P2.9 - Proof of Fisher's Method
- P2.10 - Classification Trees I
- P2.11 - Classification Trees II
- P2.12 - Combining Classifiers
- P2.13 - AdaBoost
Part 3
- P3.1 - Nonlinear Models and Cover's Theorem
- P3.2 - Reproducing Kernel Hilbert Spaces
- P3.3 - Kernel Trick
- P3.4 - Examples of Kernels
- P3.5 - Representer Theorem
- P3.6 - Kernel Ridge Regression
- P3.7 - Support Vector Regression
- P3.8 - SVM - The Linearly Separable Classes Case
- P3.9 - SVM - The Nonseparable Classes Case
- P3.10 - SVM - The Hinge Loss Function
Part 4
- P4.1 - Perceptron
- P4.1a - Perceptron Follow Up
- P4.2 - Feed Forward NNs
- P4.3 - Training NNs
- P4.4 - The Backpropagation Algorithm
- P4.5 - Variants of the Gradient Descent Scheme
- P4.6 - Batch Normalization
- P4.7 - Loss Function Selection and the Softmax Activation Function
- P4.8 - ReLU Activation Function
- P4.9 - Regularization and Dropout
- P4.10 - Universal Approximation Property and Deep Networks
- P4.11 - On Optimization and Generalization Properties of Deep Networks
- P4.12 - Convolutional NNs I
- P4.13 - Convolutional NNs II
- P4.14 - Recurrent Neural Networks
- P4.15 - Attention Mechanism
- P4.16 - Adversarial Examples
- P4.17 - Generative Adversarial Networks
- P4.18 - Capsule Networks
- P4.19 - Neural Machine Translation I
- P4.20 - Neural Machine Translation II
- P4.21 - Neural Machine Translation III
Part 5
- P5.1 - Bayesian Learning Introductory Remarks
- P5.2 - Maximum Likelihood and MAP
- P5.3 - Bayesian Generalized Linear Regression
- P5.4 - Bayesian Regression - Simulation Examples
- P5.5 - The Evidence Function
- P5.6 - The EM Algorithm
- P5.7 - The EM and the Linear Regression Task
- P5.8 - Gaussian Mixture Models
- P5.9 - Clustering, GMMs and the k-Means Algorithm
- P5.10 - A Lower Bound Interpretation of EM
- P5.11 - Exponential Family of Distributions
- P5.12 - The Variational Approximation Method
- P5.13 - Variational Inference and Linear Regression