Second Semester
Fri: 11:00 a.m. - 1:00 p.m., Aula 8
Fri: 2:00 p.m. - 4:00 p.m., Aula D8
Introduction [pdf]
AI spring? Artificial Intelligence, Machine Learning, Deep Learning: facts, myths and a few reflections.
Fundamentals: Artificial Neural Networks [pdf]
Foundations of machine learning: dataset, representation, evaluation, optimization. Feed-forward neural networks as universal approximators.
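A minimal NumPy sketch of the ideas above, not part of the course slides: a one-hidden-layer feed-forward network with tanh units, trained by plain gradient descent on the mean squared error to approximate a simple 1-D function. All sizes, initializations and hyperparameters are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dataset: approximate y = sin(x) on [-pi, pi]
    X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
    Y = np.sin(X)

    # One hidden layer with tanh units (sizes chosen only for illustration)
    H = 32
    W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

    lr = 0.01
    for step in range(10000):
        # Forward pass
        A = np.tanh(X @ W1 + b1)          # hidden activations
        Y_hat = A @ W2 + b2               # network output
        loss = np.mean((Y_hat - Y) ** 2)  # mean squared error

        # Backward pass (gradients derived by hand)
        dY = 2 * (Y_hat - Y) / len(X)
        dW2 = A.T @ dY;  db2 = dY.sum(axis=0)
        dA = dY @ W2.T
        dZ = dA * (1 - A ** 2)            # derivative of tanh
        dW1 = X.T @ dZ;  db1 = dZ.sum(axis=0)

        # Plain gradient descent update
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print(f"final MSE: {loss:.4f}")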
Flow Graphs and Automatic Differentiation [pdf]
Tensorial representation, flow graphs. Automatic differentiation: primal graph, adjoint graph.
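An illustrative sketch of reverse-mode automatic differentiation, assuming nothing beyond standard Python: the forward pass builds the primal graph, and a traversal in reverse topological order accumulates the adjoints. The Node class and operator set are my own minimal choices, not the notation of the slides.

    import math

    class Node:
        """A scalar node of the primal graph; .grad stores its adjoint."""
        def __init__(self, value, parents=()):
            self.value = value
            self.grad = 0.0
            self._parents = parents
            self._backward = lambda: None

        def __add__(self, other):
            out = Node(self.value + other.value, (self, other))
            def backward():
                self.grad += out.grad          # d(out)/d(self) = 1
                other.grad += out.grad         # d(out)/d(other) = 1
            out._backward = backward
            return out

        def __mul__(self, other):
            out = Node(self.value * other.value, (self, other))
            def backward():
                self.grad += other.value * out.grad
                other.grad += self.value * out.grad
            out._backward = backward
            return out

        def tanh(self):
            t = math.tanh(self.value)
            out = Node(t, (self,))
            def backward():
                self.grad += (1 - t * t) * out.grad
            out._backward = backward
            return out

        def backward(self):
            # Visit the adjoint graph in reverse topological order
            order, seen = [], set()
            def visit(node):
                if node not in seen:
                    seen.add(node)
                    for p in node._parents:
                        visit(p)
                    order.append(node)
            visit(self)
            self.grad = 1.0
            for node in reversed(order):
                node._backward()

    # Example: f(x, w, b) = tanh(w * x + b)
    x, w, b = Node(0.5), Node(-1.2), Node(0.3)
    y = (w * x + b).tanh()
    y.backward()
    print(y.value, x.grad, w.grad, b.grad)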
Deep Networks [pdf]
Deeper networks: potential advantages and new challenges. Tensorial layerwise representation. Softmax and cross-entropy (a short code sketch follows the links below).
Aside 1: Tensor Broadcasting [pdf]
Shannon Entropy (Wikipedia)
Cross Entropy (Wikipedia)
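A compact NumPy sketch tying together Aside 1 and the last part of the lecture: tensor broadcasting is used to implement a numerically stable softmax and the cross-entropy loss over a mini-batch. Shapes and values are illustrative.

    import numpy as np

    def softmax(logits):
        # Subtract the row-wise max for numerical stability; broadcasting
        # stretches the (N, 1) max over the (N, C) logits.
        z = logits - logits.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def cross_entropy(probs, labels):
        # Mean negative log-likelihood of the correct class.
        n = len(labels)
        return -np.log(probs[np.arange(n), labels] + 1e-12).mean()

    logits = np.array([[2.0, 0.5, -1.0],
                       [0.1, 0.2,  3.0]])   # (N=2, C=3) class scores
    labels = np.array([0, 2])               # correct class indices

    p = softmax(logits)
    print(p.sum(axis=1))                    # each row sums to 1
    print(cross_entropy(p, labels))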
Learning as Optimization [pdf]
Vanishing and exploding gradients. First and second order optimization, approximations, optimizers. Further tricks.
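To make the vanishing/exploding gradient issue concrete, here is a small NumPy experiment of my own (not from the slides): a gradient is backpropagated through a deep stack of randomly initialized ReLU layers, and its final norm is reported for different weight scales.

    import numpy as np

    rng = np.random.default_rng(0)

    def final_grad_norm(weight_std, depth=50, width=64):
        """Gradient norm after backpropagating through `depth` ReLU layers."""
        h = rng.normal(size=width)
        acts, Ws = [h], []
        for _ in range(depth):                      # forward (primal) pass
            W = rng.normal(0.0, weight_std / np.sqrt(width), (width, width))
            Ws.append(W)
            h = np.maximum(W @ h, 0.0)              # ReLU activation
            acts.append(h)
        g = np.ones(width)                          # seed adjoint at the output
        for W, a in zip(reversed(Ws), reversed(acts[1:])):   # backward pass
            g = W.T @ (g * (a > 0))                 # chain rule through ReLU and W
        return np.linalg.norm(g)

    for std in (0.5, np.sqrt(2.0), 3.0):            # too small, well scaled, too large
        print(f"weight std {std:.2f} -> gradient norm {final_grad_norm(std):.3e}")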
Aside 2: Exponential Moving Average [pdf]
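Aside 2 in one recursion, sketched here for convenience: m_t = beta * m_{t-1} + (1 - beta) * x_t, with the bias correction m_t / (1 - beta^t) used, for instance, by the Adam optimizer. The values below are illustrative.

    import numpy as np

    def ema(xs, beta=0.9):
        """Exponential moving average with bias correction."""
        m, out = 0.0, []
        for t, x in enumerate(xs, start=1):
            m = beta * m + (1.0 - beta) * x
            out.append(m / (1.0 - beta ** t))   # corrects the initial bias toward 0
        return np.array(out)

    xs = np.ones(10) + 0.1 * np.random.default_rng(0).normal(size=10)
    print(ema(xs, beta=0.9))   # smoothed estimates, close to 1 from the first step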
Aside 3: Predictions [pdf]
From in-sample optimization to out-of-sample generalization.
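A tiny illustration of this distinction (my own example, with arbitrary data): two polynomial models are optimized on a training set and then evaluated on held-out data; the higher-degree model typically achieves a lower in-sample error but a higher out-of-sample error.

    import numpy as np

    rng = np.random.default_rng(0)

    # Noisy samples of an underlying sine wave
    x = rng.uniform(-1, 1, 40)
    y = np.sin(3 * x) + 0.2 * rng.normal(size=40)

    # Holdout split: optimize in-sample, evaluate out-of-sample
    x_tr, y_tr = x[:30], y[:30]
    x_va, y_va = x[30:], y[30:]

    for degree in (3, 15):
        coeffs = np.polyfit(x_tr, y_tr, degree)          # in-sample optimization
        mse_tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
        mse_va = np.mean((np.polyval(coeffs, x_va) - y_va) ** 2)
        print(f"degree {degree:2d}: train MSE {mse_tr:.3f}, validation MSE {mse_va:.3f}")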
Deep Convolutional Neural Networks [pdf]
Convolutional filter, filter banks, feature maps, pooling, layerwise gradients.
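An illustrative NumPy sketch (names and values are mine, not from the slides) of a single convolutional filter applied to an image, followed by max pooling of the resulting feature map; real DCNN layers apply banks of such filters across multiple input channels.

    import numpy as np

    def conv2d(image, kernel):
        """'Valid' 2-D cross-correlation of a single-channel image with one filter."""
        H, W = image.shape
        kh, kw = kernel.shape
        out = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    def max_pool(fmap, size=2):
        """Non-overlapping max pooling of a feature map."""
        H, W = fmap.shape
        H, W = H - H % size, W - W % size              # crop to a multiple of `size`
        return fmap[:H, :W].reshape(H // size, size, W // size, size).max(axis=(1, 3))

    image = np.random.default_rng(0).random((8, 8))
    sobel_x = np.array([[-1, 0, 1],                    # a classic edge-detecting filter
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)

    fmap = conv2d(image, sobel_x)                      # (6, 6) feature map
    print(max_pool(fmap).shape)                        # -> (3, 3)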
Deep Convolutional Neural Networks and Beyond [pdf]
Some insight into what happens inside convolutional layers. Different DCNN architectures. Transfer learning (see the sketch after the reference below). Segmentation and object detection.
J. Yosinski, J. Clune, Y. Bengio, H. Lipson, "How transferable are features in deep neural networks?", in Advances in Neural Information Processing Systems (NIPS 2014) [link]
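A hedged PyTorch/torchvision sketch of transfer learning: a network pre-trained on ImageNet is reused as a frozen feature extractor and only a new classification head is trained. The class count, learning rate and random mini-batch are illustrative; the exact `weights=` argument depends on the torchvision version installed.

    import torch
    import torch.nn as nn
    from torchvision import models

    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    for p in backbone.parameters():      # freeze the pre-trained features
        p.requires_grad = False

    num_classes = 10                     # illustrative target task
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new trainable head

    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a random mini-batch
    x = torch.randn(8, 3, 224, 224)
    y = torch.randint(0, num_classes, (8,))
    optimizer.zero_grad()
    loss = criterion(backbone(x), y)
    loss.backward()
    optimizer.step()
    print(float(loss))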
Aside 4: Hardware for Deep Learning [pdf]
Main differences between CPUs and GPUs, SIMT parallelism, bus-oriented communication, a few caveats.
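A small PyTorch sketch of the practical side of this aside, assuming a CUDA-capable machine for the second measurement: the same matrix multiplication is timed on the CPU and on the GPU. Note the explicit .to(device) copies, which move data over the bus, one of the caveats discussed in the slides; sizes are illustrative.

    import time
    import torch

    def timed_matmul(device):
        a = torch.randn(2048, 2048).to(device)   # host -> device copy (if GPU)
        b = torch.randn(2048, 2048).to(device)
        t0 = time.perf_counter()
        c = a @ b
        if device.type == "cuda":
            torch.cuda.synchronize()              # GPU kernels run asynchronously
        return time.perf_counter() - t0

    print("cpu :", timed_matmul(torch.device("cpu")))
    if torch.cuda.is_available():
        print("cuda:", timed_matmul(torch.device("cuda")))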
Aside 5: Differentiating Algorithms [pdf]
Wengert list, ahead-of-time and runtime autodiff, lazy mode, just-in-time compilation, differences among TensorFlow, PyTorch, JAX.
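An illustrative JAX snippet for this aside: the same Python function is first differentiated by tracing it (reverse-mode autodiff over the traced program, conceptually a Wengert list) and then just-in-time compiled. The function and values are my own, chosen only as a demo.

    import jax
    import jax.numpy as jnp

    def loss(w, x, y):
        return jnp.mean((jnp.tanh(x @ w) - y) ** 2)

    grad_loss = jax.grad(loss)            # derivative w.r.t. the first argument
    fast_grad = jax.jit(grad_loss)        # just-in-time compilation of the traced graph

    key_w, key_x = jax.random.split(jax.random.PRNGKey(0))
    w = jax.random.normal(key_w, (3,))
    x = jax.random.normal(key_x, (10, 3))
    y = jnp.ones(10)

    print(fast_grad(w, x, y))             # same values as grad_loss(w, x, y)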
Deep Learning and Time Series [pdf]
Recurrent Neural Networks (RNN), temporal unfolding, LSTM cells, GRU cells, encoder/decoder, convolution, time series analysis.
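A minimal NumPy sketch (illustrative, not from the slides) of a vanilla recurrent cell unfolded over time: the same weights are reused at every step and the hidden state carries information along the sequence; LSTM and GRU cells refine this scheme with gating.

    import numpy as np

    rng = np.random.default_rng(0)

    def rnn_forward(xs, Wxh, Whh, bh):
        """Unfold a vanilla RNN cell over a sequence xs of shape (T, input_dim)."""
        h = np.zeros(Whh.shape[0])
        states = []
        for x in xs:                                   # temporal unfolding
            h = np.tanh(Wxh @ x + Whh @ h + bh)        # same parameters at each step
            states.append(h)
        return np.stack(states)                        # (T, hidden_dim)

    T, input_dim, hidden_dim = 6, 3, 5
    xs = rng.normal(size=(T, input_dim))
    Wxh = rng.normal(0, 0.5, (hidden_dim, input_dim))
    Whh = rng.normal(0, 0.5, (hidden_dim, hidden_dim))
    bh = np.zeros(hidden_dim)

    print(rnn_forward(xs, Wxh, Whh, bh).shape)         # -> (6, 5)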
Aside 6: Auto-Encoders [pdf]
A very popular and powerful network architecture pattern, which is also the basis for diffusion models. The relation between Auto-Encoders and Principal Component Analysis.
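A small NumPy experiment (my own, with synthetic data) on the relation mentioned above: a linear auto-encoder trained to minimize the reconstruction MSE learns to span the same subspace as the top principal components, so its reconstruction error approaches that of PCA with the same code size; the two printed numbers should be close.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data with most of the variance in a 2-D subspace of R^5
    A = np.array([[2.0, 0.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0, 0.0, 0.0]])
    X = rng.normal(size=(500, 2)) @ A + 0.05 * rng.normal(size=(500, 5))
    X -= X.mean(axis=0)

    k = 2                                              # size of the code (bottleneck)
    We = rng.normal(0, 0.1, (5, k))                    # linear encoder
    Wd = rng.normal(0, 0.1, (k, 5))                    # linear decoder

    lr = 0.02
    for _ in range(5000):
        H = X @ We                                     # encode
        R = H @ Wd                                     # decode (reconstruction)
        G = 2 * (R - X) / len(X)                       # gradient of the MSE w.r.t. R
        We -= lr * (X.T @ (G @ Wd.T))
        Wd -= lr * (H.T @ G)

    # PCA reconstruction using the top-k right singular vectors
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    X_pca = X @ Vt[:k].T @ Vt[:k]

    print("auto-encoder MSE:", np.mean((X @ We @ Wd - X) ** 2))
    print("PCA          MSE:", np.mean((X_pca - X) ** 2))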
Marco Piastra
Contact: marco.piastra@unipv.it
Christopher Bishop, Hugh Bishop
Deep Learning: Foundations and Concepts
Springer, 2024
[Online version]
Aston Zhang, Zachary Lipton, Mu Li, Alexander Smola
Dive into Deep Learning
Cambridge University Press, 2024
[Online version, with exercises]
Ian Goodfellow, Yoshua Bengio, Aaron Courville
Deep Learning
MIT Press, 2016
[Online version]
Kevin P. Murphy
Probabilistic Machine Learning: Advanced Topics
MIT Press, 2023
[Pre-print]
Richard S. Sutton, Andrew G. Barto
Reinforcement Learning: An Introduction (second edition)
MIT Press, 2018
[Online version]