Università degli Studi di Pavia

Facoltà di Ingegneria


Deep Learning

A.A. 2025-2026

Second Semester

Fri: 11:00 a.m. - 1:00 p.m., Aula 8

Fri: 2:00 p.m. - 4:00 p.m., Aula D8

Lectures & Suggested Readings:

  • Reports of errors in the resources below are always welcome
    1. 2026.03.06 (theory)

      Introduction [pdf]
      AI spring? Artificial Intelligence, Machine Learning, Deep Learning: facts, myths and a few reflections.

    2. 2026.03.06 (theory)

      Fundamentals: Artificial Neural Networks [pdf]
      Foundations of machine learning: dataset, representation, evaluation, optimization. Feed-forward neural networks as universal approximators.
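
A feed-forward network of the kind introduced here can be sketched in a few lines of NumPy. This is a minimal illustration only (layer sizes, activation, and parameter initialization are arbitrary choices, not taken from the course slides):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer feed-forward pass: x -> tanh(x W1 + b1) W2 + b2."""
    h = np.tanh(x @ W1 + b1)   # hidden representation (nonlinear)
    return h @ W2 + b2         # linear output layer

# Illustrative parameters: 2 inputs, 8 hidden units, 1 output.
W1 = rng.standard_normal((2, 8))
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1))
b2 = np.zeros(1)

x = rng.standard_normal((4, 2))   # a mini-batch of 4 examples
y = mlp_forward(x, W1, b1, W2, b2)
print(y.shape)  # (4, 1)
```

With enough hidden units, networks of exactly this shape are the universal approximators the lecture refers to.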

    3. 2026.03.13 (theory)

      Flow Graphs and Automatic Differentiation [pdf]
      Tensorial representation, flow graphs. Automatic differentiation: primal graph, adjoint graph.

      Aside 1: Mini-Batches as Tensors [pdf]
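
The primal/adjoint idea can be made concrete by hand for a tiny function, say f(x, y) = x·y + sin(x). This sketch uses ad-hoc names (not the course's notation): the primal pass records intermediates, the adjoint pass propagates derivatives backwards through the same graph:

```python
import math

def f_primal(x, y):
    # Primal graph: compute and record intermediate values.
    a = x * y
    b = math.sin(x)
    z = a + b
    return z, (a, b)

def f_adjoint(x, y):
    # Adjoint graph: start from dz/dz = 1 and walk the primal graph backwards.
    z_bar = 1.0
    a_bar = z_bar                             # z = a + b
    b_bar = z_bar
    x_bar = a_bar * y + b_bar * math.cos(x)   # a = x*y, b = sin(x)
    y_bar = a_bar * x
    return x_bar, y_bar

x, y = 1.5, -2.0
dx, dy = f_adjoint(x, y)

# Sanity check against a central finite-difference approximation.
eps = 1e-6
num_dx = (f_primal(x + eps, y)[0] - f_primal(x - eps, y)[0]) / (2 * eps)
print(abs(dx - num_dx) < 1e-5)  # True
```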

    4. 2026.03.20 (theory)

      Deep Neural Networks [pdf]
      Deeper networks: potential advantages and new challenges. Tensorial layerwise representation.

    5. 2026.03.20 (theory)

      Regression vs. Classification [pdf]
      Softmax and cross-entropy. Likelihood and loss functions. Regularization.

      Shannon Entropy (Wikipedia)

      Cross Entropy (Wikipedia)
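
The softmax/cross-entropy pairing from this lecture fits in a few lines of NumPy. A minimal sketch (the max-subtraction trick is the standard numerical-stability device; the logit values are illustrative):

```python
import numpy as np

def softmax(logits):
    # Subtracting the max changes nothing (softmax is shift-invariant)
    # but avoids overflow in exp.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, target):
    # Negative log-likelihood of the true class index.
    return -np.log(probs[target])

logits = np.array([2.0, 1.0, 0.1])
p = softmax(logits)
print(p.sum())              # probabilities sum to 1
print(cross_entropy(p, 0))  # small loss: class 0 has the largest logit
```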

    6. 2026.03.27 (theory)

      Learning as Optimization [pdf]
      Vanishing and exploding gradients. First and second order optimization, approximations, optimizers. Further tricks.
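
One of the first-order optimizers covered here, SGD with momentum, can be sketched on a toy quadratic loss L(w) = ½‖w‖² (hyperparameters are arbitrary, chosen only so the iteration visibly converges):

```python
import numpy as np

def sgd_momentum(grad_fn, w, lr=0.1, beta=0.9, steps=200):
    """Gradient descent with a velocity (momentum) term."""
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v + grad_fn(w)   # accumulate an exponentially-decayed velocity
        w = w - lr * v              # parameter update along the velocity
    return w

w0 = np.array([5.0, -3.0])
w_final = sgd_momentum(lambda w: w, w0)   # grad of 0.5*||w||^2 is w itself
print(np.linalg.norm(w_final))            # close to 0
```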

    7. 2026.04.10 (theory)

      Predictions [pdf]
      Optimization and prediction: bias and variance. Overfitting. Evaluating classifiers.

      Aside 2: Hardware for Deep Learning [pdf]

      Aside 3: Differentiating Algorithms [pdf]
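
Evaluating classifiers, as discussed in this lecture, starts from raw prediction counts. A minimal precision/recall sketch with made-up labels (illustrative values, not course data):

```python
# Binary labels: 1 = positive class, 0 = negative class.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

precision = tp / (tp + fp)   # of the predicted positives, how many are right
recall = tp / (tp + fn)      # of the actual positives, how many were found
print(precision, recall)     # 0.75 0.75
```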

    8. 2026.04.17 (theory)

      Deep Convolutional Neural Networks [pdf]
      Convolutional filter, filter banks, feature maps, pooling, layerwise gradients.

      Deep Convolutional Neural Networks and Beyond [pdf]
      Some insight into what happens in convolution layers. Different DCNN architectures. Transfer learning.

      J Yosinski, J Clune, Y Bengio, H Lipson, "How transferable are features in deep neural networks?" in Advances in Neural Information Processing Systems (NIPS 2014) [link]
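
The convolutional filter at the heart of these two lectures reduces, for a single channel and a single filter, to a sliding dot product. A naive sketch (deliberately loop-based for clarity; real frameworks vectorize this, and what they call "convolution" is technically cross-correlation):

```python
import numpy as np

def conv2d(image, kernel):
    """Single-channel 2D cross-correlation with 'valid' padding."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with the window at (i, j).
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)
edge = np.array([[1.0, -1.0]])   # a horizontal-difference filter
fmap = conv2d(image, edge)       # the resulting feature map
print(fmap.shape)                # (4, 3)
```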

    9. 2026.04.24 (theory)

      Recurrent Neural Networks [pdf]
      Recurrent Neural Networks (RNN), temporal unfolding, LSTM cells, GRU cells, encoder / decoder, convolution, time series analysis.

      Auto-Encoders [pdf]
      A very popular and powerful network architecture pattern, which is also the basis for diffusion models. The relation between Auto-Encoders and Principal Component Analysis.
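
The relation between auto-encoders and PCA mentioned above can be verified directly: with a k-dimensional linear bottleneck, the optimal encoder/decoder pair spans the top-k principal subspace. This sketch builds that optimum from the SVD rather than training it (data and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 5))
X = X - X.mean(axis=0)            # centre the data, as PCA requires

k = 2
U, S, Vt = np.linalg.svd(X, full_matrices=False)
W_enc = Vt[:k].T                  # encoder: project onto the top-k PCs
W_dec = Vt[:k]                    # decoder: map the code back to input space

X_hat = (X @ W_enc) @ W_dec       # linear auto-encoder reconstruction
pca_err = np.sum((X - X_hat) ** 2)

# The reconstruction error equals the discarded variance (squared
# singular values beyond the bottleneck size).
print(np.isclose(pca_err, np.sum(S[k:] ** 2)))  # True
```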

    Instructor

    1. Marco Piastra

    2. Contact: marco.piastra@unipv.it


    Kiro

    1. Course info


    Exams

    1. See Faculty website


    Further resources:

    Video recordings and Colab notebooks are available on Kiro

      (There are no required textbooks for this course. The following books are recommended as optional readings.)

      1. Christopher Bishop, Hugh Bishop
        Deep Learning: Foundations and Concepts
        Springer, 2024
        [Online version]

      2. Simon J.D. Prince
        Understanding Deep Learning
        The MIT Press, 2023
        [Online version]

      3. Aston Zhang, Zachary Lipton, Mu Li, Alexander Smola
        Dive into Deep Learning
        Cambridge University Press, 2024
        [Online version, with exercises]

      4. Kevin P. Murphy
        Probabilistic Machine Learning: Advanced Topics
        The MIT Press, 2023
        [Pre-print]

      5. Ian Goodfellow, Yoshua Bengio, Aaron Courville
        Deep Learning
        The MIT Press, 2017
        [Online version]

      6. Richard S. Sutton, Andrew G. Barto
        Reinforcement Learning: An Introduction (second edition)
        MIT Press, 2018
        [Online version]


      Links

      1. Artificial Intelligence Reading Group


      2. Deep Learning, A.A. 2024-2025 and before