Fast.ai - Practical Deep Learning

Practical Deep Learning (Course Intro)

Welcome!

  1. Example topics covered:

    1. Build and train deep learning models for computer vision, natural language processing, tabular analysis, and collaborative filtering problems

    2. Create random forests and regression models

    3. Deploy models

    4. Learn PyTorch, fastai and Hugging Face

  2. There are 9 lessons, and each lesson is around 90 minutes long.

  3. You don’t need any special hardware or software.

  4. You don’t need to know any university math.

Real results

  1. You can see example student projects showcased on the course website.

  2. Students have gotten jobs at Google Brain, OpenAI, Adobe, Amazon, and Tesla.

Your teacher

  1. Jeremy Howard

  2. I was the top-ranked competitor globally in machine learning competitions on Kaggle (the world’s largest machine learning community) two years running.

  3. I founded the first company to focus on deep learning and medicine.

Is this course for me?

  1. We wrote this course to make deep learning accessible to as many people as possible. The only prerequisite is that you know how to code (a year of experience is enough), preferably in Python, and that you have at least followed a high school math course.

The software you will be using

  • In this course, you’ll be using PyTorch, fastai, Hugging Face Transformers, and Gradio.

  • PyTorch is a low-level foundation library; fastai adds higher-level functionality on top of PyTorch.
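
To make the "higher-level on top of PyTorch" point concrete, here is a minimal sketch modeled on the course's opening "is it a cat?" example; the dataset and labeling rule come from the fastai Pets tutorial, while the image size and epoch count are just illustrative choices:

```python
from fastai.vision.all import *

# Download the Oxford-IIIT Pets images (fastai bundles the URL as URLs.PETS)
path = untar_data(URLs.PETS)/'images'

# In this dataset, cat photos have filenames starting with an uppercase letter
def is_cat(x): return x[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(192))

# Transfer learning: start from a pretrained ResNet and fine-tune it briefly
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)
```

Turning that model into a web app takes only a few more lines with Gradio, following the pattern the course uses in its deployment lesson (the `classify` helper name here is mine):

```python
import gradio as gr

# The learner's vocab order is (False, True), i.e. (dog, cat), so label accordingly
categories = ('Dog', 'Cat')

def classify(img):
    pred, idx, probs = learn.predict(img)
    return dict(zip(categories, map(float, probs)))

gr.Interface(fn=classify, inputs=gr.Image(), outputs=gr.Label()).launch()
```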

Why deep learning?

  1. It can be used across many disciplines for a large range of pattern recognition tasks.

  2. Jeremy gives a range of examples of deep learning applied across different fields.

What you will learn

  • Train models in computer vision, NLP, tabular data, collaborative filtering (e.g. movie recommendation)

  • Deploy the models as web apps

  • Why and how deep learning models work, and how to use that knowledge to improve the accuracy, speed, and reliability of your models

  • How to implement stochastic gradient descent and a complete training loop from scratch (a minimal sketch follows this list)

  • Techniques covered:

    • Random forests and gradient boosting

    • Affine functions and nonlinearities

    • Parameters and activations

    • Transfer learning

    • Stochastic gradient descent (SGD)

    • Data augmentation

    • Weight decay

    • Image classification

    • Entity and word embeddings
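
As a preview of the from-scratch training loop mentioned above, here is a minimal sketch in plain PyTorch that fits a toy linear model with mini-batch SGD; the data, learning rate, and variable names are all hypothetical choices for illustration:

```python
import torch

# Hypothetical toy data: y = 3x + 2 plus a little noise
x = torch.linspace(-1, 1, 100)
y = 3 * x + 2 + 0.1 * torch.randn(100)

# Parameters to learn, initialized randomly
w = torch.randn((), requires_grad=True)
b = torch.randn((), requires_grad=True)
lr = 0.1

for epoch in range(50):
    for idx in torch.randperm(100).split(20):  # shuffled mini-batches: the "stochastic" part
        pred = w * x[idx] + b                  # forward pass
        loss = ((pred - y[idx]) ** 2).mean()   # mean squared error
        loss.backward()                        # backward pass: compute gradients
        with torch.no_grad():
            w -= lr * w.grad                   # SGD update step
            b -= lr * b.grad
            w.grad.zero_()                     # reset gradients for the next batch
            b.grad.zero_()

print(w.item(), b.item())  # should end up close to 3 and 2
```

Every step here (predict, compute the loss, backpropagate, update, zero the gradients) is exactly what fastai's Learner automates once you move to the higher-level API.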

How do I get started?

  1. To watch the videos, click on the Lessons section in the navigation sidebar.

  2. We strongly suggest not using your own computer for training models in this course; setting up GPU drivers and CUDA locally is fiddly, so use one of the recommended cloud notebook platforms (such as Kaggle or Colab) instead.

  3. Ask for help in the forums, but search first to see if your question has already been answered.

Part 1

  • Getting started

  • Deployment

  • Neural net foundations

  • Natural Language (NLP)

  • From-scratch model

  • Random forests

  • Collaborative filtering

  • Convolutions (CNNs)

Part 2

  • Stable Diffusion

  • Diving Deeper

  • Matrix multiplication

  • Mean shift clustering

  • Backpropagation & MLP

  • Backpropagation

  • Autoencoders

  • The Learner framework

  • Initialization/normalization

  • Accelerated SGD & ResNets

  • DDPM and Dropout

  • Mixed Precision

  • DDIM

  • Karras et al (2022)

  • Super-resolution

  • Attention & transformers

  • Latent diffusion
