Practical Deep Learning (Course Intro)

Welcome!

  1. Example topics covered:

    1. Build and train deep learning models for computer vision, natural language processing, tabular analysis, and collaborative filtering problems

    2. Create random forests and regression models

    3. Deploy models

    4. Learn PyTorch, fastai, and Hugging Face

  2. There are 9 lessons, and each lesson is around 90 minutes long.

  3. You don’t need any special hardware or software.

  4. You don’t need to know any university math.
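Several of the topics above, collaborative filtering in particular, come down to learning embedding vectors from data. As a taste of the idea only — a minimal pure-Python sketch with made-up toy ratings and hypothetical names, not how the course itself implements it — a rating can be predicted as the dot product of a user vector and an item vector, with both vectors fitted by plain gradient descent:

```python
import random

random.seed(0)

n_users, n_items, n_factors = 3, 4, 2

# Known (user, item) -> rating pairs; all data here is made up.
ratings = {(0, 0): 5.0, (0, 1): 1.0, (1, 0): 4.0,
           (1, 2): 2.0, (2, 1): 5.0, (2, 3): 3.0}

# Small random embeddings for each user and item.
users = [[random.uniform(-0.1, 0.1) for _ in range(n_factors)]
         for _ in range(n_users)]
items = [[random.uniform(-0.1, 0.1) for _ in range(n_factors)]
         for _ in range(n_items)]

def predict(u, i):
    # Predicted rating = dot product of the two embeddings.
    return sum(a * b for a, b in zip(users[u], items[i]))

lr = 0.05
for epoch in range(500):
    for (u, i), r in ratings.items():
        err = predict(u, i) - r            # signed error on this rating
        for f in range(n_factors):
            gu = err * items[i][f]         # gradient w.r.t. user factor
            gi = err * users[u][f]         # gradient w.r.t. item factor
            users[u][f] -= lr * gu
            items[i][f] -= lr * gi

# After training, predictions sit close to the observed ratings.
total_err = sum((predict(u, i) - r) ** 2 for (u, i), r in ratings.items())
```

In the course the same idea is built with fastai and PyTorch, where the embedding tables, loss, and optimizer are handled by the library.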

Real results

  1. You can see example student projects here.

  2. Students have gotten jobs at Google Brain, OpenAI, Adobe, Amazon, and Tesla.

Your teacher

  1. Jeremy Howard

  2. I was the top-ranked competitor globally in machine learning competitions on Kaggle (the world’s largest machine learning community) two years running.

  3. I founded the first company to focus on deep learning and medicine.

Is this course for me?

  1. We wrote this course to make deep learning accessible to as many people as possible. The only prerequisite is that you know how to code (a year of experience is enough), preferably in Python, and that you have at least followed a high school math course.

The software you will be using

Why deep learning?

  1. Deep learning can be used across many disciplines for a wide range of pattern-recognition tasks.

  2. The lesson walks through examples from many of these fields.

What you will learn

How do I get started?

  1. To watch the videos, click on the Lessons section in the navigation sidebar.

  2. We strongly suggest not using your own computer for training models in this course.

  3. Ask for help in the forums, but search first to see if your question has already been answered.

Part 1

Getting started

Deployment

Neural net foundations

Natural Language (NLP)

From-scratch model

Random forests

Collaborative filtering

Convolutions (CNNs)

Part 2

Stable Diffusion

Diving Deeper

Matrix multiplication

Mean shift clustering

Backpropagation & MLP

Backpropagation

Autoencoders

The Learner framework

Initialization/normalization

Accelerated SGD & ResNets

DDPM and Dropout

Mixed Precision

DDIM

Karras et al (2022)

Super-resolution

Attention & transformers

Latent diffusion