Nathan Wailes
Fast.ai - Practical Deep Learning
- Part 1
- Part 2
  - Stable Diffusion
  - Diving Deeper
  - Matrix multiplication
  - Mean shift clustering
  - Backpropagation & MLP
  - Backpropagation
  - Autoencoders
  - The Learner framework
  - Initialization/normalization
  - Accelerated SGD & ResNets
  - DDPM and Dropout
  - Mixed Precision
  - DDIM
  - Karras et al (2022)
  - Super-resolution
  - Attention & transformers
  - Latent diffusion
Practical Deep Learning for Coders
My understanding is that this is the single best course on deep learning out there. Jeremy Howard is a great teacher; unlike a lot of other people teaching this stuff, he does a great job of explaining potentially complicated ideas in simple ways.
Practical Deep Learning (Course Intro)
Welcome!
Example topics covered:
Build and train deep learning models for computer vision, natural language processing, tabular analysis, and collaborative filtering problems
Create random forests and regression models
Deploy models
Learn PyTorch, fastai and Hugging Face
There are 9 lessons, and each lesson is around 90 minutes long.
You don’t need any special hardware or software.
You don’t need to know any university math.
Real results
You can see example student projects here.
Students have gotten jobs at Google Brain, OpenAI, Adobe, Amazon, and Tesla.
Your teacher
Jeremy Howard
Top-ranked competitor globally in machine learning competitions on Kaggle (the world’s largest machine learning community) two years running.
Founded the first company to focus on deep learning and medicine.
Is this course for me?
We wrote this course to make deep learning accessible to as many people as possible. The only prerequisite is that you know how to code (a year of experience is enough), preferably in Python, and that you have at least followed a high school math course.
The software you will be using
In this course, you’ll be using PyTorch, fastai, Hugging Face Transformers, and Gradio.
PyTorch is a low-level foundation library; fastai adds higher-level functionality on top of PyTorch.
Why deep learning?
It can be used across many disciplines for a large range of pattern recognition tasks.
He gives a bunch of examples.
What you will learn
Train models in computer vision, NLP, tabular data, collaborative filtering (e.g. movie recommendation)
Deploy the models as web apps
Why and how deep learning models work, and how to use that knowledge to improve the accuracy, speed, and reliability of your models
How to implement stochastic gradient descent and a complete training loop from scratch
Techniques covered:
Random forests and gradient boosting
Affine functions and nonlinearities
Parameters and activations
Transfer learning
Stochastic gradient descent (SGD)
Data augmentation
Weight decay
Image classification
Entity and word embeddings
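To give a feel for the "SGD and a complete training loop from scratch" item above, here is a minimal sketch in plain Python (no PyTorch), fitting a line y = w·x + b to toy data with hand-derived mean-squared-error gradients. The data, hyperparameters, and variable names are illustrative choices, not the course's own code.

```python
import random

# Toy data generated from the "true" line y = 3x + 2.
data = [(x, 3 * x + 2) for x in [i / 10 for i in range(-20, 21)]]

# Randomly initialized parameters for the model y_hat = w*x + b.
random.seed(0)
w, b = random.random(), random.random()

lr = 0.05        # learning rate (illustrative choice)
batch_size = 4

for epoch in range(200):
    random.shuffle(data)  # "stochastic": visit examples in random order
    for i in range(0, len(data), batch_size):
        batch = data[i:i + batch_size]
        # Gradients of mean squared error, derived by hand:
        # d/dw (w*x + b - y)^2 = 2*(w*x + b - y)*x, and similarly for b.
        grad_w = sum(2 * ((w * x + b) - y) * x for x, y in batch) / len(batch)
        grad_b = sum(2 * ((w * x + b) - y) for x, y in batch) / len(batch)
        # The SGD update step: move parameters against the gradient.
        w -= lr * grad_w
        b -= lr * grad_b

print(round(w, 2), round(b, 2))  # should be close to 3 and 2
```

The same loop structure (batch, gradients, update) is what PyTorch's autograd and fastai's Learner automate for real models.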
How do I get started?
To watch the videos, click on the Lessons section in the navigation sidebar.
We strongly suggest not using your own computer for training models in this course; free cloud platforms such as Kaggle and Google Colab are easier to set up.
Ask for help in the forums, but search first to see if your question has already been answered.
Part 1
Getting started
Deployment
Neural net foundations
Natural Language (NLP)
From-scratch model
Random forests
Collaborative filtering
Convolutions (CNNs)
Part 2
Stable Diffusion
Diving Deeper
Matrix multiplication
Mean shift clustering
Backpropagation & MLP
Backpropagation
Autoencoders
The Learner framework
Initialization/normalization
Accelerated SGD & ResNets
DDPM and Dropout
Mixed Precision
DDIM
Karras et al (2022)
Super-resolution
Attention & transformers
Latent diffusion