Demos
Courses
- Fast.ai ← This looks really good. It's free.
Tutorials
- Christopher Olah's explanations of concepts used in neural networks
- http://www.hexahedria.com/2015/08/03/composing-music-with-recurrent-neural-networks/
Books
- Neural networks and deep learning ← by Michael Nielsen. This is so well written.
- Learning deep architectures for AI.pdf
- 2012 - Machine Learning for Hackers
- Rec'd by Pete Warden
- Deep Learning: A Practitioner's Approach
- Deep Learning: Methods and Applications
- Deep Learning - Ian Goodfellow, Aaron Courville, and Yoshua Bengio
People
- http://petewarden.com/ - On Google's deep learning team; he writes great blog posts
- Gary Marcus
- http://www.technologyreview.com/featuredstory/544606/can-this-man-make-ai-more-human/
- http://www.amazon.com/The-Algebraic-Mind-Integrating-Connectionism/dp/0262632683
- This looks interesting, maybe too academic to be practical, though.
Technologies
- BURLAP
- TensorFlow
- Theano
- Torch
- https://code.google.com/p/word2vec/
- Mentioned by Pete Warden IIRC
Papers
- http://deeplearning.net/reading-list/
- Google DeepMind
- Andrew Ng's papers
- Misc
Articles
- 2014.01.27 - Times.uk - The man with his fingers on the future (Demis Hassabis)
- 2014.01.29 - MIT Tech Review - Is Google Cornering the Market on Deep Learning?
- 2015.09.14 - MIT Tech Review - Deep Learning Machine Teaches Itself Chess in 72 Hours, Plays at International Master Level
- 2015.11.10 - Wired - TensorFlow, Google's Open Source AI, Signals Big Changes in Hardware Too
- 2015.12.08 - MIT Tech Review - Here’s What Developers Are Doing with Google’s AI Brain
- 2015.12.10 - MIT Tech Review - Facebook Joins Stampede of Tech Giants Giving Away Artificial Intelligence Technology
- 2015.12.16 - MIT Tech Review - Baidu’s Deep-Learning System Rivals People at Speech Recognition
- 2015.12.17 - MIT Tech Review - Can This Man Make AI More Human?
- 2016.03.29 - Michael Nielsen - Is AlphaGo Really Such a Big Deal?
Over the past few years, neural networks have been used to capture intuition and recognize patterns across many domains. Many of the projects employing these networks have been visual in nature, involving tasks such as recognizing artistic style or developing good video-game strategy. But there are also striking examples of networks simulating intuition in very different domains, including audio and natural language.
Because of this versatility, I see AlphaGo not as a revolutionary breakthrough in itself, but rather as the leading edge of an extremely important development: the ability to build systems that can capture intuition and learn to recognize patterns. Computer scientists have attempted to do this for decades, without making much progress. But now, the success of neural networks has the potential to greatly expand the range of problems we can use computers to attack.
Just because neural networks can do a good job of capturing some specific types of intuition, that doesn’t mean they can do as good a job with other types. Maybe neural networks will be no good at all at some tasks we currently think of as requiring intuition.
In fact, our existing understanding of neural networks is poor in important ways. For example, a 2014 paper described “adversarial examples” that can be used to fool neural networks.
- Another limitation of existing systems is that they often require many human examples to learn from.
- Now we’ve got so many wonderful challenges ahead: to expand the range of intuition types we can represent, to make the systems stable, to understand why and how they work, and to learn better ways to combine them with the existing strengths of computer systems.
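The “adversarial examples” idea mentioned above can be sketched in a few lines. This toy uses a hand-made linear classifier rather than a real neural network, and every number here is invented for illustration (nothing is taken from the 2014 paper): each input coordinate is nudged a small step in the direction that most lowers the model's score, which is enough to flip the prediction.

```python
import numpy as np

# Toy linear "classifier": score = w . x, predicted class = sign(score).
# (Invented weights and input, purely illustrative.)
w = np.array([1.0, -2.0, 3.0, -1.0])
x = np.array([0.1, -0.1, 0.1, 0.1])   # correctly classified: score > 0

# Fast-gradient-style perturbation: for this linear model the gradient of
# the score w.r.t. x is just w, so step each coordinate slightly in the
# direction that decreases the score.
eps = 0.1
x_adv = x - eps * np.sign(w)

print(w @ x)      # positive: original class
print(w @ x_adv)  # negative: the small perturbation flips the prediction
```

Here the perturbation is the same size as the input features, but the point scales: in high-dimensional inputs (e.g. images), each per-pixel step can be imperceptibly small while the many small steps still add up to a large change in the score.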
Videos
- 2015.11.04 - Yann LeCun: Teaching Machines to Understand Us
- 2014.07.24 - O'Reilly - How to Get Started with Deep Learning in Computer Vision
- with Pete Warden
Websites
Misc ideas
- One thing the ML algorithm could do is to try to constantly predict what is going to happen next, and update its beliefs when its prediction is either confirmed or contradicted.
- I suspect that's how human brains work.
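A minimal sketch of the idea above, in the spirit of the classic delta rule (all names and numbers here are illustrative): the learner keeps a running belief about what it will observe next, compares that prediction with reality, and shifts the belief by a fraction of the prediction error. A confirmed prediction produces a small update; a contradicted one produces a large update.

```python
class PredictiveLearner:
    """Keeps a belief about the next observation and updates it
    by a fraction of each prediction error (delta rule)."""

    def __init__(self, learning_rate=0.5):
        self.lr = learning_rate
        self.belief = 0.0  # current guess for the next observation

    def predict(self):
        return self.belief

    def observe(self, actual):
        # Prediction confirmed -> small error, small correction.
        # Prediction contradicted -> large error, large correction.
        error = actual - self.predict()
        self.belief += self.lr * error

learner = PredictiveLearner()
for observation in [10.0] * 10:
    learner.observe(observation)
print(learner.predict())  # belief has converged close to 10
```

With a learning rate of 0.5 the error halves on every confirmed-ish observation, so after ten observations of 10.0 the belief is within about 0.01 of the true value.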