...
- 2014.01.27 - Times.uk - The man with his fingers on the future (Demis Hassabis)
- 2014.01.29 - MIT Tech Review - Is Google Cornering the Market on Deep Learning?
- 2015.09.14 - MIT Tech Review - Deep Learning Machine Teaches Itself Chess in 72 Hours, Plays at International Master Level
- 2015.11.10 - Wired - TensorFlow, Google's Open Source AI, Signals Big Changes in Hardware Too
- 2015.12.08 - MIT Tech Review - Here’s What Developers Are Doing with Google’s AI Brain
- 2015.12.10 - MIT Tech Review - Facebook Joins Stampede of Tech Giants Giving Away Artificial Intelligence Technology
- 2015.12.16 - MIT Tech Review - Baidu’s Deep-Learning System Rivals People at Speech Recognition
- 2015.12.17 - MIT Tech Review - Can This Man Make AI More Human?
- 2016.03.29 - Michael Nielsen - Is AlphaGo Really Such a Big Deal?
Over the past few years, neural networks have been used to capture intuition and recognize patterns across many domains. Many of the projects employing these networks have been visual in nature, involving tasks such as recognizing artistic style or developing good video-game strategy. But there are also striking examples of networks simulating intuition in very different domains, including audio and natural language.
Because of this versatility, I see AlphaGo not as a revolutionary breakthrough in itself, but rather as the leading edge of an extremely important development: the ability to build systems that can capture intuition and learn to recognize patterns. Computer scientists have attempted to do this for decades, without making much progress. But now, the success of neural networks has the potential to greatly expand the range of problems we can use computers to attack.
Just because neural networks can do a good job of capturing some specific types of intuition, that doesn’t mean they can do as good a job with other types. Maybe neural networks will be no good at all at some tasks we currently think of as requiring intuition.
In fact, our understanding of neural networks remains poor in important ways. For example, a 2014 paper described "adversarial examples": inputs with tiny, carefully chosen perturbations that cause a network to misclassify them, even though the change is imperceptible to a human.
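A minimal sketch of the idea, using a toy linear classifier rather than a deep network (the model, data, and epsilon value here are all illustrative assumptions, not from the paper): perturbing each input coordinate by a tiny amount in the direction of the gradient's sign is enough to flip the classifier's decision.

```python
import numpy as np

# Toy illustration of an adversarial example: a perturbation that is
# tiny in every coordinate, yet flips the classifier's decision.
# The "model" is a linear scorer, standing in for a trained network.
rng = np.random.default_rng(0)
w = rng.normal(size=100)           # fixed "trained" weight vector

def score(x):
    return w @ x                   # sign of score = predicted class

x = -0.01 * np.sign(w)             # an input the model scores negative

# Gradient-sign perturbation: the gradient of score(x) w.r.t. x is w,
# so nudge every coordinate by eps in the direction sign(w).
eps = 0.02
x_adv = x + eps * np.sign(w)

print(score(x), score(x_adv))      # the score changes sign
```

Each coordinate of `x_adv` differs from `x` by at most 0.02, yet the predicted class flips. In a real deep network the gradient is computed by backpropagation, but the same sign-of-gradient trick applies.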
- Another limitation of existing systems is that they often require many human examples to learn from.
- Now we’ve got so many wonderful challenges ahead: to expand the range of intuition types we can represent, to make the systems stable, to understand why and how they work, and to learn better ways to combine them with the existing strengths of computer systems.
- AlphaZero
- World Models
...