General resources for learning about neural networks and deep learning. See Natural Language Processing, Computer Vision, Reinforcement Learning, and Generative Models for more resources. The list is split into six sections, and I update it on an ongoing basis.
- Tutorials, textbooks, and overviews
- Regularization, optimizers, and backpropagation
- Libraries & tools
- Recent News
- General architecture innovations
- Applications to art and aesthetics
- Tutorials, textbooks, and overviews
- Deep Learning, Udacity: Intermediate to advanced online course on deep learning built by Google. If you are familiar with introductory data science concepts, this is a good place to start learning about neural networks.
- Neural Networks and Deep Learning: Michael Nielsen’s superb introduction to neural networks and deep learning. This book contains in-depth but easy-to-follow explanations of all aspects of neural networks. His explanation of backpropagation is easily the best that I’ve seen.
- Hugo Larochelle’s videos, 1.1 – 2.11: Watch these to really understand the backpropagation algorithm. Well worth the time.
- Deep Learning Book: Comprehensive textbook written by Ian Goodfellow, Yoshua Bengio and Aaron Courville. It assumes no prior experience with deep learning but is quite technical. I recommend reading it after Michael Nielsen’s book if you do not have a strong mathematical background.
- Bay area deep learning school: Over 20 hours of video tutorials across two days. Here is day one, day two, and the schedule.
- Neural Networks Tutorial: New to deep learning and like to learn by looking at code? This is a thoughtful, detailed introduction to the topic by Andy Thomas. I particularly enjoyed the sections explaining why vectorization matters and how to implement more efficient deep learning models.
- Neural Network Playground: Visual tool for exploring how neural networks learn. Particularly good for comparing activation functions and thinking through the minimal network complexity required to separate a dataset.
- The Neural Network Zoo: Struggling to keep up with all the new neural network architectures? This cheat sheet should help.
- Colah’s Blog: Well written, in depth blog posts on Neural Networks. Neural Networks, Manifolds and Topology contains an excellent explanation of the transformations that neural networks implicitly apply to input data so as to make it linearly separable. Colah’s explanation of the manifold hypothesis provides food for thought for the future of neural networks.
- Most cited deep learning papers since 2010
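The backpropagation material above (Nielsen’s book, Larochelle’s videos) boils down to the chain rule applied layer by layer. Here is a minimal NumPy sketch for an arbitrary toy network (one tanh hidden layer, squared loss; all shapes and numbers are illustrative), with the analytic gradient checked against a finite difference:

```python
import numpy as np

# Toy network: x -> hidden (tanh) -> scalar output, squared loss.
rng = np.random.default_rng(0)
x = rng.normal(size=(3,))        # input
y = 1.5                          # target
W1 = rng.normal(size=(4, 3))     # hidden-layer weights
w2 = rng.normal(size=(4,))       # output weights

def loss(W1, w2):
    h = np.tanh(W1 @ x)
    return 0.5 * (w2 @ h - y) ** 2

# Forward pass, keeping the intermediates the backward pass needs.
z = W1 @ x
h = np.tanh(z)
err = w2 @ h - y                 # dL/d(output)

# Backward pass: chain rule, one layer at a time.
grad_w2 = err * h                # dL/dw2
grad_h = err * w2                # dL/dh
grad_z = grad_h * (1 - h ** 2)   # through tanh' = 1 - tanh^2
grad_W1 = np.outer(grad_z, x)    # dL/dW1

# Sanity check one entry against a numerical gradient.
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
numeric = (loss(W1p, w2) - loss(W1, w2)) / eps
print(abs(numeric - grad_W1[0, 0]) < 1e-4)
```

The finite-difference check at the end is the standard way to convince yourself a backprop implementation is correct before trusting it on a real network.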
- Regularization, optimizers, and backpropagation
- Dropout: A simple way to prevent neural networks from overfitting: The paper by Srivastava, Hinton, Krizhevsky, Sutskever, and Salakhutdinov that introduced and explained the effectiveness of dropout, published in JMLR in 2014.
- Efficient BackProp, Yann LeCun: Tricks for improving the performance of backpropagation
- An overview of gradient descent optimization algorithms: Great blog post by Sebastian Ruder. Explains the intuition behind the main algorithms that improve learning by gradient descent: Momentum, Nesterov, Adagrad, Adadelta, Adam, and RMSprop
- Practical Recommendations for Gradient-Based Training of Deep Architectures, Yoshua Bengio: Practical guide to the difficult task of training deep neural networks
- Ill-Conditioning in Neural Networks, Warren S. Sarle: Interesting and brief discussion of the condition of neural networks and the effect this has on network training time using different algorithms
- Libraries and tools
- Keras: a minimal, highly modular Python library for building neural networks, runs on Theano or TensorFlow
- Plotting model training history: Quick tutorial by Jason Brownlee explaining how to plot your model’s training history using Keras and Matplotlib
- TensorFlow: Google’s open source machine learning software library
- PyTorch: Fast, flexible library for building and training neural networks. Dynamic graph computation and seamless switching between CPU and GPU.
- Example implementations of a wide variety of networks in PyTorch. The official version and the unofficial version.
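PyTorch’s “dynamic graph computation” means the computation graph is recorded as operations run, then traversed backwards for gradients. A toy reverse-mode autodiff in plain Python illustrates the idea (an illustrative sketch, not PyTorch’s actual API or implementation; the recursive traversal is only correct for tree-shaped graphs like this one, whereas real systems walk the graph in topological order):

```python
# Each operation records its inputs and local gradients as it runs,
# building the graph on the fly.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self.parents = parents  # pairs of (parent Var, local gradient)

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, upstream=1.0):
        self.grad += upstream
        for parent, local in self.parents:
            parent.backward(upstream * local)

x = Var(2.0)
y = Var(3.0)
z = x * y + x      # z = xy + x, so dz/dx = y + 1 = 4 and dz/dy = x = 2
z.backward()
print(x.grad, y.grad)  # 4.0 2.0
```

Because the graph is rebuilt on every forward pass, control flow like Python `if` statements and loops can change the network’s structure from one input to the next.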
- Recent News
- Jeff Dean’s Lecture for YC AI: Published in August 2017, it’s an interesting whistle-stop tour of the ways Google is applying deep learning to a variety of problems. Reinforcement learning is playing an increasingly large role.
- The major advancements in deep learning in 2016
- 50 things I learned at NIPS 2016, Andreas Stuhlmuller: Interesting food for thought generated from last year’s conference
- The next wave of deep learning applications
- General architecture innovations
- Applications to art and aesthetics
- A Neural Algorithm of Artistic Style: using neural networks to separate the style of an image from its content, then recombine the two to form images “in the style of…”
- Blade Runner – Autoencoded: Terence Broad trained a variational autoencoder on each frame of the original film, Blade Runner, then reconstructed it frame by frame using his model. The result is a beautiful and mesmerizing film which provides an insight into the process of comprehension. With full awareness of the mechanistic process, you feel that you are watching through the eyes of a machine trying to make sense of what it sees.
- Understanding aesthetics with deep learning: Can a neural network assess the quality of a photograph?
- Deep Dream: Out of a feedback loop that causes networks to exaggerate their responses to images, something like introspection emerged. Computer dreams. Scroll down here to Mike Tyka’s discussion of his creation.
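The style-transfer paper above represents an image’s “style” with Gram matrices: correlations between the channels of a CNN’s feature maps, discarding spatial layout. A minimal NumPy sketch of that statistic (the pretrained CNN that produces the feature maps is omitted; random arrays stand in for them, and the normalisation here is one common choice, not the paper’s exact constant):

```python
import numpy as np

def gram_matrix(features):
    """Channel-correlation matrix of a feature map of shape
    (channels, height, width); this is the style representation."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten spatial dimensions
    return f @ f.T / (h * w)         # (channels, channels)

rng = np.random.default_rng(2)
feats = rng.normal(size=(8, 16, 16))  # stand-in for CNN activations
g = gram_matrix(feats)
print(g.shape)  # (8, 8)
```

Style transfer then optimises a generated image so that its Gram matrices match the style image’s while its raw feature maps match the content image’s.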
I’m always looking to learn more. Please send suggestions or comments to contact [at] learningmachinelearning [dot] org