Neural Networks

General resources for learning about neural networks and deep learning. See Natural Language Processing, Computer Vision, Reinforcement Learning, and Generative Models for more resources. The list is split into four sections, and I update it on an ongoing basis.

  • General
    • Tutorials, textbooks, and overviews
    • Regularization, optimizers, and backpropagation
    • Libraries and tools
  • Recent News
  • General architecture innovations
  • Applications to art and aesthetics

General

  • Tutorials, textbooks, and overviews
    • Deep Learning, Udacity: Intermediate to advanced online course on deep learning built by Google. If you are familiar with introductory data science concepts, this is a good place to start learning about neural networks.
    • Neural Networks and Deep Learning: Michael Nielsen’s superb introduction to neural networks and deep learning. This book contains in-depth but easy-to-follow explanations of all aspects of neural networks. His explanation of backpropagation is easily the best that I’ve seen.
    • Hugo Larochelle’s videos, 1.1 – 2.11: Watch these to really understand the backpropagation algorithm. Well worth the time.
    • Deep Learning Book: Comprehensive textbook written by Ian Goodfellow, Yoshua Bengio and Aaron Courville. It assumes no prior experience with deep learning but is quite technical. I recommend reading it after Michael Nielsen’s book if you do not have a strong mathematical background.
    • Bay Area Deep Learning School: Over 20 hours of video tutorials across two days. Here are day one, day two, and the schedule.
    • Neural Networks Tutorial: New to deep learning and like to learn by looking at code? This is a thoughtful, detailed introduction to the topic by Andy Thomas. I particularly enjoyed the sections explaining why vectorization matters and how to implement more efficient deep learning models.
    • Neural Network Playground: Visual tool for exploring how neural networks learn. Particularly good for comparing activation functions and thinking through the minimal network complexity required to separate a dataset.
    • The Neural Network Zoo: Struggling to keep up with all the new neural network architectures? This cheat sheet should help.
    • Colah’s Blog: Well-written, in-depth blog posts on neural networks. Neural Networks, Manifolds, and Topology contains an excellent explanation of the transformations that neural networks implicitly apply to input data to make it linearly separable. Colah’s explanation of the manifold hypothesis provides food for thought for the future of neural networks.
    • Most cited deep learning papers since 2010
  • Regularization, optimizers, and backpropagation
  • Libraries and tools
    • Keras: a minimal, highly modular Python library for building neural networks that runs on Theano or TensorFlow (a short sketch follows this list)
    • Plotting model training history: Quick tutorial by Jason Brownlee explaining how to plot your model’s training history using Keras and Matplotlib
    • TensorFlow: Google’s open source machine learning software library
    • PyTorch: a fast, flexible library for building and training neural networks, with dynamic graph computation and seamless switching between CPU and GPU (see the sketch after this list)
    • Example implementations of a wide variety of networks in PyTorch: the official version and the unofficial version.
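
To make the library entries above concrete, here is a minimal sketch of the kind of model Keras is built for, with its training history plotted via Matplotlib in the spirit of Jason Brownlee’s tutorial. The synthetic dataset, layer sizes, and training settings are illustrative assumptions of mine rather than anything from the linked resources, and the sketch assumes the TensorFlow backend (tf.keras).

```python
# Minimal Keras sketch (assumes the TensorFlow backend): a tiny fully
# connected classifier trained on synthetic data, with the training
# history plotted afterwards.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras

# Toy data: 1,000 samples with 20 features and a binary label.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1, keepdims=True) > 10).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# fit() returns a History object; its .history dict holds per-epoch metrics.
history = model.fit(X, y, validation_split=0.2, epochs=10, verbose=0)

plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.legend()
plt.show()
```

And an equally minimal PyTorch sketch of the dynamic, define-by-run graph and the CPU/GPU switching mentioned above; again, the toy batch and tiny network are placeholders, not code from the linked examples.

```python
# Minimal PyTorch sketch: the graph is built on the fly during each forward
# pass, and .to(device) moves the model between CPU and GPU.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # expects raw logits, so no final sigmoid

x = torch.rand(64, 20, device=device)           # toy input batch
y = (x.sum(dim=1, keepdim=True) > 10).float()   # toy binary targets

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # forward pass builds the graph dynamically
    loss.backward()               # backpropagation through that graph
    optimizer.step()
```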

Recent News

General architecture innovations

Applications to art and aesthetics

  • A Neural Algorithm of Artistic Style: uses neural networks to separate the style of an image from its content, then recombines them to form images “in the style of…”
  • Blade Runner – Autoencoded: Terence Broad trained a variational autoencoder on the frames of the original film, Blade Runner, then reconstructed it frame by frame using his model. The result is a beautiful and mesmerizing film that offers insight into the process of comprehension. With full awareness of the mechanistic process, you feel that you are watching through the eyes of a machine trying to make sense of what it sees.
  • Understanding aesthetics with deep learning: Can a neural network assess the quality of a photograph?
  • Deep Dream: Out of a feedback loop that causes networks to exaggerate their responses to images emerged something like introspection. Computer dreams. Scroll down here for Mike Tyka’s discussion of his creation.

I’m always looking to learn more. Please send suggestions or comments to contact [at] learningmachinelearning [dot] org
