Regularization for Neural Networks

Regularization is an umbrella term for any technique that helps prevent a neural network from overfitting the training data. This post, available as a PDF below, follows on from my Introduction to Neural Networks. It explains what overfitting is and why neural networks are regularized, and gives a brief overview of the main techniques available to a network designer. Weight regularization, early stopping, and dropout are discussed in more detail.
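As a rough illustration of the three techniques covered in the PDF, here is a minimal NumPy sketch. The function names (`l2_penalty`, `dropout`, `should_stop_early`) and the toy values are my own for illustration, not code from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_penalty(weights, lam):
    # L2 weight regularization: add lam * sum of squared weights
    # to the data loss, discouraging large weights.
    return lam * np.sum(weights ** 2)

def dropout(activations, p, training=True):
    # Inverted dropout: during training, zero each unit with
    # probability p and rescale the rest by 1/(1-p) so the
    # expected activation is unchanged; do nothing at test time.
    if not training:
        return activations
    mask = (rng.random(activations.shape) >= p) / (1.0 - p)
    return activations * mask

def should_stop_early(val_losses, patience=3):
    # Early stopping: halt training once validation loss has not
    # improved for `patience` consecutive epochs.
    best_epoch = int(np.argmin(val_losses))
    return len(val_losses) - 1 - best_epoch >= patience
```

For example, `l2_penalty(np.array([1.0, 2.0]), 0.1)` gives 0.5, and `should_stop_early([1.0, 0.9, 0.95, 0.96, 0.97])` returns True because the best validation loss occurred three epochs ago.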

Regularization for Neural Networks PDF
