A Margin-Based Measure of Generalization for Deep Networks

Yiding Jiang
Dilip Krishnan
Samy Bengio
ICLR (2019)

Abstract

Recent research has demonstrated that deep neural networks can perfectly fit randomly labeled data, but with very poor accuracy on held-out data. This phenomenon
indicates that loss functions such as cross-entropy are not a reliable indicator of
generalization. It raises the crucial question of how the generalization gap can be
predicted from training data and network parameters. In this paper, we propose
such a measure and conduct extensive empirical studies of how well it can predict
the generalization gap. Our measure is based on the margin distribution, i.e., the
distances of training points to the decision boundary. We find that
it is necessary to use margin distributions at multiple layers of a deep network.
On the CIFAR-10 and CIFAR-100 datasets, our proposed measure correlates
very strongly with the generalization gap. In addition, we find the following
factors to be important: normalizing margin values for scale independence,
using characterizations of the full margin distribution rather than just the margin (the closest
distance to the decision boundary), and working in log space instead of linear space
(effectively using a product of margins rather than a sum). Our measure can be
easily applied to feedforward deep networks with any architecture and may point
towards new training loss functions that could enable better generalization.
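To make the margin notion above concrete, the sketch below computes a first-order approximation of each training point's distance to the decision boundary at the input layer and summarizes the resulting distribution with a few statistics. This is a minimal sketch assuming a PyTorch classifier; the function name, the normalization by the feature standard deviation, and the quartile summary are illustrative assumptions, not the paper's exact construction.

```python
# First-order margin approximation at the input layer:
#   margin(x) ≈ (f_y(x) - f_j(x)) / ||∇_x (f_y - f_j)||_2,
# where y is the true class and j the highest-scoring competing class.
# `model`, the normalization, and the summary statistics are assumptions for illustration.
import torch

def approximate_margins(model, inputs, labels):
    """Return one signed margin estimate per example (input-layer margins)."""
    margins = []
    for x, y in zip(inputs, labels):
        x = x.unsqueeze(0).clone().requires_grad_(True)
        logits = model(x).squeeze(0)
        # Highest-scoring class other than the true label.
        masked = logits.detach().clone()
        masked[y] = float("-inf")
        j = masked.argmax()
        diff = logits[y] - logits[j]
        (grad,) = torch.autograd.grad(diff, x)
        # Linearized distance to the decision boundary between classes y and j.
        margins.append((diff / (grad.norm() + 1e-12)).item())
    return torch.tensor(margins)

# Example usage (assumed names): normalize margins for scale independence and
# summarize the distribution in log space, which could then feed a simple
# regression model of the generalization gap.
# margins = approximate_margins(model, images, targets)
# normalized = margins / images.flatten(1).std()  # scale normalization (assumed form)
# stats = torch.quantile(torch.log(normalized.clamp_min(1e-12)),
#                        torch.tensor([0.25, 0.5, 0.75]))
```

The same construction can be repeated at intermediate layers (treating a hidden representation as the "input"), which is the multi-layer aspect the abstract highlights.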

Research Areas