Monotonic Calibrated Interpolated Look-Up Tables

Maya Gupta
Andrew Cotter
Jan Pfeifer
Alexander Mangylov
Wojciech Moczydlowski
Alexander van Esbroeck
Journal of Machine Learning Research (JMLR) (2016)

Abstract

Real-world machine learning applications may require functions to be interpretable and fast to evaluate, in addition to being accurate. In particular, guaranteed monotonicity of the learned function can be critical to user trust. We propose meeting these three goals for low-dimensional machine learning problems by learning flexible, monotonic functions using calibrated interpolated look-up tables. We extend the structural risk minimization framework of lattice regression to train monotonic look-up tables by solving a convex problem with appropriate linear inequality constraints. In addition, we propose jointly learning interpretable calibrations of each feature to normalize continuous features and handle categorical or missing data, though this makes the optimization problem non-convex. We address large-scale learning through parallelization and mini-batching, and propose random sampling of additive regularizer terms. Experiments on seven real-world problems with five to sixteen features and thousands to millions of training samples show the proposed monotonic functions can achieve state-of-the-art accuracy on practical problems while providing greater transparency to users.
