Memorizing Transformers

Yuhuai Wu
Markus Rabe
Christian Szegedy
ICLR 2022

Abstract

Language models typically need to be trained or finetuned in order to acquire
new knowledge, which involves updating their weights. We instead envision
language models that can simply read and memorize new data at inference time,
thus acquiring new knowledge immediately. In this work, we extend language
models with the ability to memorize the internal representations of past inputs. We
demonstrate that an approximate kNN lookup into a non-differentiable memory of
recent (key, value) pairs improves language modeling across various benchmarks
and tasks, including generic webtext (C4), math papers (arXiv), books (PG-19),
code (GitHub), as well as formal theorems (Isabelle). We show that performance
steadily improves as we increase the size of the memory up to 262K tokens. On
benchmarks including code and mathematics, we find that the model is capable of
making use of newly defined functions and theorems at test time.
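As a rough illustration of the mechanism the abstract describes, the sketch below combines standard local attention with attention over the top-k (key, value) pairs retrieved from an external, non-differentiable memory. The names (KNNMemory, memory_augmented_attention), the fixed gate value, and the brute-force dot-product search (standing in for the approximate kNN lookup used in the paper, where the gate is learned per head) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class KNNMemory:
    """Non-differentiable store of past (key, value) pairs for one attention head."""
    def __init__(self, max_size, dim):
        self.keys = np.zeros((max_size, dim), dtype=np.float32)
        self.values = np.zeros((max_size, dim), dtype=np.float32)
        self.max_size, self.size, self.ptr = max_size, 0, 0

    def add(self, keys, values):
        # Append new (key, value) pairs, overwriting the oldest entries when full.
        for k, v in zip(keys, values):
            self.keys[self.ptr], self.values[self.ptr] = k, v
            self.ptr = (self.ptr + 1) % self.max_size
            self.size = min(self.size + 1, self.max_size)

    def lookup(self, queries, top_k):
        # Exact (brute-force) kNN by dot-product score; the paper uses an
        # approximate search, which this sketch replaces for simplicity.
        scores = queries @ self.keys[:self.size].T              # (q, size)
        idx = np.argsort(-scores, axis=-1)[:, :top_k]           # (q, top_k)
        return self.keys[:self.size][idx], self.values[:self.size][idx]

def memory_augmented_attention(q, local_k, local_v, memory, top_k=32, gate=0.5):
    """Mix local attention with attention over retrieved memories (fixed gate here)."""
    d = q.shape[-1]
    # Local (in-context) attention over the current segment.
    local_out = softmax(q @ local_k.T / np.sqrt(d)) @ local_v
    # Attention restricted to the top-k retrieved (key, value) pairs per query.
    mem_k, mem_v = memory.lookup(q, top_k)                      # (q, top_k, d)
    mem_scores = np.einsum('qd,qkd->qk', q, mem_k) / np.sqrt(d)
    mem_out = np.einsum('qk,qkd->qd', softmax(mem_scores), mem_v)
    return gate * mem_out + (1.0 - gate) * local_out

# Usage: store keys/values from earlier segments, then attend with memory.
dim = 64
mem = KNNMemory(max_size=262_144, dim=dim)
mem.add(np.random.randn(1000, dim).astype(np.float32),
        np.random.randn(1000, dim).astype(np.float32))
q = np.random.randn(8, dim).astype(np.float32)
local_k = np.random.randn(128, dim).astype(np.float32)
local_v = np.random.randn(128, dim).astype(np.float32)
print(memory_augmented_attention(q, local_k, local_v, mem).shape)  # (8, 64)
```

Because the memory is queried with a non-differentiable top-k lookup, gradients flow only through the attention over the retrieved pairs, so the memory can grow very large (e.g. 262K tokens) without increasing training cost proportionally.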

Research Areas