Distributed Representations of Words and Phrases and their Compositionality

Tomas Mikolov
Ilya Sutskever
Neural Information Processing Systems (NIPS) (2013)

Abstract

The recently introduced continuous Skip-gram model is an efficient method for
learning high-quality distributed vector representations that capture a large number
of precise syntactic and semantic word relationships. In this paper we present
several extensions that improve both the quality of the vectors and the training
speed. By subsampling frequent words we obtain a significant speedup and also
learn more regular word representations. We also describe a simple alternative
to the hierarchical softmax, called negative sampling.
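For concreteness, a minimal Python sketch of the two ideas (this is not the paper's original C implementation; the discard rule 1 - sqrt(t / f(w)) and the use of k noise words per observed pair follow the paper, while the function names, the numpy-based loss, and the default t = 1e-5 are illustrative assumptions):

    import math
    import random
    import numpy as np

    def keep_probability(freq, t=1e-5):
        # Subsampling rule from the paper: discard each occurrence of word w
        # with probability 1 - sqrt(t / f(w)), i.e. keep it with probability
        # sqrt(t / f(w)), where f(w) is the word's relative corpus frequency.
        return min(1.0, math.sqrt(t / freq))

    def subsample(tokens, freqs, t=1e-5):
        # Very frequent words ("the", "in", ...) are dropped most of the time,
        # which speeds up training and rebalances rare vs. frequent words.
        return [w for w in tokens if random.random() < keep_probability(freqs[w], t)]

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def negative_sampling_loss(v_in, v_out_pos, v_out_negs):
        # Negative sampling: instead of a full softmax over the vocabulary,
        # score the observed (input, output) vector pair high and a small set
        # of sampled noise words low, using k + 1 binary logistic terms.
        pos = np.log(sigmoid(v_out_pos @ v_in))
        neg = sum(np.log(sigmoid(-v @ v_in)) for v in v_out_negs)
        return -(pos + neg)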
An inherent limitation of word representations is their indifference to word order
and their inability to represent idiomatic phrases. For example, the meanings of
“Canada” and “Air” cannot be easily combined to obtain “Air Canada”. Motivated
by this example, we present a simple method for finding phrases in text, and show
that learning good vector representations for millions of phrases is possible.
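The phrase-finding method scores each adjacent word pair as score(wi, wj) = (count(wi wj) - delta) / (count(wi) * count(wj)) and joins pairs whose score exceeds a threshold. A minimal sketch, with the caveat that the threshold value below is an illustrative assumption and that, in the paper, several passes with a decreasing threshold are used so that longer phrases can form:

    from collections import Counter

    def find_phrases(tokens, delta=5, threshold=1e-4):
        # Merge adjacent word pairs that co-occur much more often than their
        # parts alone would suggest; delta discounts pairs built from very
        # infrequent words so they are not promoted to phrases by chance.
        unigrams = Counter(tokens)
        bigrams = Counter(zip(tokens, tokens[1:]))
        phrases = set()
        for (a, b), n_ab in bigrams.items():
            score = (n_ab - delta) / (unigrams[a] * unigrams[b])
            if score > threshold:
                phrases.add((a, b))
        return phrases

A detected pair such as ("new", "york") would then be replaced by a single token "new_york" before training, so the model learns one vector for the phrase.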