Randomized Fractal Expansions for Production-Scale Public Collaborative-Filtering Data Sets

Francois Belletti
John Anderson
Karthik Singaram Lakshmanan
Nicolas Mayoraz
Pankaj Kanwar
Taylor Robie
Tayo Oguntebi
Yi-fan Chen
arXiv (2019)

Abstract

Recommender system research suffers from a disconnect between the size of academic data sets and the scale of industrial production systems.
In order to bridge that gap, we propose to generate large-scale user/item interaction data sets by expanding pre-existing public data sets.
Our key contribution is a technique that expands user/item incidence matrices to large numbers of rows (users), columns (items), and non-zero values (interactions). The proposed method adapts Kronecker Graph Theory to preserve key higher-order statistical properties such as the fat-tailed distributions of user engagement and item popularity, as well as the singular value spectrum of the user/item interaction matrix.
Preserving these properties is key to building large, realistic synthetic data sets that can be used reliably to benchmark recommender systems and the systems that train them.
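The core idea can be illustrated with a minimal stochastic Kronecker expansion sketch. This is not the paper's exact randomized algorithm; the helper name, the small probability seed P, and the Bernoulli sampling scheme below are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp

def stochastic_kronecker_expand(R, P, rng=None):
    """Expand a binary user/item incidence matrix R (sparse, users x items)
    by a small probability seed P (dense, r x c).

    The Kronecker product R (x) P assigns an interaction probability to every
    candidate entry of the expanded matrix; sampling Bernoulli(p) per entry
    yields a binary incidence matrix with users*r rows and items*c columns.
    """
    rng = rng or np.random.default_rng(0)
    probs = sp.kron(sp.csr_matrix(R), sp.csr_matrix(P), format="coo")
    keep = rng.random(probs.nnz) < probs.data  # one Bernoulli draw per candidate edge
    expanded = sp.coo_matrix(
        (np.ones(int(keep.sum())), (probs.row[keep], probs.col[keep])),
        shape=probs.shape,
    )
    return expanded.tocsr()

# Toy usage: a 3x4 incidence matrix expanded by a 2x2 seed -> 6x8 matrix.
R = sp.csr_matrix(np.array([[1, 0, 1, 0],
                            [0, 1, 0, 0],
                            [1, 1, 0, 1]], dtype=float))
P = np.array([[0.9, 0.4],
              [0.4, 0.1]])
R_big = stochastic_kronecker_expand(R, P)
print(R_big.shape, R_big.nnz)
```

Choosing a seed P whose entries decay away from the diagonal is what lets a stochastic Kronecker construction reproduce fat-tailed degree distributions as the matrix is expanded.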

We further apply our stochastic expansion algorithm to the binarized MovieLens 20M data set, which comprises 20M interactions between 27K movies and 138K users.
The resulting expanded data set has 1.2B ratings, 2.2M users, and 855K items, and can be scaled up or down as needed.
Furthermore, we present collaborative filtering experiments demonstrating that the generated synthetic data yields valuable insights for machine learning at scale in recommender systems.
We provide code pointers to reproduce our data and our experiments.
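As a starting point, the binarized MovieLens 20M input can be built as a sparse incidence matrix along the following lines. This is a hedged sketch: the file path and the convention that any rating counts as an interaction are assumptions, not the paper's stated preprocessing.

```python
import pandas as pd
import scipy.sparse as sp

# Assumes the standard MovieLens 20M layout: ratings.csv with columns
# userId, movieId, rating, timestamp (the path below is illustrative).
ratings = pd.read_csv("ml-20m/ratings.csv")
ratings["seen"] = 1.0  # binarize: treat every observed rating as an interaction

# Map raw ids to contiguous row/column indices.
users = ratings["userId"].astype("category").cat.codes.to_numpy()
items = ratings["movieId"].astype("category").cat.codes.to_numpy()

# ~138K x ~27K binary incidence matrix with ~20M non-zeros,
# i.e. the input that the stochastic expansion would scale up.
R = sp.csr_matrix((ratings["seen"].to_numpy(), (users, items)))
print(R.shape, R.nnz)
```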