Scaling Wearable Foundation Models

Girish Narayanswamy
Kumar Ayush
Yuzhe Yang
Orson Xu
Shun Liao
Shyam Tailor
Jake Sunshine
Tim Althoff
Shrikanth (Shri) Narayanan
Jiening Zhan
Mark Malhotra
Shwetak Patel
Samy Abdel-Ghaffar
Daniel McDuff
2025

Abstract

Wearable sensors have become ubiquitous thanks to a variety of health tracking features. The resulting continuous and longitudinal measurements from everyday life generate large volumes of data. However, deriving scientific and actionable insights from these observations is non-trivial. Inspired by the empirical success of generative modeling, where large neural networks learn powerful representations from vast amounts of text, image, video, or audio data, we investigate the scaling properties of wearable sensor foundation models across compute, data, and model size. Using a dataset of up to 40 million hours of in-situ, per-minute heart rate, heart rate variability, accelerometer, electrodermal activity, skin temperature, and altimeter data from over 165,000 people, we create LSM, a multimodal foundation model built on the largest wearable-signals dataset with the most extensive range of sensor modalities to date. Our results establish the scaling laws of LSM for tasks such as imputation, interpolation, and extrapolation across both time and sensor modalities. Moreover, we highlight how LSM enables sample-efficient downstream learning for tasks including exercise and activity recognition.