Guowang Li

Authored Publications
The alignment of language models (LMs) with human values increasingly relies on using other LMs as automated judges, or "autoraters". However, their reliability is limited by a foundational issue: they are trained on deterministic preference labels, forcing a single ground truth onto tasks that are often subjective, ambiguous, or nuanced. We argue that a truly reliable autorater must learn to model the full distribution of preferences defined by a target population. In this paper, we propose a general framework for calibrating probabilistic autoraters to any given preference distribution. We formalize the problem and present two learning methods tailored to different data conditions: direct supervised fine-tuning for dense, probabilistic labels, and a reinforcement learning approach for sparse, binary labels. Our empirical results show that fine-tuning autoraters with a distribution-matching objective leads to verbalized probability predictions that are better aligned with the target preference distribution, with improved calibration and significantly lower positional bias, all while preserving performance on objective tasks.
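
The abstract does not specify the exact form of the distribution-matching objective, but a common choice for aligning a predicted distribution with a target one is the KL divergence. The sketch below is a minimal illustration under that assumption; the function name, the KL formulation, and the example numbers are illustrative and are not taken from the paper.

```python
import math

def distribution_matching_loss(predicted_probs, target_probs, eps=1e-12):
    """Illustrative KL divergence from a target preference distribution to an
    autorater's verbalized probability distribution over candidate outcomes
    (e.g., P(response A preferred), P(response B preferred))."""
    loss = 0.0
    for p_target, p_pred in zip(target_probs, predicted_probs):
        if p_target > 0:
            # Each term penalizes probability mass the autorater misallocates
            # relative to the target population's preference rate.
            loss += p_target * math.log(p_target / max(p_pred, eps))
    return loss

# Hypothetical example: the target population splits 70/30 between two
# responses, but the autorater verbalizes an 85/15 split; the loss is
# positive and shrinks to zero as the predictions match the target.
print(distribution_matching_loss(predicted_probs=[0.85, 0.15],
                                 target_probs=[0.70, 0.30]))
```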