Minimax Regret for Stochastic Shortest Path

Aviv Rosenberg
Yonathan Efroni
NeurIPS 2021 (to appear)

Abstract

We study the Stochastic Shortest Path (SSP) problem, in which an agent has to reach a goal state with minimum total expected cost.
In the learning formulation of the problem, the agent has no prior knowledge about the costs and dynamics of the model.
She interacts with the model for $K$ episodes and has to minimize her regret.
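As a point of reference, a standard formalization of the regret in this setting (assuming, for concreteness, that every episode starts from a fixed initial state $s_{\mathrm{init}}$) is
\[
R_K \;=\; \sum_{k=1}^{K} C^k \;-\; K \cdot \min_{\pi} V^{\pi}(s_{\mathrm{init}}),
\]
where $C^k$ is the total cost the agent accumulates during episode $k$ and $V^{\pi}(s)$ is the expected total cost of reaching the goal from $s$ under policy $\pi$.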
In this work we show that the minimax regret for this setting is $\widetilde O(\sqrt{ (B_\star^2 + B_\star) |S| |A| K})$ where $B_\star$ is a bound on the expected cost of the optimal policy from any state, $S$ is the state space, and $A$ is the action space.
This matches the $\Omega (\sqrt{ B_\star^2 |S| |A| K})$ lower bound of Rosenberg et al. (2020) for $B_\star \ge 1$, and improves their regret bound by a factor of $\sqrt{|S|}$.
For $B_\star < 1$ we prove a matching lower bound of $\Omega (\sqrt{ B_\star |S| |A| K})$.
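To see why both lower bounds are matched, note that $\sqrt{B_\star^2 + B_\star}$ is of order $B_\star$ when $B_\star \ge 1$ and of order $\sqrt{B_\star}$ when $B_\star < 1$, so the upper bound specializes to
\[
\widetilde O\big(\sqrt{(B_\star^2 + B_\star)\,|S||A|K}\big) =
\begin{cases}
\widetilde O\big(B_\star \sqrt{|S||A|K}\big), & B_\star \ge 1,\\
\widetilde O\big(\sqrt{B_\star\,|S||A|K}\big), & B_\star < 1,
\end{cases}
\]
matching the corresponding lower bound in each regime up to logarithmic factors.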
Our algorithm is based on a novel reduction from SSP to finite-horizon MDPs.
To that end, we provide an algorithm for the finite-horizon setting whose leading term in the regret depends polynomially on the expected cost of the optimal policy and only logarithmically on the horizon.
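To illustrate how such a reduction can be organized, the sketch below breaks each SSP episode into intervals of at most $H$ steps and feeds each interval to a finite-horizon learner. This is only a schematic sketch under assumed `env` and `learner` interfaces, not the paper's exact construction:

```python
# Schematic sketch of an SSP-to-finite-horizon reduction. The env/learner
# interfaces and the horizon H are illustrative assumptions, not the
# paper's construction.
def run_ssp_with_finite_horizon_learner(env, learner, K, H):
    """Interact for K SSP episodes, treating each block of at most H steps
    as one episode of a finite-horizon MDP optimized by the learner."""
    for _ in range(K):
        s = env.reset()              # start of a new SSP episode
        done = False
        while not done:              # loop until the goal state is reached
            learner.start_interval(s)
            for h in range(H):       # one finite-horizon "episode"
                a = learner.act(s, h)
                s, cost, done = env.step(a)
                learner.observe(s, a, cost, h)
                if done:             # goal reached; interval ends early
                    break
            learner.end_interval()   # learner updates its estimates
```

Intuitively, if $H$ is set large enough (roughly the optimal policy's expected time to the goal, up to logarithmic factors), an interval rarely ends before the goal is reached, which is why a finite-horizon regret bound that grows only logarithmically with the horizon is the key ingredient.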
