Dueling Convex Optimization

Aadirupa Saha
ICML 2021 (to appear)

Abstract

We address the problem of convex optimization with preference (dueling) feedback. As in the traditional optimization objective, the goal is to find the optimal point with the least possible query complexity, but without the luxury of even zeroth-order feedback (the function value at the queried point). Instead, the learner can only observe a single noisy bit: a win-loss comparison for a pair of queried points, based on the difference of their function values.
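To make the feedback model concrete, the following is a minimal Python sketch of such a comparison oracle. The sigmoid link and the exact role of the noise parameter `nu` are assumptions made for illustration; the abstract only specifies that the bit is generated from the difference of the two function values.

```python
import numpy as np

def dueling_oracle(f, x, y, nu=1.0, rng=None):
    """Return a single noisy bit: 1 if x 'wins' against y, else 0."""
    if rng is None:
        rng = np.random.default_rng()
    # Smaller function value should win more often; the sigmoid link and
    # the role of nu here are assumptions made for concreteness.
    p_x_wins = 1.0 / (1.0 + np.exp(-nu * (f(y) - f(x))))
    return int(rng.random() < p_x_wins)

# Example: on a simple quadratic, the point closer to the minimizer
# wins with probability > 1/2.
f = lambda z: float(np.sum(z ** 2))
x, y = np.array([0.1, 0.0]), np.array([1.0, 1.0])
bit = dueling_oracle(f, x, y)  # 1 with high probability, since f(x) < f(y)
```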

The problem is of great practical relevance: in many real-world scenarios, such as recommender systems or learning from customer preferences, the system feedback is often restricted to a single binary bit of preference information.

We consider the problem of online convex optimization (OCO) solely through actively queried $\{0,1\}$ noisy-comparison feedback on pairs of decision points, with the objective of finding a near-optimal point (function minimizer) with the least possible number of queries.

For the non-stationary OCO setup, where the underlying convex function may change over time, we prove an impossibility result for achieving the above objective.
We then focus on the stationary OCO problem, where our main contribution lies in designing a normalized-gradient-descent-based algorithm for finding an $\epsilon$-best point (one whose function value is within $\epsilon$ of the minimum). Our algorithm is shown to yield a convergence rate of $\tilde O(\nicefrac{d\beta}{\epsilon \nu^2})$ ($\nu$ being the noise parameter) when the underlying function is $\beta$-smooth. Further, we show an improved convergence rate of just $\tilde O(\nicefrac{d\beta}{\alpha \nu^2} \log \frac{1}{\epsilon})$ when the function is additionally $\alpha$-strongly convex.
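To illustrate the algorithmic idea, below is a hypothetical Python sketch of a comparison-driven normalized-gradient-descent loop: pairs of points along random unit directions around the current iterate are compared, the resulting bits are averaged into a rough ascent-direction estimate, and only the normalized direction enters the update. This is a sketch under assumed choices (a sigmoid comparison link, and the step size `eta`, perturbation radius `delta`, and batch size `m`), not the paper's exact algorithm or tuned parameters.

```python
import numpy as np

def dueling_oracle(f, a, b, nu, rng):
    """One noisy comparison bit: 1 if a beats b (sigmoid link, assumed)."""
    return int(rng.random() < 1.0 / (1.0 + np.exp(-nu * (f(b) - f(a)))))

def estimate_ascent(f, x, delta, m, nu, rng):
    """Average m one-bit comparisons along random unit directions into a
    rough ascent-direction estimate; only its direction is used below."""
    g = np.zeros_like(x)
    for _ in range(m):
        u = rng.normal(size=x.shape[0])
        u /= np.linalg.norm(u)
        # bit = 1 means x + delta*u beat x - delta*u, i.e. u looks like
        # a descent direction, so the ascent direction is roughly -u.
        bit = dueling_oracle(f, x + delta * u, x - delta * u, nu, rng)
        g += (1 - 2 * bit) * u
    return g / m

def ngd_with_preferences(f, x0, T=300, eta=0.05, delta=0.05, m=25,
                         nu=1.0, seed=0):
    """Normalized gradient descent driven purely by comparison bits."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(T):
        g = estimate_ascent(f, x, delta, m, nu, rng)
        norm = np.linalg.norm(g)
        if norm > 0:
            x -= eta * g / norm  # normalized step: only the direction matters
    return x

# Example: approximately recovering the minimizer of a smooth,
# strongly convex quadratic from one-bit comparisons alone.
x_hat = ngd_with_preferences(lambda z: float(np.sum(z ** 2)), np.ones(4))
```

Normalizing the step is natural here because the one-bit feedback carries no magnitude information: the estimate can only be trusted for its direction, not its length.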
