Economics and Electronic Commerce

Google is a global leader in electronic commerce. Not surprisingly, considerable attention is devoted to research in this area. Topics include auction design, advertising effectiveness, statistical methods, forecasting and prediction, survey research, and policy analysis, among others. This research involves interdisciplinary collaboration among computer scientists, economists, statisticians, and analytic marketing researchers, both at Google and at academic institutions around the world.

A major challenge is solving these problems at very large scale. For example, the advertising market sees billions of transactions daily, spread across millions of advertisers, presenting a unique opportunity to test and refine economic principles as applied to a very large number of interacting, self-interested parties with a myriad of objectives. At Google, research is an opportunity to translate ideas into practice, influencing how production systems are designed and used.

Recent Publications

Effective model calibration is a critical and indispensable component in developing Media Mix Models (MMMs). One advantage of Bayesian MMMs lies in their capacity to accommodate information from experiment results and the modeler's domain knowledge about ad effectiveness by setting priors on the model parameters. However, it remains unclear how, and which, Bayesian priors should be tuned for calibration. In this paper, we propose a new calibration method through model reparameterization. The reparameterized model includes Return on Ad Spend (ROAS) as a model parameter, enabling straightforward adjustment of its prior distribution to align with either experiment results or the modeler's prior knowledge. The proposed method also helps address several key challenges in combining MMMs with incrementality experiments. We use simulations to demonstrate that our approach can significantly reduce the bias and uncertainty in the resulting posterior ROAS estimates.
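The idea of reparameterizing so that a prior sits directly on ROAS can be sketched in a toy setting. This is a minimal illustration, not the paper's model: a linear response, a Gaussian prior centered on an assumed experiment readout, and a grid-approximated posterior; all names and values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear MMM: sales = base + roas * spend + noise.
# Reparameterization: the prior is placed directly on ROAS (the quantity
# an incrementality experiment measures) rather than on an internal
# regression coefficient.
n = 100
spend = rng.uniform(1.0, 5.0, n)
true_base, true_roas, sigma = 10.0, 2.0, 1.0
sales = true_base + true_roas * spend + rng.normal(0, sigma, n)

# Gaussian prior centered on a hypothetical experiment readout of 1.8 +/- 0.5.
roas_grid = np.linspace(0.0, 4.0, 401)
prior = np.exp(-0.5 * ((roas_grid - 1.8) / 0.5) ** 2)

# Gaussian log-likelihood, profiling out the intercept at its
# least-squares value for each candidate ROAS.
log_lik = np.array([
    -0.5 * np.sum((sales - (sales - r * spend).mean() - r * spend) ** 2) / sigma**2
    for r in roas_grid
])
post = prior * np.exp(log_lik - log_lik.max())
post /= post.sum()

roas_hat = float(np.sum(roas_grid * post))
print(f"posterior mean ROAS: {roas_hat:.2f}")
```

Because ROAS is itself the parameter, tightening or shifting the prior translates directly into a calibration statement about ad effectiveness.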
Data Exchange Markets via Utility Balancing
Aditya Bhaskara
Sungjin Im
Kamesh Munagala
Govind S. Sankar
WebConf (2024)
This paper explores the design of a balanced data-sharing marketplace for entities with heterogeneous datasets and machine learning models that they seek to refine using data from other agents. The goal of the marketplace is to encourage participation in data sharing in the presence of such heterogeneity. Our market design approach focuses on interim utility balance, where participants contribute and receive equitable utility from the refinement of their models. We present a market model for which we study computational complexity, solution existence, and approximation algorithms for welfare maximization and core stability. Finally, we support our theoretical insights with simulations on a mean estimation task inspired by road traffic delay estimation.
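A toy version of the mean-estimation setting shows why balancing matters. This is an illustration with invented numbers, not the authors' experiment: each agent holds some i.i.d. samples of a common quantity, and its utility from joining the exchange is the drop in its estimator's variance once all samples are pooled.

```python
# Each agent's estimator is a sample mean; pooling n samples gives
# variance noise_var / n. Agent names and counts are illustrative.
sample_counts = {"A": 10, "B": 40, "C": 200}
noise_var = 4.0
total = sum(sample_counts.values())  # 250 pooled samples

def est_variance(n):
    # Variance of the sample mean from n i.i.d. samples.
    return noise_var / n

gains = {agent: est_variance(n) - est_variance(total)
         for agent, n in sample_counts.items()}
for agent, gain in gains.items():
    print(f"agent {agent}: utility gain {gain:.3f}")
```

Every agent gains, but the gains are highly unequal: data-poor agents benefit far more than data-rich ones, which is exactly the heterogeneity an interim-utility-balanced market must contend with.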
Modeling Recommender Ecosystems: Research Challenges at the Intersection of Mechanism Design, Reinforcement Learning and Generative Models
Martin Mladenov
Proceedings of the 38th Annual AAAI Conference on Artificial Intelligence (AAAI-24), Vancouver (2024) (to appear)
Modern recommender systems lie at the heart of complex ecosystems that couple the behavior of users, content providers, advertisers, and other actors. Despite this, the focus of the majority of recommender research, and of most practical recommenders of any import, is on the local, myopic optimization of the recommendations made to individual users. This comes at a significant cost to the long-term utility that recommenders could generate for their users. We argue that explicitly modeling the incentives and behaviors of all actors in the system, and the interactions among them induced by the recommender's policy, is strictly necessary if one is to maximize the value the system brings to these actors and improve overall ecosystem "health." Doing so requires: optimization over long horizons using techniques such as reinforcement learning; making inevitable tradeoffs among the utility that can be generated for different actors using the methods of social choice; reducing information asymmetry, while accounting for incentives and strategic behavior, using the tools of mechanism design; better modeling of both user and item-provider behaviors by incorporating notions from behavioral economics and psychology; and exploiting recent advances in generative and foundation models to make these mechanisms interpretable and actionable. We propose a conceptual framework that encompasses these elements, and articulate a number of research challenges that emerge at the intersection of these different disciplines.
Learning to Bid in Contextual First Price Auctions
Ashwinkumar Badanidiyuru Varadaraja
Guru Prashanth Guruganesh
The Web Conference (2023)
In this paper, we investigate how to bid in repeated contextual first-price auctions. We consider a single bidder (the learner) who repeatedly bids in first-price auctions: at each time t, the learner observes a context x_t ∈ R^d and decides the bid based on historical information and x_t. We assume a structured linear model for the maximum bid of all other bidders, m_t = α₀ · x_t + z_t, where α₀ ∈ R^d is unknown to the learner and z_t is randomly sampled from a noise distribution F with log-concave density function f. We consider both binary feedback (the learner can only observe whether she wins or not) and full-information feedback (the learner can observe m_t) at the end of each time t. For binary feedback, when the noise distribution F is known, we propose a bidding algorithm that uses the maximum likelihood estimation (MLE) method to achieve regret at most Õ(√(log(d)·T)). Moreover, we generalize this algorithm to the setting with binary feedback where the noise distribution is unknown but belongs to a parametrized family of distributions. For full-information feedback with an unknown noise distribution, we provide an algorithm that achieves regret at most Õ(√(dT)). Our approach combines an estimator for log-concave density functions with the MLE method to learn the noise distribution F and the linear weights α₀ simultaneously. We also provide a lower bound showing that any bidding policy in a broad class must incur regret at least Ω(√T), even when the learner receives full-information feedback and F is known.
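The full-information-feedback setting can be sketched in a few lines. This is an illustration under simplifying assumptions, not the paper's algorithm: Gaussian (hence log-concave) noise, a plain least-squares fit for α₀, and a grid search for the bid that maximizes estimated expected utility.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

# Highest competing bid follows m_t = alpha0 . x_t + z_t with Gaussian
# noise. With full feedback the learner observes m_t, so it can fit
# alpha0 by least squares and shade its bid against the fitted CDF.
d, T = 3, 500
alpha0 = np.array([0.5, -0.2, 0.8])
X = rng.normal(size=(T, d))
M = X @ alpha0 + rng.normal(scale=0.1, size=T)  # observed competing bids

alpha_hat = np.linalg.lstsq(X, M, rcond=None)[0]
sigma_hat = float(np.std(M - X @ alpha_hat))

def win_prob(b, x):
    # P(m < b) under the fitted Gaussian noise model.
    return 0.5 * (1.0 + erf((b - alpha_hat @ x) / (sigma_hat * sqrt(2.0))))

# Bid for a new context: maximize expected utility (v - b) * P(win) on a grid.
x_new, v = rng.normal(size=d), 1.0
grid = np.linspace(0.0, v, 201)
best_bid = float(grid[np.argmax([(v - b) * win_prob(b, x_new) for b in grid])])

err = float(np.linalg.norm(alpha_hat - alpha0))
print(f"estimation error {err:.4f}, chosen bid {best_bid:.2f} for value {v}")
```

The binary-feedback setting is harder precisely because this regression step is unavailable: only the win/lose indicator is observed, which is what motivates the MLE approach in the paper.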
Although costs are prevalent in ad auctions, little work in auction theory studies auction design in the presence of costs in the classic settings. One reason is that most auction designs for the costless setting generalize directly to the setting with costs when bidders maximize quasi-linear utility. However, as auto-bidding becomes a major choice for advertisers in online advertising, the difference in the underlying behavior model often leads to different solutions for many well-studied auctions. In the context of ad auctions with costs, VCG achieves optimal welfare with quasi-linear utility-maximizing bidders, but has a welfare approximation guarantee of 0 with value-maximizing bidders, who follow the optimization behind common auto-bidding algorithms. In this paper, we prove that the welfare approximation guarantee of the VCG auction can be significantly improved by a minimal change: introducing cost multipliers. We note that one can use either one multiplier per auction or one multiplier per bidder, but a single global multiplier across all auctions and bidders does not work. Finally, to support our theoretical results, we conduct empirical evaluations using semi-synthetic data derived from real auction data of a major advertising platform.
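A toy single-slot example illustrates how a cost multiplier can change the allocation. This is an invented illustration of the general idea, not the paper's construction: welfare from allocating to bidder i is v_i - c_i, and a multiplier κ sets the ranking score to v_i - κ·c_i.

```python
# name: (value, cost). kappa=0 mimics an allocation that ignores cost
# (as a pure value maximizer would induce); kappa=1 accounts for it fully.
bidders = {"a": (10.0, 6.0), "b": (8.0, 1.0)}

def winner(kappa):
    return max(bidders, key=lambda i: bidders[i][0] - kappa * bidders[i][1])

def welfare(i):
    v, c = bidders[i]
    return v - c

for kappa in (0.0, 1.0):
    w = winner(kappa)
    print(f"kappa={kappa}: winner {w}, welfare {welfare(w):.0f}")
```

Ignoring cost (κ=0) hands the slot to the high-value, high-cost bidder and realizes welfare 4; weighting cost (κ=1) moves it to the low-cost bidder and realizes welfare 7, showing how a multiplier can recover welfare that cost-blind allocation destroys.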
Subset-Reach Estimation in Cross-Media Measurement
Jiayu Peng
Rieman Li
Ying Liu
(2023)
We propose two novel approaches to a critical problem in reach measurement across multiple media: how to estimate the reach of an unobserved subset of buying groups (BGs) from the observed reach of other subsets of BGs. Specifically, we propose a model-free approach and a model-based approach. The former provides a coarse estimate of the reach of any subset by leveraging the consistency among the reach of different subsets; linear programming is used to capture the reach-consistency constraints, producing an upper and a lower bound for the reach of any subset. The latter provides a point estimate of the reach of any subset. The key idea behind it is to exploit a conditional independence model: the model's groups are created by assuming each BG has either a high or a low reach probability within a group, and the weight of each group is determined by solving a non-negative least squares (NNLS) problem. In addition, we provide a framework that gives both confidence intervals and point estimates by integrating the two approaches with training-point selection and parameter fine-tuning through cross-validation. Finally, we evaluate both approaches through experiments on synthetic data.
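The conditional-independence point estimate has a compact closed form, which the following sketch evaluates with invented weights and probabilities (the real model fits these via NNLS): within each latent group g, BGs reach users independently, so reach(S) = Σ_g w_g · (1 − Π_{i∈S} (1 − p[g][i])).

```python
from math import prod

# Two latent groups with illustrative weights and per-BG reach
# probabilities: one high-reach group and one low-reach group.
weights = [0.6, 0.4]  # group weights, sum to 1
p = [
    {"bg1": 0.90, "bg2": 0.80, "bg3": 0.10},  # high-reach group
    {"bg1": 0.10, "bg2": 0.20, "bg3": 0.05},  # low-reach group
]

def reach(subset):
    # Probability a user is reached by at least one BG in the subset,
    # mixing over groups and using within-group independence.
    return sum(w * (1.0 - prod(1.0 - pg[i] for i in subset))
               for w, pg in zip(weights, p))

print(f"reach of {{bg1}}: {reach(['bg1']):.3f}")
print(f"reach of {{bg1, bg2}}: {reach(['bg1', 'bg2']):.3f}")
```

Once the weights and probabilities are fitted from observed subsets, the same formula yields a point estimate for any unobserved subset, including subsets never measured jointly.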