Jenny Hamer

I work in fundamental machine learning research areas and apply machine learning to problems in climate and weather. My research interests lie predominantly within machine learning and optimization, and include federated learning and optimization under heterogeneity; the intersection and trade-offs of fairness, robustness, and other performance metrics; unsupervised and embedding learning; and applying ML to address the climate crisis.
Authored Publications
    Simple Steps to Success: Axiomatics of Distance-Based Algorithmic Recourse
    Nicholas Perello
    Jake Valladares
    Vignesh Viswanathan
    Yair Zick
    Transactions on Machine Learning Research (TMLR) (2024)
    Algorithmic recourse is a process that leverages counterfactual explanations, going beyond understanding why a system produced a given classification to providing a user with actions they can take to change their predicted outcome. Existing approaches compute such interventions, known as recourse, by identifying a set of points that satisfy some desiderata, e.g., an intervention in the underlying causal graph or minimization of a cost function. Satisfying these criteria, however, requires extensive knowledge of the underlying model structure, which is often an unrealistic amount of information in many domains. We propose a data-driven and model-agnostic framework to compute counterfactual explanations. We introduce StEP, a computationally efficient method that offers incremental steps along the data manifold, directing users toward their desired outcome. We show that StEP uniquely satisfies a desirable set of axioms. Furthermore, via a thorough empirical and theoretical investigation, we show that StEP offers provable robustness and privacy guarantees while outperforming popular methods along important metrics.
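    To make the core idea concrete, here is a minimal sketch of a distance-based recourse step: a point is nudged toward positively classified training examples, with nearer examples weighted more heavily. This is an illustration of the general technique only, not the published StEP algorithm; the function name, `step_size`, and `temperature` parameters are hypothetical choices.

    ```python
    import numpy as np

    def recourse_step(x, positive_points, step_size=0.1, temperature=1.0):
        """One illustrative distance-based recourse step (not the paper's
        exact method): move x toward positively classified examples,
        weighting closer examples more heavily."""
        diffs = positive_points - x            # vectors from x to each positive example
        dists = np.linalg.norm(diffs, axis=1)  # Euclidean distance to each example
        weights = np.exp(-dists / temperature) # closer points receive larger weight
        weights /= weights.sum()
        direction = weights @ diffs            # weighted average direction along the data
        return x + step_size * direction       # incremental step toward the desired outcome

    # Usage: a user at x takes repeated small steps toward the positive region.
    x = np.array([0.0, 0.0])
    positives = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1]])
    for _ in range(5):
        x = recourse_step(x, positives)
    ```

    Because each step depends only on distances to data points, the update stays model-agnostic, which mirrors the paper's stated goal of avoiding assumptions about the underlying model structure.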
    Communication cost is often a bottleneck in federated learning and other client-based distributed learning scenarios. To overcome this, several gradient compression and model compression algorithms have been proposed. In this work, we propose an alternative approach in which an ensemble of pre-trained base predictors is trained via federated learning. This method allows a model that might otherwise exceed the communication bandwidth and storage capacity of the clients to be learned from on-device data through federated learning. Motivated by language modeling, we prove the optimality of ensemble methods for density estimation under both standard empirical risk minimization and agnostic risk minimization. We provide communication-efficient ensemble algorithms for federated learning in which the per-round communication cost is independent of the size of the ensemble. Furthermore, unlike work on gradient compression, our approach reduces the communication cost of both server-to-client and client-to-server communication.
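    The sketch below illustrates how per-round communication can stay independent of the ensemble size: the server samples a fixed number of base predictors each round and updates only their mixture weights from client feedback. This is a hypothetical setup for illustration; the sampling scheme, the multiplicative-weights update, and all constants are assumptions, not the paper's exact algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    K, NUM_CLIENTS, ROUNDS, SAMPLE = 10, 5, 50, 3
    weights = np.full(K, 1.0 / K)  # server-side mixture weights over K pre-trained predictors

    # Placeholder data: each client's average loss for each base predictor on local data.
    client_losses = rng.uniform(0.0, 1.0, size=(NUM_CLIENTS, K))

    for _ in range(ROUNDS):
        # Server sends only SAMPLE predictors per round, so communication
        # does not grow with the ensemble size K.
        sampled = rng.choice(K, size=SAMPLE, replace=False)
        # Each client reports losses only for the sampled predictors.
        avg_loss = client_losses[:, sampled].mean(axis=0)
        # Multiplicative-weights-style update on the sampled coordinates.
        weights[sampled] *= np.exp(-0.5 * avg_loss)
        weights /= weights.sum()
    ```

    Since clients exchange information about only a constant number of predictors per round, both server-to-client and client-to-server traffic remain bounded regardless of how large the ensemble grows.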