Morteza Zadimoghaddam

I am a staff research scientist (L6) at Google's Zurich office in Switzerland. Previously, I spent four years at Google Cambridge, two years in the same Zurich office, and four years in the New York office, where I started my career at Google in January 2014. Before Google, I completed my PhD in computer science at MIT (CSAIL) under the supervision of Professor Erik D. Demaine. I work on applying optimization techniques to practical problems in order to design provably efficient algorithms; in particular, I apply infrastructure optimization methods to save computational resources at scale. On the research side, I am interested in submodular optimization and its applications to large-scale data mining and machine learning problems.
Authored Publications
    Deletion Robust Non-Monotone Submodular Maximization over Matroids
    Paul Duetting
    Federico Fusco
    Ashkan Norouzi Fard
    Journal of Machine Learning Research, 26 (2025), pp. 1-28
    Maximizing a submodular function is a fundamental task in machine learning, and in this paper we study the deletion robust version of the problem under the classic matroid constraint. Here the goal is to extract a small summary of the dataset that contains a high-value independent set even after an adversary deletes some elements. We present constant-factor approximation algorithms whose space complexity depends on the rank $k$ of the matroid and the number $d$ of deleted elements. In the centralized setting we present a $(4.597+O(\epsilon))$-approximation algorithm with summary size $O(\frac{k+d}{\epsilon^2}\log \frac{k}{\epsilon})$, which improves to a $(3.582+O(\epsilon))$-approximation with summary size $O(k + \frac{d}{\epsilon^2}\log \frac{k}{\epsilon})$ when the objective is monotone. In the streaming setting we provide a $(9.435 + O(\epsilon))$-approximation algorithm with summary size and memory $O(k + \frac{d}{\epsilon^2}\log \frac{k}{\epsilon})$; the approximation factor improves to $(5.582+O(\epsilon))$ in the monotone case.
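    As a rough illustration of the deletion robust summary idea, the following hedged sketch builds an oversized greedy summary of about $k+d$ elements and re-optimizes over the survivors after deletions. It assumes a simple cardinality (uniform matroid) constraint and a toy coverage objective; all function names are hypothetical, and this is not the paper's algorithm.

```python
# Hedged toy: oversized greedy summary + re-optimization after deletions.
# Assumes a cardinality (uniform matroid) constraint and a toy coverage
# objective; not the paper's algorithm.

def greedy(elements, f, k):
    """Standard greedy: repeatedly add the element with the largest marginal gain."""
    solution = []
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in elements:
            if e in solution:
                continue
            gain = f(solution + [e]) - f(solution)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            break
        solution.append(best)
    return solution

def deletion_robust_summary(elements, f, k, d):
    """Summary with slack: keep k + d greedily chosen elements."""
    return greedy(elements, f, k + d)

def solve_after_deletions(summary, deleted, f, k):
    """After the adversary deletes elements, re-optimize over the survivors."""
    survivors = [e for e in summary if e not in deleted]
    return greedy(survivors, f, k)

# Toy coverage objective: f(S) = number of items covered by the sets in S.
sets = {"a": {1, 2}, "b": {2, 3}, "c": {4}, "d": {1, 4, 5}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
summary = deletion_robust_summary(list(sets), f, k=2, d=1)
print(solve_after_deletions(summary, deleted={"d"}, f=f, k=2))
```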
    The Cost of Consistency: Submodular Maximization with Constant Recourse
    Paul Duetting
    Federico Fusco
    Ashkan Norouzi Fard
    Ola Svensson
    Proceedings of the 57th Annual ACM Symposium on Theory of Computing (2025), 1406–1417
    In this work, we study online submodular maximization and how the requirement of maintaining a stable solution impacts the approximation. In particular, we seek bounds on the best possible approximation ratio attainable when the algorithm is allowed to make at most a constant number of updates per step. We show a tight information-theoretic bound of $2/3$ for general monotone submodular functions and an improved (also tight) bound of $3/4$ for coverage functions. Since both of these bounds are attained by non-polynomial-time algorithms, we also give a polynomial-time randomized algorithm that achieves a $0.51$-approximation. Combined with an information-theoretic hardness of $1/2$ for deterministic algorithms from prior work, our results show a separation between deterministic and randomized algorithms, both information-theoretically and for polynomial-time algorithms.
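    The recourse of an online algorithm is simply how many elements change between consecutive solutions; a tiny helper (illustrative, not code from the paper) makes the "constant number of updates per step" requirement concrete.

```python
# Recourse between consecutive solutions: the number of inserted plus
# removed elements. "Constant recourse" means this stays O(1) per step.
# Illustrative helper, not code from the paper.

def recourse(prev_solution, next_solution):
    return len(set(prev_solution) ^ set(next_solution))  # symmetric difference size

assert recourse({"a", "b"}, {"a", "c"}) == 2  # drop "b", add "c"
```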
    Consistent Submodular Maximization
    Paul Duetting
    Federico Fusco
    Ashkan Norouzi Fard
    Proceedings of the 41st International Conference on Machine Learning, PMLR (2024), pp. 11979-11991
    Maximizing monotone submodular functions under cardinality constraints is a classic algorithmic problem with several applications in data mining and machine learning. In this paper we study this problem in a dynamic setting with consistency constraints. In this setting, elements arrive in a streaming fashion, and one is interested in maintaining a constant approximation to the optimal solution while keeping the solution stable (i.e., the number of changes between two consecutive solutions is bounded). We provide several algorithms in this setting with different trade-offs between consistency and approximation quality. We also complement our theoretical results with an experimental analysis showing the effectiveness of our algorithms on real-world instances.
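    For intuition, a classic swap-based streaming rule already exhibits the consistency/approximation trade-off studied here: each arrival changes the solution by at most one swap, and a larger swap threshold means fewer changes but a weaker approximation. The sketch below is a hedged toy under a cardinality constraint, not one of the paper's algorithms; the threshold factor of 2.0 is an illustrative choice.

```python
# Hedged toy: a swap-based streaming rule under a cardinality constraint.
# Each arrival changes the solution by at most one swap; raising `factor`
# trades approximation quality for fewer changes. Not the paper's algorithm.

def consistent_stream(stream, f, k, factor=2.0):
    solution, changes = [], 0
    for e in stream:
        gain = f(solution + [e]) - f(solution)
        if len(solution) < k:
            if gain > 0:
                solution.append(e)
                changes += 1
        else:
            # Current element of least (approximate) marginal value.
            def marginal(x):
                return f(solution) - f([y for y in solution if y != x])
            cheapest = min(solution, key=marginal)
            # Swap only if the newcomer is substantially better.
            if gain >= factor * marginal(cheapest):
                solution.remove(cheapest)
                solution.append(e)
                changes += 1
    return solution, changes
```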
    Deletion Robust Submodular Maximization over Matroids
    Paul Duetting
    Federico Fusco
    Ashkan Norouzi Fard
    Ola Svensson
    Proceedings of the 39th International Conference on Machine Learning, PMLR (2022)
    Maximizing a monotone submodular function is a fundamental task in machine learning. In this paper we study the deletion robust version of the problem under the classic matroid constraint. Here the goal is to extract a small summary of the dataset that contains a high-value independent set even after an adversary deletes some elements. We present constant-factor approximation algorithms whose space complexity depends on the rank $k$ of the matroid and the number $d$ of deleted elements. In the centralized setting we present a $(3.582+O(\epsilon))$-approximation algorithm with summary size $O(k + \frac{d \log k}{\epsilon^2})$. In the streaming setting we provide a $(5.582+O(\epsilon))$-approximation algorithm with summary size and memory $O(k + \frac{d \log k}{\epsilon^2})$. We complement our theoretical results with an in-depth experimental analysis showing the effectiveness of our algorithms on real-world datasets.
    Edge-Weighted Online Bipartite Matching
    Runzhou Tao
    Zhiyi Huang
    Journal of the ACM, 69 (2022), 45:1-45:35
    Online bipartite matching is one of the most fundamental problems in the online algorithms literature. Karp, Vazirani, and Vazirani (STOC 1990) introduced an elegant algorithm for the unweighted problem that achieves an optimal competitive ratio of 1 - 1/e. Aggarwal et al. (SODA 2011) later generalized their algorithm and analysis to the vertex-weighted case. Little is known, however, about the most general edge-weighted problem aside from the trivial 1/2-competitive greedy algorithm. In this paper, we present the first online algorithm that breaks the long-standing 1/2 barrier and achieves a competitive ratio of at least 0.5086. In light of the hardness result of Kapralov, Post, and Vondrák (SODA 2013), which rules out beating a 1/2 competitive ratio for the more general problem of monotone submodular welfare maximization, our result can be seen as strong evidence that edge-weighted bipartite matching is strictly easier than submodular welfare maximization in the online setting. The main ingredient in our online matching algorithm is a novel subroutine called online correlated selection (OCS), which takes a sequence of pairs of vertices as input and selects one vertex from each pair. Instead of using a fresh random bit to choose a vertex from each pair, the OCS negatively correlates decisions across different pairs and provides a quantitative measure of the level of correlation. We believe our OCS technique is of independent interest and will find further applications in other online optimization problems.
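    To give a flavor of what negative correlation buys, here is a hedged toy selector (much weaker than, and different from, the paper's OCS with its quantitative guarantee): any vertex left unselected at its last appearance is selected deterministically at its next appearance, so no vertex loses twice in a row. All names are illustrative.

```python
import random

# Toy negatively correlated selector, inspired by (but not equal to) the
# paper's online correlated selection: a vertex that was unselected at its
# last appearance is selected the next time it appears, so no vertex is
# unselected in two consecutive appearances. Illustrative only.

def toy_ocs(pairs, rng=random.random):
    lost_last_time = set()  # vertices unselected at their last appearance
    picks = []
    for u, v in pairs:
        if (u in lost_last_time) != (v in lost_last_time):
            pick = u if u in lost_last_time else v  # favor the recent loser
        else:
            pick = u if rng() < 0.5 else v          # fresh fair coin
        loser = v if pick == u else u
        lost_last_time.discard(pick)
        lost_last_time.add(loser)
        picks.append(pick)
    return picks

print(toy_ocs([("a", "b"), ("a", "c"), ("b", "c")]))
```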
    Distributed load balancing: a new framework and improved guarantees
    Allen Liu
    Binghui Peng
    Innovations in Theoretical Computer Science (2021)
    Inspired by applications in search engines and web servers, we consider a load balancing problem with a general convex objective function. In this problem, we are given a bipartite graph on a set of sources $S$ and a set of workers $W$, and the goal is to distribute the load from each source among its neighboring workers such that the total loads of the workers are as balanced as possible. We present a new distributed algorithm that works with any symmetric non-decreasing convex function for evaluating the balancedness of the workers' loads. Our algorithm computes a nearly optimal allocation of loads in $O(\log n \log^2 d/\epsilon^3)$ rounds, where $n$ is the number of nodes, $d$ is the maximum degree, and $\epsilon$ is the desired precision. If the objective is to minimize the maximum load, we modify the algorithm to obtain a nearly optimal solution in $O(\log n \log d/\epsilon^2)$ rounds. This improves on a line of algorithms that require a polynomial number of rounds in $n$ and $d$ and appear to encounter a fundamental barrier that prevents them from obtaining poly-logarithmic runtime (Berenbrink et al., 2005; Berenbrink et al., 2009; Subramanian and Scherson, 1994; Rabani et al., 1998). In our paper, we introduce a novel primal-dual approach with multiplicative weight updates that allows us to circumvent this barrier. Our algorithm is inspired by Agrawal et al. (2018) and other distributed algorithms for optimizing linear objectives, but introduces several new twists to deal with general convex objectives.
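    As a point of contrast with the paper's primal-dual approach, the following hedged sketch shows the kind of simple diffusion-style heuristic such guarantees are measured against: each round, every source shifts a fraction of its load from its most loaded neighbor to its least loaded one, driving down a convex objective such as the sum of squared worker loads. The data layout and step rule are illustrative assumptions.

```python
# Simple diffusion-style heuristic for bipartite load balancing, not the
# paper's primal-dual algorithm. Each source repeatedly moves load from
# its most loaded neighbor to its least loaded one. Names are illustrative.

def balance(neighbors, source_load, rounds=100, step=0.5):
    # Fractional assignment: share[s][w] = load of source s placed on worker w,
    # initialized by splitting each source's load evenly among its neighbors.
    share = {s: {w: source_load[s] / len(ws) for w in ws}
             for s, ws in neighbors.items()}
    for _ in range(rounds):
        load = {w: 0.0 for ws in neighbors.values() for w in ws}
        for ws in share.values():
            for w, x in ws.items():
                load[w] += x
        for s, ws in neighbors.items():
            hi = max(ws, key=lambda w: load[w])
            lo = min(ws, key=lambda w: load[w])
            # Move at most half the imbalance, bounded by what s has on hi.
            delta = step * min(share[s][hi], (load[hi] - load[lo]) / 2)
            share[s][hi] -= delta
            share[s][lo] += delta
    return share

neighbors = {"s1": ["w1", "w2"], "s2": ["w2", "w3"]}
print(balance(neighbors, {"s1": 4.0, "s2": 2.0}))
```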
    Fully Dynamic Algorithm for Constrained Submodular Optimization
    Ashkan Norouzi Fard
    Jakub Tarnawski
    Slobodan Mitrović
    NeurIPS 2020
    The task of maximizing a monotone submodular function under a cardinality constraint is at the core of many machine learning and data mining applications, including data summarization, sparse regression, and coverage problems. We study this problem in the context of fully dynamic streams, where elements can be both inserted and removed. Our main result is a randomized algorithm that maintains an efficient data structure with a poly-logarithmic amortized update time and returns a $(1/2 - \epsilon)$-approximate solution. We complement our theoretical analysis with an empirical study of the performance of our algorithm.
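    For context, the naive fully dynamic baseline simply recomputes a greedy solution over the active ground set after every update, at a cost of roughly $O(nk)$ oracle calls per query; the paper's contribution is achieving the same approximation with poly-logarithmic amortized update time. The class below is an illustrative sketch of that baseline, not the paper's data structure.

```python
# Naive fully dynamic baseline (not the paper's data structure): keep the
# active ground set under insertions/deletions and recompute greedy on
# demand. Names are illustrative.

class NaiveDynamicSubmodular:
    def __init__(self, f, k):
        self.f, self.k = f, k
        self.active = set()

    def insert(self, e):
        self.active.add(e)

    def delete(self, e):
        self.active.discard(e)

    def solution(self):
        """Recompute a greedy solution over the current active set."""
        sol = []
        for _ in range(self.k):
            gains = {e: self.f(sol + [e]) - self.f(sol)
                     for e in self.active if e not in sol}
            if not gains:
                break
            best = max(gains, key=gains.get)
            if gains[best] <= 0:
                break
            sol.append(best)
        return sol

ds = NaiveDynamicSubmodular(f=lambda S: len(set(S)), k=2)
ds.insert("a"); ds.insert("b"); ds.delete("a")
print(ds.solution())
```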