Pasin Manurangsi
Authored Publications
Differentially Private Insights into AI Use
Daogao Liu
Pritish Kamath
Alexander Knop
Adam Sealfon
Da Yu
Chiyuan Zhang
Conference on Language Modeling (COLM), 2025
Abstract
We introduce Urania, a novel framework for generating insights about LLM chatbot interactions with rigorous differential privacy (DP) guarantees. The framework employs a private clustering mechanism and innovative keyword extraction methods, including frequency-based, TF-IDF-based, and LLM-guided approaches. By leveraging DP tools such as clustering, partition selection, and histogram-based summarization, Urania provides end-to-end privacy protection. Our evaluation assesses lexical and semantic content preservation, pair similarity, and LLM-based metrics, benchmarking against a non-private method inspired by CLIO (Tamkin et al., 2024). Moreover, we develop a simple empirical privacy evaluation that demonstrates the enhanced robustness of our DP pipeline. The results show the framework’s ability to extract meaningful conversational insights while maintaining stringent user privacy, effectively balancing data utility with privacy preservation.
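As a rough illustration of the histogram-based summarization step (a toy sketch, not the Urania pipeline itself), the Python snippet below builds a keyword-frequency histogram with bounded per-conversation contributions, adds Laplace noise, and applies a release threshold in the spirit of DP partition selection; the contribution cap, epsilon, and threshold are placeholder values.

```python
import random
from collections import Counter

def dp_keyword_histogram(conversations, epsilon=1.0, max_keywords=3, threshold=5.0):
    """Toy sketch of a noisy keyword histogram with a release threshold.

    Each conversation contributes at most `max_keywords` distinct words, so the
    L1 sensitivity of the count vector is at most `max_keywords`; each count
    gets Laplace noise of scale max_keywords / epsilon before thresholding.
    """
    counts = Counter()
    for text in conversations:
        words = sorted(set(text.lower().split()))[:max_keywords]
        counts.update(words)

    scale = max_keywords / epsilon
    noisy = {}
    for word, count in counts.items():
        # Difference of two exponentials gives a Laplace(scale) sample.
        noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
        if count + noise >= threshold:  # crude partition-selection-style cutoff
            noisy[word] = count + noise
    return noisy

print(dp_keyword_histogram(["how do I bake bread", "bake sourdough bread at home"]))
```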
Crosslingual Capabilities and Knowledge Barriers in Multilingual Large Language Models
Lynn Chua
Yangsibo Huang
Pritish Kamath
Amer Sinha
Chulin Xie
Chiyuan Zhang
COLM (2025)
Abstract
Large language models (LLMs) are typically multilingual due to pretraining on diverse multilingual corpora. But can these models relate corresponding concepts across languages, i.e., be crosslingual? This study evaluates state-of-the-art LLMs on inherently crosslingual tasks. We observe that while these models show promising surface-level crosslingual abilities on machine translation and embedding space analyses, they struggle with deeper crosslingual knowledge transfer, revealing a crosslingual knowledge barrier in both general (MMLU benchmark) and domain-specific (Harry Potter quiz and TOFU benchmark) contexts. Since simple inference-time mitigation methods offer only limited improvement, we propose fine-tuning of LLMs on mixed-language data, which effectively reduces these gaps, even when using out-of-domain datasets like WikiText. Our findings suggest the need for explicit optimization to unlock the full crosslingual potential of LLMs. Our code is available at https://github.com/google-research/crosslingual-knowledge-barriers.
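A minimal, hypothetical sketch of the kind of mixed-language probe described above: the same multiple-choice question is posed with the question and the answer options in different languages. The strings, language codes, and prompt format are illustrative assumptions, not the paper's benchmarks.

```python
import itertools

# Tiny illustrative question bank: the same fact rendered in two languages.
# These strings and language codes are placeholders, not the paper's data.
QUESTIONS = {
    "en": "Which planet is closest to the Sun?",
    "fr": "Quelle planète est la plus proche du Soleil ?",
}
OPTIONS = {
    "en": ["Mercury", "Venus", "Earth", "Mars"],
    "fr": ["Mercure", "Vénus", "Terre", "Mars"],
}

def mixed_language_prompt(q_lang: str, opt_lang: str) -> str:
    """Question in one language, answer options in another."""
    lines = [QUESTIONS[q_lang]]
    for label, option in zip("ABCD", OPTIONS[opt_lang]):
        lines.append(f"{label}. {option}")
    lines.append("Answer:")
    return "\n".join(lines)

# Enumerate all language pairings (including the monolingual baselines).
for q_lang, opt_lang in itertools.product(QUESTIONS, OPTIONS):
    print(f"--- question={q_lang}, options={opt_lang} ---")
    print(mixed_language_prompt(q_lang, opt_lang))
```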
How Unique is Whose Web Browser? The role of demographics in browser fingerprinting
Pritish Kamath
Robin Lassonde
2025
Abstract
Web browser fingerprinting can be used to identify and track users across the Web, even without cookies, by collecting attributes from users' devices to create unique "fingerprints". This technique and resulting privacy risks have been studied for over a decade. Yet further research is limited because prior studies did not openly publish their data. Additionally, data in prior studies had biases and lacked user demographics.
Here we publish a first-of-its-kind open dataset that includes browser attributes together with users' demographics, collected from 8,400 US study participants with their informed consent. As part of data collection, we also ran an experiment to study what affects users' likelihood of sharing browser data for open research, in order to inform future data collection efforts, with survey responses from a total of 12,461 participants. Female participants were significantly less likely to share their browser data, as were participants who were shown the browser data we asked to collect.
In addition, we demonstrate how fingerprinting risks differ across demographic groups. For example, we find that lower-income users are more at risk, and that as users' age increases, they are both more likely to be concerned about fingerprinting and more likely to be at real risk of fingerprinting. Furthermore, we demonstrate an overlooked risk: user demographics, such as gender, age, income level, ethnicity, and race, can be inferred from browser attributes commonly used for fingerprinting, and we identify which browser attributes contribute most to this risk.
Overall, we show the important role of user demographics in ongoing work to assess fingerprinting risks and improve user privacy, with findings that can inform future privacy-enhancing browser development. The dataset and data collection tool we openly publish can be used to study research questions not addressed in this work.
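To make the notion of fingerprinting risk concrete, here is a small sketch (under assumed attribute names and toy records, not the published dataset's schema) that combines a few browser attributes into a fingerprint and reports, per demographic group, the fraction of users whose fingerprint is unique.

```python
from collections import Counter, defaultdict

# Toy records; attribute names and values are illustrative only.
records = [
    {"user_agent": "UA1", "screen": "1920x1080", "timezone": "-5", "age_group": "18-29"},
    {"user_agent": "UA1", "screen": "1920x1080", "timezone": "-5", "age_group": "30-44"},
    {"user_agent": "UA2", "screen": "1366x768",  "timezone": "-8", "age_group": "45-64"},
]

FINGERPRINT_ATTRS = ("user_agent", "screen", "timezone")

def fingerprint(record):
    return tuple(record[a] for a in FINGERPRINT_ATTRS)

# Size of each anonymity set (how many users share a fingerprint).
set_sizes = Counter(fingerprint(r) for r in records)

# Fraction of users with a unique fingerprint, broken down by age group.
unique_by_group = defaultdict(lambda: [0, 0])  # group -> [unique, total]
for r in records:
    unique, total = unique_by_group[r["age_group"]]
    unique_by_group[r["age_group"]] = [
        unique + (set_sizes[fingerprint(r)] == 1),
        total + 1,
    ]

for group, (unique, total) in sorted(unique_by_group.items()):
    print(f"{group}: {unique}/{total} unique fingerprints")
```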
VaultGemma
Lynn Chua
Prem Eruvbetine
Chiyuan Zhang
Thomas Mesnard
Borja De Balle Pigem
Daogao Liu
Amer Sinha
Pritish Kamath
Yangsibo Huang
Christopher A. Choquette-Choo
George Kaissis
Armand Joulin
Da Yu
Ryan McKenna
arXiv (2025)
Abstract
In this work, we present VaultGemma 1B, a model from the Gemma family fully trained with differential privacy. VaultGemma 1B is a 1-billion-parameter pretrained model based on the Gemma 2 series of models and uses the same training dataset. We will be releasing a tech report and the weights of this model.
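For context on what training with differential privacy typically involves, the sketch below shows a standard DP-SGD step (per-example gradient clipping plus calibrated Gaussian noise) on a toy linear model in NumPy. This is not VaultGemma's training code; the clipping norm, noise multiplier, and learning rate are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X, y, clip_norm=1.0, noise_multiplier=1.0, lr=0.1):
    """One DP-SGD step for least-squares regression (illustrative only)."""
    # Per-example gradients of 0.5 * (x.w - y)^2 are (x.w - y) * x.
    residuals = X @ w - y                       # shape (batch,)
    per_example_grads = residuals[:, None] * X  # shape (batch, dim)

    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale

    # Sum, add Gaussian noise calibrated to the clipping norm, then average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_mean_grad = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * noisy_mean_grad

# Toy usage.
X = rng.normal(size=(32, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=32)
w = np.zeros(4)
for _ in range(100):
    w = dp_sgd_step(w, X, y)
print(w)
```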
Abstract
When dividing items among agents, two of the most widely studied fairness notions are envy-freeness and proportionality. We consider a setting where m chores are allocated to n agents and the disutility of each chore for each agent is drawn from a probability distribution. We show that an envy-free allocation exists with high probability provided that m ≥ 2n, and moreover, m must be at least n + Θ(n) in order for the existence to hold.
On the other hand, we prove that a proportional allocation is likely to exist as long as m = ω(1), and this threshold is asymptotically tight. Our results reveal a clear contrast with the allocation of goods, where a larger number of items is necessary to ensure existence for both notions.
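A minimal simulation of this setting, assuming uniform [0, 1] disutilities and a naive allocation rule (each chore goes to the agent who dislikes it least): it draws a random instance and checks the envy-freeness and proportionality definitions for chores. The allocation rule is only illustrative and is not the existence argument from the paper.

```python
import random

def random_instance(n_agents, n_chores, seed=0):
    rng = random.Random(seed)
    # disutility[i][c]: agent i's disutility for chore c, drawn i.i.d. U[0, 1].
    return [[rng.random() for _ in range(n_chores)] for _ in range(n_agents)]

def greedy_allocation(disutility):
    """Assign each chore to the agent with the lowest disutility for it
    (ties broken by index); purely an illustrative allocation rule."""
    n_agents, n_chores = len(disutility), len(disutility[0])
    bundles = [[] for _ in range(n_agents)]
    for c in range(n_chores):
        i = min(range(n_agents), key=lambda a: disutility[a][c])
        bundles[i].append(c)
    return bundles

def is_envy_free(disutility, bundles):
    cost = lambda i, b: sum(disutility[i][c] for c in b)
    return all(cost(i, bundles[i]) <= cost(i, bundles[j])
               for i in range(len(bundles)) for j in range(len(bundles)))

def is_proportional(disutility, bundles):
    n = len(bundles)
    cost = lambda i, b: sum(disutility[i][c] for c in b)
    return all(cost(i, bundles[i]) <= cost(i, range(len(disutility[0]))) / n
               for i in range(n))

d = random_instance(n_agents=4, n_chores=12)
alloc = greedy_allocation(d)
print("envy-free:", is_envy_free(d, alloc), "proportional:", is_proportional(d, alloc))
```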
Improved FPT Approximation Scheme and Approximate Kernel for Biclique-Free Max k-Weight SAT: Greedy Strikes Back
Theoretical Computer Science, 1028 (2025)
Abstract
In the Max k-Weight SAT (aka Max SAT with Cardinality Constraint) problem, we are given a CNF formula with n variables and m clauses together with a positive integer k. The goal is to find an assignment that sets at most k variables to one and satisfies as many clauses as possible. Recently, Jain et al. (SODA 2023) gave an FPT approximation scheme (FPT-AS) with running time 2^O((dk/ε)^d) * (n + m)^O(1) for Max k-Weight SAT when the incidence graph is K_{d,d}-free. They asked whether a polynomial-size approximate kernel exists. In this work, we answer this question positively by giving a (1 − ε)-approximate kernel with (dk/ε)^O(d) variables. This also implies an improved FPT-AS with running time (dk/ε)^O(dk) * (n + m)^O(1) for the problem. Our approximate kernel is based mainly on a couple of greedy strategies together with a sunflower-lemma-style reduction rule.
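To give a feel for the greedy flavor of the approach, here is a simple greedy baseline for Max k-Weight SAT that repeatedly sets to one the variable yielding the largest gain in satisfied clauses; this is a generic illustration, not the paper's kernelization or its FPT-AS.

```python
def satisfied(clauses, ones):
    """Count clauses satisfied when exactly the variables in `ones` are 1.

    A clause is a list of non-zero ints: +v means literal x_v, -v means not x_v.
    """
    return sum(
        any((lit > 0 and lit in ones) or (lit < 0 and -lit not in ones)
            for lit in clause)
        for clause in clauses
    )

def greedy_max_k_weight_sat(clauses, n_vars, k):
    """Greedily pick at most k variables to set to one (illustrative baseline)."""
    ones = set()
    for _ in range(k):
        best_var, best_gain = None, 0
        current = satisfied(clauses, ones)
        for v in range(1, n_vars + 1):
            if v in ones:
                continue
            gain = satisfied(clauses, ones | {v}) - current
            if gain > best_gain:
                best_var, best_gain = v, gain
        if best_var is None:  # no variable improves the objective
            break
        ones.add(best_var)
    return ones, satisfied(clauses, ones)

# (x1 or x2) and (not x1 or x3) and (x2 or not x3)
clauses = [[1, 2], [-1, 3], [2, -3]]
print(greedy_max_k_weight_sat(clauses, n_vars=3, k=1))
```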
Improved Lower Bound for Differentially Private Facility Location
Information Processing Letters, 187 (2025)
Abstract
We consider the differentially private (DP) facility location problem in the so-called super-set output setting proposed by Gupta et al. [GLM+10]. The best known expected approximation ratio for an ε-DP algorithm is O(log n / √ε), due to Cohen-Addad et al. [CEF+22], where n denotes the size of the metric space, while the best known lower bound is Ω(1/√ε) [EGLW19].
In this short note, we give a lower bound of Ω(min{log n, √(log n/ε)}) on the expected approximation ratio of any ε-DP algorithm, which is the first evidence that the approximation ratio has to grow with the size of the metric space.
On the Differential Privacy and Interactivity of Privacy Sandbox Reports
Charlie Harrison
Pritish Kamath
Alexander Knop
Ethan Leeman
Vikas Sahu
PETS (2025)
Abstract
The Privacy Sandbox initiative from Google includes APIs for enabling privacy-preserving advertising functionalities as part of the effort to limit third-party cookies. In particular, the Private Aggregation API (PAA) and the Attribution Reporting API (ARA) can be used for ad measurement while providing different guardrails for safeguarding user privacy, including a framework for satisfying differential privacy (DP). In this work, we provide an abstract model for analyzing the privacy of these APIs and show that they satisfy a formal DP guarantee under certain assumptions. Our analysis handles the case where both the queries and database can change interactively based on previous responses from the API.
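A toy sketch of the kind of mechanism covered by such an analysis (not the actual PAA/ARA implementations): per-user contributions to a histogram of aggregation keys are capped, summed, and released with Laplace noise calibrated to the cap. The bucket keys, cap, and epsilon are illustrative assumptions.

```python
import random
from collections import Counter

def aggregate_with_dp(user_contributions, contribution_cap=3, epsilon=1.0):
    """Toy aggregate release: cap per-user contributions, sum, add Laplace noise.

    `user_contributions` maps a user id to a list of (bucket_key, value) pairs.
    Capping the number of contributions per user bounds the L1 sensitivity by
    contribution_cap * max_value (values are clipped to at most 1 here).
    """
    totals = Counter()
    for _, pairs in user_contributions.items():
        for key, value in pairs[:contribution_cap]:
            totals[key] += min(value, 1.0)

    scale = contribution_cap / epsilon  # Laplace scale for the bounded sensitivity
    noisy = {}
    for key, total in totals.items():
        noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
        noisy[key] = total + noise
    return noisy

reports = {
    "user_a": [("campaign_1", 1.0), ("campaign_2", 1.0)],
    "user_b": [("campaign_1", 1.0)],
}
print(aggregate_with_dp(reports))
```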
Abstract
We study the allocation of indivisible goods under conflicting constraints, represented by a graph. In this framework, vertices correspond to goods and edges correspond to conflicts between pairs of goods, and each agent is allocated an independent set in the graph. In recent work, Kumar et al. [2024] showed that a maximal EF1 allocation exists for interval graphs and two agents with monotone valuations. We significantly extend this result by establishing that a maximal EF1 allocation exists on *any graph* for two agents with monotone valuations. To compute such an allocation, we present a polynomial-time algorithm for additive valuations as well as a pseudo-polynomial-time algorithm for monotone valuations. Moreover, we complement our findings with a counterexample demonstrating that a maximal EF1 allocation may not exist for three agents with monotone valuations. Additionally, we establish NP-hardness of determining whether such an allocation exists for every fixed number n of agents.
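To spell out the objects involved, the sketch below checks, on a toy instance, the two conditions an allocation must satisfy in this model: each bundle is an independent set in the conflict graph, and the allocation is EF1 for two agents with additive valuations. The graph, valuations, and bundles are made up for illustration; this is not the paper's algorithm.

```python
def is_independent_set(edges, bundle):
    bundle = set(bundle)
    return not any(u in bundle and v in bundle for u, v in edges)

def ef1_for_two_agents(valuations, bundle_a, bundle_b):
    """EF1 with additive valuations: each agent stops envying the other
    after removing some single good from the other's bundle."""
    def value(agent, bundle):
        return sum(valuations[agent][g] for g in bundle)

    def ok(agent, own, other):
        if value(agent, own) >= value(agent, other):
            return True
        return any(value(agent, own) >= value(agent, other) - valuations[agent][g]
                   for g in other)

    return ok(0, bundle_a, bundle_b) and ok(1, bundle_b, bundle_a)

# Path graph on goods 0-1-2-3; conflicting goods cannot go to the same agent.
edges = [(0, 1), (1, 2), (2, 3)]
valuations = [{0: 3, 1: 1, 2: 2, 3: 1}, {0: 1, 1: 4, 2: 1, 3: 2}]
bundle_a, bundle_b = [0, 2], [1, 3]
print(is_independent_set(edges, bundle_a),
      is_independent_set(edges, bundle_b),
      ef1_for_two_agents(valuations, bundle_a, bundle_b))
```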
Abstract
We study differential privacy (DP) in a multi-party setting where each party only trusts a (known) subset of the other parties with its data. Specifically, given a trust graph where vertices correspond to parties and neighbors are mutually trusting, we give a DP algorithm for aggregation with a much better privacy-utility trade-off than in the well-studied local model of DP (where each party trusts no other party). We further study a robust variant where each party trusts all but an unknown subset of at most t of its neighbors (where t is a given parameter), and give an algorithm for this setting. We complement our algorithms with lower bounds, and discuss implications of our work to other tasks in private learning and analytics.
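A simplified illustration of the contrast with the local model (under an assumed star-like trust structure, not the paper's algorithm): in the local model every party noises its own value, while with a trust graph each party can hand its raw value to a trusted neighbor, so far fewer noise draws enter the final sum.

```python
import random

def laplace(scale):
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def local_dp_sum(values, epsilon):
    """Local model: every party adds its own Laplace noise (values in [0, 1])."""
    return sum(v + laplace(1 / epsilon) for v in values)

def trust_graph_sum(values, aggregator_of, epsilon):
    """Simplified trust-graph aggregation: each party sends its raw value to a
    trusted neighbor (its 'aggregator'); each aggregator releases a noisy subtotal."""
    subtotals = {}
    for party, value in values.items():
        agg = aggregator_of[party]
        subtotals[agg] = subtotals.get(agg, 0.0) + value
    return sum(total + laplace(1 / epsilon) for total in subtotals.values())

values = {p: random.random() for p in range(100)}
# Every party trusts party (p // 10) * 10, giving 10 aggregators instead of 100
# independent noise draws; the noise in the final sum shrinks accordingly.
aggregator_of = {p: (p // 10) * 10 for p in values}
print(local_dp_sum(values.values(), epsilon=1.0),
      trust_graph_sum(values, aggregator_of, epsilon=1.0),
      sum(values.values()))
```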