Publications
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.
1 - 15 of 542 publications
Security Signals: Making Web Security Posture Measurable At Scale
David Dworken
Artur Janc
Santiago (Sal) Díaz
Workshop on Measurements, Attacks, and Defenses for the Web (MADWeb)
Abstract
The area of security measurability is gaining increased attention, with a wide range of organizations calling for the development of scalable approaches for assessing the security of software systems and infrastructure. In this paper, we present our experience developing Security Signals, a comprehensive system providing security measurability for web services, deployed in a complex application ecosystem of thousands of web services handling traffic from billions of users. The system collects security-relevant information from production HTTP traffic at the reverse proxy layer, utilizing novel concepts such as synthetic signals augmented with additional risk information to provide a holistic view of the security posture of individual services and the broader application ecosystem. This approach to measurability has enabled large-scale security improvements to our services, including prioritized rollouts of security enhancements and the implementation of automated regression monitoring. Furthermore, it has proven valuable for security research and prioritization of defensive work. Security Signals addresses shortcomings of prior web measurability proposals by tracking a comprehensive set of security properties relevant to web applications, and by extracting insights from collected data for use by both security experts and non-experts. We believe the lessons learned from the implementation and use of Security Signals offer valuable insights for practitioners responsible for web service security, potentially inspiring new approaches to web security measurability.
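As a rough illustration of the kind of per-request signal such a system can derive at the reverse proxy, the sketch below extracts a few security-relevant properties from response headers. The field names and the choice of headers are illustrative assumptions, not the production schema.

from dataclasses import dataclass

@dataclass
class SecuritySignal:
    # Illustrative per-response security posture record (hypothetical schema).
    service: str
    has_csp: bool
    has_frame_protection: bool
    cookies_all_samesite: bool

def extract_signal(service: str, headers: dict[str, str], cookies: list[str]) -> SecuritySignal:
    h = {k.lower(): v for k, v in headers.items()}
    csp = h.get("content-security-policy", "")
    return SecuritySignal(
        service=service,
        has_csp="content-security-policy" in h,
        has_frame_protection="x-frame-options" in h or "frame-ancestors" in csp,
        cookies_all_samesite=all("samesite" in c.lower() for c in cookies),
    )

# Example: such records would be aggregated per service for an ecosystem-wide view.
print(extract_signal("example-service", {"Content-Security-Policy": "frame-ancestors 'none'"}, []))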
Abstract
Storage on Android has evolved significantly over the years, with each new Android version introducing changes aimed at enhancing usability, security, and privacy. While these updates typically help with restricting app access to storage through various mechanisms, they may occasionally introduce new complexities and vulnerabilities. A prime example is the introduction of scoped storage in Android 10, which fundamentally changed how apps interact with files. While intended to enhance user privacy by limiting broad access to shared storage, scoped storage has also presented developers with new challenges and potential vulnerabilities to address. However, despite its significance for user privacy and app functionality, no systematic study has examined Android’s scoped storage in depth from a security perspective. In this paper, we present the first systematic security analysis of the scoped storage mechanism. To this end, we design and implement a testing tool, named ScopeVerif, that relies on differential analysis to uncover security issues and implementation inconsistencies in Android’s storage. Specifically, ScopeVerif takes a list of security properties and checks if there are any file operations that violate any security properties defined in the official Android documentation. Additionally, we conduct a comprehensive analysis across different Android versions as well as a cross-OEM analysis to identify discrepancies in different implementations and their security implications. Our study identifies both known and unknown issues of scoped storage. Our cross-version analysis highlights undocumented changes as well as partially fixed security loopholes across versions. Additionally, we discovered several vulnerabilities in scoped storage implementations by different OEMs. These vulnerabilities stem from deviations from the documented and correct behavior, which potentially pose security risks. The affected OEMs and Google have acknowledged our findings and offered us bug bounties in response.
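A minimal sketch of the differential-analysis idea behind such a tool, assuming a generic interface for running the same file operation on two Android builds. The function and property names are hypothetical and do not reflect ScopeVerif's actual interface.

from typing import Callable

# An "operation" reports whether a file access succeeded on a given build.
Operation = Callable[[str], bool]  # build_id -> access allowed?

def differential_check(op_name: str, operation: Operation,
                       build_a: str, build_b: str,
                       documented_allowed: bool) -> list[str]:
    """Run one file operation on two builds and report deviations from the
    documented behavior as well as inconsistencies between builds."""
    findings = []
    result_a, result_b = operation(build_a), operation(build_b)
    for build, result in ((build_a, result_a), (build_b, result_b)):
        if result != documented_allowed:
            findings.append(f"{op_name}: {build} deviates from documented behavior")
    if result_a != result_b:
        findings.append(f"{op_name}: inconsistent behavior between {build_a} and {build_b}")
    return findings

# Hypothetical usage: a probe that tries to read another app's file via scoped storage.
fake_probe = lambda build: build == "oem-build"   # stands in for running the op on a device
print(differential_check("read_other_app_media", fake_probe, "aosp-13", "oem-build",
                         documented_allowed=False))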
Abstract
Judging an action’s safety requires knowledge of the context in which the action takes place. To human agents who act in various contexts, this may seem obvious: performing an action such as email deletion may or may not be appropriate depending on the email’s content, the goal (e.g., to erase sensitive emails or to clean up trash), and the type of email address (e.g., work or personal). Unlike people, computational systems have often had only limited agency in limited contexts. Thus, manually crafted policies and user confirmation (e.g., smartphone app permissions or network access control lists), while imperfect, have sufficed to restrict harmful actions. However, with the upcoming deployment of generalist agents that support a multitude of tasks (e.g., an automated personal assistant), we argue that we must rethink security designs to adapt to the scale of contexts and capabilities of these systems. As a first step, this paper explores contextual security in the domain of agents and proposes contextual agent security (Conseca), a framework to generate just-in-time, contextual, and human-verifiable security policies.
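To make the idea concrete, the sketch below gates a single agent action on a context-specific policy. The structure and names are hypothetical and far simpler than the proposed framework, which generates such policies just in time and keeps them human-verifiable.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Context:
    account_type: str      # e.g. "work" or "personal"
    email_category: str    # e.g. "trash", "sensitive"
    user_goal: str         # e.g. "clean up trash"

@dataclass
class Policy:
    # A human-readable, context-specific rule (illustrative only).
    description: str
    allows: Callable[[Context], bool]

def delete_email_policy() -> Policy:
    return Policy(
        description="Allow deletion only of trash emails in a personal account",
        allows=lambda ctx: ctx.account_type == "personal" and ctx.email_category == "trash",
    )

ctx = Context(account_type="personal", email_category="trash", user_goal="clean up trash")
policy = delete_email_policy()
print("delete allowed:", policy.allows(ctx))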
SMaCk: Efficient Instruction Cache Attacks via Self-Modifying Code Conflicts
Seonghun Son
Berk Gulmezoglu
ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) (2025)
Abstract
Self-modifying code (SMC) allows programs to alter their own instructions, optimizing performance and functionality on x86 processors. Despite its benefits, SMC introduces unique microarchitectural behaviors that can be exploited for malicious purposes. In this paper, we explore the security implications of SMC by examining how specific x86 instructions affecting instruction cache lines lead to measurable timing discrepancies between cache hits and misses. These discrepancies facilitate refined cache attacks, making them less noisy and more effective. We introduce novel attack techniques that leverage these timing variations to enhance existing methods such as Prime+Probe and Flush+Reload. Our advanced techniques allow adversaries to more precisely attack cryptographic keys and create covert channels akin to Spectre across various x86 platforms. Finally, we propose a dynamic detection methodology utilizing hardware performance counters to mitigate these enhanced threats.
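A small sketch of the calibration step common to such timing attacks: given latency samples measured for known cache hits and known misses, pick a threshold that separates the two. Real attacks measure with cycle-accurate timers in native code, so the numbers below are placeholders.

def choose_threshold(hit_latencies: list[int], miss_latencies: list[int]) -> float:
    """Pick a hit/miss decision threshold halfway between the two means.
    (Real attacks often refine this using the full observed distributions.)"""
    mean_hit = sum(hit_latencies) / len(hit_latencies)
    mean_miss = sum(miss_latencies) / len(miss_latencies)
    return (mean_hit + mean_miss) / 2

def classify(latency: int, threshold: float) -> str:
    return "hit" if latency < threshold else "miss"

# Placeholder calibration samples (cycles); on hardware these come from timed probes.
hits = [40, 42, 44, 41]
misses = [180, 200, 190, 210]
t = choose_threshold(hits, misses)
print(t, classify(55, t), classify(250, t))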
Permission Rationales in the Web Ecosystem: An Exploration of Rationale Text and Design Patterns
Yusra Elbitar
Soheil Khodayari
Marian Harbach
Gianluca De Stefano
Balazs Engedy
Giancarlo Pellegrino
Sven Bugiel
CHI 2025, ACM
Abstract
Modern web applications rely on features like camera and geolocation for personalized experiences, requiring user permission via browser prompts. To explain these requests, applications provide rationales—contextual information on why permissions are needed. Despite their importance, little is known about how rationales appear on the web or their influence on user decisions.
This paper presents the first large-scale study of how the web ecosystem handles permission rationales, covering three areas: (i) identifying webpages that use permissions, (ii) detecting and classifying permission rationales, and (iii) analyzing their attributes to understand their impact on user decisions. We examined over 770K webpages from Chrome telemetry, finding 3.6K unique rationale texts and 749 rationale UIs across 85K pages. We extracted key rationale attributes and assessed their effect on user behavior by cross-referencing them with Chrome telemetry data. Our findings reveal nine key insights, providing the first evidence of how different rationales affect user decisions.
Abstract
As part of Google's ongoing efforts to define best practices for secure AI systems, we’re sharing our aspirational framework for secure AI agents. We advocate for a hybrid, defense-in-depth strategy that combines the strengths of traditional, deterministic security controls with dynamic, reasoning-based defenses. This approach is grounded in three core principles: agents must have well-defined human controllers, their powers must be carefully limited, and their actions and planning must be observable. This paper reflects our current thinking and the direction of our efforts as we work towards ensuring that AI agents can be powerful, useful, and secure by default.
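As a toy illustration of the three principles, the gate below refuses any agent action that lacks a designated human controller, falls outside an explicitly granted capability set, or is not logged for later review. The names and structure are illustrative only, not the framework itself.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AgentAction:
    name: str
    controller: Optional[str]   # the human accountable for this agent

@dataclass
class ActionGate:
    allowed_capabilities: set[str]          # principle 2: carefully limited powers
    audit_log: list[str] = field(default_factory=list)

    def authorize(self, action: AgentAction) -> bool:
        if action.controller is None:                     # principle 1: well-defined human controller
            return False
        if action.name not in self.allowed_capabilities:  # principle 2: power outside the grant
            return False
        self.audit_log.append(f"{action.controller} -> {action.name}")  # principle 3: observability
        return True

gate = ActionGate(allowed_capabilities={"read_calendar", "draft_email"})
print(gate.authorize(AgentAction("draft_email", controller="alice")))   # True
print(gate.authorize(AgentAction("send_payment", controller="alice")))  # False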
ExfilState: Automated Discovery of Timer-Free Cache Side Channels on ARM CPUs
Fabian Thomas
Michael Torres
Michael Schwarz
ACM Conference on Computer and Communications Security (CCS) (2025) (to appear)
VaultGemma
Lynn Chua
Prem Eruvbetine
Chiyuan Zhang
Thomas Mesnard
Borja De Balle Pigem
Daogao Liu
Amer Sinha
Pritish Kamath
Yangsibo Huang
Christopher A. Choquette-Choo
George Kaissis
Armand Joulin
Da Yu
Ryan McKenna
arXiv (2025)
Abstract
In this work, we present VaultGemma 1B, a model based on the Gemma family of models and fully trained with differential privacy. VaultGemma 1B is a 1-billion-parameter pretrained model based on the Gemma 2 series of models and uses the same dataset for training. We will be releasing a tech report and the weights of this model.
Differentially Private Synthetic Data Release for Topics API Outputs
Travis Dick
Josh Karlin
Adel Javanmard
Peilin Zhong
Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2025)
Abstract
Recently, significant attention has been devoted to the design and analysis of Privacy-Preserving Ads APIs. Despite the interest of academics and regulators in understanding the privacy properties of such APIs, the empirical study of these methods is severely hindered by the lack of publicly available data. This is because a reliable empirical analysis of the privacy properties of an API requires access to a dataset consisting of realistic API outputs for a large collection of users. However, privacy reasons prevent the general release of such browsing data to the public.
In this work we address this problem by developing a novel methodology to construct synthetic API outputs that are simultaneously realistic enough to enable accurate study and provide strong privacy protections to the user.
We focus on one of the Privacy-Preserving Ads APIs: the Topics API, part of Google Chrome's Privacy Sandbox, which enables interest-based advertising without relying on third-party cookies. We develop a methodology to generate a differentially private dataset realistic enough to closely match the re-identification risk properties of the real Topics API data. The use of differential privacy prevents the leak of private user information from this release.
Our methodology is based on first computing a large number of differentially private statistics describing how output API traces evolve over time. Then, we design a parameterized distribution over sequences of API traces and optimize its parameters so that it closely matches the statistics obtained. Finally, we create the synthetic data by drawing from this distribution.
Our work is complemented with an open source release of the anonymized dataset obtained by this methodology. We hope this will enable external researchers to analyze the API in-depth and replicate prior and future work on a realistic large-scale dataset. We believe that this work will contribute to fostering transparency on the privacy properties of Privacy-Preserving Ads APIs.
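A heavily simplified sketch of the three steps described above, assuming a toy setting where a trace is a sequence of topic IDs: add Laplace noise to transition counts (the DP statistics), normalize them into a parameterized transition distribution, and sample synthetic traces from it. The real methodology fits a much richer parameterized model and calibrates sensitivity and noise to a formal privacy budget.

import numpy as np

rng = np.random.default_rng(0)
NUM_TOPICS, EPSILON, SENSITIVITY = 5, 1.0, 1.0   # placeholder parameters

def dp_transition_stats(traces: list[list[int]]) -> np.ndarray:
    """Step 1: differentially private counts of topic-to-topic transitions."""
    counts = np.zeros((NUM_TOPICS, NUM_TOPICS))
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[a, b] += 1
    return counts + rng.laplace(scale=SENSITIVITY / EPSILON, size=counts.shape)

def fit_distribution(noisy_counts: np.ndarray) -> np.ndarray:
    """Step 2: turn noisy counts into a row-stochastic transition matrix."""
    clipped = np.clip(noisy_counts, 0, None) + 1e-9
    return clipped / clipped.sum(axis=1, keepdims=True)

def sample_trace(transitions: np.ndarray, length: int) -> list[int]:
    """Step 3: draw a synthetic API trace from the fitted distribution."""
    trace = [rng.integers(NUM_TOPICS)]
    for _ in range(length - 1):
        trace.append(rng.choice(NUM_TOPICS, p=transitions[trace[-1]]))
    return trace

real_traces = [[0, 1, 2, 1], [3, 3, 4], [1, 2, 2, 0]]
print(sample_trace(fit_distribution(dp_transition_stats(real_traces)), length=4))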
Democratizing ML for Enterprise Security: A Self-Sustained Attack Detection Framework
Ge Zhang
Birkett Huber
Sam Lipton
Benoit Seguin
Yanis Pavlidis
Conference on Applied Machine Learning in Information Security (2025) (to appear)
Abstract
Despite advancements in machine learning for security, rule-based detection remains prevalent in Security Operations Centers due to the resource intensiveness and skill gap associated with ML solutions. While traditional rule-based methods offer efficiency, their rigidity leads to high false positives or negatives and requires continuous manual maintenance. This paper proposes a novel, two-stage hybrid framework to democratize ML-based threat detection. The first stage employs intentionally loose YARA rules for coarse-grained filtering, optimized for high recall. The second stage utilizes an ML classifier to filter out false positives from the first stage's output. To overcome data scarcity, the system leverages Simula, a seedless synthetic data generation framework, enabling security analysts to create high-quality training datasets without extensive data science expertise or pre-labeled examples. A continuous feedback loop incorporates real-time investigation results to adaptively tune the ML model, preventing rule degradation.
This proposed model with active learning has been rigorously tested for a prolonged time in a production environment spanning tens of thousands of systems. The system handles initial raw log volumes often reaching 250 billion events per day, significantly reducing them through filtering and ML inference to a handful of daily tickets for human investigation. Live experiments over an extended timeline demonstrate a general improvement in the model's precision over time due to the active learning feature. This approach offers a self-sustained, low-overhead, and low-maintenance solution, allowing security professionals to guide model learning as expert "teachers".
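A bare-bones sketch of the two-stage shape described above, with simple substring patterns standing in for the loose YARA rules and any classifier exposing a scoring method standing in for the second-stage model. Everything here is illustrative, including the threshold.

def stage_one_filter(events: list[str], loose_patterns: list[str]) -> list[str]:
    """Coarse, high-recall filter: keep any event matching a loose pattern."""
    return [e for e in events if any(p in e for p in loose_patterns)]

def stage_two_classify(candidates: list[str], model, threshold: float = 0.8) -> list[str]:
    """Precision stage: keep only candidates the ML model scores as malicious."""
    return [e for e in candidates if model.predict_proba(e) >= threshold]

class ToyModel:
    # Stand-in for a classifier trained on synthetic data (e.g. via a generator like Simula).
    def predict_proba(self, event: str) -> float:
        return 0.95 if "base64" in event and "powershell" in event else 0.1

events = ["powershell -enc base64payload", "powershell Get-Help", "normal login"]
candidates = stage_one_filter(events, loose_patterns=["powershell"])
print(stage_two_classify(candidates, ToyModel()))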
Abstract
Differential privacy can be achieved in a distributed manner, where multiple parties add independent noise such that their sum protects the overall dataset with differential privacy. A common technique here is for each party to sample their noise from the decomposition of an infinitely divisible distribution. We introduce two novel mechanisms in this setting: 1) the generalized discrete Laplace (GDL) mechanism, whose distribution (which is closed under summation) follows from differences of i.i.d. negative binomial shares, and 2) the multi-scale discrete Laplace (MSDLap) mechanism, which follows the sum of multiple i.i.d. discrete Laplace shares at different scales. The mechanisms can be parameterized to have O(Δ^3 e^{-ε}) and O(min(Δ^3 e^{-ε}, Δ^2 e^{-2ε/3})) MSE, respectively, where the latter bound matches known optimality results. Furthermore, the MSDLap mechanism has the optimal MSE including constants as ε → ∞. We also show a transformation from the discrete setting to the continuous setting, which allows us to transform both mechanisms to the continuous setting and thereby achieve the optimal O(Δ^2 e^{-2ε/3}) MSE. To our knowledge, these are the first infinitely divisible additive noise mechanisms that achieve order-optimal MSE under pure differential privacy for either the discrete or continuous setting, so our work shows formally there is no separation in utility when query-independent noise-adding mechanisms are restricted to infinitely divisible noise. For the continuous setting, our result improves upon Pagh and Stausholm's Arete distribution, which gives an MSE of O(Δ^2 e^{-ε/4}) [35]. We apply our results to improve a state-of-the-art multi-message shuffle DP protocol from [3] in the high-ε regime.
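A small numerical sketch of the two constructions as described above, with each party drawing its share locally so that only the sum of shares carries the target noise. The parameter choices below are arbitrary placeholders, not the calibration from the paper.

import numpy as np

rng = np.random.default_rng(1)
NUM_PARTIES = 10

def gdl_share(r: float, p: float) -> int:
    """One party's GDL share: difference of two i.i.d. negative binomial draws."""
    return int(rng.negative_binomial(r, p) - rng.negative_binomial(r, p))

def discrete_laplace(p: float) -> int:
    """Discrete Laplace drawn as a difference of two i.i.d. geometric draws."""
    return int(rng.geometric(p) - rng.geometric(p))

def msdlap_share(scales: list[tuple[int, float]]) -> int:
    """One party's MSDLap share: sum of discrete Laplace draws at several scales."""
    return sum(scale * discrete_laplace(p) for scale, p in scales)

true_sum = 42
# Each party adds its own share; the aggregate noise is what the analysis reasons about.
noisy_gdl = true_sum + sum(gdl_share(r=0.5, p=0.6) for _ in range(NUM_PARTIES))
noisy_msdlap = true_sum + sum(msdlap_share([(1, 0.5), (4, 0.8)]) for _ in range(NUM_PARTIES))
print(noisy_gdl, noisy_msdlap)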
On Design Principles for Private Adaptive Optimizers
Abhradeep Guha Thakurta
Arun Ganesh
Privacy-Preserving Machine Learning Workshop 2025 (2025) (to appear)
Abstract
The spherical noise added to gradients in differentially private (DP) training undermines the performance of adaptive optimizers like AdaGrad and Adam, and hence many recent works have proposed algorithms to address this challenge. However, the empirical results in these works focus on simple tasks and models, and the conclusions may not generalize to model training in practice. In this paper we survey several of these variants, develop better theoretical intuition for them, and perform empirical studies comparing them. We find that a common intuition of aiming for unbiased estimates of second moments of gradients in adaptive optimizers is misguided, and instead that a simple technique called scale-then-privatize (which does not achieve unbiased second moments) has more desirable theoretical behaviors and outperforms all other variants we study on a small-scale language model training task. We additionally argue that scale-then-privatize causes the noise addition to better match the application of correlated noise mechanisms, which are more desirable to use in practice.
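A hedged sketch of one reading of scale-then-privatize in a DP-SGD-style step: precondition each per-example gradient with the current second-moment estimate, then clip and add noise to the preconditioned gradients. The exact estimator for the preconditioner and the interaction with correlated noise mechanisms differ in the paper and are not reproduced here.

import numpy as np

rng = np.random.default_rng(2)

def scale_then_privatize_step(per_example_grads, v, clip_norm=1.0, noise_mult=1.0, eps=1e-8):
    """One illustrative private adaptive step (simplified; not the paper's algorithm)."""
    precond = 1.0 / (np.sqrt(v) + eps)                 # scale first ...
    scaled = per_example_grads * precond               # broadcast over examples
    norms = np.linalg.norm(scaled, axis=1, keepdims=True)
    clipped = scaled * np.minimum(1.0, clip_norm / norms)
    noisy = clipped.sum(axis=0) + rng.normal(scale=noise_mult * clip_norm, size=v.shape)
    return noisy / len(per_example_grads)              # ... then privatize the average

grads = rng.normal(size=(8, 4))          # 8 examples, 4 parameters
v = np.ones(4) * 0.1                     # running second-moment estimate (placeholder)
print(scale_then_privatize_step(grads, v))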
Abstract
In modern datasets, where single records can have multiple owners, enforcing user-level differential privacy requires capping each user's total contribution. This "contribution bounding" becomes a significant combinatorial challenge. Existing sequential algorithms for this task are computationally intensive and do not scale to the massive datasets prevalent today. To address this scalability bottleneck, we propose a novel and efficient distributed algorithm. Our approach models the complex ownership structure as a hypergraph, where users are vertices and records are hyperedges. The algorithm proceeds in rounds, allowing users to propose records in parallel. A record is added to the final dataset only if all its owners unanimously agree, thereby ensuring that no user's predefined contribution limit is violated. This method aims to maximize the size of the resulting dataset for high utility while providing a practical, scalable solution for implementing user-level privacy in large, real-world systems.
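The round structure described above can be sketched directly: users are vertices, records are hyperedges (sets of owners), and a record is admitted only if every owner still has budget and agrees. The data layout and the sequential scan below are illustrative simplifications of the distributed, parallel algorithm.

def bound_contributions(records: dict[str, set[str]], budget: int, rounds: int = 10) -> set[str]:
    """Greedy round-based contribution bounding on the user/record hypergraph."""
    remaining = dict.fromkeys({u for owners in records.values() for u in owners}, budget)
    kept, pending = set(), set(records)
    for _ in range(rounds):
        # In the distributed algorithm each user proposes records in parallel;
        # here we simply scan the pending records once per round.
        admitted_this_round = set()
        for rec in sorted(pending):
            owners = records[rec]
            if all(remaining[u] > 0 for u in owners):      # unanimous agreement
                for u in owners:
                    remaining[u] -= 1
                admitted_this_round.add(rec)
        kept |= admitted_this_round
        pending -= admitted_this_round
        if not admitted_this_round:
            break
    return kept

records = {"r1": {"alice", "bob"}, "r2": {"alice"}, "r3": {"bob", "carol"}, "r4": {"alice", "carol"}}
print(bound_contributions(records, budget=2))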
DroidCCT: Cryptographic Compliance Test via Trillion-Scale Measurement
Rémi Audebert
Pedro Barbosa
Borbala Benko
Alex (Mac) Mihai
László Siroki
Catherine Vlasov
Annual Computer Security Applications Conference (ACSAC) (2025) (to appear)
Differentially Private Insights into AI Use
Daogao Liu
Pritish Kamath
Alexander Knop
Adam Sealfon
Da Yu
Chiyuan Zhang
Conference on Language Modeling (COLM) 2025 (2025)
Abstract
We introduce Urania, a novel framework for generating insights about LLM chatbot interactions with rigorous differential privacy (DP) guarantees. The framework employs a private clustering mechanism and innovative keyword extraction methods, including frequency-based, TF-IDF-based, and LLM-guided approaches. By leveraging DP tools such as clustering, partition selection, and histogram-based summarization, Urania provides end-to-end privacy protection. Our evaluation assesses lexical and semantic content preservation, pair similarity, and LLM-based metrics, benchmarking against a non-private method inspired by CLIO (Tamkin et al., 2024). Moreover, we develop a simple empirical privacy evaluation that demonstrates the enhanced robustness of our DP pipeline. The results show the framework’s ability to extract meaningful conversational insights while maintaining stringent user privacy, effectively balancing data utility with privacy preservation.
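A compact sketch of the release pipeline's shape, assuming conversations have already been assigned to clusters: noise the per-cluster counts, drop clusters below a threshold (partition selection), and report the most frequent keywords from the surviving clusters. The actual framework uses private clustering, several keyword-extraction strategies, and formally calibrated noise and thresholds.

import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
EPSILON, MIN_NOISY_COUNT = 1.0, 3.0   # placeholder privacy parameters

def release_insights(clustered_convos: dict[int, list[str]], top_k: int = 3):
    """DP-flavored summary: noisy cluster sizes plus frequent keywords per kept cluster."""
    insights = {}
    for cluster_id, convos in clustered_convos.items():
        noisy_count = len(convos) + rng.laplace(scale=1.0 / EPSILON)
        if noisy_count < MIN_NOISY_COUNT:      # partition selection: suppress small clusters
            continue
        words = Counter(w for convo in convos for w in convo.lower().split())
        insights[cluster_id] = {
            "approx_size": round(noisy_count),
            "keywords": [w for w, _ in words.most_common(top_k)],
        }
    return insights

convos = {0: ["debug my python code", "python error in my code", "fix python bug"],
          1: ["rare one-off question"]}
print(release_insights(convos))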