Elie Bursztein

I lead Google's anti-abuse research team, which invents ways to protect users against cyber-criminal activities and Internet threats. I've redesigned Google's CAPTCHA to make it easier, and I've made Chrome safer and faster by implementing better cryptography. I spend my spare time doing video game research, photography, and magic tricks. I was born in Paris, France, wear berets, and now live with my wife in Mountain View, California.
Authored Publications
    DroidCCT: Cryptographic Compliance Test via Trillion-Scale Measurement
    Rémi Audebert
    Pedro Barbosa
    Borbala Benko
    Alex (Mac) Mihai
    László Siroki
    Catherine Vlasov
    Annual Computer Security Applications Conference (ACSAC) (2025) (to appear)
    Supporting the Digital Safety of At-Risk Users: Lessons Learned from 9+ Years of Research and Training
    Tara Matthews
    Patrick Gage Kelley
    Lea Kissner
    Andreas Kramm
    Andrew Oplinger
    Andy Schou
    Stephan Somogyi
    Dalila Szostak
    Jill Woelfer
    Lawrence You
    Izzie Zahorian
    ACM Transactions on Computer-Human Interaction, 32(3) (2025), pp. 1-39
    Abstract: Creating information technologies intended for broad use that allow everyone to participate safely online—which we refer to as inclusive digital safety—requires understanding and addressing the digital-safety needs of a diverse range of users who face elevated risk of technology-facilitated attacks or disproportionate harm from such attacks—i.e., at-risk users. This article draws from more than 9 years of our work at Google to understand and support the digital safety of at-risk users—including survivors of intimate partner abuse, people involved with political campaigns, content creators, youth, and more—in technology intended for broad use. Among our learnings is that designing for inclusive digital safety across widely varied user needs and dynamic contexts is a wicked problem with no “correct” solution. Given this, we describe frameworks and design principles we have developed to help make at-risk research findings practically applicable to technologies intended for broad use, as well as lessons we have learned about communicating them to practitioners.
    Leveraging Virtual Reality to Enhance Diversity and Inclusion training at Google
    Karla Brown
    Patrick Gage Kelley
    Leonie Sanderson
    2024 CHI Conference on Human Factors in Computing Systems, ACM
    Abstract: Virtual reality (VR) has emerged as a promising educational training method, offering a more engaging and immersive experience than traditional approaches. In this case study, we explore its effectiveness for diversity, equity, and inclusion (DEI) training, with a focus on how VR can help participants better understand and appreciate different perspectives. We describe the design and development of a VR training application that aims to raise awareness about unconscious biases and promote more inclusive behaviors in the workplace. We report initial findings based on the feedback of Google employees who took our training and found that VR appears to be an effective way to enhance DEI training. In particular, participants reported that VR training helped them better recognize biases and how to respond to them effectively. However, our findings also highlight some challenges with VR-based DEI training, which we discuss in terms of future research directions.
    Abstract: ML models have shown significant promise in their ability to identify side channels leaking from a secure chip. However, the datasets used to train these models present unique challenges. Existing file formats often lack the ability to record metadata, which impedes the reusability and reproducibility of published datasets. Moreover, training pipelines for deep neural networks often require specific patterns for iterating through the data that these file formats do not provide. In this presentation, we discuss the lessons learned in our research on side-channel attacks and share insights gained from our mistakes in data structuring and iteration strategies. We also present Sedpack, our open-source dataset library, which encapsulates these learnings to minimize oversights. Sedpack is optimized for speed, as we demonstrate with preliminary benchmarks. It scales to larger-than-local-storage datasets, which are becoming increasingly common with PQC, and it is not limited to ML pipelines: it can easily be used for classical attacks too. Join us to try Sedpack, which we hope will save you time in your side-channel research. To get you started, we also publish several datasets in this format that we used in our publication Generalized Power Attacks against Crypto Hardware using Long-Range Deep Learning, CHES 2024.
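    The core idea above, keeping capture metadata next to the traces and iterating the data in the shuffled, batched access patterns a training pipeline expects, can be illustrated with a small sketch. This is a conceptual illustration only, not the Sedpack API; every file layout and field name below is an assumption made for the example.

        # Conceptual sketch (not the Sedpack API): store capture metadata next to
        # the traces so the dataset stays reusable, and iterate it in shuffled
        # shards the way a training pipeline expects. All names are assumptions.
        import json
        import numpy as np

        def save_shard(path, traces, labels, metadata):
            """Write one shard of traces/labels plus a JSON metadata sidecar."""
            np.savez_compressed(path + ".npz", traces=traces, labels=labels)
            with open(path + ".json", "w") as f:
                json.dump(metadata, f)  # e.g. target chip, capture setup, countermeasure

        def iterate_batches(shard_paths, batch_size, seed=0):
            """Yield shuffled (traces, labels) batches across shards, one epoch."""
            rng = np.random.default_rng(seed)
            for path in rng.permutation(shard_paths):      # shuffle shard order
                shard = np.load(path + ".npz")
                order = rng.permutation(len(shard["labels"]))
                for start in range(0, len(order), batch_size):
                    idx = order[start:start + batch_size]
                    yield shard["traces"][idx], shard["labels"][idx]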
    Generalized Power Attacks against Crypto Hardware using Long-Range Deep Learning
    Karel Král
    Marina Zhang
    Transactions on Cryptographic Hardware and Embedded Systems (TCHES), IACR (2024)
    Abstract: To make cryptographic processors more resilient against side-channel attacks, engineers have developed various countermeasures. However, the effectiveness of these countermeasures is often uncertain, as it depends on the complex interplay between software and hardware. Assessing a countermeasure’s effectiveness using profiling techniques or machine learning so far requires significant expertise and effort to adapt to new targets, which makes those assessments expensive. We argue that including cost-effective automated attacks will help chip design teams to quickly evaluate their countermeasures during the development phase, paving the way to more secure chips. In this paper, we lay the foundations toward such an automated system by proposing GPAM, the first deep-learning system for power side-channel analysis that generalizes across multiple cryptographic algorithms, implementations, and side-channel countermeasures without the need for manual tuning or trace preprocessing. We demonstrate GPAM’s capability by successfully attacking four hardened hardware-accelerated elliptic-curve digital-signature implementations. We showcase GPAM’s ability to generalize across multiple algorithms by attacking a protected AES implementation and achieving comparable performance to state-of-the-art attacks, but without manual trace curation and within a limited budget. We release our data and models as an open-source contribution to allow the community to independently replicate our results and build on them.
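    As a rough point of reference for what a profiled power side-channel model consumes and produces, the sketch below maps a long raw power trace to a probability distribution over 256 possible byte values. It is a deliberately simplified stand-in, not GPAM's architecture; the trace length and layer choices are assumptions made for illustration.

        # Minimal profiled side-channel classifier: raw power trace in, softmax
        # over 256 byte values out. Not GPAM; a simplified illustration only.
        import tensorflow as tf

        TRACE_LEN = 16384   # assumed number of samples per power trace
        NUM_CLASSES = 256   # e.g. one secret byte to recover

        def build_model():
            inputs = tf.keras.Input(shape=(TRACE_LEN, 1))
            x = inputs
            # Strided convolutions shrink the long trace before global pooling.
            for filters in (32, 64, 128):
                x = tf.keras.layers.Conv1D(filters, kernel_size=11, strides=4,
                                           padding="same", activation="relu")(x)
                x = tf.keras.layers.BatchNormalization()(x)
            x = tf.keras.layers.GlobalAveragePooling1D()(x)
            x = tf.keras.layers.Dense(256, activation="relu")(x)
            outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
            model = tf.keras.Model(inputs, outputs)
            model.compile(optimizer="adam",
                          loss="sparse_categorical_crossentropy",
                          metrics=["accuracy"])
            return model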
    Abstract: The task of content-type detection, which entails determining the data type encoded by byte streams, has a long history within the realm of computing, and nowadays it is a key primitive for critical automated pipelines. The first program ever developed to perform this task is "file", which shipped with Bell Labs UNIX over five decades ago. Since then, a number of additional tools have been developed, but, despite their importance, to date it is not clear how well these approaches perform, and whether modern techniques can improve over the state of the art. This paper sheds light on this overlooked area. We collect a dataset of more than 26M samples, and we perform the first large-scale evaluation of existing content-type tools. Then, we introduce Magika, a new content-type detection tool based on deep learning. Magika is designed to be fast (5ms inference time), even on a single CPU, thus making it a viable replacement for existing command-line tools and suitable for large-scale automated pipelines. Magika achieves 99%+ average precision and recall, a double-digit improvement (in absolute percentage points) over the state of the art. As a testament to its real-world utility, we are working with a large email provider and with Visual Studio Code developers on integrating Magika as their reference content-type detector. To ease reproducibility, we release all our artifacts, including the tool, the model, the training pipeline, the dataset collection codebase, and details about our dataset.
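    Magika ships as an open-source command-line tool and Python package (pip install magika). A minimal usage sketch follows; the exact result field names may differ slightly between library versions.

        # Minimal Magika usage sketch; result field names may vary by version.
        from magika import Magika

        m = Magika()
        result = m.identify_bytes(b"#!/usr/bin/env python3\nprint('hello')\n")
        print(result.output.ct_label)   # expected to print something like "python"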
    Hybrid Post-Quantum Signatures in Hardware Security Keys
    Diana Ghinea
    Jennifer Pullman
    Julien Cretin
    Rafael Misoczki
    Stefan Kölbl
    Applied Cryptography and Network Security Workshop (2023)
    Abstract: Recent advances in quantum computing are increasingly jeopardizing the security of cryptosystems currently in widespread use, such as RSA or elliptic-curve signatures. To address this threat, researchers and standardization institutes have accelerated the transition to quantum-resistant cryptosystems, collectively known as Post-Quantum Cryptography (PQC). These PQC schemes present new challenges due to their larger memory and computational footprints and their higher chance of latent vulnerabilities. In this work, we address these challenges by introducing a scheme to upgrade the digital signatures used by security keys to PQC. We introduce a hybrid digital signature scheme based on two building blocks: a classically secure scheme, ECDSA, and a post-quantum secure one, Dilithium. Our hybrid scheme maintains the guarantees of each underlying building block even if the other one is broken, thus being resistant to both classical and quantum attacks. We experimentally show that our hybrid signature scheme can successfully execute on current security keys, even though secure PQC schemes are known to require substantial resources. We publish an open-source implementation of our scheme at https://github.com/google/OpenSK/releases/tag/hybrid-pqc so that other researchers can reproduce our results on an nRF52840 development kit.
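    The hybrid construction can be summarized as: sign the message with both schemes and accept only if both signatures verify, so the guarantee holds as long as either building block remains unbroken. The sketch below illustrates that shape; it is not the OpenSK implementation, and the Dilithium binding (dilithium_sign/dilithium_verify) is a placeholder assumption.

        # Hybrid signature sketch: classical ECDSA (via the `cryptography` package)
        # combined with a post-quantum scheme. The Dilithium functions below are
        # placeholders; plug in a real PQC binding before using this in practice.
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import ec

        def dilithium_sign(sk: bytes, msg: bytes) -> bytes:
            raise NotImplementedError("placeholder: use a real Dilithium implementation")

        def dilithium_verify(pk: bytes, msg: bytes, sig: bytes) -> bool:
            raise NotImplementedError("placeholder: use a real Dilithium implementation")

        def hybrid_sign(ecdsa_sk, dilithium_sk: bytes, msg: bytes):
            # Produce both signatures and ship them together.
            sig_classical = ecdsa_sk.sign(msg, ec.ECDSA(hashes.SHA256()))
            sig_pq = dilithium_sign(dilithium_sk, msg)
            return sig_classical, sig_pq

        def hybrid_verify(ecdsa_pk, dilithium_pk: bytes, msg: bytes, sig) -> bool:
            # Reject unless *both* the classical and post-quantum signatures verify.
            sig_classical, sig_pq = sig
            try:
                ecdsa_pk.verify(sig_classical, msg, ec.ECDSA(hashes.SHA256()))
            except InvalidSignature:
                return False
            return dilithium_verify(dilithium_pk, msg, sig_pq)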
    Identifying and Mitigating the Security Risks of Generative AI
    Clark Barrett
    Brad Boyd
    Nicholas Carlini
    Brad Chen
    Jihye Choi
    Amrita Roy Chowdhury
    Anupam Datta
    Soheil Feizi
    Kathleen Fisher
    Tatsunori B. Hashimoto
    Dan Hendrycks
    Somesh Jha
    Daniel Kang
    Florian Kerschbaum
    Eric Mitchell
    John Mitchell
    Zulfikar Ramzan
    Khawaja Shams
    Dawn Song
    Ankur Taly
    Diyi Yang
    Foundations and Trends in Privacy and Security, 6 (2023), pp. 1-52
    Abstract: Every major technical invention resurfaces the dual-use dilemma—the new technology has the potential to be used for good as well as for harm. Generative AI (GenAI) techniques, such as large language models (LLMs) and diffusion models, have shown remarkable capabilities (e.g., in-context learning, code completion, and text-to-image generation and editing). However, GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks. This paper reports the findings of a workshop held at Google (co-organized by Stanford University and the University of Wisconsin-Madison) on the dual-use dilemma posed by GenAI. The paper is not meant to be comprehensive; rather, it reports some of the interesting findings from the workshop. We discuss short-term and long-term goals for the community on this topic. We hope this paper provides a launching point for this important topic and poses interesting problems that the research community can work to address.
    Abstract: Content creators—social media personalities with large audiences on platforms like Instagram, TikTok, and YouTube—face a heightened risk of online hate and harassment. We surveyed 135 creators to understand their personal experiences with attacks (including toxic comments, impersonation, stalking, and more), the coping practices they employ, and gaps they experience with existing solutions (such as moderation or reporting). We find that while a majority of creators view audience interactions favorably, nearly every creator could recall at least one incident of hate and harassment, and attacks are a regular occurrence for one in three creators. As a result of hate and harassment, creators report self-censoring their content and leaving platforms. Through their personal stories, their attitudes towards platform-provided tools, and their strategies for coping with attacks and harms, we inform the broader design space for how to better protect people online from hate and harassment.
    Abstract: People who are involved with political campaigns face increased digital security threats from well-funded, sophisticated attackers, especially nation-states. Improving political campaign security is a vital part of protecting democracy. To identify campaign security issues, we conducted qualitative research with 28 participants across the U.S. political spectrum to understand the digital security practices, challenges, and perceptions of people involved in campaigns. A main, overarching finding is that a unique combination of threats, constraints, and work culture leads people involved with political campaigns to use technologies from across platforms and domains in ways that leave them—and democracy—vulnerable to security attacks. Sensitive data was kept in a plethora of personal and work accounts, with ad hoc adoption of strong passwords, two-factor authentication, encryption, and access controls. No individual company, committee, organization, campaign, or academic institution can solve the identified problems on its own. To this end, we provide an initial understanding of this complex problem space and recommendations for how a diverse group of experts can begin working together to improve security for political campaigns.