Tris Warkentin

Tris is a Product Management Director at Google DeepMind, leading new product development based on breakthrough AI research. Before DeepMind, Tris led the PM team for Google Brain, launching Bard, PaLM, Imagen, Parti, and more. Prior to Brain, Tris was the lead PM for the TensorFlow Ecosystem's Tools and Services team, including TensorFlow Extended (TFX), TensorBoard, TensorFlow Enterprise, TensorFlow Probability, TensorFlow Hub, and TensorFlow Serving. Previously, Tris led Google's ML efforts in Display Advertising Quality and Automation.
Authored Publications
    VaultGemma
    Lynn Chua
    Prem Eruvbetine
    Chiyuan Zhang
    Thomas Mesnard
    Borja De Balle Pigem
    Daogao Liu
    Amer Sinha
    Pritish Kamath
    Yangsibo Huang
    Christopher A. Choquette-Choo
    George Kaissis
    Armand Joulin
    Da Yu
    Ryan McKenna
    arXiv (2025)
    In this work, we present VaultGemma 1B, a model based on the Gemma family of models and fully trained with differential privacy. VaultGemma 1B is a 1-billion-parameter pretrained model based on the Gemma 2 series of models and uses the same training dataset. We will be releasing a tech report and the weights of this model.
    CodeGemma: Open Code Models Based on Gemma
    Heri Zhao
    Joshua Howland
    Nam Nguyen
    Siqi Zuo
    Andrea Hu
    Christopher A. Choquette-Choo
    Jingyue Shen
    Joe Kelley
    Mateo Wirth
    Paul Michel
    Peter Choy
    Pratik Joshi
    Sarmad Hashmi
    Shubham Agrawal
    Zhitao Gong
    Jane Fine
    Ale Hartman
    Bin Ni
    Kathy Korevec
    Kelly Schaefer
    (2024)
    This paper introduces CodeGemma, a family of specialized open code models built on top of Gemma, capable of a variety of code and natural language generation tasks. We release three model checkpoints. The CodeGemma 7B pretrained (PT) and instruction-tuned (IT) variants have remarkably resilient natural language understanding, excel in mathematical reasoning, and match the code capabilities of other open models. CodeGemma 2B is a state-of-the-art code completion model designed for fast code infilling and open-ended generation in latency-sensitive settings.
    Levels of AGI for Operationalizing Progress on the Path to AGI
    Jascha Sohl-Dickstein
    Allan Dafoe
    Aleksandra Faust
    Clement Farabet
    Shane Legg
    (2023)
    We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors. This framework introduces levels of AGI performance, generality, and autonomy. It is our hope that this framework will be useful in an analogous way to the levels of autonomous driving, by providing a common language to compare models, assess risks, and measure progress along the path to AGI. To develop our framework, we analyze existing definitions of AGI, and distill six principles that a useful ontology for AGI should satisfy. These principles include focusing on capabilities rather than mechanisms; separately evaluating generality and performance; and defining stages along the path toward AGI, rather than focusing on the endpoint. With these principles in mind, we propose “Levels of AGI” based on depth (performance) and breadth (generality) of capabilities, and reflect on how current systems fit into this ontology. We discuss the challenging requirements for future benchmarks that quantify the behavior and capabilities of AGI models against these levels. Finally, we discuss how these levels of AGI interact with deployment considerations such as autonomy and risk, and emphasize the importance of carefully selecting Human-AI Interaction paradigms for responsible and safe deployment of highly capable AI systems.
    Towards ML Engineering: A Brief History Of TensorFlow Extended (TFX)
    Abhijit Karmarkar
    Ahmet Altay
    Aleksandr Zaks
    Anusha Ramesh
    Jarek Wilkiewicz
    Jiri Simsa
    Justin Hong
    Mitch Trott
    Neoklis Polyzotis
    Noé Lutz
    Robert Crowe
    Sarah Sirajuddin
    Zhitao Li
    (2020)
    Software Engineering, as a discipline, has matured over the past 5+ decades. The modern world heavily depends on it, so the increased maturity of Software Engineering is a necessary blessing. Practices like testing and reliable technologies help make Software Engineering reliable enough to build industries upon. Meanwhile, Machine Learning (ML) has also grown over the past 2+ decades. ML is used more and more for research, experimentation and production workloads. ML now commonly powers widely-used products integral to our lives. But ML Engineering, as a discipline, has not widely matured as much as its Software Engineering ancestor. Can we take what we have learned and help the nascent field of applied ML evolve into ML Engineering the way Programming evolved into Software Engineering [book]? In this article we will give a whirlwind tour of Sibyl [article] and TensorFlow Extended (TFX) [website], two successive end-to-end (E2E) ML platforms at Alphabet. We will share the lessons learned from over a decade of applied ML built on these platforms, explain both their similarities and their differences, and expand on the shifts (both mental and technical) that helped us on our journey. In addition, we will highlight some of the capabilities of TFX that help realize several aspects of ML Engineering. We argue that in order to unlock the gains ML can bring, organizations should advance the maturity of their ML teams by investing in robust ML infrastructure and promoting ML Engineering education. We also recommend that before focusing on cutting-edge ML modeling techniques, product leaders should invest more time in adopting interoperable ML platforms for their organizations. In closing, we will also share a glimpse into the future of TFX.