Yossi Matias

Yossi Matias is a Vice President at Google and the Head of Google Research.

Under Yossi’s leadership, world-class global teams are driving breakthrough research on foundational machine learning and algorithms, computing systems and quantum computing, science, AI for societal impact in health, climate, sustainability, education, and socio-technical research, and foundational advances in generative AI, shaping the future of technology and driving the magic cycle between research and real-world impact.

Yossi previously served on Google Search leadership for over a decade, driving strategic features and technologies, and pioneered Conversational AI innovations that helped transform the phone experience and remove barriers of modality and language. He was also the founding lead of Google’s center in Israel and supported other global sites. During his tenure at Google, Yossi founded and spearheaded initiatives such as Google’s AI for Social Good, Crisis Response, Google for Startups Accelerator, social and cultural initiatives that seeded Google Arts & Culture, and programs fostering startups, sustainability, and STEM and AI literacy for youth.

Prior to Google, Yossi was on the Computer Science faculty at Tel Aviv University, a visiting professor at Stanford, and a Research Scientist at Bell Labs. He has published over 200 papers and is the inventor of over 80 patents. He pioneered some of the early technologies for internet privacy, contextual search, and the effective analysis of big data. He is an ACM Fellow and a recipient of the Gödel Prize and the ACM Kanellakis Theory and Practice Award for seminal work on streaming algorithms, data sketches, and large-scale data analytics.

Yossi has a track record of impact-driven breakthrough research and innovation, along with extensive product leadership, transforming products and advancing AI to help address global challenges.

-----------------------------------------------

More about the work Yossi has been leading in recent years:

Search: Google Search leadership for over a decade, including Autocomplete, Google Trends, Search Console, and Search experiences in weather, sports, dictionary, and more.

Generative AI: From research to reality in efficiency, factuality, multilinguality, health, education, geospatial reasoning, and more. Efficiency: extensive research, including Speculative Decoding, which has had impact across the industry (see the sketch below). Factuality: extensive research on consistency and multi-modal factuality, with published benchmarks (TRUE, FACTS) leading to Double Check. Health: Med-PaLM, Med-Gemini, MedGemma (more below). Education: LearnLM.
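
To make the efficiency thread concrete, here is a minimal toy sketch of the speculative decoding idea in Python: a cheap draft model proposes several tokens, and the expensive target model accepts or rejects them so the target's output distribution is preserved. The `draft_model` and `target_model` stand-ins are illustrative stubs, not the production systems.

```python
import random

random.seed(0)
VOCAB = list(range(8))  # toy vocabulary

def draft_model(prefix):
    """Cheap proposal distribution (stand-in for a small LM)."""
    return [1.0 / len(VOCAB)] * len(VOCAB)

def target_model(prefix):
    """Expensive target distribution (stand-in for a large LM)."""
    probs = [i + 1.0 for i in VOCAB]
    s = sum(probs)
    return [p / s for p in probs]

def sample(probs):
    r, acc = random.random(), 0.0
    for tok, p in enumerate(probs):
        acc += p
        if r <= acc:
            return tok
    return len(probs) - 1

def speculative_step(prefix, k=4):
    """Draft k tokens cheaply, then accept/reject against the target.

    Accepting with prob min(1, p_target/p_draft) and resampling from the
    residual on rejection preserves the target distribution. (The full
    algorithm also samples one bonus token when all drafts are accepted.)
    """
    drafted, ctx = [], list(prefix)
    for _ in range(k):
        tok = sample(draft_model(ctx))
        drafted.append(tok)
        ctx.append(tok)
    accepted, ctx = [], list(prefix)
    for tok in drafted:
        p_t, p_d = target_model(ctx), draft_model(ctx)
        if random.random() < min(1.0, p_t[tok] / p_d[tok]):
            accepted.append(tok)
            ctx.append(tok)
        else:
            # Resample from the residual distribution max(0, p_t - p_d).
            residual = [max(0.0, t - d) for t, d in zip(p_t, p_d)]
            z = sum(residual) or 1.0
            accepted.append(sample([r / z for r in residual]))
            break
    return accepted

print(speculative_step([0], k=4))
```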

Conversational AI: Pioneering innovations in conversational AI as the "ultimate user interface" toward ambient intelligence and new experiences. Google Duplex: from a defining moment in AI to helping people and businesses get things done faster, well over 1 trillion times, directly from Search. Helping transform the phone experience (Call Screen, Hold for Me) and remove barriers of modality and language, making content and communication more universally available (Live Caption, Live Relay, Euphonia, Read Aloud).

Health AI: Google’s Health AI work is driving AI research to help transform healthcare from innovation to impact and make healthcare more accessible for everyone, with multiple breakthroughs including Med-PaLM (1, 2), Med-Gemini (3, 4), AMIE (5, 6), and MedGemma.

Scientific discovery: Using AI to drive scientific research with greater real-world benefit, accelerating discovery with the AI Co-Scientist and powering breakthroughs in science and healthcare.

Climate Resilience: Leadership of AI for Climate and Sustainability, spanning climate crisis mitigation (Green Light, Contrails) as well as climate nowcasting and forecasting, including leadership of Google’s Crisis Response initiative (SOS Alerts, flood forecasting, wildfire detection, FireSat). From research to climate resilience with Google Earth AI.

Special initiatives: Founding lead of Google’s AI for Social Good and the Google for Startups Accelerator (from supporting early-stage entrepreneurs, to a particular focus on AI & ML and on sustainability, to expansion into regional and global programs). Founding lead of Mind the Gap and Hello Tech. Pioneered an initiative that brought hundreds of heritage collections online (including the Dead Sea Scrolls and the Nelson Mandela archive) and helped establish Google Arts & Culture.

Global sites: Founded and led Google’s center in Israel through its growth to over 2,500 on staff, and was the founding lead of Campus TLV. Also supported Google’s growth (4x) in Bangalore, India, and oversaw Google’s expanding research center in Africa, driving innovations for Africa and the world, initiating the AI Community Center in Accra, and supporting the future of AI research in Africa and globally.

Additional work: Sketches, streaming algorithms, and approximate query answering (seminal work: AMS, Synopses, the AQUA Project; see awards below). Privacy and security: early work on privacy and personalization (see also the NYTimes article) based on the novel Janus function, early lightweight security primitives, and foundations for BLE ephemeral IDs. Parallel computation: highly parallel randomized algorithms, parallel models, and parallel scheduling. Compression: LZ improvements, compression in networks, and more (see publications). A toy sketch of the AMS idea follows.
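
For flavor, here is a toy Python sketch of the AMS estimator for a stream's second frequency moment (F2), the core idea behind the sketching work mentioned above. The original construction uses 4-wise independent hash functions; the dictionary of random signs below merely simulates them for illustration.

```python
import random

class AMSSketch:
    """Toy AMS estimator for F2 = sum of squared item frequencies."""

    def __init__(self, num_counters=64, seed=0):
        self.rng = random.Random(seed)
        self.signs = [{} for _ in range(num_counters)]  # lazy ±1 "hashes"
        self.counters = [0] * num_counters

    def _sign(self, i, item):
        table = self.signs[i]
        if item not in table:
            table[item] = self.rng.choice((-1, 1))
        return table[item]

    def update(self, item):
        # Each counter tracks the signed sum of item occurrences.
        for i in range(len(self.counters)):
            self.counters[i] += self._sign(i, item)

    def estimate(self):
        # Each squared counter is an unbiased estimate of F2; average them.
        return sum(c * c for c in self.counters) / len(self.counters)

stream = ["a"] * 5 + ["b"] * 3 + ["c"] * 2
sketch = AMSSketch()
for x in stream:
    sketch.update(x)
print(sketch.estimate(), "vs exact F2 =", 5**2 + 3**2 + 2**2)
```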

Awards: Yossi is an ACM Fellow for contributions to the analysis of large data sets and data streams. His foundational work on data streams, data synopses, and sketches, motivated by computational challenges in what were then the world’s largest data warehouses, was recognized with the Gödel Prize in Theoretical Computer Science and with the ACM Kanellakis Theory and Practice Award for the instrumental role it played "in the development of the field of streaming algorithms, which is one of the most prolific and highly regarded areas of data management research" and for its broad applicability to large-scale data analytics.

Authored Publications
    A unified acoustic-to-speech-to-language embedding space captures the neural basis of natural language processing in everyday conversations
    Uri Hasson
    Samuel A. Nastase
    Harshvardhan Gazula
    Aditi Rao
    Tom Sheffer
    Werner Doyle
    Orrin Devinsky
    Aditi Singh
    Adeen Flinker
    Patricia Dugan
    Bobbi Aubrey
    Sasha Devore
    Daniel Friedman
    Leonard Niekerken
    Catherine Kim
    Haocheng Wang
    Zaid Zada
    Gina Choe
    Nature Human Behaviour (2025)
    This study introduces a unified computational framework connecting acoustic, speech and word-level linguistic structures to study the neural basis of everyday conversations in the human brain. We used electrocorticography to record neural signals across 100 h of speech production and comprehension as participants engaged in open-ended real-life conversations. We extracted low-level acoustic, mid-level speech and contextual word embeddings from a multimodal speech-to-text model (Whisper). We developed encoding models that linearly map these embeddings onto brain activity during speech production and comprehension. Remarkably, this model accurately predicts neural activity at each level of the language processing hierarchy across hours of new conversations not used in training the model. The internal processing hierarchy in the model is aligned with the cortical hierarchy for speech and language processing, where sensory and motor regions better align with the model’s speech embeddings, and higher-level language areas better align with the model’s language embeddings. The Whisper model captures the temporal sequence of language-to-speech encoding before word articulation (speech production) and speech-to-language encoding post articulation (speech comprehension). The embeddings learned by this model outperform symbolic models in capturing neural activity supporting natural speech and language. These findings support a paradigm shift towards unified computational models that capture the entire processing hierarchy for speech comprehension and production in real-world conversations.
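
A minimal sketch of the encoding-model step this abstract describes: ridge regression that linearly maps embeddings onto neural activity, evaluated by correlation on held-out data. All shapes and data below are simulated for illustration, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: T time points, D embedding dims, E electrodes.
T, D, E = 500, 32, 10
X = rng.normal(size=(T, D))                     # embeddings per word/frame
W_true = rng.normal(size=(D, E))
Y = X @ W_true + 0.5 * rng.normal(size=(T, E))  # simulated neural activity

# Split into training data and held-out "new conversations".
X_tr, X_te, Y_tr, Y_te = X[:400], X[400:], Y[:400], Y[400:]

# Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y.
lam = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(D), X_tr.T @ Y_tr)

# Evaluate: correlation between predicted and actual activity per electrode.
pred = X_te @ W
r = [np.corrcoef(pred[:, e], Y_te[:, e])[0, 1] for e in range(E)]
print("mean held-out correlation:", np.mean(r))
```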
    LLM-based Lossless Text Simplification and its Effect on User Comprehension and Cognitive Load
    Theo Guidroz
    Diego Ardila
    Jimmy Li
    Adam Mansour
    Paul Jhun
    Nina Gonzalez
    Xiang Ji
    Mike Sanchez
    Sujay Kakarmath
    Miguel Ángel Garrido
    Faruk Ahmed
    Divyansh Choudhary
    Jay Hartford
    Georgina Xu
    Henry Serrano
    Yifan Wang
    Jeff Shaffer
    Eric (Yifan) Cao
    Sho Fujiwara
    Peggy Bui
    arXiv (2025)
    Information on the web, such as scientific publications and Wikipedia, often surpasses users' reading level. To help address this, we used a self-refinement approach to develop an LLM capability for minimally lossy text simplification. To validate our approach, we conducted a randomized study involving 4563 participants and 31 texts spanning 6 broad subject areas: PubMed (biomedical scientific articles), biology, law, finance, literature/philosophy, and aerospace/computer science. Participants were randomized to viewing original or simplified texts in a subject area, and answered multiple-choice questions (MCQs) that tested their comprehension of the text. The participants were also asked to provide qualitative feedback such as task difficulty. Our results indicate that participants who read the simplified text answered more MCQs correctly than their counterparts who read the original text (3.9% absolute increase, p<0.05). This gain was most striking with PubMed (14.6%), while more moderate gains were observed for the finance (5.5%), aerospace/computer science (3.8%), and legal (3.5%) domains. Notably, the results were robust to whether participants could refer back to the text while answering MCQs. The absolute accuracy decreased by up to ~9% for both original and simplified setups where participants could not refer back to the text, but the ~4% overall improvement persisted. Finally, participants' self-reported perceived ease based on a simplified NASA Task Load Index was greater for those who read the simplified text (absolute change on a 5-point scale 0.33, p<0.05). This randomized study, involving an order of magnitude more participants than prior works, demonstrates the potential of LLMs to make complex information easier to understand. Our work aims to enable a broader audience to better learn and make use of expert knowledge available on the web, improving information accessibility.
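
A hedged sketch of what such a self-refinement loop for minimally lossy simplification might look like, assuming a hypothetical `llm(prompt)` callable that returns a string; the paper's actual prompts and fidelity checks may differ.

```python
def simplify_with_refinement(text, llm, max_rounds=3):
    """Toy self-refinement loop for minimally lossy simplification.

    `llm(prompt)` is a hypothetical callable returning a string; the
    actual prompting and fidelity checks in the paper may differ.
    """
    draft = llm("Rewrite at a lower reading level, keeping every fact:\n" + text)
    for _ in range(max_rounds):
        # Ask the model to critique its own simplification for lost facts.
        critique = llm(
            "List facts from the ORIGINAL that are missing or distorted in "
            "the SIMPLIFIED text, or reply OK.\n"
            "ORIGINAL:\n" + text + "\nSIMPLIFIED:\n" + draft
        )
        if critique.strip() == "OK":
            break
        # Revise the draft to restore the flagged facts.
        draft = llm(
            "Revise the SIMPLIFIED text to fix these issues while staying "
            "simple.\nISSUES:\n" + critique + "\nSIMPLIFIED:\n" + draft
        )
    return draft

# Usage with any chat-completion client wrapped as `llm`:
# simplified = simplify_with_refinement(article_text, llm)
```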
    Performance of a Deep Learning Diabetic Retinopathy Algorithm in India
    Arthur Brant
    Xiang Yin
    Lu Yang
    Divleen Jeji
    Sunny Virmani
    Anchintha Meenu
    Naresh Babu Kannan
    Florence Thng
    Lily Peng
    Ramasamy Kim
    JAMA Network Open (2025)
    Importance: While prospective studies have investigated the accuracy of artificial intelligence (AI) for detection of diabetic retinopathy (DR) and diabetic macular edema (DME), to date, little published data exist on the clinical performance of these algorithms. Objective: To evaluate the clinical performance of an automated retinal disease assessment (ARDA) algorithm in the postdeployment setting at Aravind Eye Hospital in India. Design, Setting, and Participants: This cross-sectional analysis involved an approximate 1% sample of fundus photographs from patients screened using ARDA. Images were graded via adjudication by US ophthalmologists for DR and DME, and ARDA’s output was compared against the adjudicated grades at 45 sites in Southern India. Patients were randomly selected between January 1, 2019, and July 31, 2023. Main Outcomes and Measures: Primary analyses were the sensitivity and specificity of ARDA for severe nonproliferative DR (NPDR) or proliferative DR (PDR). Secondary analyses focused on sensitivity and specificity for sight-threatening DR (STDR) (DME or severe NPDR or PDR). Results: Among the 4537 patients with 4537 images with adjudicated grades, mean (SD) age was 55.2 (11.9) years and 2272 (50.1%) were male. Among the 3941 patients with gradable photographs, 683 (17.3%) had any DR, 146 (3.7%) had severe NPDR or PDR, 109 (2.8%) had PDR, and 398 (10.1%) had STDR. ARDA’s sensitivity and specificity for severe NPDR or PDR were 97.0% (95% CI, 92.6%-99.2%) and 96.4% (95% CI, 95.7%-97.0%), respectively. Positive predictive value (PPV) was 50.7% and negative predictive value (NPV) was 99.9%. The clinically important miss rate for severe NPDR or PDR was 0% (eg, some patients with severe NPDR or PDR were interpreted as having moderate DR and referred to clinic). ARDA’s sensitivity for STDR was 95.9% (95% CI, 93.0%-97.4%) and specificity was 94.9% (95% CI, 94.1%-95.7%); PPV and NPV were 67.9% and 99.5%, respectively. Conclusions and Relevance: In this cross-sectional study investigating the clinical performance of ARDA, sensitivity and specificity for severe NPDR and PDR exceeded 96%, and 100% of patients with severe NPDR and PDR were caught for ophthalmology referral. This preliminary large-scale postmarketing report of the performance of ARDA after screening 600 000 patients in India underscores the importance of monitoring and publishing an algorithm's clinical performance, consistent with recommendations by regulatory bodies.
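
As a worked example of the screening metrics this abstract reports, the following computes sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix. The counts are illustrative, not the study's data, but they show how low disease prevalence yields a high NPV alongside a modest PPV even when sensitivity and specificity are high.

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard 2x2 screening metrics."""
    return {
        "sensitivity": tp / (tp + fn),  # P(test+ | disease)
        "specificity": tn / (tn + fp),  # P(test- | no disease)
        "ppv": tp / (tp + fp),          # P(disease | test+)
        "npv": tn / (tn + fn),          # P(no disease | test-)
    }

# Illustrative counts only (not the study's data): with ~4% prevalence,
# NPV is near 1 while PPV stays modest despite strong sens/spec.
print(screening_metrics(tp=140, fp=137, fn=6, tn=3658))
```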
    Triaging mammography with artificial intelligence: an implementation study
    Sarah M. Friedewald
    Sunny Jansen
    Fereshteh Mahvar
    Timo Kohlberger
    David V. Schacht
    Sonya Bhole
    Dipti Gupta
    Scott Mayer McKinney
    Stacey Caron
    David Melnick
    Mozziyar Etemadi
    Samantha Winter
    Alejandra Maciel
    Luca Speroni
    Martha Sevenich
    Arnav Agharwal
    Rubin Zhang
    Gavin Duggan
    Shiro Kadowaki
    Atilla Kiraly
    Jie Yang
    Basil Mustafa
    Krish Eswaran
    Shravya Shetty
    Breast Cancer Research and Treatment (2025)
    Purpose: Many breast centers are unable to provide immediate results at the time of screening mammography, which results in delayed patient care. Implementing artificial intelligence (AI) could identify patients who may have breast cancer and accelerate the time to diagnostic imaging and biopsy diagnosis. Methods: In this prospective, randomized, unblinded, controlled implementation study we enrolled 1000 screening participants between March 2021 and May 2022. The experimental group used an AI system to prioritize a subset of cases for same-visit radiologist evaluation, and same-visit diagnostic workup if necessary. The control group followed the standard of care. The primary operational endpoints were time to additional imaging (TA) and time to biopsy diagnosis (TB). Results: The final cohort included 463 experimental and 392 control participants. The one-sided Mann-Whitney U test was employed for analysis of TA and TB. In the control group, the TA was 25.6 days [95% CI 22.0–29.9] and TB was 55.9 days [95% CI 45.5–69.6]. In comparison, the experimental group's mean TA was reduced by 25% (6.4 fewer days [one-sided 95% CI > 0.3], p<0.001) and mean TB was reduced by 30% (16.8 fewer days [one-sided 95% CI > 5.1], p=0.003). The time reduction was more pronounced for AI-prioritized participants in the experimental group. All participants eventually diagnosed with breast cancer were prioritized by the AI. Conclusions: Implementing AI prioritization can accelerate care timelines for patients requiring additional workup, while maintaining the efficiency of delayed interpretation for most participants. Reducing diagnostic delays could contribute to improved patient adherence, decreased anxiety and addressing disparities in access to timely care.
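
For reference, a small example of the one-sided Mann-Whitney U test used in this analysis, run on simulated time-to-additional-imaging data; the distributions and parameters below are illustrative only, not the trial's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Simulated days-to-additional-imaging for two arms (illustrative only).
control = rng.exponential(scale=25.6, size=392)
experimental = rng.exponential(scale=19.2, size=463)

# One-sided test: is the experimental arm stochastically smaller (faster)?
stat, p = mannwhitneyu(experimental, control, alternative="less")
print(f"U={stat:.0f}, one-sided p={p:.4g}")
```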
    While large language models (LLMs) have shown promise in diagnostic dialogue, their capabilities for effective management reasoning - including disease progression, therapeutic response, and safe medication prescription - remain under-explored. We advance the previously demonstrated diagnostic capabilities of the Articulate Medical Intelligence Explorer (AMIE) through a new LLM-based agentic system optimised for clinical management and dialogue, incorporating reasoning over the evolution of disease and multiple patient visit encounters, response to therapy, and professional competence in medication prescription. To ground its reasoning in authoritative clinical knowledge, AMIE leverages Gemini's long-context capabilities, combining in-context retrieval with structured reasoning to align its output with relevant and up-to-date clinical practice guidelines and drug formularies. In a randomized, blinded virtual Objective Structured Clinical Examination (OSCE) study, AMIE was compared to 21 primary care physicians (PCPs) across 100 multi-visit case scenarios designed to reflect UK NICE Guidance and BMJ Best Practice guidelines. AMIE was non-inferior to PCPs in management reasoning as assessed by specialist physicians and scored better in both preciseness of treatments and investigations, and in its alignment with and grounding of management plans in clinical guidelines. To benchmark medication reasoning, we developed RxQA, a multiple-choice question benchmark derived from two national drug formularies (US, UK) and validated by board-certified pharmacists. While AMIE and PCPs both benefited from the ability to access external drug information, AMIE outperformed PCPs on higher difficulty questions. While further research would be needed before real-world translation, AMIE's strong performance across evaluations marks a significant step towards conversational AI as a tool in disease management.
    Generative Artificial Intelligence (AI), particularly Large Language Models (LLMs), has demonstrated significant potential in clinical reasoning skills such as history-taking and differential diagnosis generation, critical aspects of medical education. This work explores how LLMs can augment medical curricula through interactive learning. We conducted a participatory design process with medical students, residents and medical education experts to co-create an AI-powered tutor prototype for clinical reasoning. As part of the co-design process, we conducted a qualitative user study, investigating learning needs and practices via interviews, and conducting concept evaluations through interactions with the prototype. Findings highlight the challenges learners face in transitioning from theoretical knowledge to practical application, and how an AI tutor can provide personalized practice and feedback. We conclude with design considerations, emphasizing the importance of context-specific knowledge and emulating positive preceptor traits, to guide the development of AI tools for medical education.
    Closing the AI generalisation gap by adjusting for dermatology condition distribution differences across clinical settings
    Rajeev Rikhye
    Aaron Loh
    Grace Hong
    Margaret Ann Smith
    Vijaytha Muralidharan
    Doris Wong
    Michelle Phung
    Nicolas Betancourt
    Bradley Fong
    Rachna Sahasrabudhe
    Khoban Nasim
    Alec Eschholz
    Basil Mustafa
    Jan Freyberg
    Terry Spitz
    Kat Chou
    Peggy Bui
    Justin Ko
    Steven Lin
    The Lancet eBioMedicine (2025)
    Background: Generalisation of artificial intelligence (AI) models to a new setting is challenging. In this study, we seek to understand the robustness of a dermatology AI model and whether it generalises from telemedicine cases to a new setting including both patient-submitted photographs (“PAT”) and clinician-taken photographs in-clinic (“CLIN”). Methods: A retrospective cohort study involving 2500 cases previously unseen by the AI model, including both PAT and CLIN cases, from 22 clinics in the San Francisco Bay Area, spanning November 2015 to January 2021. The primary outcome measure for the AI model and dermatologists was the top-3 accuracy, defined as whether their top 3 differential diagnoses contained the top reference diagnosis from a panel of dermatologists per case. Findings: The AI performed similarly between PAT and CLIN images (74% top-3 accuracy in CLIN vs. 71% in PAT); however, dermatologists were more accurate in PAT images (79% in CLIN vs. 87% in PAT). We demonstrate that demographic factors were not associated with AI or dermatologist errors; instead, several categories of conditions were associated with AI model errors (p < 0.05). Resampling CLIN and PAT to match skin condition distributions to the AI development dataset reduced the observed differences (AI: 84% CLIN vs. 79% PAT; dermatologists: 77% CLIN vs. 89% PAT). We demonstrate a series of steps to close the generalisation gap, requiring progressively more information about the new dataset, ranging from the condition distribution to additional training data for rarer conditions. When using additional training data and testing on the dataset without resampling to match AI development, we observed comparable performance from end-to-end AI model fine tuning (85% in CLIN vs. 83% in PAT) vs. fine tuning solely the classification layer on top of a frozen embedding model (86% in CLIN vs. 84% in PAT). Interpretation: AI algorithms can be efficiently adapted to new settings without additional training data by recalibrating the existing model, or with targeted data acquisition for rarer conditions and retraining just the final layer.
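
A minimal sketch of the resampling step this abstract describes: drawing an evaluation sample whose label mix matches a target condition distribution. The function and data are illustrative; the paper's exact procedure may differ.

```python
import random
from collections import Counter

def resample_to_match(cases, target_dist, n, seed=0):
    """Draw n cases so the label mix follows target_dist.

    `cases` is a list of (label, case) pairs; `target_dist` maps each
    label to a probability. Illustrative only, not the paper's method.
    """
    rng = random.Random(seed)
    by_label = {}
    for label, case in cases:
        by_label.setdefault(label, []).append(case)
    labels = [l for l in target_dist if l in by_label]
    weights = [target_dist[l] for l in labels]
    sample = []
    for _ in range(n):
        label = rng.choices(labels, weights=weights)[0]
        sample.append((label, rng.choice(by_label[label])))
    return sample

cases = [("eczema", i) for i in range(50)] + [("acne", i) for i in range(200)]
resampled = resample_to_match(cases, {"eczema": 0.5, "acne": 0.5}, n=100)
print(Counter(label for label, _ in resampled))
```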
    A personal health large language model for sleep and fitness coaching
    Anastasiya Belyaeva
    Zhun Yang
    Nick Furlotte
    Chace Lee
    Erik Schenck
    Yojan Patel
    Jian Cui
    Logan Schneider
    Robby Bryant
    Ryan Gomes
    Allen Jiang
    Roy Lee
    Javier Perez
    Jamie Rogers
    Cathy Speed
    Shyam Tailor
    Megan Walker
    Jeffrey Yu
    Tim Althoff
    Conor Heneghan
    Mark Malhotra
    Shwetak Patel
    Shravya Shetty
    Jiening Zhan
    Daniel McDuff
    Nature Medicine (2025)
    Although large language models (LLMs) show promise for clinical healthcare applications, their utility for personalized health monitoring using wearable device data remains underexplored. Here we introduce the Personal Health Large Language Model (PH-LLM), designed for applications in sleep and fitness. PH-LLM is a version of the Gemini LLM that was finetuned for text understanding and reasoning when applied to aggregated daily-resolution numerical sensor data. We created three benchmark datasets to assess multiple complementary aspects of sleep and fitness: expert domain knowledge, generation of personalized insights and recommendations and prediction of self-reported sleep quality from longitudinal data. PH-LLM achieved scores that exceeded a sample of human experts on multiple-choice examinations in sleep medicine (79% versus 76%) and fitness (88% versus 71%). In a comprehensive evaluation involving 857 real-world case studies, PH-LLM performed similarly to human experts for fitness-related tasks and improved over the base Gemini model in providing personalized sleep insights. Finally, PH-LLM effectively predicted self-reported sleep quality using a multimodal encoding of wearable sensor data, further demonstrating its ability to effectively contextualize wearable modalities. This work highlights the potential of LLMs to revolutionize personal health monitoring via tailored insights and predictions from wearable data and provides datasets, rubrics and benchmark performance to further accelerate personal health-related LLM research.
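
A hedged sketch of one way aggregated daily-resolution wearable data could be serialized into text for an LLM, in the spirit of what this abstract describes; the field names and prompt are hypothetical, not PH-LLM's actual schema.

```python
def sensor_days_to_prompt(days):
    """Serialize aggregated daily wearable metrics into a text prompt.

    `days` is a list of dicts of daily-resolution aggregates; the field
    names here are hypothetical, not PH-LLM's actual input format.
    """
    lines = ["Daily sleep/fitness summary:"]
    for d in days:
        lines.append(
            f"- {d['date']}: sleep {d['sleep_hours']:.1f}h, "
            f"{d['steps']} steps, resting HR {d['resting_hr']} bpm"
        )
    lines.append("Question: How can this user improve their sleep?")
    return "\n".join(lines)

days = [
    {"date": "2025-01-01", "sleep_hours": 6.2, "steps": 8400, "resting_hr": 62},
    {"date": "2025-01-02", "sleep_hours": 7.1, "steps": 10150, "resting_hr": 60},
]
print(sensor_days_to_prompt(days))
```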
    Earth AI: Unlocking Geospatial Insights with Foundation Models and Cross-Modal Reasoning
    Aaron Bell
    Aviad Barzilai
    Roy Lee
    Gia Jung
    Charles Elliott
    Adam Boulanger
    Amr Helmy
    Jacob Bien
    Ruth Alcantara
    Nadav Sherman
    Hassler Thurston
    Yotam Gigi
    Bolous Jaber
    Vered Silverman
    Luke Barrington
    Tim Thelin
    Elad Aharoni
    Kartik Hegde
    Yuval Carny
    Shravya Shetty
    Yehonathan Refael
    Stone Jiang
    David Schottlander
    Juliet Rothenberg
    Luc Houriez
    Yochai Blau
    Joydeep Paul
    Yang Chen
    Yael Maguire
    Aviv Slobodkin
    Shlomi Pasternak
    Alex Ottenwess
    Jamie McPike
    Per Bjornsson
    Natalie Williams
    Reuven Sayag
    Thomas Turnbull
    Ali Ahmadalipour
    David Andre
    Amit Aides
    Ean Phing VanLee
    Niv Efron
    Monica Bharel
    arXiv:2510.18318 (2025), https://doi.org/10.48550/arXiv.2510.18318
    Geospatial data offers immense potential for understanding our planet. However, the sheer volume and diversity of this data, along with its varied resolutions, timescales, and sparsity, pose significant challenges for thorough analysis and interpretation. The emergence of Foundation Models (FMs) and Large Language Models (LLMs) offers an unprecedented opportunity to tackle some of this complexity, unlocking novel and profound insights into our planet. This paper introduces a comprehensive approach to developing Earth AI solutions, built upon foundation models across three key domains (Planet-scale Imagery, Population, and Environment) and an intelligent Gemini-powered reasoning engine. We present rigorous benchmarks showcasing the power and novel capabilities of our foundation models and validate that they provide complementary value to improve geospatial inference. We show that the synergy between these models unlocks superior predictive capabilities. To handle complex, multi-step queries, we developed a Gemini-powered agent that jointly reasons over our multiple foundation models along with large geospatial data sources and tools to unlock novel geospatial insights. On a new benchmark of real-world crisis scenarios, our agent demonstrates the ability to deliver critical and timely insights, effectively bridging the gap between raw geospatial data and actionable understanding.