Responsible AI

The mission of the Responsible AI and Human Centered Technology (RAI-HCT) team is to conduct research and develop methodologies, technologies, and best practices to ensure AI systems are built responsibly.


About the team

We want to ensure that AI, and its development, have a positive impact on everyone, including marginalized communities. To meet this goal, we research and develop technology with a human-centered perspective, building tools and processes that put our AI Principles into practice at scale. We work alongside diverse collaborators, including our partner teams and external contributors, as we strive to make AI more transparent, fair, and useful to diverse communities. We also seek to continually improve the reliability and safety of our entire AI ecosystem.

Our intention is to create a future where technology benefits all users and society.

What we do

  • Foundational Research: Build foundational insights and methodologies that define the state-of-the-art of Responsible AI development across the field
  • Impact at Google: Collaborate with and contribute to teams across Alphabet to ensure that Google’s products are built following our AI Principles
  • Democratize AI: Embed a diversity of cultural contexts and voices in AI development, and empower a broader audience with consistent access, control, and explainability
  • Tools and Guidance: Develop tools and technical guidance that can be used by Google, our customers, and the community to test and improve AI products for RAI objectives

Team focus summaries

Equity and fairness

Identify and prevent unjust or prejudicial treatment of people, particularly underrepresented groups, wherever it manifests in algorithmic systems.

Safety

Develop strong safety practices to avoid unintended results through research in robustness, benchmarking, and adversarial testing.

Responsible data

Identify and advance responsible data practices for ML datasets, covering the spectrum from research methods and techniques to tooling and best practices.

Interpretability and explainability

Develop methods and techniques to help developers and users understand and explain ML model inferences and predictions.

Foundational ML research with responsible values

Develop machine learning methodologies that represent AI at its best (responsible, fair, transparent, robust, and inclusive), and apply them in the real world.

Community-focused research

Explore the social and historical context and experiences of communities that have been impacted by AI. Promote research approaches that center community knowledge in the development of new AI technologies through communities' participation in research.

Human-computer interaction

Design and build human-in-the-loop tools that make machine learning models more intuitive and interactive for users.

Applications for social good

Demonstrate AI’s societal benefit by enabling real-world impact.

Featured publications

Towards a Critical Race Methodology in Algorithmic Fairness
Alex Hanna
ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*) (2020)
We examine the way race and racial categories are adopted in algorithmic fairness frameworks. Current methodologies fail to adequately account for the socially constructed nature of race, instead adopting a conceptualization of race as a fixed attribute. Treating race as an attribute, rather than a structural, institutional, and relational phenomenon, can serve to minimize the structural aspects of algorithmic unfairness. In this work, we focus on the history of racial categories and turn to critical race theory and sociological work on race and ethnicity to ground conceptualizations of race for fairness research, drawing on lessons from public health, biomedical research, and social survey research. We argue that algorithmic fairness researchers need to take into account the multidimensionality of race, take seriously the processes of conceptualizing and operationalizing race, focus on social processes which produce racial inequality, and consider perspectives of those most affected by sociotechnical systems.
Fairness in Recommendation Ranking through Pairwise Comparisons
Alex Beutel, Tulsee Doshi, Hai Qian, Li Wei, Yi Wu, Lukasz Heldt, Zhe Zhao, Lichan Hong, Cristos Goodrow
KDD (2019)
Recommender systems are one of the most pervasive applications of machine learning in industry, with many services using them to match users to products or information. As such it is important to ask: what are the possible fairness risks, how can we quantify them, and how should we address them? In this paper we offer a set of novel metrics for evaluating algorithmic fairness concerns in recommender systems. In particular we show how measuring fairness based on pairwise comparisons from randomized experiments provides a tractable means to reason about fairness in rankings from recommender systems. Building on this metric, we offer a new regularizer to encourage improving this metric during model training and thus improve fairness in the resulting rankings. We apply this pairwise regularization to a large-scale, production recommender system and show that we are able to significantly improve the system's pairwise fairness.
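As a rough illustration of the pairwise idea described above, the sketch below computes a pairwise accuracy from model scores and the gap in that accuracy between two item groups. It is a minimal sketch with hypothetical function names and toy data, not the paper's production implementation or its regularizer.

```python
# Minimal sketch: pairwise fairness check for a ranker. For sampled pairs of
# (engaged, not-engaged) items, measure how often the model scores the engaged
# item higher, then compare that rate across two item groups.
import numpy as np

def pairwise_accuracy(scores_pos, scores_neg):
    """Fraction of (engaged, not-engaged) pairs ranked correctly."""
    return float(np.mean(scores_pos > scores_neg))

def group_gap(scores_pos, scores_neg, groups):
    """Difference in pairwise accuracy between item groups 0 and 1."""
    g = np.asarray(groups)
    acc_a = pairwise_accuracy(scores_pos[g == 0], scores_neg[g == 0])
    acc_b = pairwise_accuracy(scores_pos[g == 1], scores_neg[g == 1])
    return acc_a - acc_b

# Toy usage: scores for the engaged and non-engaged item in each sampled pair,
# plus the group of the engaged item.
rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=1000)
neg = rng.normal(0.0, 1.0, size=1000)
grp = rng.integers(0, 2, size=1000)
print("pairwise accuracy:", pairwise_accuracy(pos, neg))
print("group gap:", group_gap(pos, neg, grp))
```

In the paper this style of metric is built from randomized experiments and then turned into a training-time regularizer; the sketch only shows the evaluation side of that idea.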
Neural networks lack adversarial robustness, i.e., they are vulnerable to adversarial examples that through small perturbations to inputs cause incorrect predictions. Further, trust is undermined when models give miscalibrated predictions, i.e., the predicted probability is not a good indicator of how much we should trust our model. In this paper, we study the connection between adversarial robustness and calibration and find that the inputs for which the model is sensitive to small perturbations (are easily attacked) are more likely to have poorly calibrated predictions. Based on this insight, we examine if calibration can be improved by addressing those adversarially unrobust inputs. To this end, we propose Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS) that integrates the correlations of adversarial robustness and calibration into training by adaptively softening labels for an example based on how easily it can be attacked by an adversary. We find that our method, taking the adversarial robustness of the in-distribution data into consideration, leads to better calibration over the model even under distributional shifts. In addition, AR-AdaLS can also be applied to an ensemble model to further improve model calibration.
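To make the adaptive-smoothing idea concrete, here is a hedged sketch (not the authors' code; the robustness scores, smoothing range, and function name are placeholders) in which each example's label is softened in proportion to how easily it can be attacked:

```python
# Sketch of robustness-aware label smoothing in the spirit of AR-AdaLS:
# examples judged less adversarially robust receive more label smoothing.
import numpy as np

def adaptive_smooth_labels(one_hot, robustness, eps_min=0.01, eps_max=0.2):
    """Soften labels more for examples with low robustness scores.

    one_hot: (n, k) one-hot labels; robustness: (n,) scores in [0, 1],
    where 0 means "easily attacked" and 1 means "robust".
    """
    n, k = one_hot.shape
    eps = eps_max - (eps_max - eps_min) * np.asarray(robustness)  # (n,)
    eps = eps[:, None]
    return (1.0 - eps) * one_hot + eps / k

# Toy usage with 3 classes: the least robust example gets the softest label.
labels = np.eye(3)[[0, 1, 2]]
robust = np.array([0.1, 0.5, 0.9])
print(adaptive_smooth_labels(labels, robust))
```

How the per-example robustness score is estimated (e.g., from attack success or perturbation size) is the substantive part of the method and is not shown here.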
Machine learning (ML) approaches have demonstrated promising results in a wide range of healthcare applications. Data plays a crucial role in developing ML-based healthcare systems that directly affect people's lives. Many of the ethical issues surrounding the use of ML in healthcare stem from structural inequalities underlying the way we collect, use, and handle data. Developing guidelines to improve documentation practices regarding the creation, use, and maintenance of ML healthcare datasets is therefore of critical importance. In this work, we introduce Healthsheet, a contextualized adaptation of the original datasheet questionnaire for health-specific applications. Through a series of semi-structured interviews, we adapt the datasheets for healthcare data documentation. As part of the Healthsheet development process, and to understand the obstacles researchers face in creating datasheets, we worked with three publicly available healthcare datasets as our case studies, each with a different type of structured data: Electronic Health Records (EHR), clinical trial study data, and smartphone-based performance outcome measures. Our findings from the interview study and case studies show 1) that datasheets should be contextualized for healthcare, 2) that despite incentives to adopt accountability practices such as datasheets, there is a lack of consistency in the broader use of these practices, 3) how the ML for health community views datasheets, and particularly Healthsheets, as a diagnostic tool to surface the limitations and strengths of datasets, and 4) the relative importance of different fields in the datasheet to healthcare concerns.
Underspecification Presents Challenges for Credibility in Modern Machine Learning
Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jon Deaton, Shaobo Hou, Neil Houlsby, Ghassen Jerfel, Yian Ma, Akinori Mitani, Andrea Montanari, Christopher Nielsen, Thomas Osborne, Rajiv Raman, Kim Ramasamy, Martin Gamunu Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch
Journal of Machine Learning Research (2020)
ML models often exhibit unexpectedly poor behavior when they are deployed in real-world domains. We identify underspecification as a key reason for these failures. An ML pipeline is underspecified when it can return many predictors with equivalently strong held-out performance in the training domain. Underspecification is common in modern ML pipelines, such as those based on deep learning. Predictors returned by underspecified pipelines are often treated as equivalent based on their training domain performance, but we show here that such predictors can behave very differently in deployment domains. This ambiguity can lead to instability and poor model behavior in practice, and is a distinct failure mode from previously identified issues arising from structural mismatch between training and deployment domains. We show that this problem appears in a wide variety of practical ML pipelines, using examples from computer vision, medical imaging, natural language processing, clinical risk prediction based on electronic health records, and medical genomics. Our results show the need to explicitly account for underspecification in modeling pipelines that are intended for real-world deployment in any domain.
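A toy way to see this failure mode, assuming a scikit-learn style workflow (an illustration only, not the paper's experimental setup or data), is to train several identically configured models that differ only in random seed, confirm their validation accuracy is comparable, and then compare them under a simulated deployment shift:

```python
# Illustrative only: identically configured models that differ only in seed can
# agree on held-out data yet behave differently once the inputs shift.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_train, y_train = X[:1000], y[:1000]
X_val, y_val = X[1000:], y[1000:]

# Simulated deployment-domain shift: rescale a few features at test time.
X_shift = X_val.copy()
X_shift[:, :5] *= 3.0

for seed in range(5):
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=seed)
    model.fit(X_train, y_train)
    print(f"seed={seed}  val acc={model.score(X_val, y_val):.3f}  "
          f"shifted acc={model.score(X_shift, y_val):.3f}")
```

The spread of the shifted-set accuracies across seeds, relative to the near-identical validation accuracies, is the kind of ambiguity the paper calls underspecification.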
A Systematic Review and Thematic Analysis of Community-Collaborative Approaches to Computing Research
Ned Cooper, Tiffanie Horne, Gillian Hayes, Jess Scon Holbrook, Lauren Wilcox
ACM Conference on Human Factors in Computing Systems (ACM CHI) (2022)
HCI researchers have been gradually shifting attention from individual users to communities when engaging in research, design, and system development. However, our field has yet to establish a cohesive, systematic understanding of the challenges, benefits, and commitments of community-collaborative approaches to research. We conducted a systematic review and thematic analysis of 47 computing research papers discussing participatory research with communities for the development of technological artifacts and systems, published over the last two decades. From this review, we identified seven themes associated with the evolution of a project: from establishing community partnerships to sustaining results. Our findings suggest several tensions characterize these projects, many of which relate to the power and position of researchers, and the computing research environment, relative to community partners. We discuss the implications of our findings and offer methodological proposals to guide HCI, and computing research more broadly, towards practices that center a community.
Human annotated data plays a crucial role in machine learning (ML) research and development. However, the ethical considerations around the processes and decisions that go into dataset annotation have not received nearly enough attention. In this paper, we survey an array of literature that provides insights into ethical considerations around crowdsourced dataset annotation. We synthesize these insights, and lay out the challenges in this space along two layers: (1) who the annotator is, and how the annotators' lived experiences can impact their annotations, and (2) the relationship between the annotators and the crowdsourcing platforms, and what that relationship affords them. Finally, we introduce a novel framework, CrowdWorkSheets, for dataset developers to facilitate transparent documentation of key decision points at various stages of the data annotation pipeline: task formulation, selection of annotators, platform and infrastructure choices, dataset analysis and evaluation, and dataset release and maintenance.
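As a purely hypothetical sketch of how the decision points named above could be recorded alongside a dataset, one might keep a small structured record per annotation effort. The field names below mirror the stages listed in the abstract, but the structure and example values are illustrative and not part of the published framework.

```python
# Hypothetical record for CrowdWorkSheets-style documentation of an
# annotation pipeline; field names follow the stages named in the abstract.
from dataclasses import dataclass, field

@dataclass
class CrowdWorkSheet:
    task_formulation: str
    annotator_selection: str
    platform_and_infrastructure: str
    analysis_and_evaluation: str
    release_and_maintenance: str
    notes: list = field(default_factory=list)

sheet = CrowdWorkSheet(
    task_formulation="Binary relevance labels; guidelines v3 with edge cases.",
    annotator_selection="Pool balanced across regions; compensation documented.",
    platform_and_infrastructure="Crowdsourcing platform with 3-way redundancy.",
    analysis_and_evaluation="Inter-annotator agreement reported per slice.",
    release_and_maintenance="Versioned releases; contact alias for corrections.",
)
print(sheet.task_formulation)
```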
Re-contextualizing Fairness in NLP: The Case of India
Shaily Bhatt
In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (AACL-IJCNLP) (2022)
Recent research has revealed undesirable biases in NLP data and models. However, these efforts focus on social disparities in the West and are not directly portable to other geo-cultural contexts. In this paper, we focus on NLP fairness in the context of India. We start with a brief account of the prominent axes of social disparities in India. We build resources for fairness evaluation in the Indian context and use them to demonstrate prediction biases along some of the axes. We then delve deeper into social stereotypes for Region and Religion, demonstrating their prevalence in corpora and models. Finally, we outline a holistic research agenda to re-contextualize NLP fairness research for the Indian context, accounting for Indian societal context, bridging technological gaps in NLP capabilities and resources, and adapting to Indian cultural values. While we focus on India, this framework can be generalized to other geo-cultural contexts.
Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing
Becky White, Inioluwa Deborah Raji, Margaret Mitchell, Timnit Gebru
ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), Barcelona (2020)
Rising concern for the societal implications of artificial intelligence systems has inspired a wave of academic and journalistic literature in which deployed systems are audited for harm by investigators from outside the organizations deploying the algorithms. However, it remains challenging for practitioners to identify the harmful repercussions of their own systems prior to deployment, and, once deployed, emergent issues can become difficult or impossible to trace back to their source. In this paper, we introduce a framework for algorithmic auditing that supports artificial intelligence system development end-to-end, to be applied throughout the internal organization development lifecycle. Each stage of the audit yields a set of documents that together form an overall audit report, drawing on an organization's values or principles to assess the fit of decisions made throughout the process. The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity.
