Abhishek Roy

Abhishek leads user research efforts on online deception and manipulation in Google's Trust & Safety team. In this role, he oversees quantitative and qualitative research studies focused on gaining insights into user behavior, perceptions, and preferences. These insights are used to drive changes that promote user protection across Google products. Abhishek has worked at Google for over 12 years and has previously led teams conducting user research and policy enforcement efforts on products like Google News and Google Search.
Authored Publications
    Abstract: Online scams are a growing threat in India, impacting millions and causing substantial financial losses year over year. This white paper presents ShieldUp!, a novel mobile game prototype designed to inoculate users against common online scams by leveraging the principles of psychological inoculation theory. ShieldUp! exposes users to weakened versions of manipulation tactics frequently used by scammers and teaches them to recognize and pre-emptively refute these techniques. A randomized controlled trial (RCT) with 3,000 participants in India was conducted to evaluate the game's efficacy in helping users better identify scam scenarios. Participants were assigned to one of three groups: the ShieldUp! group (15 minutes of play), a general scam awareness group (10-15 minutes of watching videos and reading tips), and a control group (10 minutes of playing "Chrome Dino", an unrelated game). Scam discernment ability was measured using a newly developed Scam Discernment Ability Test (SDAT-10) before the intervention, immediately after, and at a 21-day follow-up. Results indicated that participants who played ShieldUp! showed a significant improvement in their ability to identify scams compared to the other two groups, and this improvement was maintained at follow-up. Importantly, while both interventions initially led users to show increased skepticism towards even genuine online offers (non-scam scenarios), this effect dissipated after 21 days, suggesting no long-term negative impact on user trust. This study demonstrates the potential of game-based inoculation as a scalable and effective scam prevention strategy, offering valuable insights for product design, policy interventions, and future research, including the need for longitudinal studies and cross-cultural adaptations.
    Seeking in Cycles: How Users Leverage Personal Information Ecosystems to Find Mental Health Information
    Ashlee Milton
    Fernando Maestre
    Stevie Chancellor
    Proceedings of the CHI Conference on Human Factors in Computing Systems (2024)
    Abstract: Information is crucial to how people understand their mental health and well-being, and many turn to online sources found through search engines and social media. We present the findings from an interview study (n = 17) of participants who use online platforms to seek information about their mental illnesses. We found that participants leveraged multiple platforms in a cyclical process for finding information from their personal information ecosystems, driven by the adoption of new information and uncertainty surrounding the credibility of information. Concerns about privacy, fueled by perceptions of stigma and platform design, also influenced their information-seeking decisions. Our work proposes theoretical implications for social computing and information retrieval on information seeking in users' personal information ecosystems. We also offer design implications to support users in navigating their personal information ecosystems to find mental health information.
    A Survey of Scam Exposure, Victimization, Types, Vectors, and Reporting in 12 Countries
    Mo Houtti
    Narsi G
    Ashley Walker
    Journal of Online Trust & Safety, 2 (2024)
    Abstract: Scams are a widespread issue with severe consequences for both victims and perpetrators, but existing data collection is fragmented, precluding global and comparative local understanding. The present study addresses this gap through a nationally representative survey (n = 8,369) on scam exposure, victimization, types, vectors, and reporting in 12 countries: Belgium, Egypt, France, Hungary, Indonesia, Mexico, Romania, Slovakia, South Africa, South Korea, Sweden, and the United Kingdom. We analyze six survey questions to build a detailed quantitative picture of the scam landscape in each country, and compare across countries to identify global patterns. We find, first, that residents of less affluent countries suffer financial loss from scams more often. Second, we find that the internet plays a key role in scams across the globe, and that GNI per capita is strongly associated with specific scam types and contact vectors. Third, we find widespread underreporting, with residents of less affluent countries being less likely to know how to report a scam. Our findings contribute valuable insights for researchers, practitioners, and policymakers in the online fraud and scam prevention space.
    Evidence-Based Misinformation Interventions: Challenges and Opportunities for Measurement and Collaboration
    Yasmin Green
    Andrew Gully
    Yoel Roth
    Joshua Tucker
    Alicia Wanless
    Carnegie Endowment for International Peace (2023)
    Abstract: The lingering coronavirus pandemic has only underscored the need to find effective interventions to help internet users evaluate the credibility of the information before them. Yet a divide remains between researchers within digital platforms and those in academia and other research professions who are analyzing interventions. Beyond issues related to data access, a challenge deserving papers of its own, opportunities exist to clarify the core competencies of each research community and to build bridges between them in pursuit of the shared goal of improving user-facing interventions that address misinformation online. This paper attempts to contribute to such bridge-building by posing questions for discussion: How do different incentive structures determine the selection of outcome metrics and the design of research studies by academics and platform researchers, given the values and objectives of their respective institutions? What factors affect the evaluation of intervention feasibility for platforms that are not present for academics (for example, platform users' perceptions, measurability at scale, interaction, and longitudinal effects on metrics that are introduced in real-world deployments)? What are the mutually beneficial opportunities for collaboration (such as increased insight-sharing from platforms to researchers about user feedback regarding a diversity of intervention designs)? Finally, we introduce a measurement attributes framework to aid development of feasible, meaningful, and replicable metrics for researchers and platform practitioners to consider when developing, testing, and deploying misinformation interventions.
    Abstract: Conspiracy influencers have become a major means of spreading health misinformation, particularly during the COVID-19 pandemic. Previous research has found that a small number of these influencers are responsible for a large portion of misinformation related to public health. Although there has been research on the spread of misinformation, there has been comparatively little on the strategies conspiracy influencers use to spread their message across platforms. To better understand these strategies, we conducted a cross-sectional study of 55 influencers, analyzing their platform usage, audience size, account creation date, and content originality. Our results indicate that these influencers use multiple platforms to circumvent algorithmic discrimination and deplatforming, that they tailor their content to monetization channels, and that despite the rise in popularity of unmoderated platforms, they still rely on moderated platforms to build an audience. Our findings can inform strategies to combat the spread of health misinformation in the online ecosystem.