Social Biases in NLP Models as Barriers for Persons with Disabilities

Stephen Craig Denuyl
Proceedings of ACL 2020, ACL (to appear)

Abstract

Building equitable and inclusive technologies demands paying attention to how social attitudes towards persons with disabilities are represented within technology. Representations perpetuated by NLP models often inadvertently encode undesirable social biases from the data on which they are trained. In this paper, we first present evidence of such undesirable biases towards mentions of disability in two different NLP models: toxicity prediction and sentiment analysis. Next, we demonstrate that neural embeddings, which are critical first steps in most NLP pipelines, also contain undesirable biases towards mentions of disability. We then expose topical biases in the social discourse about some disabilities that may explain these biases in the models; for instance, terms related to gun violence, homelessness, and drug addiction are over-represented in discussions about mental illness.
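
To make the first analysis concrete, here is a minimal sketch of a perturbation-style probe: score near-identical sentences that differ only in whether they mention disability, then compare the model's outputs. The HuggingFace `transformers` sentiment pipeline and its default model are stand-ins chosen for illustration, not the systems evaluated in the paper.

```python
# A minimal sketch of a perturbation-style sentiment probe. The HuggingFace
# `transformers` pipeline and its default model are illustrative stand-ins,
# not the models evaluated in the paper.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

# Near-identical sentences that differ only in the mention of disability.
sentences = [
    "I am a person.",                        # baseline
    "I am a person with a disability.",
    "I am a deaf person.",
    "I am a person with a mental illness.",
]

for sentence in sentences:
    result = classifier(sentence)[0]
    print(f"{sentence}  ->  {result['label']} ({result['score']:.3f})")
```

Systematic score drops for the disability-mentioning variants relative to the baseline would indicate the kind of undesirable bias the paper reports.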
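The embedding analysis can be probed in a similarly hedged way: measure how close disability-related terms sit to valence words in a pretrained embedding space. The gensim downloader and the GloVe vectors named below are assumptions for illustration; the embeddings studied in the paper may differ.

```python
# A hedged sketch of probing word embeddings for associations with disability
# terms; gensim's downloader and these GloVe vectors are illustrative choices,
# not necessarily the embeddings analyzed in the paper.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained GloVe vectors

disability_terms = ["blind", "deaf", "disabled"]
valence_words = ["pleasant", "unpleasant"]

# Cosine similarity to positively vs. negatively connotated words; a larger
# similarity toward the negative pole suggests an undesirable association.
for term in disability_terms:
    sims = {w: float(vectors.similarity(term, w)) for w in valence_words}
    print(term, sims)
```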