AI language models show bias against people with disabilities, study finds
Image caption: Algorithms that drive natural language processing often show tendencies that could be offensive or prejudiced toward individuals with disabilities, according to researchers at the Penn State College of Information Sciences and Technology (IST). Credit: Adobe Stock: Photographee.eu. All Rights Reserved.
UNIVERSITY PARK, Pa. — Natural language processing (NLP) is a type of artificial intelligence that allows machines to work with text and spoken words in many applications — such as smart assistants, email autocorrect, and spam filters — helping automate and streamline operations for individual users and enterprises. However, the algorithms that drive this technology often show tendencies that could be offensive or prejudiced toward individuals with disabilities, according to researchers at the Penn State College of Information Sciences and Technology (IST).
The researchers found that all the algorithms and models they tested contained significant implicit bias against people with disabilities. Previous research on pretrained language models — which are trained on large amounts of data that may contain implicit biases — has [...]
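The article does not spell out how the researchers measured implicit bias, but a common way such studies quantify it is an association score over word embeddings: a word's average similarity to "pleasant" terms minus its average similarity to "unpleasant" terms, in the style of a WEAT test. The toy sketch below illustrates the idea only — the embedding vectors and word lists are made up for this example, not taken from the study.

```python
# Toy sketch of a WEAT-style association score, a common method for
# quantifying implicit bias in word embeddings. The 3-d vectors below are
# fabricated for illustration; a real study would use vectors from a
# pretrained language model.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word_vec, pleasant, unpleasant):
    """Mean similarity to pleasant terms minus mean similarity to unpleasant terms."""
    pos = sum(cosine(word_vec, p) for p in pleasant) / len(pleasant)
    neg = sum(cosine(word_vec, n) for n in unpleasant) / len(unpleasant)
    return pos - neg

# Hypothetical embeddings (illustrative only).
embeddings = {
    "disabled":    [0.1, 0.9, 0.2],
    "nondisabled": [0.8, 0.2, 0.1],
    "good":        [0.9, 0.1, 0.0],
    "happy":       [0.7, 0.3, 0.1],
    "bad":         [0.0, 1.0, 0.3],
    "sad":         [0.2, 0.8, 0.4],
}
pleasant = [embeddings["good"], embeddings["happy"]]
unpleasant = [embeddings["bad"], embeddings["sad"]]

score_disabled = association(embeddings["disabled"], pleasant, unpleasant)
score_nondisabled = association(embeddings["nondisabled"], pleasant, unpleasant)
# A consistently lower score for disability-related terms would indicate
# the kind of implicit negative association the researchers describe.
```

In a real evaluation, the vectors would come from the pretrained models under test, and a statistically meaningful result would require many terms and a significance test rather than a single pair of words.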