This data set helps researchers spot harmful stereotypes in LLMs
AI models are riddled with culturally specific biases. A new data set, called SHADES, is designed to help developers combat the problem by spotting harmful stereotypes and other kinds of discrimination that emerge in AI chatbot responses across a wide range of languages. Margaret Mitchell, chief ethics scientist at AI startup Hugging Face…