Researchers Identify Origins of Stereotyping in AI Language Technologies


A team of researchers has identified a range of cultural stereotypes that are built into artificial intelligence models for language early in their development, a finding that adds to our understanding of the factors that influence the results produced by search engines and other AI-driven tools.

“Our work identifies stereotypes about people that widely used AI language models pick up as they learn English. The models we’re looking at, and others like them for other languages, are the building blocks of most modern language technologies, from translation systems to question-answering personal assistants to industry tools for resume screening, highlighting the real danger posed by the use of these technologies in their current state,” explains Sam Bowman, an assistant professor at NYU’s Department of Linguistics and Center for Data Science and the paper’s senior author. “We expect this effort and related projects will encourage future research towards building more fair language processing systems.”
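Bowman’s point that models “pick up” stereotypes as they learn English can be made concrete by probing a pretrained masked language model. The minimal sketch below, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint (neither is named in the article, and this is not the paper’s own evaluation method), asks the model to fill a masked pronoun slot and prints the probabilities it assigns; skewed predictions across otherwise identical sentences hint at the kind of learned associations the researchers study.

```python
from transformers import pipeline

# Illustrative probe only: the model choice and probe sentences are
# assumptions for this example, not the paper's methodology.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

probes = [
    "The nurse said that [MASK] would arrive soon.",
    "The engineer said that [MASK] would arrive soon.",
]

for sentence in probes:
    print(sentence)
    # top_k returns the highest-probability fillers for the masked slot;
    # lopsided pronoun probabilities suggest learned occupational stereotypes.
    for prediction in fill_mask(sentence, top_k=3):
        print(f"  {prediction['token_str']:>8}  p={prediction['score']:.3f}")
```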

The work dovetails with recent scholarship, such as Safiya Umoja Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press, 2018), which chronicles how racial and other biases have plagued widely used language technologies.

The paper’s other authors were Nikita Nangia, a doctoral candidate at NYU’s Center for Data Science; Clara Vania, a postdoctoral researcher at NYU’s Center for Data Science; and Rasika Bhalerao, a doctoral candidate at NYU’s Tandon School of Engineering.
