What AI Can Teach Us About Stereotypes

One of the main concerns with AI technologies today is the fear that they will propagate the biases that already exist in society. A recent Stanford study turned this around, however, highlighting how AI can also hold a mirror up to society and shed light on the biases within it.

The study used word embeddings to map relationships and associations between words, and through them to measure changes in gender and ethnic stereotypes in the United States over the last century. The algorithms were fed text from a huge canon of books, newspapers and other sources, and the resulting associations were compared with official census demographic data and with societal changes such as the women's movement.
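To make the idea concrete, here is a minimal sketch of how associations can be read out of word embeddings. The vectors below are invented toy values, and the cosine-similarity difference used here is one common way to score such associations; it is not necessarily the exact metric the Stanford team used.

```python
# Minimal sketch: measuring word associations in an embedding space.
# The vectors are tiny toy examples; real studies use embeddings
# trained on large corpora (e.g. word2vec or GloVe).
import numpy as np

# Hypothetical 4-dimensional vectors, for illustration only.
vectors = {
    "he":       np.array([0.9, 0.1, 0.3, 0.0]),
    "she":      np.array([0.1, 0.9, 0.3, 0.0]),
    "engineer": np.array([0.8, 0.2, 0.5, 0.1]),
    "nurse":    np.array([0.2, 0.8, 0.5, 0.1]),
}

def cosine(a, b):
    """Cosine similarity between two 1-D vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def gender_association(word):
    """Positive => closer to 'he', negative => closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

for w in ("engineer", "nurse"):
    print(f"{w}: {gender_association(w):+.3f}")
```

In the toy data above, 'engineer' scores positive (male-associated) and 'nurse' scores negative (female-associated); with real embeddings, the same score reflects how the training text tended to use each word.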

“Word embeddings can be used as a microscope to study historical changes in stereotypes in our society,” the authors say. “Our prior research has shown that embeddings effectively capture existing stereotypes and that those biases can be systematically removed. But we think that, instead of removing those stereotypes, we can also use embeddings as a historical lens for quantitative, linguistic and sociological analyses of biases.”

Dissecting society

The researchers used embeddings to single out specific occupations and adjectives that tended to be biased towards women or particular ethnic groups in each decade from 1900 to the present day. These embeddings were trained on newspaper articles, whilst also drawing on the work of fellow Stanford researchers who had developed embeddings trained on large text datasets, such as the American books on Google Books.

The biases the embeddings surfaced were then compared with the demographic changes recorded in each official census taken during the period.

The analysis found a clear shift in how gender was portrayed throughout the 20th century, with biased associations generally weakening as the century progressed.

For instance, adjectives such as 'intelligent' and 'logical' were more often associated with men in the first half of the 20th century, but this gap narrowed considerably (although it still remains) as we came closer to the present day.
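To illustrate how such a gap can be tracked over time, the sketch below computes the same kind of association score in two hypothetical 'decade' embeddings. All vectors are made-up values chosen to mimic the narrowing described above; in the actual study, separate embeddings are trained on the text of each decade.

```python
# Toy sketch: tracking a gender-association score across decades.
# All vectors are invented illustrative values, not real embeddings;
# the real study trains a separate embedding model per decade of text.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical per-decade embeddings: decade -> {word: vector}.
decades = {
    1910: {"he": np.array([0.9, 0.1]), "she": np.array([0.1, 0.9]),
           "logical": np.array([0.85, 0.15])},  # strongly male-associated
    1990: {"he": np.array([0.9, 0.1]), "she": np.array([0.1, 0.9]),
           "logical": np.array([0.6, 0.45])},   # gap narrowed, not closed
}

for year in sorted(decades):
    emb = decades[year]
    score = cosine(emb["logical"], emb["he"]) - cosine(emb["logical"], emb["she"])
    print(f"{year}: 'logical' male-association = {score:+.3f}")
```

Run as written, the score drops from roughly +0.72 in 1910 to roughly +0.18 in 1990: still positive, but much smaller, which is the shape of the trend the study reports.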

There was also a shift in attitudes towards Asians and Asian Americans. In the early part of the 20th century, words like 'barbaric' and 'cruel' were among the adjectives most commonly used to describe people with Asian surnames, but towards the end of the century huge progress had been made. By the 1990s, the most common adjectives were 'passive' and 'sensitive'.

“The starkness of the change in stereotypes stood out to me,” the authors say. “When you study history, you learn about propaganda campaigns and these outdated views of foreign groups. But how much the literature produced at the time reflected those stereotypes was hard to appreciate.”

The work underlines the potential for AI to give us greater insight into the biases that exist in society, although it is still some way off being able to detect the biases inherent in its own workings. Hopefully that will be something for future research.
