Is Your Software Racist About Different Cultures?

Many software applications draw on existing human work to learn | © Štefan Štefančík / Unsplash

Freelance Writer - instagram.com/andrewthompsonsa

It may seem absurd that seemingly neutral computer software can perpetuate gender and cultural biases around the world. But as software and internet technology penetrate new, diverse markets, users are starting to notice problematic phrases and connotations cropping up in their daily usage.

Patently racist or just outdated?

The issue first came to prominence when online users began noticing Google Translate’s strange use of pronouns in some languages. Certain translations seemed stuck in outdated literature, particularly in the pronouns assigned to professions. For example, informal online studies revealed that in many translations, doctors turned out to be men while nurses were women.

Others have raised similar issues with voice-based personal assistants such as Apple’s Siri and Amazon’s Alexa, which users report are often unable to understand different accents. Examples of biased software exist elsewhere: Google’s photo recognition software came under fire for labeling a black man and his friend as “gorillas,” and Microsoft’s automated Twitter chatbot, intended to answer questions and provide support without human intervention, began tweeting racist posts.

Many smart devices rely on human interaction and complex algorithms

A reflection of mankind

Many believe the issues stem from a tech industry that remains predominantly white and male-dominated, with many companies still headquartered in, and focused on, the Global North. This drives many of the biases in standalone software and packaged products, and many have protested against social media companies like Facebook and YouTube for censoring African culture. But in cases involving software, there is another force at play, one that casts a spotlight on other users and resources in the online world.

Many online products, such as chatbots, Google Translate, and photo recognition systems, are driven by online resources: either print material that may be outdated, or other users. It later emerged that the Microsoft chatbot scandal, for example, stemmed from the machine learning racist users’ terms and phrases and treating them as acceptable.
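To see how this kind of unfiltered learning goes wrong, consider the toy sketch below. It is a hypothetical illustration, not Microsoft’s actual system: the class name, methods, and phrases are invented for demonstration. The core flaw it shows is real, though: a bot that treats every user message as acceptable training data will absorb abusive language along with everything else.

```python
# A deliberately naive chatbot that "learns" by storing phrases users
# have sent it. This is a hypothetical illustration, not Microsoft's
# actual system: it shows why a bot with no moderation step will
# absorb whatever its users feed it, good or bad.
import random

class NaiveChatbot:
    def __init__(self):
        # Seed responses supplied by the developers.
        self.learned_phrases = ["Hello!", "Tell me more."]

    def learn(self, user_message: str) -> None:
        # No filtering or moderation: every message, including
        # abusive ones, becomes a candidate future reply.
        self.learned_phrases.append(user_message)

    def reply(self) -> str:
        # Replies are sampled from everything the bot has ever seen.
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
bot.learn("You're great!")          # benign input is learned...
bot.learn("<some abusive phrase>")  # ...but so is coordinated abuse
print(bot.reply())  # may surface any learned phrase, good or bad
```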

Google Translate uses similar technology, learning from other users and even from print resources. In some cases, the software learns translations from existing bodies of writing, many of which have entrenched biases against different cultures.
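The translation problem can be illustrated with a similarly simplified sketch. Languages such as Turkish use a single gender-neutral pronoun, so a system translating into English must choose between “he” and “she.” The example below is a hypothetical toy, not Google Translate’s actual model; the tiny corpus and helper functions are invented, but they show how a purely statistical choice simply mirrors the skew of the text it learns from.

```python
# A toy illustration of how corpus statistics can bake bias into
# translation. Hypothetical sketch, not Google Translate's real model:
# when the source language has a gender-neutral pronoun, a purely
# statistical system picks whichever English pronoun its corpus favors.
from collections import Counter

# A tiny, skewed "training corpus" standing in for decades of text.
corpus = [
    "he is a doctor", "he is a doctor", "she is a doctor",
    "she is a nurse", "she is a nurse", "he is a nurse",
]

def pronoun_counts(profession: str) -> Counter:
    # Count which pronouns co-occur with the profession in the corpus.
    counts = Counter()
    for sentence in corpus:
        if profession in sentence:
            counts[sentence.split()[0]] += 1
    return counts

def translate_neutral_pronoun(profession: str) -> str:
    # Choose the most frequent pronoun: the "translation" simply
    # reproduces the skew of the data it was trained on.
    pronoun = pronoun_counts(profession).most_common(1)[0][0]
    return f"{pronoun} is a {profession}"

print(translate_neutral_pronoun("doctor"))  # -> "he is a doctor"
print(translate_neutral_pronoun("nurse"))   # -> "she is a nurse"
```

Scale the corpus up to decades of published writing and the same dynamic plays out: the software reproduces whichever associations dominate its sources.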

Although in these cases it may seem as if the software itself is driving the racist tendencies, these systems are in many cases simply mirroring society, whether present-day attitudes or historical trends that somehow become dominant in the systems’ algorithms.

And as artificial intelligence and algorithms come to dominate more of human life, dictating everything from Netflix recommendations to prison sentencing, it’s critical that tech giants acknowledge machine bias and work towards tangible, inclusive solutions.

Looking for solutions

Governments and tech giants have been slow on the uptake when it comes to solutions, but some are starting to recognize the racism inherent in many software applications and are trying to resolve the issue.

Projects like OpenAI, co-founded by Elon Musk, aim to make artificial intelligence more transparent and prevent it from becoming inherently harmful. And the proposed FUTURE of Artificial Intelligence Act in the United States includes a subsection on supporting the unbiased development of artificial intelligence. But given how deep-seated the problem is, and given that an algorithm is only as good as the data we feed it, it seems unlikely that legislation alone will quickly get to the root of the problem.

As tech giants scramble to resolve PR nightmares resulting from such incidents, many consider their awareness and willingness to resolve the issue a good starting point. Although many incidents are not intentional—such as the Microsoft chatbot’s racist Twitter rants—there’s no denying that they have a lasting impact on the users affected.

Until the day that machines become intelligent enough to identify racist behavior or justify their own decisions, software companies and individuals have an undeniable responsibility to ensure equal and fair representation of different cultures.
