
Prejudiced AI Is Changing American Lives. What Can We Do About It?

Ailsa Johnson / © Culture Trip

Imagine a world where artificially intelligent algorithms make decisions that affect your everyday life. Now, imagine they’re prejudiced.

This is the world we’re already living in, says data scientist, Harvard PhD and author Cathy O’Neil. (Read part one of our discussion with Dr O’Neil here). We sat down with the National Book Award nominee to find out what we can do about prejudice in the era of big data.

CT: Is AI prejudiced?

CO: Every algorithm that hasn’t been explicitly made fair should be assumed to be prejudiced. Because as people, we are prejudiced. If we acknowledge that, and we are creating these algorithms with our values and our data, then we shouldn’t assume anything has magically happened to make things fair. There’s no magic there.

CT: Where do algorithms get their data?

CO: It depends on the algorithm. Sometimes social media, for things like political targeting, advertising, for-profit colleges and predatory lending – but a lot of the data isn’t being collected on social media, or even online.

Data collection is increasingly tied into real life, like getting a job, working at your job, going to college or going to prison. Those aren’t things we can circumvent with privacy laws. They’re issues of power, where the people who are targeted by the algorithms have no power, and the people who are collecting the information and building and deploying the algorithms have all the power. You don’t have any privacy rights if you’re a criminal defendant, you don’t have any privacy rights at your job, and you don’t have much in the way of privacy rights if you’re applying for a job, because if you don’t answer the questions your prospective employer asks, you likely won’t get the job.

We should think less about privacy and more about power when it comes to algorithms and the harm [they can cause].

CT: What can we do to make it better?

CO: We can acknowledge that these algorithms are not inherently perfect, and test them for their flaws. We should have ongoing audits and monitors – especially for important decisions like hiring, criminal sentencing or assessing people at their jobs – to make sure that the algorithms are acting the way that we want them to, not in some sort of discriminatory or unfair way.


CT: What are the best and worst case scenarios for the data-driven future?

CO: The worst case scenario is what we have now – that we all blindly expect algorithms to be perfect, even though we should know better by now. And we propagate past injustices and unfairnesses. And we continue ignoring the flaws of these algorithms.

The best case scenario is that we acknowledge these algorithms aren’t inherently better than humans. We decide what we want as humans, what we’re striving for, what we want society to look like, and we teach the algorithms those values. If we do that successfully, these algorithms could be better than humans.

CT: What role can everyday people play?

CO: The most important role that an individual can play is to not implicitly trust any algorithm. To have an enormous amount of scepticism. If you’re being evaluated by an algorithm, ask ‘How do I know it’s fair, how do I know it’s helpful, how do I know it’s accurate? What’s the error rate? For whom does this algorithm fail? Does it fail women or minorities?’ Ask that kind of question.

The second thing, beyond scepticism, is that if you think an algorithm is being unfair to you or to other people, you should organise with those other people. A recent example is teachers. The value-added statistical models used to score teachers are terrible, almost random number generators. But they were being used to decide which teachers should get tenure and which should get fired, all over the US.

My suggestion is for them to get their union to push back. And this did happen in some places. But it’s surprising how little resistance there was because of the mathematical nature of the scoring system.

CT: How did you get into ‘big data’?

CO: I worked on Wall Street and witnessed the financial crisis from the inside. I was disgusted by the way mathematics was used either to take advantage of people or to fool them. I saw the kind of damage that could come from mathematical lies, what I call ‘the weaponization of mathematics’.

I decided to get away from it, so I joined Occupy Wall Street and started to work as a data scientist. I slowly realised that we were seeing hype around flawed and misleading data algorithms outside of Wall Street as well, and that this was going to lead to a lot of damage. The difference was that while people all over the world noticed the financial crisis, I didn’t think people would notice the failures of these big data algorithms, because they usually happen on the individual level.

Read part one of our discussion with Dr O’Neil here. Dr Cathy O’Neil’s book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, is available now.

About the author

English-American, Claire has lived and worked in the U.S., South America, Europe and the UK. As Culture Trip’s tech and entrepreneurship editor she covers the European startup scene and issues ranging from Internet privacy to the intersection of the web with civil society, journalism, public policy and art. Claire holds a master’s in international journalism from City University, London and has contributed to outlets including Monocle, NPR, Public Radio International and the BBC World Service. When not writing or travelling, she can be found searching for London's best brunch spot or playing with her cat, Diana Ross.
