How to take an ethical approach to AI
Humans can do better. So can our algorithms.
Jun 12, 2020 by Jess McCuan
We know humans can be biased. But what about machines? Shouldn’t automated decision-making help us steer clear of discrimination?
Dutch scholar Mireille Hildebrandt argued in a recent paper that bias-free machine learning doesn’t really exist, since bias is fundamental to how inductive learning systems work. Still, she says, we can design our algorithms to be less discriminatory.
Fairer by design
The algorithms themselves may not have morals, but the people designing them do. And though an AI system can be built on a single laptop, it’s unleashed into a complex world where it can significantly shape the trajectory of people’s lives.
If you’re stumped about the difference between “ethics,” “discrimination” and “bias,” a good place to start is the first part of our course on Ethics & AI, which you can access below.
In the tutorial, Automation Hero data scientists explain how bias can be designed into all aspects of artificial intelligence, from raw data to classification systems to the complex algorithms that run on top of it all. They also outline ways to be more aware of this bias and steps you can take to mitigate discriminatory outcomes.
Bias can creep into any project, automated or not. But it can be particularly dangerous in these 5 real-world scenarios:
1. Recruiting and hiring
Companies have used AI for years to speed up manual tasks involved in recruiting and hiring. In some cases, that’s led to unfair outcomes for certain demographic groups, including women and minorities. Starting in 2014, Amazon used a machine learning tool that systematically discriminated against women by penalizing resumes containing words like “women’s.” And Carnegie Mellon researchers found that Google was much more likely to show ads for high-paying jobs to groups of male job-seekers.
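The Amazon example illustrates a general mechanism: a model trained on historically biased outcomes will penalize tokens that merely correlate with the disadvantaged group. Here is a minimal sketch in Python with invented data (this is not Amazon’s actual system; the resumes, labels, and scoring rule are all hypothetical):

```python
from collections import Counter

# Hypothetical historical data: (resume tokens, hired?) pairs in which
# past hiring decisions skewed against resumes mentioning "women's".
history = [
    (["python", "women's", "chess", "club"], 0),
    (["python", "leadership"], 1),
    (["java", "women's", "soccer"], 0),
    (["java", "leadership"], 1),
    (["python", "chess"], 1),
]

def token_weights(data):
    """Score each token by P(hired | token) - P(hired) on the training data."""
    base = sum(label for _, label in data) / len(data)
    hires, totals = Counter(), Counter()
    for tokens, label in data:
        for t in set(tokens):
            totals[t] += 1
            hires[t] += label
    return {t: hires[t] / totals[t] - base for t in totals}

weights = token_weights(history)
# "women's" carries no job-relevant signal, yet it gets a strongly
# negative weight because the historical labels were biased.
print(round(weights["women's"], 2), round(weights["leadership"], 2))  # → -0.6 0.4
```

The model never sees an applicant’s gender; it simply reproduces whatever pattern the biased labels contain.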
2. Banking and credit
Classical formulas for credit scoring are designed to leave out variables like age and gender. But plenty of companies now use algorithms that factor in social media activity, granular purchasing data and other information to determine a person’s creditworthiness. In some cases, algorithmic scoring reinforces older, discriminatory patterns. In others it leads to odd conclusions, like the notion that people who buy birdseed are less likely to default on a loan.
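Why does dropping age and gender not guarantee fairness? Because granular behavioral data often acts as a proxy for the excluded attribute. The following sketch uses entirely made-up data (the feature `shops_at_store_x`, the groups, and the scores are hypothetical, not drawn from any real scorer):

```python
# Sketch with hypothetical data: removing a protected attribute does not
# remove bias when a remaining feature is a close proxy for it. Here the
# made-up feature "shops_at_store_x" correlates with group membership.

# columns: (protected_group, shops_at_store_x, historical_score)
rows = [
    (0, 0, 700), (0, 0, 720), (0, 1, 705),
    (1, 1, 640), (1, 1, 655), (1, 0, 660),
]

def mean(xs):
    return sum(xs) / len(xs)

# A "fair" model drops the group column and scores on the proxy alone ...
with_proxy = mean([s for _, p, s in rows if p == 1])
without_proxy = mean([s for _, p, s in rows if p == 0])
proxy_gap = with_proxy - without_proxy

# ... yet the gap it produces points the same way as the group gap it
# was supposed to eliminate.
group_1 = mean([s for g, _, s in rows if g == 1])
group_0 = mean([s for g, _, s in rows if g == 0])
group_gap = group_1 - group_0

print(round(proxy_gap, 1), round(group_gap, 1))  # both gaps are negative
```

Both gaps point the same direction: scoring on the proxy quietly re-encodes the group difference the modeler tried to exclude.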
3. Legal, judicial and policing decisions
Courtrooms and police departments are using big data to make better strategy decisions about where to deploy staff and resources. But algorithms can, for example, overestimate how likely a person is to commit a second or third crime. Or, human bias gets introduced into the datasets, and then the AI can compound those discriminatory decisions. Fair policing has never been more controversial than in the U.S. in recent weeks, after a series of brutal police incidents and the death of George Floyd. Now, large companies have sworn off facial recognition technology over concerns about privacy and bias.
4. Social services
Few employees are as overloaded as social workers, and AI solutions help them churn through burgeoning caseloads and identify or reduce risks. But things can go awry when, for example, an algorithm can’t distinguish between fraud and innocent mistakes.
5. Insurance
In this highly regulated industry, AI has helped speed up every aspect of claims processing and payouts, but insurers must walk a fine line when, for example, an algorithm inadvertently rewards those in less need of a payout.
Why use AI?
When AI has been thoughtfully developed, it can help overcome discrimination in human decision-making. Ethical AI has the potential to make the world more just by compensating for some of our human limitations, and to improve it in critical areas like mapping wildfires and detecting cancer. But used poorly, AI can have disastrous consequences, especially for minority groups in arenas like employment and the law.
The next best step: Learn all you can about bias and discrimination, and about how to use automation and AI well.
Continue with the second part of our Ethics & AI course, below.