Navigating the intersection between AI and ethics

Humans can do better. So can our algorithms.

Jun 12, 2020 by Jess McCuan

How to take an ethical approach to AI

We know humans can be biased. But what about machines? Shouldn’t automated decision-making help us steer clear of discrimination? 

The short answer is: no. Numerous scholars and institutions have pointed out that bias is fundamental to inductive learning systems, because those systems are designed by humans and trained on human-generated data. But algorithms can, and must, be built to be less discriminatory. Here's how.

Fairer by design

The algorithms themselves may not have morals, but the people designing them do. Though automation and artificial intelligence can be built on a single laptop, they are unleashed into a complex world where they can significantly alter the trajectory of people's lives.

The words “ethics,” “discrimination,” and “bias” are often used interchangeably, without much consideration of what each means in the context of a business. Yet the concepts behind them are essential to a fair workplace — take a closer look at their definitions in the first part of our course below.


In Part 1 of our Ethics & AI course, learn the difference between “bias” and “discrimination.”


In the tutorial, Automation Hero data scientists explain how bias can be designed into all aspects of artificial intelligence, from raw data, to classification systems, to the complex algorithms that run on top of it all. They also outline ways to be more aware of this bias and steps you can take to mitigate discriminatory outcomes. 
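One common way to put numbers on such bias is the disparate impact ratio: the rate of favorable outcomes for one group divided by the rate for another. The sketch below is our own illustration, not code from the course, and the hiring data in it is entirely hypothetical.

```python
# A minimal sketch of one widely used fairness check: the disparate
# impact ratio. A common rule of thumb (the "four-fifths rule") flags
# ratios below 0.8 as potential adverse impact. Data is hypothetical.

def favorable_rate(outcomes):
    """Fraction of favorable (True) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of favorable-outcome rates between two groups."""
    return favorable_rate(group_a) / favorable_rate(group_b)

# Hypothetical screening decisions (True = advanced to interview)
women = [True, False, False, True, False]   # 2/5 = 0.4
men   = [True, True, False, True, False]    # 3/5 = 0.6

ratio = disparate_impact_ratio(women, men)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.67
if ratio < 0.8:
    print("Potential adverse impact — investigate the model and data.")
```

A single metric like this is only a starting point; auditing for bias also means examining the training data, the labels, and the downstream decisions the model feeds into.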

Battling bias

Bias can creep into any project, automated or not. But it can be particularly dangerous in these five scenarios: 

1. Recruiting

Companies have used AI for years to speed up manual tasks involved in recruiting and hiring. In some cases, that has led to unfair treatment of certain demographic groups, including women and minorities. In 2014, Amazon began developing a machine learning recruiting tool that was later found to systematically discriminate against women, penalizing resumes containing words like “women’s.” Studies as recent as 2021 continue to show widespread AI-enabled anti-Black bias in recruiting.

2. Banking and credit

Classical formulas for credit scoring are designed to leave out variables like age and gender, but plenty of companies now use algorithms that factor in social media activity, granular purchasing data, and other information to determine a person’s creditworthiness. In some cases, algorithmic scoring reinforces older, discriminatory patterns. In others, it leads to odd conclusions, like the notion that people who buy birdseed are less likely to default on a loan.

3. Criminal justice and policing

Courtrooms and police departments are using big data to make better strategy decisions about where to deploy staff and resources. But algorithms can, for example, overestimate how likely a person is to commit a second or third crime. Or human bias gets introduced into the datasets, and the AI then compounds those discriminatory decisions.

Fair policing became especially controversial in the U.S. after a series of brutal police incidents and the death of George Floyd in 2020. Since then, several large companies have sworn off facial recognition technology, citing concerns about privacy and bias.

4. Social services

Few employees are as overloaded as social workers, and AI solutions help them churn through burgeoning caseloads and identify or reduce risks. But things can go awry when, for example, an algorithm can’t distinguish between fraud and innocent mistakes.

5. Insurance

In this highly regulated industry, AI has helped speed up every aspect of claims processing and payouts. But insurers must walk a fine line when, for example, an algorithm inadvertently rewards those in less need of a payout.

Why use automation and artificial intelligence?

When AI is thoughtfully developed, it can be used to overcome discrimination in human decision making. Ethical AI has the potential to make the world more just by compensating for some of our human limitations, and to improve outcomes in critical areas like mapping wildfires and detecting cancer. But used poorly, AI can have disastrous consequences, especially for minority groups in the arenas of employment and the law.

The next best step: Learn all you can about bias and discrimination, and about how to use automation and artificial intelligence well.