A.I. Ethics
- Advay Kadam
- Apr 10, 2022
- 3 min read
Updated: Aug 21, 2022

Welcome back! For today’s post, I will be discussing A.I. Ethics, an incredibly controversial topic, but we love controversy! Ethics on its own is controversial, but combining it with Artificial Intelligence… Now that’s what experts like to call “a tough one.”
Imagine you are driving a car and a child suddenly runs across the road, and you notice them at the last second. It’s too late to brake, so your only option is to swerve to avoid hitting the child; but if you swerve, you put the people on the sidewalk at risk. So what do you do? What if there’s a baby in the car? This example, along with its many variations, is a classic ethical dilemma that applies directly to artificial intelligence. As self-driving cars become more prominent, such questions require answers, and while these situations are rare, they are still possible to encounter.

The creators of a self-driving system have many factors to account for: the number of passengers, their ages, and the car’s surroundings, among others. Most companies program their A.I. systems to prioritize the safety of the people in the car, so whenever the self-driving car is in a dangerous situation, it takes the course of action that maximizes the safety of its occupants. But is that fair? What if the vehicle ends up in a dangerous situation near an elementary school? Should the driver’s safety be valued above that of everyone nearby?
While the self-driving car is a renowned example, the discussion of A.I. ethics applies to all aspects of artificial intelligence, as IBM describes A.I. ethics as “a set of guidelines that advise on the design and outcomes of artificial intelligence.” A large part of this ethical discussion comes down to the significant gender and racial bias in A.I. systems, primarily those involving computer vision. For instance, face classification tends to be less accurate for people with darker skin tones, and facial recognition tends to be more accurate for men than for women. So the question becomes: is it fair to deploy such systems despite the bias?
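One common way researchers quantify this kind of bias is to compare a model’s accuracy across demographic groups. Here is a minimal sketch of that idea in Python; the data, group labels, and numbers are entirely made up for illustration, not results from any real face classifier:

```python
# Toy illustration of measuring bias in a classifier:
# compare accuracy across demographic groups.
# All predictions, labels, and groups below are hypothetical.

def group_accuracy(predictions, labels, groups):
    """Return per-group accuracy as a dict: group -> correct / total."""
    totals, correct = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        if pred == label:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Hypothetical results from a classifier evaluated on two groups.
preds  = [1, 1, 0, 1, 0, 1, 1, 0]
labels = [1, 1, 0, 1, 1, 0, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

acc = group_accuracy(preds, labels, groups)
gap = max(acc.values()) - min(acc.values())
print(acc, gap)  # a large gap signals the model treats groups unequally
```

A large accuracy gap between groups is exactly the kind of disparity the face-classification studies above report; deciding how small that gap must be before a system is “fair enough” to deploy is the ethical question.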
In addition, facial recognition is widely used by law enforcement, where A.I. algorithms make more mistakes when a person has a darker skin tone. In early 2019, Nijeer Parks was falsely accused of shoplifting after a facial recognition algorithm misidentified him as another African American man. Despite such errors and biases, A.I. systems are still used. Why? Quite simply, they make life much easier. The algorithms have evident flaws, but for the most part they get the job done, and that’s what matters… or does it?
This is quite an ethical dilemma. There isn’t a right answer, but the creators of these A.I. systems need to find a solution. The Belmont Report outlines three principles that can serve as a guide for algorithm design: Respect for Persons, Beneficence, and Justice. In essence, these principles say that an algorithm should be judged by its fairness, by whether it amplifies existing biases, and by whether users are aware of its potential risks. Of course, figuring out the right amount of “fairness” in an A.I. algorithm is a discussion of its own, and one that will only become more significant in the future. We have yet to see the full potential of artificial intelligence, so hopefully A.I. doesn’t take over and make the ethical decisions for us.