Algorithmic Bias & Oppression

Shane Staret
13 min read · Jul 10, 2023

Introduction

In discussions regarding Artificial Intelligence (AI), a long-standing misconception consistently appears: that AI is flawless. This misconception invites certain assumptions. If AI is flawless, it is easy to assume that it offers precision and computational ability exceeding even the most skilled humans, and that it is incapable of making mistakes. The idea that AI is perfect is therefore not only a misconception but a dangerous one, as it prevents people from thinking critically about the pitfalls of AI and the many ways it exhibits error and bias.

In the field of computing, algorithmic bias is defined as the lack of fairness in the results generated by a computer system [1]. Not all such systems are AI systems, but AI systems are just as susceptible to algorithmic bias as ordinary computer systems. Algorithmic bias can manifest in many ways, some with more severe consequences than others. When algorithmic bias perpetuates an unequal playing field, within society as a whole or within a specific function of society, it is referred to as algorithmic oppression. The study of algorithmic oppression attempts to explain how such oppression is built into modern institutions, leading them to inevitably produce biased algorithms, and how this affects marginalized groups. Authors exploring this topic also offer possible solutions for ending algorithmic bias and oppression.

Background

Algorithmic bias is a recently conceived concept. This is expected, as the power of computing has only been realized over the last several decades. Pioneers within the Artificial Intelligence community recognized the potential for bias to appear in computer systems meant to mimic or reflect human decision-making [2]. Specifically, they anticipated that bias could arise both from the data used to train the algorithms and from the way the code that builds them is structured [2].

Algorithms are intrinsically tied to humans: they cannot exist without a human defining their structure and building their infrastructure. Algorithms are therefore a human creation. The person creating an algorithm determines the problem-solving methodologies it employs and can pigeonhole the results it generates [2]. The individual or individuals implementing an algorithm within a system thus have direct control over the results it produces. With this understanding, it is simple to see how bias may manifest: a flawed human directly controls both the data the system learns from and the infrastructure it uses for decision-making.

Understanding how the data a system is fed can shape the bias that results is relatively easy. Consider the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm, used to predict the likelihood that a particular defendant will re-offend. In 2016, ProPublica published an article detailing how the system consistently assigned Black defendants higher recidivism risk than their actual re-offending rates warranted, and white defendants lower risk than theirs warranted.

[Figure: counts of COMPAS decile scores, broken down by race.] The figure shows a clear pattern: white defendants typically scored lower, with counts gradually decreasing as the score increases, while Black defendants' scores were distributed more uniformly across deciles, indicating that COMPAS tends to score white defendants as lower-risk than Black defendants [3].

Clearly, the system exhibits bias and inaccuracy, but what could be the cause? One explanation is that the data supplied to the system is itself biased, causing the machine to learn those biases and embed them in the algorithm. If the supplied historical data reflects that Black people have a higher recidivism rate, the system may learn to treat race as a predictive factor for re-offending, causing racial bias to appear in the algorithm's output. No matter how structurally sound a system is, if it is trained on biased data, it will inevitably mimic those biases unless intentional action is taken to counteract them.
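
To make this concrete, here is a minimal sketch of the mechanism, not the COMPAS model itself; the features, numbers, and labels are all synthetic and assumed purely for illustration. A classifier trained on labels that were recorded with extra "risk" attached to one group reproduces that gap even when the legitimate feature is identical:

```python
# A minimal sketch (not COMPAS): a classifier trained on biased
# historical labels reproduces the bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

priors = rng.poisson(2, size=n)      # a legitimate feature, e.g. prior offenses
group = rng.integers(0, 2, size=n)   # a protected attribute (0 or 1)

# The true re-offense risk depends only on priors...
true_risk = 1 / (1 + np.exp(-(priors - 2)))

# ...but the historical labels carry extra "risk" for group 1
# (e.g., through over-policing). This is the biased training signal.
labels = rng.random(n) < np.clip(true_risk + 0.15 * group, 0, 1)

model = LogisticRegression().fit(np.column_stack([priors, group]), labels)

# At identical priors, the model now scores group 1 as higher risk.
print(model.predict_proba([[2, 0], [2, 1]])[:, 1])
```

The point is mechanical: the model never "decided" anything about race; it simply fit the skew already present in its labels.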

As mentioned previously, training data is not the only source of algorithmic bias. The structure the algorithm is built upon can also cause bias. People write the code that forms an algorithm's foundation, and thus they are responsible for which problem-solving mechanisms are employed and which are not. Programmers make decisions that directly shape how the algorithm functions, and if those decisions are biased, the resulting algorithm will likely follow suit. In this context, some bias is inevitable: different programming methods or threshold settings produce bias, and altering them may simply produce a different kind. In other words, attempting to eliminate one form of bias is likely to introduce another. A large part of a good AI programmer's duty is therefore to limit bias as much as possible, not necessarily to eliminate it entirely.
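
A toy sketch of this threshold trade-off, using assumed score distributions that are purely illustrative: when two groups' scores are distributed differently, lowering the cutoff narrows the gap in missed positives between the groups while widening the gap in false alarms, so tuning away one disparity surfaces another:

```python
# Illustrative only: two groups with different (assumed) score
# distributions, and the per-group error-rate gaps at several cutoffs.
import numpy as np

rng = np.random.default_rng(1)

def rates(scores, labels, threshold):
    """False-negative and false-positive rates at a given cutoff."""
    pred = scores >= threshold
    fnr = np.mean(~pred[labels == 1])   # true positives missed
    fpr = np.mean(pred[labels == 0])    # true negatives flagged
    return fnr, fpr

def make_group(shift, n=5000):
    labels = rng.integers(0, 2, n)
    scores = rng.normal(labels + shift, 1.0)  # scores run lower if shift < 0
    return scores, labels

a_scores, a_labels = make_group(shift=0.0)
b_scores, b_labels = make_group(shift=-0.4)   # group B's scores skew lower

for t in (0.3, 0.5, 0.7):
    fnr_a, fpr_a = rates(a_scores, a_labels, t)
    fnr_b, fpr_b = rates(b_scores, b_labels, t)
    print(f"cutoff {t}: FNR gap {fnr_b - fnr_a:+.3f}, FPR gap {fpr_b - fpr_a:+.3f}")
```

No single cutoff zeroes both gaps at once, which mirrors the claim above: shrinking one disparity tends to grow another.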

As with anything else, scientifically understanding the impacts of computing requires gathering and properly analyzing the results of the systems it produces. Thus, while algorithmic bias has been discussed since the arrival of computer systems, its prevalence and adverse effects could only be determined after these systems were widely deployed [2]. Similarly, algorithmic oppression branches from algorithmic bias, so by definition algorithmic oppression and its consequences can only be understood once a critical understanding of algorithmic bias is achieved. Given that algorithmic bias is a relatively new concept, it should be no surprise that algorithmic oppression is even newer [4].

While the concept of algorithmic bias has gained traction within the computing community over the last couple of decades, algorithmic oppression has not taken off with nearly as much energy [4]. This may be explained by modern neoliberal framings of “oppression”, which cause some to shy away from the term. However, those who study and advocate for the abolition of algorithmic oppression justify the terminology by pointing to the social power structures these algorithms help enforce and to the ways their victims are impacted [4].

Overall, the concepts of algorithmic bias and oppression are relatively new, but that does not mean their potential effects are insignificant. In the following section, an ethical dilemma involving these ideas, one that exposes their potential severity, is discussed.

Ethical Dilemma

An ethical dilemma is “a problem in the decision-making process between two possible options, neither of which is absolutely acceptable from an ethical perspective” [5]. Therefore, it is up to the discretion of those pondering the problem to determine the best possible solution. Regarding algorithmic bias and oppression, the following ethical dilemma may be presented:

Suppose you are a developer of a software program used to detect people within artwork posted on various social media sites (Instagram, Twitter, Tumblr, etc.). The goal of the program is to select the set of artwork containing people so that these pieces can then be judged by a group of artists to determine the “best” art. The initial version of this program contains algorithmic bias: it routinely has difficulty selecting artwork portraying people of color, but has relatively minor issues selecting artwork of white people. The task you are assigned is to eliminate this algorithmic bias. You discover that adjusting certain thresholds within the detection system can virtually eliminate it; however, each time you find a way to eliminate this bias, you produce another form of bias (e.g., gender, body shape, specific facial features). What should be done in this scenario?
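
Before weighing the options, it helps to make the bias measurable. Below is a hedged sketch of a per-group audit; the detector, group labels, and sample data are placeholders invented for illustration, not a real system:

```python
# A placeholder audit: what fraction of person-containing artwork does
# the detector actually select, broken down by depicted group?
from collections import defaultdict

def audit_selection_rates(samples, detector):
    """samples: iterable of (artwork, group, contains_person) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for artwork, group, contains_person in samples:
        if not contains_person:
            continue                    # only person-containing pieces count
        totals[group] += 1
        if detector(artwork):           # did the system select this piece?
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical usage with made-up data and a stand-in detector:
samples = [("art1", "white", True), ("art2", "poc", True), ("art3", "poc", True)]
print(audit_selection_rates(samples, detector=lambda art: art != "art2"))
# -> {'white': 1.0, 'poc': 0.5}
```

Run against a labeled evaluation set after every threshold change, a report like this is what would reveal that eliminating the racial gap merely shifted the disparity onto another group.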

There are many approaches that philosophers, psychologists, and others have employed in an attempt to generalize ethical dilemmas to make the process of finding a moral solution more efficient. Two of these approaches along with how they can be used to resolve the given ethical dilemma will be explored in the next section.

Ethical Analysis

To gain an understanding of an ethical dilemma, there are a few methodologies that may be used. Perhaps the most widely used and most robust is the one defined by P. Aarne Vesilind in his work The Right Thing to Do: an Ethics Guide for Engineering Students [6]. There are eight steps in Vesilind's approach, and they will be applied here to the ethical dilemma presented in the previous section.

1. What are the facts? [6]

The moral actor in the ethical dilemma (you) is a software developer. The program you are working on detects people within artwork. In its current state, the program displays algorithmic racial bias. You are tasked with eliminating this bias.

2. What are the moral issues? [6]

The program has difficulty recognizing artwork that contains people of color. Since the idea behind the program is to filter through artwork depicting people so that the best pieces may be found, the program should be as precise as possible, ensuring that every piece meeting the criteria has a chance to be judged. In its current state, the pool selected for judging is likely to over-represent white people: artwork containing people of color is less likely to be selected, arbitrarily decreasing its chances of being named best. This skew toward artwork depicting white people is the moral issue.

3. Who is affected by the decision you have to make? [6]

Obviously, the artists are affected. If an artist's work is excluded from the judging process simply because they chose to depict only people of color, they are harmed: they have no chance to win. Those depicted within the artwork could also be impacted emotionally, as it may hurt to realize that an AI meant to detect people within artwork could not detect them. It may also affect those who go on to win by the judges' decision, as they may feel their victory is less substantial because their piece(s) were not compared against all possible selections.

4. What are your options for action? [6]

There appear to be three practical solutions to this dilemma:

  1. Choose thresholds so that the remaining bias falls on the least oppressed group, as this evens the playing field for historically marginalized groups to be properly represented.
  2. Choose thresholds so that the remaining bias falls on the least present/popular group, as this minimizes the probability that the bias will be applied to any given artwork.
  3. Scrap the entire program because it is inherently flawed, as evidenced by its producing bias against some group regardless of how its thresholds are configured.

5. What are the expected outcomes of each possible action? [6]

If option 1 is employed, you would likely have to investigate which groups present within artwork are the most and least marginalized/oppressed. Choosing this option could level the playing field for those most historically discriminated against, giving them as fair a chance of being selected for judging as normative groups. However, there will always be differing opinions on which groups are the most and least historically oppressed, which could lead to controversy, and to further ethical dilemmas if adjusting the thresholds turns out to have adverse and unexpected outcomes.

If option 2 is chosen, the groups on the extreme end of being in the minority will have the most bias employed against them. While this may minimize how often bias occurs, it could significantly harm the most historically marginalized groups. This clearly appears immoral, as the least represented groups will have intentional, systemic bias directed against them. Less frequent bias does not necessarily correlate with greater morality.

If option 3 is chosen, all of the labor and economic resources spent developing the program thus far will be wasted. However, if the criteria are defined correctly from the start, it is possible that a rebuilt algorithm will not show any algorithmic racial bias. This is not guaranteed, but option 3 is the only option that allows the bias to potentially be eliminated completely rather than merely minimized or pushed onto a different group.

6. What are the personal costs associated with each possible action? [6]

For the moral actor, there are consequences for each option. If option 1 or 2 is chosen, you are simply pushing the bias evoked on certain groups onto others. Bias is not being eliminated; it is being minimized and rearranged. You may therefore be personally responsible for imposing bias on other groups unnecessarily, which may bring consequences from the users of the program and from those above you in the chain of command. If option 3 is chosen, those above you will likely be displeased with the resources that went to waste. However, if the rebuilt program succeeds, you will have properly done your duty by eliminating bias, and everyone subjected to the program will be on a level playing field.

7. Where can you get some help in thinking through the problem? [6]

There are two ethical theories that may be able to help us if employed: act utilitarianism and virtue ethics.

Act utilitarianism is the idea that one should perform the action that will create the greatest net utility (i.e., cause the greatest good for the whole of the population) [7]. Those who follow act utilitarianism believe situations should be broken down into their most basic parts so that the actions yielding the greatest good for those impacted can be determined. Approaching this ethical dilemma through act utilitarianism, option 3 appears to be the clear winner, as it allows bias to potentially be eliminated for everyone the algorithm may be used upon. However, if no program can be developed that eliminates bias entirely, option 2 would also be in line with the theory, as it produces an outcome in which the fewest people are affected by the bias. In other words, option 2 allows the highest proportion of people to be treated fairly, and act utilitarianism therefore argues it should be picked.

Virtue ethics is a theory emphasizing the role of virtue in moral philosophy, rather than doing one's duty or acting to bring about good consequences [8]. In other words, virtue ethics focuses on taking actions that reflect ideal characteristics of an individual (e.g., charitableness, courage). Someone who follows virtue ethics is not focused on what one ought to do based on assigned functions or on the impact their actions will have on society as a whole. Rather, they are primarily concerned with living a life in which they exhibit virtuous characteristics in any given situation. Under virtue ethics, option 3 is quite obviously the best choice: a person following this theory would want to inflict no bias whatsoever, and option 3 is the only option consistent with that. If option 3 is not practical, option 1 could likely be employed, as it brings about the fairest playing field for those most historically discriminated against.

8. The bottom line? [6]

Personally, it appears that, without a doubt, option 3 is the best one. While you are putting yourself in a situation where your superiors may be upset about wasted resources, you were given the specific task of eliminating bias, and having no bias is obviously better than having any. Option 3 thus allows you to properly fulfill your duty while developing a new program that may contain no bias, which is also the best outcome for those subjected to the algorithm. This may be an extreme opinion, but I take the position that options 1 and 2 are never acceptable: they may minimize bias, but they never eliminate it, and they make you personally responsible for deciding which groups the bias falls onto. Even if option 3 must be chosen several times because each iteration of the program exhibits bias, that is still better than ever choosing option 1 or 2, because those options never allow you to eliminate bias outright.

Conclusion

Even with modern resources available, the misconception that AI is flawless persists. While seemingly harmless at first, investigating the consequences of this misconception makes clear just how dangerous it can be. People who hold it are unable to acknowledge that AI can be error-prone and that its errors can negatively impact those subjected to it. Algorithmic bias and algorithmic oppression are two examples of AI's flaws, and they carry severe consequences for those affected. The study of these concepts attempts to explain how they arise systemically, leading to their inevitable appearance within modern technology. While algorithmic bias has become a relatively mainstream topic in recent decades, algorithmic oppression remains relatively untouched. By listening to pioneers in the field and supporting more research on the subject, perhaps we can build awareness of the need to proactively address algorithmic bias and oppression so that the number of their potential victims, and the severity of their impacts, is limited or eliminated entirely. While AI has come a long way, there are always aspects of it that can be improved, and algorithmic bias and oppression are clearly two examples of this need.

References

[1] Alake, Richmond. “Algorithm Bias In Artificial Intelligence Needs To Be Discussed (And Addressed).” Medium, Towards Data Science, 28 Apr. 2020, towardsdatascience.com/algorithm-bias-in-artificial-intelligence-needs-to-be-discussed-and-addressed-8d369d675a70.

[2] Weizenbaum, Joseph. Computer Power and Human Reason: from Judgment to Calculation. Freeman, 1976.

[3] Cossins, Daniel. “Discriminating Algorithms: 5 Times AI Showed Prejudice.” New Scientist, New Scientist Ltd., 12 Apr. 2018, www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/.

[4] Hampton, Lelia Marie. “Black Feminist Musings on Algorithmic Oppression.” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), 1 Mar. 2021, dl.acm.org/doi/10.1145/3442188.3445929.

[5] “Ethical Dilemma — Definition, How to Solve, and Examples.” Corporate Finance Institute, 17 Sept. 2020, corporatefinanceinstitute.com/resources/knowledge/other/ethical-dilemma/.

[6] Vesilind, P. Aarne. The Right Thing to Do: an Ethics Guide for Engineering Students. Lakeshore Press, 2004.

[7] “Act and Rule Utilitarianism.” Internet Encyclopedia of Philosophy, iep.utm.edu/util-a-r/.

[8] “Virtue Ethics.” Internet Encyclopedia of Philosophy, iep.utm.edu/virtue/.
