A central tenet of responsible artificial intelligence (AI) is algorithmic fairness, the effort to identify and correct bias in algorithms. Algorithmic fairness is critical to ensuring that algorithms do not reproduce racial biases or make unfair predictions. Prediction algorithms used to make decisions in health, employment, law enforcement, and other fields have already been shown to be more likely to make incorrect predictions for black persons than for white persons. For example, an algorithm that predicted health risk using cost as a proxy for health needs was found to incorrectly conclude that black patients were healthier than equally sick white patients. Another algorithm used to predict defendants’ risk of recidivism was twice as likely to falsely label black defendants as high risk as it was to falsely label white defendants. Given the propensity of algorithms to reproduce societal injustices, it is important that algorithmic fairness methods be employed.
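
To make the recidivism example concrete, the sketch below computes false positive rates by racial group, the metric underlying the reported two-to-one gap. The data and column names (race, predicted_high_risk, reoffended) are hypothetical and serve only to illustrate the calculation.

```python
import pandas as pd

# Hypothetical audit data: one row per defendant, with the model's prediction
# and the observed outcome. All values are illustrative.
df = pd.DataFrame({
    "race": ["black"] * 5 + ["white"] * 5,
    "predicted_high_risk": [1, 1, 0, 0, 1, 1, 0, 0, 0, 1],
    "reoffended":          [0, 0, 0, 0, 1, 0, 0, 0, 0, 1],
})

def false_positive_rate(group: pd.DataFrame) -> float:
    """Share of defendants who did NOT reoffend but were labeled high risk."""
    did_not_reoffend = group[group["reoffended"] == 0]
    return did_not_reoffend["predicted_high_risk"].mean()

# A disparity like the one described above appears as one group's false
# positive rate being a multiple of the other's (here, 0.50 vs. 0.25).
for race_value, group in df.groupby("race"):
    print(f"{race_value}: false positive rate = {false_positive_rate(group):.2f}")
```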

Approaches to building fair algorithms frequently take protected classifications into account; in particular, explicit consideration of race is often required to reduce algorithmic bias. Under the Equal Protection Clause, however, race-based classifications are subject to strict scrutiny, even when they serve as race-conscious remedies for racial disparities. Consequently, the consideration of race in the design of AI models raises the question of whether these approaches, when employed by state actors, might be deemed “algorithmic affirmative action,” potentially in violation of the Equal Protection Clause. Legal scholars have proposed different ways to answer this question.

Professor Bent suggests that at least some race-aware fairness approaches constitute racial classifications under equal protection doctrine and therefore must satisfy strict scrutiny to be upheld as constitutional. Under strict scrutiny, a classification must be narrowly tailored to serve a compelling governmental interest. Bent analyzes a hypothetical employment algorithm that relies on race-aware fairness constraints under existing equal protection doctrine. To establish a compelling governmental interest in this example, the state actor could argue that the algorithm is used to remedy the effects of prior intentional discrimination or to achieve diversity in public employment. Once a compelling interest is established, a court would then consider whether the employment algorithm is narrowly tailored, a finding that might depend on how the algorithm was built and used, as well as whether race-neutral alternatives were considered. In Bent’s view, fairness constraints that take race into account should be able to survive strict scrutiny, especially when used judiciously to achieve the narrow interest of preventing algorithms from producing disparate impacts.
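
To make the kind of race-aware fairness constraint at issue in this analysis more concrete, the sketch below shows one simple, commonly discussed form: group-specific score thresholds chosen so that each group is selected at the same rate. The scores, group labels, and target rate are hypothetical; the point is that the decision rule turns explicitly on group membership, which is what makes such constraints candidates for treatment as racial classifications.

```python
import numpy as np

# Hypothetical screening scores for applicants in two groups (illustrative only).
rng = np.random.default_rng(0)
scores = {
    "group_a": rng.normal(0.55, 0.15, 1000),  # historically advantaged group
    "group_b": rng.normal(0.45, 0.15, 1000),  # historically disadvantaged group
}

target_selection_rate = 0.30  # share of each group to advance to the next stage

# Race-aware post-processing: pick a *separate* threshold per group so that
# both groups are selected at the same rate. The explicit use of group
# membership in the decision rule is what draws equal protection scrutiny.
thresholds = {g: np.quantile(s, 1.0 - target_selection_rate) for g, s in scores.items()}

for g, s in scores.items():
    selected = s >= thresholds[g]
    print(f"{g}: threshold = {thresholds[g]:.3f}, selection rate = {selected.mean():.2f}")
```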

Daniel Ho and Alice Xiang likewise agree that algorithmic fairness strategies might satisfy strict scrutiny, but they propose to analyze them under the logic of the government contracting line of cases rather than the cases involving affirmative action in higher education. They argue that approaches that adjust algorithms using racial information are incompatible with the equal protection doctrine articulated in the higher education cases, given that doctrine’s rigid focus on anticlassification and colorblindness; algorithmic fairness approaches, which depend on the classification of protected attributes, are therefore likely to be deemed “algorithmic affirmative action” under those cases. In the government contracting cases, by contrast, a government action that takes race into account satisfies strict scrutiny where the state actor can show that specific instances of past discrimination contributed to current racial disparities and that the means chosen are narrowly tailored to remedy those past instances of discrimination. Applied to the algorithmic context, the government contracting case law provides a doctrinal framework that would allow technologists to collect empirical data about historical discrimination and use that data to calibrate corrective techniques to the extent of the documented discrimination, encouraging more tailored interventions when racial classifications are used in corrective techniques.
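
The tailoring logic that Ho and Xiang draw from the government contracting cases can be sketched in code: the size of any race-aware correction is tied to a documented disparity rather than to full parity. The figures below, including the share of the gap attributed to past discrimination, are hypothetical and meant only to show how empirical evidence would bound the correction.

```python
# Hypothetical selection rates and disparity-study findings (illustrative only).
observed_selection_rate = {"group_a": 0.40, "group_b": 0.22}

# Portion of the gap that empirical evidence (e.g., a disparity study) ties to
# specific past discrimination; this fraction is an assumption for illustration.
share_attributable_to_discrimination = 0.6

gap = observed_selection_rate["group_a"] - observed_selection_rate["group_b"]
remediable_gap = gap * share_attributable_to_discrimination

# The correction raises group_b's rate only by the documented, remediable
# portion of the gap, rather than forcing full parity.
corrected_target_rate = observed_selection_rate["group_b"] + remediable_gap
print(f"documented gap: {gap:.2f}")
print(f"narrowly tailored target rate for group_b: {corrected_target_rate:.2f}")
```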

Professor Pauline Kim argues that before we consider whether algorithmic fairness techniques satisfy strict scrutiny, we must ask whether taking race into account triggers strict scrutiny in the first place. According to Kim, race-conscious decision-making is not categorically prohibited, nor does it automatically trigger strict scrutiny. For example, the government routinely acts in race-aware ways that are not subject to constitutional challenge, such as conducting the Census, analyzing racial disparities in public health, and relying on race-based information in voting redistricting decisions. The particular way in which a race-conscious decision is made determines whether strict scrutiny is triggered. Kim draws a distinction between racial classifications and race-consciousness: government practices that rely on racial classifications, grouping individuals by race in a rigid and mechanical manner, must be subjected to strict scrutiny, whereas race-conscious practices that merely take racial disparities into account when shaping policy goals do not. Given this distinction, many approaches to building fair algorithms, such as ensuring that datasets are representative or adjusting the target variable, do not involve racial classifications and therefore do not present a risk of “algorithmic affirmative action.” Before assuming that algorithmic fairness approaches that take race into account call for strict scrutiny, we should scrutinize the specific manner in which race was considered.
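
Kim’s distinction can be illustrated with a brief sketch of a race-conscious but non-classifying intervention, modeled loosely on the health-risk example above: the target variable is changed from cost (a biased proxy) to a direct measure of health need, and race is used only to audit the result, never as a model input or decision criterion. The data, variables, and coefficients below are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic setup: cost tracks healthcare utilization, which is suppressed for
# the disadvantaged group (e.g., by access barriers), so cost understates need.
rng = np.random.default_rng(0)
n = 5000
race = rng.integers(0, 2, size=n)                          # 1 = disadvantaged group
health_need = rng.normal(size=n)                           # latent health need
clinical = health_need + rng.normal(scale=0.3, size=n)     # unbiased clinical signal
utilization = health_need - 0.8 * race + rng.normal(scale=0.3, size=n)
cost = utilization + rng.normal(scale=0.3, size=n)
X = np.column_stack([clinical, utilization])               # model inputs; race is NOT included

high_need = health_need >= np.quantile(health_need, 0.75)  # the sickest quartile

for label, target in [("cost proxy", cost), ("health need", health_need)]:
    scores = LinearRegression().fit(X, target).predict(X)
    flagged = scores >= np.quantile(scores, 0.8)           # refer top 20% to extra care
    # Race-conscious audit: among the sickest patients, who actually gets flagged?
    rates = [flagged[high_need & (race == g)].mean() for g in (0, 1)]
    print(f"target = {label}: flag rate among sickest quartile "
          f"(group 0: {rates[0]:.2f}, group 1: {rates[1]:.2f})")
```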

Although these legal scholars offer different perspectives on how courts should treat race-conscious algorithmic fairness strategies, they agree that at least some approaches to algorithmic fairness can be employed without violating the Equal Protection Clause. Yet the reasoning used to argue that these race-conscious approaches are constitutional is narrow and uncertain, and current legal doctrine remains largely inadequate to address the emerging challenges posed by AI and algorithmic decision-making. Given the importance of algorithmic fairness, we must reconcile technical approaches with the potential legal barriers to achieving fair AI.