Imagine walking into a job interview and being greeted by a friendly chatbot asking behavioral questions. Or picture a hiring manager using Artificial Intelligence (“AI”) tools to sift through hundreds of resumes in ten minutes. Thanks to AI tools like HireVue and Paradox, these scenes are becoming the norm at companies worldwide.
In recent years, companies have increasingly integrated AI to streamline their recruiting processes. While these technological advancements may boost the efficiency and objectivity of hiring, they have also sparked important conversations about algorithmic discrimination. Just as human recruiters carry unconscious biases, AI systems can perpetuate existing social inequalities, depending on how they are coded and the data used to train them.
For instance, consider a tech company that has historically hired mostly male engineers. If an AI system is trained on that company’s past hiring data, it may unintentionally favor male applicants over equally qualified female candidates, simply because that is the pattern it has learned. Growing awareness in both academic and legal circles has expanded advocacy for legislation and regulation of AI recruitment.
Legal Landscape and Regulatory Framework
Federal Guidance
On the federal level, the proposed Algorithmic Accountability Act of 2022 (S. 3572) would require certain businesses, notably those that rely on automated decision systems to make critical decisions, to assess and report on the impact of those systems. These assessments must evaluate potential bias and consumer impact, and ensure that algorithms do not discriminate on the basis of race, gender, age, or other protected characteristics.
State and Local Initiatives
While lacking universal consensus, some states and cities have enacted or proposed legislation that specifically addresses AI bias in hiring. In July 2023, New York City began enforcing its Automated Employment Decision Tools ("AEDT") law, which prohibits employers from using an automated employment decision tool unless the tool has undergone a bias audit within one year of its use. New York City employers face fines of up to $1,500 per violation for failing to conduct audits or provide required notices.
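Conceptually, a bias audit of the kind the AEDT law contemplates compares selection rates across demographic groups and reports each group's "impact ratio" relative to the most-selected group. The following is a minimal illustrative sketch, not a compliant audit; the group names and numbers are invented:

```python
from collections import Counter

def impact_ratios(outcomes):
    """For each group, compute its selection rate and the ratio of that
    rate to the highest group's rate (the 'impact ratio').

    outcomes: iterable of (group, selected) pairs, selected True/False.
    Returns {group: (selection_rate, impact_ratio)}.
    """
    selected = Counter()
    total = Counter()
    for group, was_selected in outcomes:
        total[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: (rates[g], rates[g] / best) for g in rates}

# Hypothetical screening results: (group, advanced to interview?)
results = ([("A", True)] * 60 + [("A", False)] * 40 +
           [("B", True)] * 30 + [("B", False)] * 70)

for group, (rate, ratio) in sorted(impact_ratios(results).items()):
    print(f"group {group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

In this made-up example, group B's impact ratio is 0.50, far below the 0.80 benchmark of the EEOC's informal four-fifths rule, which would flag the tool for closer scrutiny.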
Subsequently, Illinois passed H.B. 3773 in August 2024, making it unlawful for employers to use AI in hiring decisions in a way that discriminates against applicants on the basis of protected characteristics, or to use proxies for those characteristics such as zip code. H.B. 3773 also establishes requirements for disclosing the use of artificial intelligence tools in employment decisions.
In May 2024, Colorado enacted SB 24-205, which requires developers of high-risk artificial intelligence systems to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. SB 24-205 is broader in scope than the NYC and Illinois measures, covering the use of “high-risk” AI systems in realms beyond employment, such as lending and housing.
The definition of AI in these regulations plays a critical role in determining their reach. The Algorithmic Accountability Act of 2022, for instance, covers systems that utilize machine learning, statistical models, and other automated decision-making technologies, focusing on evaluating their impact to prevent discrimination. Similarly, New York City’s AEDT law adopts a broad definition, encompassing any computational process derived from machine learning, statistical modeling, or data analytics. Colorado’s SB 24-205 takes an even broader view, covering high-risk systems that automate decision-making using predictive or adaptive algorithms, including hiring tools, and extending to areas like lending and housing. These variations in definitions raise significant questions about regulatory coverage. Could AI resume-screening tools evade regulation, for example, if employers argue they merely produce summaries rather than make definitive decisions? Such nuanced interpretations will likely shape compliance strategies and enforcement mechanisms as companies navigate these standards.
EU Regulations and Global Perspectives
In the European Union (“EU”), the EU Artificial Intelligence Act, which entered into force in August 2024, bans certain “prohibited AI practices” outright and designates a list of high-risk AI uses, including the use of AI in employment. The law requires developers of high-risk AI systems to maintain a comprehensive risk management plan, including recordkeeping, technical documentation, and human oversight of the system. Globally, countries such as Canada, Australia, and China are also developing AI laws affecting recruitment and hiring.
The legal landscape governing AI in the US and EU reflects two distinct regulatory philosophies. In the US, regulations like the AEDT Act and SB 24-205 vary by state, allowing jurisdictions to tailor rules to local needs. This decentralized approach fosters innovation and adaptability but can lead to inconsistencies across states. The EU, on the other hand, adopts a centralized framework with its Artificial Intelligence Act, which imposes uniform standards across member states. The EU’s precautionary approach prioritizes proactive risk mitigation but could result in higher compliance costs for businesses.
Legal Liabilities
In the United States, improper use of AI can create legal risks for employers. For one, failure to disclose the use of AI in hiring, or failure to provide candidates with required information, can result in legal penalties, including fines of up to $1,500 per violation. Even with disclosure, employers whose AI systems produce biased results may violate anti-discrimination laws under joint-employer and vendor liability theories.
Conclusion
As AI continues to change the hiring landscape, eliminating algorithmic bias is not only an ethical concern, but also a legal one. Organizations must stay aware of changing regulations and invest in tools and strategies that ensure fair AI hiring practices. To comply with these legal requirements, companies are increasingly turning to tools such as IBM’s AI Fairness 360 toolkit, an open-source library that provides more than 70 metrics and 11 algorithms to help developers detect and reduce bias in machine learning models. As regulations continue to evolve, staying informed and adaptable will enable companies to harness the benefits of AI while upholding the principles of equality and non-discrimination in their hiring practices.
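To make the mitigation side concrete: one technique in this family, implemented in AI Fairness 360 as "reweighing," assigns each training example a weight so that group membership and hiring outcome become statistically independent in the weighted data. Here is a plain-Python sketch of the idea, with invented groups and counts for illustration only:

```python
from collections import Counter

def reweighing_weights(samples):
    """Weight each (group, label) cell by
    w = P(group) * P(label) / P(group, label),
    so that group and label are independent in the weighted data.

    samples: list of (group, label) pairs from historical hiring data.
    Returns {(group, label): weight}.
    """
    n = len(samples)
    group_counts = Counter(g for g, _ in samples)
    label_counts = Counter(y for _, y in samples)
    cell_counts = Counter(samples)
    return {
        (g, y): (group_counts[g] / n) * (label_counts[y] / n)
                / (cell_counts[(g, y)] / n)
        for (g, y) in cell_counts
    }

# Skewed historical data: group A was hired far more often than group B.
data = ([("A", 1)] * 40 + [("A", 0)] * 10 +
        [("B", 1)] * 10 + [("B", 0)] * 40)
weights = reweighing_weights(data)
```

Under these invented counts, the under-hired cells (group B hires, group A rejections) receive weights above 1 and the over-represented cells receive weights below 1, so a model trained on the weighted data no longer sees hiring outcomes correlated with group membership.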