By Bre Timko & Dave Schmidt, DCI Consulting
Artificial intelligence (AI) continues to revolutionize many industries, and the employment space is no exception. According to the Society for Human Resource Management (SHRM), almost one in four organizations utilize automation or AI for HR-related activities. Among these organizations, the majority (79%) apply AI specifically in selection (e.g., recruitment and hiring processes). There are varied uses of AI in employee selection and recruitment, including automating applicant searches, reviewing or screening applicant resumes, pre-selecting applicants for interviews, administering or scoring skill assessments, conducting video interviews, and administering and scoring game-based assessments (SHRM, 2022).
While the use of AI in the selection space is increasing, it is not without concerns and criticisms. For example, AI tools may face backlash when the algorithms being used lack transparency and explainability. In other words, when AI systems are complex and difficult to understand, it becomes challenging for candidates and recruiters (and sometimes even the algorithm developers) to understand why certain decisions are made (Ravi, 2023). A lack of algorithmic transparency may decrease trust in the recruitment or hiring process and, as a result, increase concerns about fairness and accountability (Ravi, 2023). These concerns have led to a significant increase in regulation in this space. Most recently, enforcement of New York City’s Local Law 144 began on July 5, 2023. This law applies where an Automated Employment Decision Tool (AEDT) is used to hire or promote individuals for a job located, at least part-time, in an office in NYC; for a fully remote job whose associated location is an office in NYC; or where the employment agency using the AEDT is located in NYC.
Local Law 144 requires an annual bias audit conducted by an independent auditor, with the results published publicly on the employer’s website. The law also requires that applicants be notified1 of an organization’s use of an AEDT in the selection process, including the job qualifications and characteristics the AEDT evaluates, the type of data used, the source of such data, the organization’s data retention policy, and instructions on how applicants can request an alternate selection process or accommodation.2 It’s worth noting that NYC defines an AEDT in a way that extends beyond AI methods like machine learning and natural language processing. This law may apply not only to selection procedures using these advanced technologies but also to many existing procedures that rely on complex algorithms. All such processes must align with this law to ensure compliance.
New York City’s law is just the tip of the iceberg when it comes to regulating the use of AI in employee selection; we are likely to see new state and local laws in this area proposed and enacted for the foreseeable future. There are several such laws in effect or under consideration. DCI’s State Legislation Tracker provides an overview of each. In addition to Local Law 144, three other laws in this space are currently enacted. These laws either require applicant consent before AI-based technology can be used (i.e., the Illinois Video Interview Act and Maryland House Bill 1202) or require an inventory of AI technology used by state agencies (i.e., Connecticut Senate Bill 1103).
Laws currently in the proposed stage, six in total as of the writing of this blog, may place a substantially higher burden on organizations. Each of these proposed AEDT-focused laws includes some reference to the need for algorithmic explainability and transparency, as well as an applicant notice requirement. Some also include adverse impact analyses (i.e., a “bias audit”), consideration of job-relatedness (i.e., validity), provision of alternatives, and data privacy and/or data retention clauses.
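To illustrate what the adverse impact analysis at the core of a "bias audit" typically involves, the sketch below computes selection rates and impact ratios across applicant categories, benchmarking each group against the highest-rate group. The data are hypothetical, and the 0.8 flag reflects the widely cited "four-fifths" rule of thumb rather than the threshold of any specific statute.

```python
# Illustrative adverse impact ("bias audit") calculation.
# Hypothetical data; the 0.8 cutoff is the common four-fifths
# rule of thumb, not a requirement of any particular law.

def impact_ratios(counts):
    """counts: {category: (selected, total_applicants)} -> {category: impact ratio}."""
    rates = {cat: sel / total for cat, (sel, total) in counts.items()}
    benchmark = max(rates.values())  # highest selection rate serves as the benchmark
    return {cat: rate / benchmark for cat, rate in rates.items()}

counts = {
    "group_a": (40, 100),  # 40% selection rate
    "group_b": (24, 100),  # 24% selection rate
}
for cat, ratio in impact_ratios(counts).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{cat}: impact ratio {ratio:.2f} ({flag})")
```

Here group_b’s ratio is 0.24 / 0.40 = 0.60, which falls below the 0.8 rule of thumb and would warrant closer review; an actual audit under a given law must follow that law’s own definitions and reporting requirements.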
Below, we dive more deeply into the two proposed laws that are farthest along in the development process, one in California and one in Washington, D.C., where we will likely see increased activity within the next year.
California’s proposed law—set to go into effect on January 1, 2025—would require notice, a bias audit to be conducted and submitted to the CA Civil Rights Department, and an alternative, non-AEDT assessment for candidates opting out of the AEDT “if technically feasible.” As currently written, the proposed law indicates that the bias audit must include:
The mandate to assess the AEDT for validity and job relevance, specifically, distinguishes this law from many other laws centered on AI. The requirement to evaluate validity is noteworthy given its importance in the broader legal framework for evaluating selection procedures, as well as its utility for determining the effectiveness of a selection tool. For these reasons, validation is something we frequently recommend organizations consider.
Like many of the enacted and proposed AEDT laws, Washington, D.C.’s “Stop Discrimination by Algorithms Act of 2023” also requires notice to applicants that an AEDT will be used in selection before any algorithm-based tool is used. As currently written, the proposed law indicates that the notice must:
A bias audit must be conducted annually and submitted to the Office of the Attorney General of D.C. If any risks or unlawful disparate impact are identified in the audit, the audit must also include an identification of reasonable measures to address such risks.
AI-related laws concern more than just the employers utilizing AI-based assessments; they also reach the vendors responsible for developing such technology. Take, for instance, New Jersey’s Assembly Bill 4909, which mandates a bias audit before the sale of an AEDT, with the results provided to the purchaser, bundled with the tool at no additional cost. Even when a vendor isn’t the primary target of an AI-based law, their role is pivotal. Vendors supply clients with information about their assessments, often through the completion of a bias audit, enabling each employer to assess which laws apply to their usage and ensuring they possess the data needed for compliance.
There is a lot of activity surrounding the use of AI-based technology in the context of employee selection. While the focus of this blog was on unpacking state and local laws seeking to regulate this area, several converging forces in the federal space will significantly shape the regulatory landscape over the next 12-18 months, including the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence released by the White House on October 30, 2023. This landscape will continue to get more complex, and we will undoubtedly see more in this space in the coming months and years.
Explore the latest in legislative updates and compliance insights from our strategic partner, DCI Consulting Group. Their State Legislation Tracker provides up-to-date information on proposed and enacted laws and regulations, as well as implications and guidance for compliance.
1 Per an FAQ document provided by New York City, notices must be published on the employer’s job website at least 10 days prior to the first use of the AEDT.
2 Notably, an organization is not required to provide a non-AEDT alternative unless called for by other laws (e.g., The Americans with Disabilities Act (ADA)).