Do your AI hiring tools violate ADA, EEOC laws?

As AI use in hiring becomes more common, employers must examine whether the technology is truly unbiased, or if it harms those with disabilities.


The U.S. Equal Employment Opportunity Commission (EEOC) has released guidance on how the abundance of software and artificial intelligence (AI) tools allowing employers to hire and assess candidates with little to no human interaction may violate existing requirements under Title I of the Americans with Disabilities Act (ADA). Employers are responsible for vetting potential bias in AI-based hiring tools, and the EEOC warns of three common applications in which these tools could violate the ADA.

The guidance also outlines practical steps for reducing the chance that algorithmic decision-making will screen out an individual because of a disability.

Lauren Daming, Greensfelder attorney and Certified Information Privacy Professional (CIPP), says: “While the EEOC’s new guidance is a big step toward helping employers evaluate their hiring practices for potential disability bias, it leaves many issues unaddressed. For example, the guidance recognizes that algorithmic decision-making tools may also negatively affect applicants due to other protected characteristics such as race or sex, but the guidance is limited to disability-related considerations alone.”

She adds that “although the EEOC’s action highlights the potential for disability discrimination related to hiring tools, there are a variety of other employment and privacy laws potentially affecting the use of algorithmic decision-making in the workplace.”
