The Risks of AI Discrimination in Hiring

 

The U.S. Equal Employment Opportunity Commission (EEOC) and the U.S. Department of Justice (DOJ) have published guidance warning employers and workers that AI used in hiring can unfairly discriminate against people with disabilities. This is an important development because employers can be held legally liable under the Americans with Disabilities Act (ADA) if they fail to provide accommodations to applicants, or if they knowingly or unknowingly screen out candidates who could perform the job with reasonable accommodations. In other words, even if an employer purchases AI-powered hiring tools from a vendor, the employer can be found liable for any violation of the ADA. Beyond hiring, the new guidance also has implications for AI-driven evaluations of existing employees.

These new warnings from the federal government highlight the legal risk for organizations using AI hiring tools. The EEOC pursues alleged violations of the ADA through charges filed against the employer, which can lead to compensatory and punitive damages. The EEOC's public warning of employer liability means that job candidates (and employees) can bring ADA claims based on AI discrimination and be awarded damages. Similarly, the DOJ can sue businesses for discriminatory practices.

Companies increasingly rely on AI hiring tools to screen candidates, with a majority of managers looking to adopt the technology in the near future. These tools include software that measures speech and facial expressions, game-like online tests that assess job skills, and other methods of automatically screening candidates. There is no guarantee that the vendors producing these technologies adequately account for discriminatory outcomes in their AI tools. While the EEOC warns that it will enforce ADA violations against employers, other agencies such as the Federal Trade Commission (FTC) could be responsible for pursuing unfair and discriminatory practices by the vendors of these technologies. That enforcement structure remains unrefined: FTC commissioner Christine Wilson has made clear that the agency lacks the technologists and staff needed to pursue such claims. Legally problematic, discriminatory AI hiring technologies could therefore be on the market and come to light only through an ADA claim against an employer rather than an action against the AI vendor.

It's worth noting that the EEOC guidance also applies to discrimination against existing employees with disabilities whose job performance is evaluated using AI, including tools that use keylogging, reaction times, or speech identification to measure employees' productivity or verify their identity. Employers should be particularly cautious about AI discrimination against people with disabilities because so many systems and processes that do not even involve AI already fail to account for disability. Diversity and inclusion efforts often overlook disability relative to other forms of discrimination, which means it may be similarly overlooked in AI applications.

To guard against disability discrimination by AI tools, organizations can follow the recommendations outlined in the EEOC's technical assistance document, which include conducting due diligence on AI vendors and ensuring transparency in any AI hiring process so that applicants can decide whether to seek reasonable accommodation. This due diligence should include an analysis of how the vendor's AI hiring product might collect data in ways that disadvantage people with disabilities. Audits of the tool should occur both before and after the technology is implemented, and may require access to training data, designs, applicant information, assessment criteria and outputs, and the final hiring decisions that are made. This is a significant amount of data for AI vendors to assess, and information they are unlikely to provide to customers by default. Given the kinds of data needed to execute an audit, employers have an opportunity to hold vendors to rigorous audit standards by negotiating those terms during procurement of AI tools. Assurance that a vendor has audited, and will continue to audit, its AI hiring tools for bias against people with disabilities can help protect companies using those tools down the line.

 