Employers continue to find new ways to automate business practices, including the use of software programs powered by artificial intelligence (AI) across a variety of employment functions, such as recruitment.
These new technologies, while promising in many respects, have garnered the attention of the Equal Employment Opportunity Commission, which last year launched the Artificial Intelligence and Algorithmic Fairness Initiative. The EEOC announced the initiative’s intended mission “to ensure that the use of software, including artificial intelligence (AI), machine learning and other emerging technologies used in hiring and other employment decisions comply with the federal civil rights laws that the EEOC enforces.”
The EEOC announced, “Through the initiative, the EEOC will examine more closely how existing and developing technologies fundamentally change the ways employment decisions are made. The initiative’s goal is to guide employers, employees, job applicants and vendors to ensure that these technologies are used fairly and consistently with federal equal employment opportunity laws.”
In a clear sign of the potential dangers of AI, the EEOC filed suit earlier this month against iTutorGroup for age discrimination after the EEOC determined that the company had programmed its online software to automatically reject more than 200 older, otherwise qualified applicants.
The three related companies provide English-language tutoring services to students in China. According to the lawsuit, in 2020 the companies programmed their tutor application software to automatically reject female applicants age 55 or older and male applicants age 60 or older.
If true, this would violate the Age Discrimination in Employment Act, which protects applicants and employees age 40 and over.
EEOC Chair Charlotte A. Burrows said of the suit, “Age discrimination is unjust and unlawful. Even when technology automates the discrimination, the employer is still responsible.” She added, “This case is an example of why the EEOC recently launched an Artificial Intelligence and Algorithmic Fairness Initiative. Workers facing discrimination from an employer’s use of technology can count on the EEOC to seek remedies.”
The EEOC also just issued the first of its promised technical assistance documents, which explains the Americans with Disabilities Act and the Use of Software, Algorithms and Artificial Intelligence to Assess Job Applicants and Employees. This guidance will be discussed in more detail in my AI Part 2 column next week.
According to the EEOC in its new guidance, AI can be used in a variety of software programs, including automatic résumé-screening software, hiring software, chatbot software for hiring and workflow, video interviewing software, analytics software, employee monitoring software and worker management software.
These software programs are often coupled with algorithms, which the EEOC defines as “a set of instructions that can be followed by a computer to accomplish some end.” These algorithms in the employment setting provide tools for algorithmic decision-making, which can be used in all stages of an employment life cycle from hiring to termination.
AI adds another layer of complexity: it can be used when developing algorithms to assist employers and provide efficiencies in making decisions. The EEOC cites Congress' definition of AI as a "machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments."
Examples cited by the EEOC include:
- Resume scanners that prioritize applications using certain keywords.
- Employee-monitoring software that rates employees on the basis of their keystrokes or other factors.
- “Virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements.
- Video interviewing software that evaluates candidates based on their facial expressions and speech patterns.
- Testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills or perceived “cultural fit” based on their performance on a game or on a more traditional test.
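To see how easily a discriminatory rule can hide inside an automated screening tool of the kind listed above, consider the following minimal sketch. It is purely hypothetical — not iTutorGroup's actual code or any vendor's product — and shows both a hard-coded age cutoff (the kind of rule the ADEA prohibits) and a simple audit comparing pass rates for applicants in the protected 40-and-over group against younger applicants.

```python
# Hypothetical illustration only: how an age cutoff can be silently encoded
# in automated applicant screening, and how a simple audit of pass rates by
# protected group can surface the disparity.

from dataclasses import dataclass


@dataclass
class Applicant:
    name: str
    age: int
    qualified: bool


def biased_screen(applicant: Applicant) -> bool:
    """Return True if the applicant passes screening.

    The age test below is exactly the kind of automated rule
    that would violate the ADEA if used in hiring decisions.
    """
    return applicant.qualified and applicant.age < 55  # discriminatory cutoff


def audit_pass_rates(applicants, screen):
    """Compare pass rates for applicants 40 and over (ADEA-protected)
    against applicants under 40."""
    protected = [a for a in applicants if a.age >= 40]
    younger = [a for a in applicants if a.age < 40]

    def rate(group):
        return sum(screen(a) for a in group) / len(group) if group else 0.0

    return rate(protected), rate(younger)


applicants = [
    Applicant("A", 35, True),
    Applicant("B", 38, True),
    Applicant("C", 56, True),
    Applicant("D", 61, True),
]

protected_rate, younger_rate = audit_pass_rates(applicants, biased_screen)
print(protected_rate, younger_rate)  # equally qualified, yet 0.0 vs 1.0
```

Here every applicant is equally qualified, yet the audit shows the protected group is rejected at a 100 percent rate — the kind of disparity an employer's own data analytics should be checking for before a regulator does.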
Employers need to ask what types of software and AI programs exist in their current human resources functions and make sure that their own data analytics confirm these technologies operate fairly and without discrimination.
Employers should also be cautious about walking blindly into promises of efficiency in new technologies.
Next week, I will discuss how AI can create disability discrimination.
More information can be found at EEOC.gov.