AI Improves Background Checks but Human Touch Still Needed Due to Discrimination Concerns in 2019

By Thomas Ahearn—Dec. 17, 2018

The use of technology such as Artificial Intelligence (AI) algorithms will continue to improve background checks for employment purposes in 2019, but the “human touch” will still be needed due to discrimination concerns. Global background check provider Employment Screening Resources® (ESR) has chosen this trend as seventh on its list of “ESR Top Ten Background Check Trends” for 2019.

“Artificial intelligence” or “AI” is intelligence demonstrated by machines rather than the natural intelligence displayed by humans and animals. AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chances of achieving its goals. The term “AI” is used when machines mimic functions of the human mind such as learning and problem solving.
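
To make the “intelligent agent” definition concrete, the short Python sketch below shows an agent that perceives a toy environment and chooses whichever action moves it toward its goal. The thermostat scenario, class, and method names are illustrative assumptions only, not taken from any particular AI system or library.

    import random

    # Minimal sketch of the "intelligent agent" idea: perceive the
    # environment, then choose the action most likely to achieve a goal.
    class ThermostatAgent:
        """Agent whose goal is to keep a room near a target temperature."""

        def __init__(self, target_temp=21.0):
            self.target_temp = target_temp

        def perceive(self, environment):
            # Perception: read the current temperature from the environment.
            return environment["temperature"]

        def act(self, environment):
            # Action selection: move the room toward the goal temperature.
            temp = self.perceive(environment)
            if temp < self.target_temp - 0.5:
                return "heat_on"
            if temp > self.target_temp + 0.5:
                return "cool_on"
            return "idle"

    # Usage: the agent repeatedly senses and acts on a simulated room.
    room = {"temperature": 17.0}
    agent = ThermostatAgent()
    for _ in range(5):
        action = agent.act(room)
        # Crude simulation of how each action changes the room temperature.
        room["temperature"] += {"heat_on": 1.5, "cool_on": -1.5,
                                "idle": random.uniform(-0.2, 0.2)}[action]
        print(action, round(room["temperature"], 1))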

The use of AI in the background check process is relatively new. In December 2018, Forbes reported that the use of AI for screening job applicants and employees could raise “thorny ethical issues” about how much private life matters in the workplace, even though the application of AI algorithms in the background check process could “help reduce employment bias by better classifying information deemed relevant.”

However, Reuters reported in October 2018 that online retail giant Amazon had “scrapped” a secret AI recruiting tool for showing bias against women in hiring. Amazon built computer programs to review resumes “with the aim of mechanizing the search for top talent,” but the system “was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.”

The discrimination against women in Amazon’s AI recruiting tool arose because the computer models “were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period,” and the vast majority of those resumes came from men, reflecting the overall male dominance of the tech industry, Reuters reported.

A report from CNBC in December 2018 explained how “biased AI” can be created by faulty algorithms or insufficient data: “AI programs are made up of algorithms, or a set of rules that help them identify patterns so they can make decisions with little intervention from humans. But algorithms need to be fed data in order to learn those rules — and, sometimes, human prejudices can seep into the platforms.”
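
As a rough illustration of the mechanism CNBC describes, and of the pattern Reuters reported in the Amazon case, the hypothetical Python sketch below “trains” a scoring rule on a skewed hiring history and then reproduces that skew when scoring new candidates. The data, tokens, and counting rule are invented for demonstration only and do not represent how any real screening product works.

    from collections import defaultdict

    # Toy "training" history: (resume tokens, was_hired). The history is
    # skewed -- resumes containing one particular token were rarely hired
    # in the past, regardless of skills.
    history = [
        ({"python", "chess_club"}, True),
        ({"java", "chess_club"}, True),
        ({"python", "golf_team"}, True),
        ({"java", "golf_team"}, True),
        ({"python", "womens_chess_club"}, False),
        ({"java", "womens_chess_club"}, False),
    ]

    # "Learn" a score per token: the fraction of past resumes containing
    # that token which led to a hire. This is the pattern-finding step in
    # the CNBC quote, reduced to simple counting.
    hires, totals = defaultdict(int), defaultdict(int)
    for tokens, hired in history:
        for tok in tokens:
            totals[tok] += 1
            hires[tok] += hired

    def score(resume_tokens):
        # Average learned token score for a new resume (higher looks "better").
        rates = [hires[t] / totals[t] for t in resume_tokens if t in totals]
        return sum(rates) / len(rates) if rates else 0.0

    # Two candidates with identical skills; only one affiliation token differs.
    print(score({"python", "chess_club"}))         # about 0.83
    print(score({"python", "womens_chess_club"}))  # about 0.33 -- the bias in
                                                   # the history has "seeped in"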

The use of AI for background checks can be controversial. In November 2018, The Verge reported that the popular social networks Facebook, Instagram, and Twitter began limiting the data accessible to a startup that used “advanced artificial intelligence” to screen potential babysitters, after a report by The Washington Post detailing the company’s methods attracted widespread criticism.

The government is also watching AI. In September 2018, seven members of Congress sent letters to the U.S. Equal Employment Opportunity Commission (EEOC), the Federal Bureau of Investigation (FBI), and the Federal Trade Commission (FTC) voicing concerns over the use of AI and asking those agencies whether they had vetted the potential biases of AI algorithms being used for, among other things, hiring employees.

The letter to the EEOC asked if AI could violate Title VII of the Civil Rights Act of 1964, the Equal Pay Act of 1963, or the Americans with Disabilities Act of 1990. The letter to the FBI voiced concerns over facial recognition technology. The letter to the FTC was concerned that AI could “perpetuate gender, racial, age, and other biases” and its use “may violate civil rights laws and could be unfair and deceptive.”

The EEOC – the U.S. government agency that enforces federal laws prohibiting employment discrimination – administers Title VII, which prohibits employment discrimination based on race, color, religion, sex, or national origin. Job applicants who believe they were not hired because of a trait covered by Title VII can sue employers for employment discrimination, and the use of AI in screening could increase such lawsuits.

Read more at Employment Screening Resources.
