What You Need to Know About AI, Algorithms & Assessment

Highlights from Cappfinity’s Expert Panel Discussion

Artificial intelligence (AI) and algorithms are increasingly being used in recruitment to make hiring decisions faster and more efficient. However, there are concerns about AI systems making decisions outside the scope of a job analysis, succumbing to bias, or compromising candidates' privacy.

To address these concerns, global talent management and acquisition firm Cappfinity hosted a webinar featuring a panel of experts, including a labor and employment law attorney and several esteemed industrial psychologists. The group emphasized that, to be effective, AI selection tools must be transparent, grounded in clearly defined job characteristics, and relevant to the employer's needs. The tools must also predict organizational outcomes and treat candidates fairly.

New Legislation on AI in NYC

In New York City, a new AEDT (automated employment decision tools) law comes into effect on April 15, 2023. It requires employers and employment agencies that use AI in the hiring process to perform a bias audit focusing on race, gender, and their intersection. The audit must be conducted by an independent third party, and the results must be posted publicly, giving candidates the opportunity to opt out or choose an assessment process that does not use AI. The company must also publish a report each year detailing the system's impact on the hiring process and its effects on diversity, equity, and inclusion.
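At their core, bias audits like the one this law requires compare selection rates across demographic categories. A minimal sketch of that calculation in Python (the category labels and data below are illustrative, and the thresholds regulators apply are defined in the law's rules, not here):

```python
from collections import defaultdict

def impact_ratios(candidates):
    """Compute each group's selection rate and its impact ratio,
    i.e. its rate divided by the highest-rate group's rate."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in candidates:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: (rates[g], rates[g] / top) for g in rates}

# Illustrative data: (demographic category, selected?)
sample = [("A", True), ("A", True), ("A", False), ("A", False),
          ("B", True), ("B", False), ("B", False), ("B", False)]
for group, (rate, ratio) in impact_ratios(sample).items():
    print(group, rate, ratio)  # A: rate 0.5, ratio 1.0; B: rate 0.25, ratio 0.5
```

A low impact ratio for a category, or for an intersection of race and gender, is the kind of disparity an independent auditor would surface in the published results.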

As labor and employment attorney and panelist Mark J. Girouard pointed out in the webinar, the new AEDT rules provide more clarity on what it means for AI to “substantially assist” or replace “discretionary decision-making” in the hiring process. Is the tool relied on exclusively, with no other factors considered, or weighted more heavily than any other criterion? Is it actually being used to assist human decision-makers, or to replace them?

According to industrial psychologist and Cappfinity president Nicky Garcea, who moderated the webinar, these are the questions troubling New York City businesses, such as big banks and law firms, as they navigate the new norm. She also expects the law to spark a trend, with other states and jurisdictions following this legislative pattern.

Among the panelists was Rice University professor and industrial/organizational psychologist Dr. Fred Oswald, who stressed that the spirit of the New York law is just as important as its logistical components. He explained that there are concerns about AI making hiring decisions that lie outside a job analysis or scientific dataset, the kinds of judgments I/O psychology is equipped to measure. Which characteristics are relevant to the job, or to a specific point in the interview and selection process? Which are most important for an employer to measure? Which candidate strengths will come into play in the real world? Oswald says there are certain things you “just can’t scrape the web for” with AI if you want to be true to I/O science and maintain fairness among applicants.

Addressing Cheating Among Candidates

Another important aspect of AI in recruitment is preventing cheating among candidates, particularly as tools like ChatGPT pick up steam. According to Helen Dovey, Chief Assessment Officer at Cappfinity, there has been an uptick in “unusual candidate behavior,” and cheating can actually be detected in a few ways. The first is response times: if an applicant responds faster than is humanly possible, that can indicate cheating. Another is inconsistency across question difficulty, for instance when a difficult question is answered perfectly while a simple one is answered incorrectly.
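The two checks Dovey describes can be approximated with simple heuristics. The sketch below is illustrative only; the field names and the five-second threshold are assumptions for the example, not Cappfinity's actual criteria:

```python
def cheating_indicators(responses, min_seconds=5.0):
    """Flag simple anomalies in a candidate's assessment responses.

    responses: list of dicts with 'seconds' (response time),
    'difficulty' ('easy' or 'hard'), and 'correct' (bool) keys.
    Returns a list of flags; an empty list means no anomaly found.
    """
    flags = []
    # Check 1: responses faster than is humanly plausible.
    if any(r["seconds"] < min_seconds for r in responses):
        flags.append("implausibly fast response")
    # Check 2: hard questions answered perfectly while easy ones are missed.
    hard_right = any(r["difficulty"] == "hard" and r["correct"] for r in responses)
    easy_wrong = any(r["difficulty"] == "easy" and not r["correct"] for r in responses)
    if hard_right and easy_wrong:
        flags.append("difficulty/accuracy mismatch")
    return flags

example = [
    {"seconds": 2.0, "difficulty": "hard", "correct": True},
    {"seconds": 40.0, "difficulty": "easy", "correct": False},
]
print(cheating_indicators(example))
```

In keeping with the panel's caution against acting on a single signal, a system built this way would wait for more than one flag before treating a candidate as suspect.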

According to Dovey’s colleague Dr. Stephen Mueller, who leads Cappfinity’s consulting practice, the best way to combat cheating is to make applicants feel more accountable. Explaining to applicants that cheating is not in their best interest, for example, may deter them from using AI. There is also technology that uses applicants’ webcams to record them taking their assessments, or that flags when another tab is opened to copy and paste content. He suggested waiting until multiple indicators of cheating arise before making a claim.

Perhaps most importantly, Mueller pointed out that equitable hiring practices are at risk if we revert to old methods due to concerns about candidates cheating with AI.

In summary, the panel agreed that there can be a place for AI and algorithms in the recruitment process, but it is critical to address concerns around transparency, fairness, and privacy. Employers must also be mindful of legal requirements and regularly update their bias audit reports to ensure that their selection tools comply with regulations. To watch the full webinar, click HERE.