A growing number of AI-based tools are being used in the hiring and HR process. I am not sure whether this is a benefit to job seekers or to the employment business. Certainly, there are plenty of horror stories, such as this selection of 2020’s most significant AI-based failures, which includes deepfake bots, biased predictions of pre-criminal intent, and so forth. (This study by Pew is also worth reading.)
I would argue that AI has more of a PR problem than an HR problem, with the mother lode traced back to the Terminator movies and Minority Report, with Asimov’s Three Laws of Robotics thrown in for good measure. In a post I wrote for Avast’s blog last fall, I examined some of the ethical and bias issues around AI. Part of the issue is that we still need to encode human judgment into some digital form. And people aren’t as consistent as machines, which is sometimes a useful thing. I will give you an example at the end of this post.
But let’s examine what is going on with HR-related AI. A study done last year by HRExaminer identified a dozen hiring-based AI tools, with half of them focusing on the recruiting function. I would urge you to examine this list and see whether any of these tools are being used at your workplace, or as part of your own job search and hiring process.
One of the tools on the list is HiredScore, which offers an all-purpose HR solution that uses various AI methods to rank potential job candidates, recommend internal employees for open positions, and measure inclusion and diversity. That is a lot of places where the doomsday “Skynet” scenario of the machines taking over could play out, and it is probably one of the few plot lines that Philip K. Dick never anticipated. Still, the company claims it has trained its machine learning algorithms on more than 25M resumes and twice as many job postings.
There are other niche products, such as Xref’s online reference checking or the testing prowess of TestGorilla. The latter offers a library of more than 135 “scientifically validated tests” for job-specific skills such as coding or digital marketing, as well as more general skills such as critical thinking. That one struck a nerve with me. The reason I put that phrase in quotes is that I can’t validate the claim.
As many of you who have followed my work have found out, my first job in publishing was working for PC Week when it was part of the Ziff Davis corporation. ZD had a rule that required every potential hire to submit to a personality test before getting a job offer. I have no recollection of the actual test questions all these years later, but obviously I passed and so began my writing career.
In the modern era, we now have vendors that use AI tools to help screen applicants. I am not sure I would have passed these tests if ZD had had them available back in the day. That doesn’t make me feel better about using AI to assist in this process.
Let me give you a final example. When I went to visit my daughter last month, I was given a specific time period during which I was allowed to enter Israel. Only it wasn’t specific: the approval was granted for “two weeks,” but not starting from any particular time of day. I interpreted it one way. The German gate agents at Frankfurt interpreted it another way. Ultimately, the Israeli authorities at the airport agreed with my interpretation and let me board my final flight. If a machine had screened me, I probably would not have been allowed to enter Israel.
In my post for Avast’s blog last year, I mentioned several issues surrounding bias: in the diversity of the programming teams creating the algorithms, in understanding the difference between causation and correlation, and in interpreting the implied ethical standards of the algorithms themselves. These are all tricky issues, made even more so when you are deciding the fate of potential job applicants. Proceed with caution.