There is an increasing number of AI-based tools being used in hiring and HR. I am not sure whether this benefits job seekers or the employment business. Certainly, there are plenty of horror stories, such as this selection of 2020’s most significant AI-based failures, including deepfake bots, biased predictions of pre-criminal intent, and so forth. (And this study by Pew is also worth reading.)
I would argue that AI has more of a PR than HR problem, with the mother lode being traced back to the Terminator movies and Minority Report, with Asimov’s Three Laws of Robotics thrown in for good measure. In a post that I did for Avast’s blog last fall, I examined some of the ethical and bias issues around AI. Part of the issue is that we still need to encode human judgment into some digital form. And people aren’t as consistent as machines — which sometimes is a useful thing. I will give you an example at the end of this post.
But let’s examine what is going on with HR-related AI. A study done last year by HRExaminer identified a dozen hiring-based AI tools, half of them focused on the recruiting function. I would urge you to examine this list and see if any of them are being used at your workplace, or as part of your own job search and hiring process.
One of the tools on the list is HiredScore, which offers an all-purpose HR solution that uses various AI methods to rank potential job candidates, recommend internal employees for open positions, and measure inclusion and diversity. That is a lot of places where the doomsday “Skynet” scenario of the machines taking over could happen, and it is probably one of the few plot lines that Philip K. Dick never anticipated. Still, the company claims it has trained its machine learning algorithms on more than 25M resumes and twice as many job postings.
There are other niche products, such as Xref’s online reference checking or the testing prowess of TestGorilla. The latter offers a library of more than 135 “scientifically validated tests” for job-specific skills like coding or digital marketing, as well as more general skills like critical thinking. That one struck a nerve with me: I put the phrase in quotes because I can’t validate the claim.
As many of you who have followed my work have found out, my first job in publishing was working for PC Week when it was part of the Ziff Davis corporation. ZD had a rule that required every potential hire to submit to a personality test before getting a job offer. I have no recollection of the actual test questions all these years later, but obviously I passed and so began my writing career.
In the modern era, vendors use AI tools to help screen applicants. I am not sure I would have passed these tests if ZD had them available back in the day. That doesn’t make me feel better about using AI to assist in this process.
Let me give you a final example. When I went to visit my daughter last month, I was given a specific time period during which I was allowed to enter Israel. Only it wasn’t specific: the approval was granted for “two weeks,” but not starting from any particular time of day. I interpreted it one way. The German gate agents in Frankfurt interpreted it another way. Ultimately, the Israeli authorities at the airport agreed with my point of view and let me board my final flight. If a machine had screened me, I would probably not have been allowed to enter Israel.
In my post for Avast’s blog last year, I mentioned several issues surrounding bias: in the diversity of the programming team creating the algorithms, in understanding the difference between causation and correlation, and in interpreting the implied ethical standards of the algorithms themselves. These are all tricky issues, made even more so when you are deciding on the fate of potential job applicants. Proceed with caution.
David Claiborne writes:
I never took any test with ZD, although I was never an actual employee. I was a contributing editor for PC Tech Journal, PC Week, and PC Sources. Actually, ZD was a very loose organization. I never received any “rules” regarding vendor contact. My articles were never reviewed by a “committee.” Also, I was an original contributing editor for Windows magazine. No interview, nothing signed, no rules. It was a better time.
There has been a recent discussion on Twitter about interviewers’ reliance on specific technical questions. Today, almost any technical answer can be found with a few keystrokes in Google. The skills you need are not minutiae, defined as “the small, precise, or trivial details of something.”
It all gets back to “spheres of trust.” When you delegate your “trust” to others (government agencies, computers, AI, and so on), you lose control.
Anyway, I am getting too old to look for new jobs.
💯
These snake oil tests are used without valid data to support them, and companies are duped or complicit, imho.
The use of a clinical tool (the MMPI) is an affront, and it dates back years.
It’s sad, as this has a real impact on real people.