Remember when people worked at jobs for most of their lives? It was general practice back in the 1950s and 1960s. My dad worked for the same employer for 30 or so years. I recall his concern when I changed jobs after two years out of grad school, warning me that it wouldn’t bode well for my future prospects.
So here I am, ironically now 30-plus years into running my own business. But today’s high-frequency job hopping has swelled the flood of resumes that hits any hiring manager, which in turn has motivated many vendors to sell automated tools to screen them. You might not have heard of the companies in this space, such as HireVue, APTMetrics, Curious Thing, Gloat, Visier, Eightfold and Pymetrics.
Add two things to this trend. First is the rise of quiet quitting: employees who put in the bare minimum at their jobs. The concept is old, but the increase is significant. Second, and the bigger problem, is another irony: we now have a very active HR market segment fueled by AI-based algorithms. The combination is both frustrating and toxic, as I learned from reading a new book entitled The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now. It should be on your reading list. Its author, Hilke Schellmann, a journalism professor at NYU, examines the trouble with using AI to make hiring and other staffing decisions. She takes a deep dive into the four core technologies now deployed by HR departments around the world to screen and recommend potential job candidates, along with other AI-based tools used to evaluate employees’ performance and inform decisions about raises, promotions, and firings. It is a fascinating look at this industry, and a scary one too.
Thanks to digital tools such as LinkedIn, Glassdoor and the like, applying for an opening has never been easier: just a few clicks and your resume is sent electronically to a hiring manager. Or so you thought. Nowadays, AI automates the process through four core technologies: automated resume screeners, automated social media content analyzers, gamified qualification assessments, and one-way video recordings that are analyzed by facial and tone-of-voice AIs. All of them have issues: they aren’t completely understood by either employers or prospects, they rest on spurious assumptions, and they can’t always quantify the aspects of a candidate that actually predict success on the job.
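To make the first of those concrete, here is a minimal sketch of a keyword-based resume screener. This is my illustration of the general approach, not any vendor’s actual code, and the keyword list is hypothetical:

```python
# A deliberately simple keyword screener: a sketch of the general idea,
# not any vendor's actual algorithm.
REQUIRED_KEYWORDS = {"python", "sql", "project management"}  # hypothetical posting

def score_resume(text: str) -> float:
    """Return the fraction of required keywords found verbatim in the resume."""
    text = text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return hits / len(REQUIRED_KEYWORDS)

resume = "Led a data team and built ETL pipelines in Python and Postgres."
print(score_resume(resume))  # about 0.33: 'sql' and 'project management' are
                             # missed, even though Postgres implies SQL and
                             # leading a team implies project management
```

Even this toy version shows the core problem: verbatim matching rejects equivalent experience stated in different words, which is exactly the sort of thing a human screener would catch.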
What drew me into this book was that Schellmann does plenty of hands-on testing of the various AI services, using herself as a potential job seeker or staffer. For example, in one video interview she answers the scripted questions in German rather than English, and still receives a high score from the AI.
She covers all sorts of tools, not just ones used to evaluate new hires, but others that fit into the entire HR lifecycle. And the “human” part of HR is becoming less evident as the bots take over. By take over, I don’t mean the Skynet path, but relying on automated solutions does present problems.
She raises this question: “Why are we automating a badly functioning system? In human hiring, almost 50 percent of new employees fail within the first year and a half. If humans have not figured out how to make good hires, why do we think automating this process will magically fix it?” She adds, “An AI skills-matching tool that is based on analyzing résumés won’t understand whether someone is really good at their job.” And what about tools that flag teams with high turnover? That turnover could have two polar-opposite causes: a toxic manager, or a tremendous one who develops talent and encourages people to move on to greener pastures.
Having run my own freelance writing and speaking business for more than 35 years, I have a somewhat different view of the hiring decision than most people. You could say that I have only rarely faced a full-time employment decision, or that I face one multiple times a year, whenever I get an inquiry from a new client or from a previous client who has moved to a new company. Some editors I have worked with for decades as they have moved from pub to pub, for example. They hire me because they are familiar with my work and value the perspective and analysis I bring to the party. No AI is going to figure that out anytime soon.
One of the tools I came across in the before-AI times is the DISC assessment, a psychological profiling tool that, like the Myers-Briggs, has been around for decades. I wrote about taking the test when I was attending a conference at Ford Motor Co. back in 2013, where the company was demonstrating how it uses the tool to figure out the type of person most likely to buy a particular car model. Back in 2000, I wrote a somewhat tongue-in-cheek piece about how you can use the Myers-Briggs to match your personality with that of your computing infrastructure.
But deciding whether someone is an introvert or an extrovert is a well-trod path, with decades of testing experience behind it. These AI-powered tools have little such history and are built on shaky data sets loaded with assumptions. For example, HireVue’s facial analysis algorithm was trained on video interviews with people already employed by the company. That sounds like a good first step, but having done one of those one-sided video interviews myself (you just talk to the camera, with no actual human asking the questions), I can tell you that you get no feedback from an interviewer, none of the subtle facial or vocal cues that are part of normal human discourse. Eventually, in 2021, the company stopped using both its tone-of-voice and facial analysis algorithms entirely, claiming that natural language processing had surpassed them.
Another example is capturing how often you use first-person pronouns during the interview: “I” versus “we,” for example. Is this a proxy for what kind of team player you might be? HireVue says it bases its analysis on thousands of questions such as this, which doesn’t make me feel any better about its algorithms. Just because a model has multiple parameters doesn’t necessarily make it better or more useful.
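To show how thin a signal that is, here is a sketch of what such a pronoun feature might look like. This is my illustration, not HireVue’s actual code:

```python
import re

def pronoun_ratio(transcript: str) -> float:
    """Share of 'I' among all 'I' and 'we' tokens: a crude 'team player' proxy."""
    words = re.findall(r"[a-z']+", transcript.lower())
    i_count = words.count("i")
    we_count = words.count("we")
    total = i_count + we_count
    return i_count / total if total else 0.0

# Both sentences could describe the exact same contribution, yet they
# land at opposite ends of this 'teamwork' scale.
print(pronoun_ratio("I designed the schema and I shipped it"))    # 1.0
print(pronoun_ratio("We designed the schema and we shipped it"))  # 0.0
```

The same work, described with different habits of speech, produces opposite scores, which is the trouble with treating word counts as personality measurements.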
Then there is the whole dust-up over overcoming built-in AI bias, something that has been written about for years, going back to when Amazon first unleashed its AI hiring tool and found that it favored men. I am not going there in this post, but Schellmann’s treatment runs deep and shows the limitations of AI no matter how many variables the modelers try to correlate. What is important, something Mark Cuban touches on frequently in his posts, is that diverse groups of people produce better business results. And that diversity can be defined in various ways: not just race and gender, but also mental and physical disability. The AI modelers have to figure out, as all modelers do, what connection playing a game or making a video recording has to actual job performance. You need large and diverse training samples to pull this off, and even then you have to be careful about your own biases in constructing the models. She quotes one source who says, “Technology, in many cases, has enabled the removal of direct accountability, putting distance between human decision-makers and the outcomes of these hiring processes and other HR processes.”
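Bias in these systems is at least partly measurable. A minimal sketch of the impact-ratio calculation behind the EEOC’s longstanding four-fifths rule, which is the same style of check NYC’s new bias-audit law calls for (the numbers here are made up for illustration):

```python
def impact_ratios(applied: dict[str, int], selected: dict[str, int]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate.
    The EEOC's four-fifths rule treats ratios below 0.8 as a red flag."""
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    return {g: round(r / top, 2) for g, r in rates.items()}

# Made-up numbers, for illustration only
applied  = {"group_a": 200, "group_b": 180}
selected = {"group_a": 50,  "group_b": 18}
print(impact_ratios(applied, selected))
# {'group_a': 1.0, 'group_b': 0.4}: group_b is well below the 0.8 threshold
```

The arithmetic is trivial; the hard part is everything around it: which groups to compare, whether the applicant pool itself was already skewed by earlier screening stages, and who is accountable when the audit fails.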
Another dimension of the AI personnel assessment problem is the tremendous lack of transparency. Prospects don’t know what the AI-fueled tests entail, how they were scored, or whether they were rejected for a job because of a faulty algorithm, bad training data, or some other computational oddity.
When you step back and consider the sheer quantity of data an employer can collect (keystrokes on your desktop, website cookies that timestamp your visits, emails, Slack and Teams message traffic, even Fitbit tracking stats), it is very depressing. Do these captured signals reveal anything about your working habits, your job performance, or anything at all? HR folks are relying more and more on AI assistance, and can now monitor just about every digital move an employee makes in the workplace, even when that workplace is the dining room table and the computer is shared by the employee’s family. (There are several chapters on this subject in her book.)
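As an illustration of how arbitrary those signals are, consider a hypothetical “activity score” of the sort a monitoring suite might compute. This is my sketch, with made-up weights, not any vendor’s formula:

```python
from dataclasses import dataclass

@dataclass
class DaySignals:
    keystrokes: int
    slack_messages: int
    active_minutes: int  # e.g. from idle-time detection

def activity_score(s: DaySignals) -> float:
    """A made-up weighted sum; the weights have no empirical basis,
    which is exactly the problem with scores like this."""
    return (0.5 * s.keystrokes / 1000
            + 0.3 * s.slack_messages / 50
            + 0.2 * s.active_minutes / 480)

# A quiet, deep-focus day scores far 'worse' than a chatty, busy one,
# even if the focused day produced more actual work.
focus_day = DaySignals(keystrokes=2000, slack_messages=5, active_minutes=300)
busy_day  = DaySignals(keystrokes=9000, slack_messages=80, active_minutes=460)
print(round(activity_score(focus_day), 2), round(activity_score(busy_day), 2))
# 1.16 5.17
```

The score measures motion, not output, and an employee who learns the weights can game it without doing any more work.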
This book will make you think about the intersection of AI and HR, and while there is a great deal of innovation happening, there is still much work to be done. As she says, context often gets lost. Her book will provide plenty of context for you to think about.
One of my readers (who wrote an early review of HireVue here) writes:
There’s not enough said about the ongoing corruption of HR, recruiting and hiring by use of “solutions” designed by database jockeys – from Monster.com’s early job boards to LinkedIn’s “AI-assisted” job boards. Automated dumpster-diving is not good. More is not better – not for employers or job seekers. Dumbing down a task that requires human judgment is not good. Yet it’s all over but the crying. The marketing of Applicant Tracking Systems has been a resounding success. HR and job seekers alike have been brainwashed beyond recognition. Astonishing that boards of directors remain squeamish about wading into the HR cesspool to do some outcomes analysis. Meanwhile, “There aren’t enough skilled workers!”
On your point about faulty algorithms and bad training data, NYC has attempted to regulate and audit these systems. I wonder whether Hilke looked at their efforts.
https://news.bloomberglaw.com/us-law-week/nycs-new-ai-bias-law-broadly-impacts-hiring-and-requires-audits