A new week and new threats to worry about

This week I saw two stories that sent a chill up my spine. They are reminders that cybersecurity is an ever-evolving universe, where exploits continue to find new ground and defenders have to remain ever-vigilant. Let’s take a look.

The first one sounds almost comical: two UC Santa Cruz students figured out how to get their clothes cleaned at their dorm’s laundry room for free. But behind the stick-it-to-the-man college prank lies a more sobering tale. The students analyzed the data security posture of the laundry vendor and found a combination of weaknesses that let them run endless wash-and-dry cycles almost surgically. The vendor isn’t some small-time operator, either: it runs a network of a million machines installed at hotels and campuses around the world. Despite this footprint, it has miserable application security. To wit, there is no way for anyone to report a vulnerability, either online or via phone. The company’s mobile app, used to pay for and activate a specific machine, has no authentication mechanism, so the students were able to top off their accounts without spending any actual money. The APIs used by the app don’t verify users, so the students could issue commands to the washers and dryers, commands that were easily discovered in the company’s own documentation.
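Based on the description above, here is my own sketch of the flaw (this is illustrative Python, not the vendor’s actual code or API): the difference between a top-up handler that trusts the client outright and one that checks who is calling and whether money actually changed hands.

```python
# Hypothetical sketch of the laundry app's flaw -- not the vendor's real code.

def top_up_insecure(accounts: dict, user_id: str, amount: int) -> int:
    # No authentication and no payment check: any caller can mint balance.
    accounts[user_id] = accounts.get(user_id, 0) + amount
    return accounts[user_id]

def top_up_secure(accounts: dict, user_id: str, amount: int,
                  session_user: str, payment_verified: bool) -> int:
    # The server should confirm who is calling and that a real payment cleared.
    if session_user != user_id:
        raise PermissionError("caller is not the account owner")
    if not payment_verified:
        raise ValueError("no verified payment for this top-up")
    accounts[user_id] = accounts.get(user_id, 0) + amount
    return accounts[user_id]
```

With the insecure version, a million-dollar balance is one API call away; the checks in the second version are the bare minimum any payment-backed API should perform server-side.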

The students were responsible in their disclosure, although they did “top off” their accounts to the tune of a million dollars, just to make a point. The only thing the laundry vendor did was zero out these accounts; it didn’t fix any of the other flaws. Nor did the company reach out to the students or work with them (or with any actual security researcher, at least according to what I read), again showing complete cluelessness.

I will let you spin up the various morals from this story. I was impressed with the level of professionalism that the students demonstrated, and would imagine that they will have no problem getting infosec jobs and will do well once they have to leave the halls of academia and have to start paying for their own laundry operations.

Let’s move on to the second story, about how your Wifi router can be used as another means of surveillance. Brian Krebs broke this one, based on research from two University of Maryland computer scientists. They discovered a way to de-anonymize the locations of Wifi routers based on the network communications of the Apple and Google products that connect to them. The problem lies in the design of Wifi positioning systems that seek more precise geo-locations: think Apple AirTags and other GPS applications that track your movements. Attackers could leverage these processes to figure out the specific movements of people, even people who haven’t given any permission to be tracked and are just moving about the world. Someone with a portable travel router, for example, is ripe for this exploit. The research paper posits three situations that demonstrate what you can learn from this analysis, such as tracking the movements of Gazans after 10/7, the victims of the Maui wildfires last summer, or people involved on both sides of the war in Ukraine.

As they wrote in their paper, “This work identifies the potential for harm to befall owners of Wifi routers. The threat applies even to users that do not own devices for which the Wifi positioning systems are designed — individuals who own no Apple products, for instance, can have their router listed merely by having their Apple devices come within Wifi transmission range.”

For privacy-concerned folks, one solution is to append “_nomap” to the SSID of your Wifi router, which will prevent Apple and Google from using its location data.
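For example, if your access point happens to run hostapd (most consumer routers bury the same setting in their web admin page under the wireless network name), the opt-out is just a suffix on the SSID; the network name and interface below are made-up example values:

```
# /etc/hostapd/hostapd.conf (fragment) -- hypothetical example values
interface=wlan0
# The _nomap suffix tells the Apple and Google positioning crawlers to skip this network
ssid=MyHomeNetwork_nomap
```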

I remember Myst

I have long been a fan of quirky museums and collections, and heard last week about the Museum of Play, based in Rochester NY. They recently awarded their latest round of “hall of fame” computer video games, and on this year’s list is Myst. This 30-plus-year-old game was a significant moment in its day and a big hit, as I wrote in a blog post from 2012. It sold more than six million copies and raised the bar above the crude graphics and beeping computer-generated noises found in many of the early games of that era. After hearing the news, I wanted to dust off my software and take it for another spin (which is what I did back in 2012), but alas, I didn’t have the right vintage of OS and drivers to make it work.

As I wrote back then, Myst’s graphics were nothing like those of more modern games. Its genius was giving equal weight to both graphics and audio: while I was clicking about its landscape, I left the sounds of the ocean lapping against the rocky island playing while I worked, and it was very soothing. The museum says it was slow-paced and contemplative but inspired wonder. I concur.

Myst was not a first-person shooter but a game that involved solving puzzles, puzzles with inscrutable clues that were easy to miss at first glance. It easily got frustrating, and I often found myself going back over ground I thought I had covered, only to find another hidden puzzle that unlocked a new landscape. Eventually, I bought a cheater book to get to the end of the game, thereby sealing my fate as a forever-novice gamer.

Myst came along at a time when PCs were just getting CD-ROM drives: I remember buying an add-on package from Sound Blaster because those early computers didn’t have any audio support either. Figuring out that puzzle of drivers, OS updates, and rooting around inside my computer to connect everything up was my first foray into building the kind of computer we now take for granted, where sound and optical media (and writable multi-speed ones at that) are part of the package.

Well, at least we can take the sound features for granted — we seem to be moving away from having DVD drives as standard equipment in the name of streaming and ultra-thin laptops and tablets. Myst also came at a time when color monitors were very new to the Mac world and graphics cards came with very little additional memory. This meant that full-motion, high-resolution video was still far off. Now we have graphics processors with more horsepower than the CPUs in the same machine, and companies like Nvidia and AMD are finding new markets providing GPUs for machine learning and AI processing.

And software such as Photoshop and QuickTime was very much v1.0, barely able to keep up with the demands of the two brothers who created the game. Creating the three-dimensional images wasn’t easy: rendering took hours per image because of software and hardware limitations.

And it especially wasn’t easy because the internet hadn’t yet taken off: the Myst dev team had to resort to “tire net,” meaning driving the latest builds around on removable media, probably all of 100MB in capacity, and delivering them to various team members.

The Miller brothers would also star as actors in the video segments that a player would uncover in the game itself.

Myst was also ahead of its time when it came to non-linear storytelling: we have since had various feature films constructed this way, such as Sin City in 2005 and Pulp Fiction, just to name a few. In a long interview with Ars a few years ago, Rand Miller spoke about how real life is all about embedded stories, and Myst was the first time a game used this technique to make it more realistic and compelling. It was as if the made-up world was talking back to you, the gamer, directly. Again, we now take this very much for granted in modern games.

So I am glad that after all these years Myst is receiving some recognition, even if it is in a quirky Rochester museum, and even if none of my aging PCs can run it because they aren’t old enough. But if this essay has piqued your interest and you want to run Myst for yourself, act now and offer to pay the FedEx delivery and it could be yours. I will pick one reader to get the 3-CD package of Myst and its successor games — if you have a vintage machine that is old enough to run it.

The battlefield smartphone: a progress report

Thaddeus Grugq’s latest newsletter opines on the role of the smartphone in how warfare is reported by the media, calling it a revolution in military media relations. Things have certainly changed since battlefield reporters first began covering wars: events are now posted in near-real-time, with streaming color video transmitted via social media networks and viewed around the world, shrinking the distance between the war zone and the audience. “The information environment is truly beyond the control of the military,” he writes.

This is perhaps the ultimate in media disintermediation. There are no gatekeepers: everyone with a smartphone and a YouTube channel is now a “citizen journalist” with a ready-made audience.

It isn’t just for the reporters: there are benefits for smartphone-toting warfighters as well. Plenty of articles have documented how soldiers have exploited smartphones over the past several years, including this one about what is going on in Israel. Phones enable the troops to better communicate with their families, something I am personally familiar with through my Israeli son-in-law, who has been deployed several times since the war began. When he was deployed in Gaza, his regular phone didn’t work, so it was always stressful. But when he was deployed inside Israel, he was in touch with us, which seemed surreal. Even foxholes now have Wifi.

And of course smartphones and citizen journalists aren’t restricted to the war zone either: witness the coverage of the riots during the Ferguson summer of 2014 and this spring’s college encampments. Some of that reporting was better than the mainstream media’s, to be sure. That link goes to a piece I wrote during that summer exploring the concept of citizen journalism.

But we have crossed a Rubicon of sorts with the Israeli government literally shuttering Al Jazeera’s Jerusalem studios this week. My daughter, who has been living there for many years, and I disagree on this action (she is in favor of the shutdown; I think the network should be allowed to continue to broadcast). Imagine if Biden were to shutter Newsmax. Or if police raided a newspaper in Kansas. (Wait, that did happen last year.) For many years, I watched Al Jazeera English’s early morning news coverage. It was mostly fair. I haven’t seen much since the war began last October, and I am not sure how I would react to hearing misinformation being broadcast now.

The record of independent journalism in the Israel-Hamas war is a difficult one, because no one can really do research. But imagine if nearly 100 journalists had been killed by the US in one of our recent wars — that is the current tally of those killed in Gaza and the West Bank, according to the CPJ. None of these people were engaged in any military capacity, at least according to the CPJ’s documentation. And Israel has also blocked journalists from entering Gaza, making matters more difficult.

Let’s look at the coverage of the college protests. We saw the furniture barricades at Hamilton Hall on Columbia’s campus: is that a peaceful protest? Did the police act responsibly? Even with all the live streams, including some from police bodycams, it is hard to say. Now imagine having very limited access to what is going on (which some colleges are trying to arrange). For all the real-time streaming, the fog of war becomes very thick indeed.

I wrote after the Ferguson riots that if we are going to be a shining example of a working democracy, we need a strong and independent press that can document police abuses. Otherwise, we are no better than the countries we criticize for trumping up charges and wrongly arresting people. The same is true for wartime journalism.

Managing your identity theft protection

World Password Day is Thursday; I know all of my readers are gearing up for major parties to celebrate. What, you don’t know about this day in flackery? Read on.

I know my inbox runneth over with WPD PR pitches. Perhaps you have already planned your day, such as noting yet another account of yours that has been breached? Another chance to reuse that password from 1992? Time to get another password manager other than LastPass? Or perhaps just have a cupcake decorated with ones and zeros? (Image credit: Google’s Gemini)

Here is how I am celebrating: I am actually reviewing the two free identity protection services that I have been granted, thanks to two recent and massive data breaches. One is from the credit bureau Experian, the other from a company called IdentityDefense.com. Normally, these outfits charge anywhere from $10 to $30 a month, and in the past I have not been motivated to use them, or any other such service. Here is the problem: being a privacy-paranoid person, I don’t want to give out any of my numbers. Yet to sign up for these services, you have to lay it all out there: SSN, birth date, previous addresses, driver’s license, phone numbers and so forth.

Some things you might want to know: my wife and I have had spurious credit card charges over the years — one just recently where someone repeatedly tried to charge a rideshare in San Francisco. And I think her credit is still frozen (although I don’t recall when we froze it or if we ever unfroze it).

The dashboard for IdentityDefense looks like this:

You’ll notice that it shows a bunch of dark web alerts (places where passwords have been collected by some baddie after a breach), my credit score (nice), and a bunch of other stuff. The alerts all date from when I initiated the service last month and haven’t been updated since. Some of these alerts are less than meaningful, such as the breach of Xss.js found in May of 2018 or the one called Combolist_bundles_solenya from December of 2017. I have no idea what these were, and if I actually wanted to change my password, where I would go to do so. On some of the other dark web listings, the breach identified an actual website where I never had an account. So right away, you can see that this information isn’t very helpful.

One thing that IdentityDefense does have is a way to file online credit freezes for the three credit agencies. You could probably find the web pages for these on your own, but still, it is nice to have this all here in one place.

Let’s look at the Experian IdentityWorks dashboard. It is less than useful:

This is because almost everything you want to know about requires a lot of clicking around. For example, you see the “CreditLock” panel — that is slightly more than a freeze, because you can lock and unlock it in real time, though of course it covers just Experian. When you find your way to the dark web alert report, you will also see a lot of useless data, such as an email address for me that I have never used, attached to my actual phone number. One alert had both the right phone and email for a breach from Apollo.io in July 2018, a company I had never heard of, and when I tried to reset my password on their site, it claimed no one with that email has an account.

There is another service, from CyberSixGill.com, that businesses use to manage dark web and other threats; I have used it from time to time, and wrote a white paper for the company a few years ago. That paper spoke to exactly this situation: breach metadata that isn’t complete enough, or of high enough quality, to be actionable. I wrote that you should be able to visualize the context of the threat, figure out where you were compromised, and know what to do to prevent something similar from happening in the future. That is still very much the case.

And if you are in the market for one of these services, you can read Paul Bischoff’s hands-on review of these and other services here on Comparitech. He puts them through more rigorous testing, and recommends services depending on how much of your life you want to divulge and then protect, and how complex a financial situation you might have.

So you should know by now that when something is free, it may or may not have any value to you. That is certainly the case with these protect-after-breach services. Far better to have strong (long and complex) passwords that are unique and managed by a service other than LastPass (I use Zoho Vault, which is free and does have value).
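If you want the “long and complex” part without thinking about it, any decent password manager will generate these for you. Here is a minimal sketch of the same idea in Python; the 20-character length and the character set are my own choices, not any manager’s defaults:

```python
import secrets
import string

def make_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets (not random) uses the OS's cryptographic randomness source
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

The point is less the code than the habit: generate a fresh one of these per site and let the manager remember it, so one breached site never unlocks another.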

And if you are still in the mood to celebrate WPD, this comment from a security nerd from 2018 is instructive: “Happy WorldPasswordDay. Or in 90 days, WorldPassword1 Day.” Last year, I wrote: “Maybe on WPD in 2024 we can finally break out the bubbly and celebrate their actual demise.” Nope, not yet, put that bottle back in the fridge.

Beware of the pink slime website

Jack Brewster built his own hyperlocal news website in a couple of days with a grand total investment of $105. What is significant is how he accomplished this: he used the funds to hire a programmer he never met. Although Brewster had no other specialized expertise, he was able to launch a fully automated, AI-generated “pink slime” site capable of publishing thousands of articles a day. What is scary is that he could tune the AI to whatever partisan bent he wanted, and nearly all of the articles were rewritten, without credit, from legitimate news sources. Brewster is a reporter for the Wall Street Journal and describes his process here. “The appearance of legitimacy is everything online, and pink-slime websites are a serious menace,” he concluded.

This is the first time I have heard the term. It is certainly evocative, and it dates back a few years. I last wrote about the phenomenon in the pre-AI era, when actual people were being paid close to nothing to create this so-called content. That link has a bunch of resources to help you spot these fakes, but as AI gets better at sounding like some overblown windbag commentator, it will certainly get harder to tell what is real and what isn’t.

Apparently, slime pays. Brewster’s programmer has built hundreds of these types of slimery, and is one of many, many people who advertise their services on Fiverr and other employment-as-a-service websites. What they are doing isn’t (yet) illegal, but it makes me (and Brewster, for that matter) uncomfortable. Brewster set up his site behind a paywall, but the WSJ piece has a screencap where you can see what it looks like.

Speaking of Fiverr, long ago and in a galaxy far, far away I set up my own site to sell my freelancing services. Needless to say, I had no takers. My rate was a lot higher than the programmer Brewster hired for his website.

Brewster does misinformation tracking for a living, so it is somewhat ironic that he paid to produce his own slime site. His operation, Newsguardtech.com, has tracked more than a thousand slimy sites, and offers browser extensions and various other tools to rate news sites, both slimy and (supposedly) legit ones.

Of course, that isn’t the only development of genAI content. This movie trailer looks so airbrushed that it is hard to watch. One reviewer wrote:

It is not clear whether the trailer is bouncing between different characters, or if TCL has been unable to figure out how to keep them consistent between scenes. The lip-synching is wildly off, the scenes are not detailed, walking animations do not work properly, and people and environments warp constantly.

All I can say is that this is one bad movie trailer, and I am sure an even worse movie.

I guess it is a testament to the progress of genAI that we have come so far, so fast. And perhaps this is yet another tightening of the noose around my own neck, or an indication of how astronomical my pay rates really are (at least when seen in this AI/Fiverr context).

Dark Reading: Electric vehicle charging stations still have major cybersecurity flaws

The increasing popularity of electric vehicles isn’t just a boon for gas-conscious consumers, but also for cyber criminals who focus on using their charging stations to launch far-reaching attacks. This is because every charging point, whether inside a private garage or in a public parking lot, is online and running a variety of software that interacts with payment systems and the electric grid, along with storing driver identities. In other words, they are an Internet of Things (IoT) software sinkhole.

In this post for Dark Reading, I review some of the issues surrounding deployment of charging stations, what countries are doing to regulate them, and why they deserve more attention than other connected IoT devices such as smart TVs and smart speakers.

Forget TikTok bans. Think about connected Chinese cars.

This week our Congress is crafting legislation to remove TikTok from our lives. It is as misplaced as Nancy Reagan’s “Just Say No to Drugs” campaign — and perhaps as empty a gesture. Yes, there are real issues with all that social media metadata ending up on some Chinese hard drive, and the notion that ByteDance can separate its US operations and clouds from its Chinese ones shows how little our lawmakers understand technology.

Instead, I would like you to think about the following companies: Nio, Inceptio, XPeng and Zeekr. Ever heard of any of them? They are all major Chinese EV companies, and all of them pose a much bigger threat to our data privacy and national security than TikTok. By way of reference, China has hundreds of car makers, all of which are obligated to transmit real-time data to their government. Now these companies want to sell their cars here and are doing road tests.

Last fall, another bipartisan group of lawmakers sent letters to these and other Chinese EV makers, wanting more transparency about the data they collect in their cars. I haven’t seen the responses, but I would guess the truthful answer is “we collect a lot of stuff that we aren’t going to tell you about, and we have to share it with the CCP.”

Last week, the Commerce Department asked for public comments as part of its effort to craft its own series of regulations. The department is investigating the national security risks of EVs and other connected vehicles, and the potential supply chain impacts of these technologies. Interestingly, it is finally acting on a Trump Executive Order, making this another bipartisan effort. The document linked above asks for a lot of details about obvious data collection methods. If I were running a Chinese car company, I would think about designing systems that are less obvious. One of the things these Chinese car makers are quickly learning is how to become better software companies, thanks to the Tesla business model. (Tesla also makes and sells its cars in China, BTW.)

While there are hundreds of millions of US TikTok users, some of whom are adults, the threat from car metadata is much more pernicious, especially when it can be paired with phone location data from passengers sitting in the same vehicle. What they both have in common is that all this data is being collected without the user’s knowledge, consent, or understanding of who is actually collecting it.

Those phones have been recording our movements for quite some time, without any help from China. There are so many stories about tracking the jogging routes of US service members at foreign military bases, or tracking a spouse’s movements, or figuring out where CIA employees stop for lunchtime assignations near Langley, etc. But that pales in comparison to what a bunch of CPUs and scanners sitting under the hood can accomplish on their own.

Remember war driving? That term referred to someone in a car with a Wifi scanner who could hack into a nearby open network. That seems so quaint now that a car could be doing all the work without the need for an actual human occupant. I guess I will go back to watching a few Taylor vids on TikTok, at least until the app is removed by Congress. In the meantime, you might want to review your own location services settings on your phones.

The coming dark times for tech won’t be anything like the 2000s

My former colleague Dave Vellante has written a nice comparison of the current tech contraction with the dot-com bust of 2000. He makes interesting points about several factors, such as the roles played by Netscape and OpenAI as innovators and by Cisco and Nvidia as major players, the stock market bubbles, and the risks and rewards along the way. However, he is missing one critical element: the population of tech workers has been shrinking and the pace of layoffs is increasing. And the way people are laid off now differs from back then in some big ways.
Granted, back in 1999-2000 there were fewer tech workers overall (as an example, Microsoft went from around 40,000 employees in 2000 to more than 200,000 today, and Amazon grew from a few thousand to more than a million), and many of the tech companies were small, in some cases very small. Another difference is the pace of the layoffs: back then, they happened quickly, while today’s cuts have been rolling on since the pandemic, and in bigger numbers by comparison.
In the past few years there have been several rounds of layoffs at Spotify, ByteDance, Amazon, Twilio, LinkedIn, SecureWorks, Microsoft, Meta, and Twitter, which together added tens of thousands to the unemployment lines. And sure, plenty of startups that even got their Series A rounds have gone under in the past couple of years — that is to be expected. But the current contractions are happening at established companies going through their first serious cuts.
Will some of these folks start their own companies? Sure. But tens of thousands? Not so sure.
But part of the problem — perhaps most of it, apart from lower business demand in the tech sector — is the way we are all returning to work in the spaces previously known as our offices. Back in the midst of the pandemic, remote work took on new relevance and meaning, and caught on quickly around the world in many different ways, some good and some bad. Take Slack, for example: it went 100% remote back in 2020. Other tech companies, such as Google, were less enthusiastic. And what I have seen is that these less enthusiastic companies were among the first to revoke home-working policies and mandate that people return to one of their offices.
Early on in the pandemic, I put together this pod with my partner Paul Gillin about things for the newly minted home worker to consider: practical suggestions on what equipment to purchase and how best to secure your home. For a somewhat different treatment, I wrote this blog for Avast on how to craft equitable policies to encourage and evaluate home workers. Those pieces seem rather quaint now, and they assumed that once all this remote stuff was unleashed, we would stay that way.
That is no longer the case. Four years later, many tech workers are being told to return to their offices, and the changes are confusing as companies try to adjust and populate their expensive downtown real estate. This makes no sense to me, and the latest dictums from Dell (for example) are guaranteed to cost it more people, which could be the hidden reason behind them. It is almost as if we forgot the productivity gains made during Covid when people worked from home. Or as if companies were eager to see their workforce back in those awful bullpens where everyone was on headsets.
The return to the office says one thing about tech: the industry has done a lousy job of developing middle managers, who are insecure about handling underlings they can’t see or be physically near. It really is a shame: all this remote-access tooling has been developed over the decades, and the one group of companies you would think would figure it out is first in line to recall its staff.
Also gone from today’s tech offices are some of the lavish benefits that were put in place to attract talent. Anyone getting free massages, catered meals and taking yoga classes these days? It would be an interesting cohort for some research project.
Finally, there is my own cohort — tech journalists, who are being laid off once again in this latest cycle. The difference between now and 20-some years ago is that back then we had printed magazines supported by millions in ad revenues to pay the way. Then the web wiped out that business model, and giants such as PC Week and Infoworld went scrambling. Some of the large tech-oriented websites such as Vice have shut down, and I am sure more will follow.
Yes, AI is exciting, and there is a lot of work being done — even by humans — in the field. But it requires real capital and real brainpower, not just sock puppets and a cute dot-com name. Or at least, I hope so. It also requires building trust with your remote employees: the best ones will eventually migrate to companies with more liberal remote policies.

Fighting election misinformation

Last week I wrote about the looming AI bias in the HR field. Here is another report about the potential threats of AI in another arena. But first, do you know what the states of California, Georgia, Nevada, Oregon, and Washington have in common? Sadly, all of them have election offices that received suspicious letters in the mail last year. This year is already ramping up, and many election workers have received death threats just for trying to do their — usually volunteer — jobs. Many have quit, after logging decades of service.

I have been following election misinformation campaigns for several years, such as writing about whether the 2020 election was rigged or not for Avast’s blog here. By now you should know that it wasn’t. But this latest round of physical threats — many of which have been criminally prosecuted — is especially toxic when fueled by AI misinformation campaigns. The stakes are certainly higher, especially given the number of national races; CISA has released this set of guidelines.

And the election threats aren’t just a domestic problem. This year will see more than 70 elections in 50 countries, many of them where people are voting for their heads of state, including India, Taiwan, Indonesia and others. Taken together, 2024 will see a third of the world’s population enter the voting booth. Some countries have seen huge increases in newly online voters: India’s last national election was in 2019, and since then it has added 250 million internet users, thanks to cheap smartphones and mobile data plans. That could spell difficulties for voters encountering online misinformation for the first time.

All this comes at a time when social media trust and safety teams have all but disappeared from the landscape; indeed, the very name for these groups will be a curiosity a few years from now. Instead, hate mongers and fear mongers celebrate their attention and unblocked access to the network. (To be fair, Facebook/Meta announced a new effort to fight deepfakes on WhatsApp just after I posted this.)

While the social networks were busily disinvesting in any quality control, more and better AI-laced misinformation campaigns have sprouted, thanks to new tools that can combine voices and images with clickbait headlines that can draw attention. That is not a good combination. Many of the leading AI tech firms — such as OpenAI and Anthropic — are trying to fill the gap. But it is a lopsided battle.

While it is nice that someone has taken up the cause for truthiness (to use a phrase from that bygone era), I am not sure that giving AI firms this responsibility is going to really work.

An early example happened in the New Hampshire presidential primary, where voters reported receiving deepfake robocalls with President Biden’s voice. The account used for this activity was subsequently banned. Expect things to get worse. Deepfakes such as this have become as easy to craft as a phishing attack (and the two are often combined), and thanks to AI they are getting more realistic. It is only a matter of time before these attacks spill over into influencing the vote.

But deepfakes aren’t the sole problem. Garden-variety hacking is a lot easier. Cloudflare reported that from November 2022 to August 2023, it mitigated more than 60,000 daily threats to the US elections groups it surveyed, including numerous denial-of-service attacks. That stresses the security defenses of organizations that were never at the forefront of technology, something that CISA and others have tried to help with through various tools and documents, such as the one mentioned at the top of this post. And now certain elements of Congress want to defund CISA just in time for the fall elections. Bad idea.

Contributing to the mess is that the media can’t be trusted to provide a safe harbor for election results. Look what happened to the Fox News decision team after it called Arizona — correctly — for Biden back in 2020: many of its staff were fired for doing a solid job. And while it is great that Jon Stewart is back leading Comedy Central’s Monday night coverage, I don’t think you are going to see much serious reporting there (although his debut show last week was hysterical and made me wish he were back five days a week).

Of course, it could be worse: we could be voting in Russia, where no one doubts what the outcome will be. The only open question is whether its czar-for-life will get more than 100% of the vote.

The looming AI bias in hiring and staffing decision-making

Remember when people worked at the same job for most of their lives? That was the general practice back in the 1950s and 1960s. My dad worked for the same employer for 30 or so years. I recall his concern when I changed jobs two years out of grad school, warning me that it wouldn’t bode well for my future prospects.

So here I am, ironically, now 30-plus years into working for my own business. But today’s high-frequency job hopping has accelerated the number of resumes that flood a hiring manager, which in turn has motivated many vendors to offer automated tools to screen them. You might not have heard of companies in this space such as HireVue, APTMetrics, Curious Thing, Gloat, Visier, Eightfold and Pymetrics.

Add two things to this trend. First is the rise in quiet quitting, where employees put in only the minimum effort at their jobs. The concept is old, but the increase is significant. Second, and the bigger problem, is another irony: we now have a very active HR market segment that is fueled by AI-based algorithms. The combination is both frustrating and toxic, as I learned from reading a new book entitled The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now. It should be on your reading list. It is by Hilke Schellmann, a journalism professor at NYU, and it examines the trouble with using AI to make hiring and other staffing decisions. Schellmann takes a deep dive into the four core technologies now being deployed by HR departments around the world to screen and recommend potential new job candidates, along with other AI-based tools that come into play to evaluate employees’ performance and inform judgments about raises, promotions, or firings. It is a fascinating, and scary, look at this industry.

Thanks to digital tools such as LinkedIn, Glassdoor and the like, applying for an opening has never been easier: just a few clicks and your resume is sent electronically to a hiring manager. Or so you thought. Nowadays, AI automates much of the process: automated resume screeners, automated social media content analyzers, gamified qualification assessments, and one-way video recordings that are analyzed by facial and tone-of-voice AIs. All of them have issues, aren’t completely understood by employers or prospects, rest on spurious assumptions, and can’t always quantify the important aspects of a potential recruit that would ensure success at a future job.

What drew me into this book was that Schellmann does plenty of hands-on testing of the various AI services, using herself as a potential job seeker or staffer. For example, in one video interview, she answers her set questions in German rather than English, and still receives a high score from the AI.

She covers all sorts of tools, not just ones used to evaluate new hires, but others that fit into the entire HR lifecycle. And the “human” part of HR is becoming less evident as the bots take over. By take over, I don’t mean the Skynet path, but relying on automated solutions does present problems.

She raises this question: “Why are we automating a badly functioning system? In human hiring, almost 50 percent of new employees fail within the first year and a half. If humans have not figured out how to make good hires, why do we think automating this process will magically fix it?” She adds, “An AI skills-matching tool that is based on analyzing résumés won’t understand whether someone is really good at their job.” What about tools that flag teams that have had high turnover? That could have two polar-opposite causes: a toxic manager, or a tremendous one who is so good at developing talent that people leave for greener pastures.

Having run my own freelance writing and speaking business for more than 35 years, I have a somewhat different view of the hiring decision than many people. You could say that I was only infrequently hired for full-time employment, or that I face the hiring decision multiple times a year, whenever I get an inquiry from a new client or from a previous client who is now working for a new company. Some editors I have worked with for decades as they have moved from pub to pub, for example. They hire me because they are familiar with my work and value the perspective and analysis that I bring to the party. No AI is going to figure that out anytime soon.

One of the tools that I came across in the before-AI times is the DISC assessment, which, like the Myers-Briggs, is a psychological tool that has been around for decades. I wrote about my test when I was attending a conference at Ford Motor Co. back in 2013, where they were demonstrating how they use the tool to figure out the type of person most likely to buy a particular car model. Back in 2000, I wrote a somewhat tongue-in-cheek piece about how you can use the Myers-Briggs to match your personality with that of your computing infrastructure.

But deciding if someone is an introvert or an extrovert is a well-trod path, with plenty of testing experience accumulated over the decades. These AI-powered tools don’t have much of this history and are based on shaky data sets laden with all sorts of assumptions. For example, HireVue’s facial analysis algorithm was trained on video interviews with people already employed by the company. That sounds like a good first step, but having done one of those one-way video interviews — basically where you are just talking to the camera rather than interacting with an actual human asking the questions — I can say you aren’t getting any feedback from your interviewer, none of the subtle facial or vocal cues that are part of normal human discourse. Eventually, in 2021, the company stopped using both tone-of-voice and facial-based algorithms entirely, claiming that natural language processing had surpassed them.

Another example is capturing how often you use first-person singular versus plural pronouns during the interview: I vs. we, for example. Is this a proxy for what kind of team player you might be? HireVue says it bases its analysis on thousands of signals such as this, which doesn’t make me feel any better about its algorithms. Just because a model has multiple parameters doesn’t necessarily make it better or more useful.
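Part of what bothers me is how crude such a signal is. As a rough illustration — my own sketch, not anything HireVue has published — a pronoun-counting feature can be computed in a few lines, which should tell you how little it can possibly know about job performance:

```python
import re

def pronoun_ratio(transcript: str) -> float:
    """Fraction of first-person pronouns that are singular (I/me/my)
    rather than plural (we/us/our). A hypothetical 'team player' feature."""
    words = re.findall(r"[a-z']+", transcript.lower())
    singular = sum(w in {"i", "me", "my", "mine"} for w in words)
    plural = sum(w in {"we", "us", "our", "ours"} for w in words)
    total = singular + plural
    # Avoid division by zero for transcripts with no first-person pronouns.
    return singular / total if total else 0.0

print(pronoun_ratio("I think we did well, but I led the effort."))
# prints 0.6666666666666666
```

A candidate describing a genuine team effort accurately ("I led the testing, we shipped on time") scores as more "I-focused" than someone vaguely crediting the group, which is exactly the kind of spurious assumption the book keeps running into.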

Then there is the whole dust-up over built-in AI bias, something that has been written about for years, going back to when Amazon first unleashed its AI hiring tool and found that it selected white men more often. I am not going there in this post, but her treatment runs deep and shows the limitations of using AI, no matter how many variables the modelers try to correlate. What is important, something Mark Cuban touches on frequently in his posts, is that diverse groups of people produce better business results. And that diversity can be defined in various ways: not just race and gender, but also people with disabilities, both mental and physical. The AI modelers have to figure out — as all modelers do — what the connection is between playing a game or making a video recording and actual job performance. You need large and diverse training samples to pull this off, and even then you have to be careful about your own biases in constructing the models. She quotes one source who says, “Technology, in many cases, has enabled the removal of direct accountability, putting distance between human decision-makers and the outcomes of these hiring processes and other HR processes.”

Another dimension of the AI personnel assessment problem is the tremendous lack of transparency. Prospects don’t know what the AI-fueled tests entail, how they were scored, or whether they were rejected from a job because of a faulty algorithm, bad training data, or some other computational oddity.

Step back and consider the sheer quantity of data that can be collected by an employer: keystrokes on your desktop, website cookies that record the timestamps of your visits, emails, Slack and Teams message traffic, even Fitbit tracking stats. It is very depressing. Do these captured signals reveal anything about your working habits, your job performance, or anything, really? HR folks are relying more and more on AI assistance, and can now monitor just about every digital move an employee makes in the workplace, even when that workplace is the dining room table and the computer is shared by the employee’s family. (There are several chapters on this subject in her book.)

This book will make you think about the intersection of AI and HR, and while there is a great deal of innovation happening, there is still much work to be done. As she says, context often gets lost. Her book will provide plenty of context for you to think about.