The coming dark times for tech won’t be anything like the 2000s

My former colleague Dave Vellante has written a nice comparison of the current tech contraction with the dot-com bust of 2000. He makes interesting points about several factors, such as the roles played by Netscape and OpenAI as innovators and Nvidia and Cisco as major players, the stock market bubbles, and the risks and rewards along the way. However, he is missing one critical element: the population of tech workers has been shrinking and the pace of the layoffs is increasing. And the way people are being laid off now differs in some big ways from back then.
Granted, back in 1999-2000 there were fewer tech workers overall (as an example, Microsoft went from around 40k employees in 2000 to 200k today, and Amazon grew from a few thousand to more than a million), and many of the tech companies were small, in some cases very small. The big difference between then and now is the pace of the layoffs. Back then, they happened quickly. Now, tech companies have been laying off workers steadily since the pandemic, and in far bigger numbers.
In the past few years there have been several rounds of layoffs at Spotify, ByteDance, Amazon, Twilio, LinkedIn, SecureWorks, Microsoft, Meta, and Twitter, which have added tens of thousands to the unemployment lines. And sure, plenty of startups that even got their Series A rounds went under in the past couple of years — that is to be expected. But the contemporary layoffs are from established companies that are having their first serious contractions.
Will some of these folks start their own companies? Sure. But tens of thousands? Not so sure.
But part of the problem — perhaps most of it, apart from slowing business demand in the tech sector — is the way we are all returning to work in the spaces previously known as our offices. In the midst of the pandemic, remote work took on new relevance and meaning, and caught on quickly around the world in many different ways, some good and some bad. Take Slack, for example: it went 100% remote back in 2020. Other tech companies, such as Google, were less enthusiastic. And what I have seen is that these less enthusiastic companies were among the first to revoke home-working policies and mandate that people return to one of their offices.
Early on in the pandemic, I put together this podcast with my partner Paul Gillin about some things for the newly minted home worker to consider. Those were practical suggestions on what equipment to purchase and how to best secure your home. For a somewhat different treatment, I wrote this blog for Avast on how to craft equitable policies to encourage and evaluate home workers. Those pieces seem rather quaint now, and they assumed that once all this remote stuff was unleashed, we would stay that way.
That is no longer the case. Four years later, many tech workers are being told to return to their offices. And the changes are confusing as companies try to adjust and populate their expensive downtown real estate. This makes no sense to me, and the latest dictums from Dell (for example) are guaranteed to lose them more people, which could be the hidden reason for them. It is almost as if we forgot the productivity gains during Covid when people worked from home. Or as if companies were eager to see their workforce sitting in those awful bullpens where everyone is on headsets.
The return to the office says one thing about tech: these companies have done a lousy job of developing middle managers, who are insecure about handling underlings they can't see or be physically near. It really is a shame: after all the remote access tooling that has been developed over the decades, the one group of companies you would think would figure this out is first in line to recall its staff.
Also gone from today’s tech offices are some of the lavish benefits that were put in place to attract talent. Anyone getting free massages, catered meals and taking yoga classes these days? It would be an interesting cohort for some research project.
Finally, there is my own cohort — tech journalists, who are being laid off once again in this latest cycle. The difference between now and 20-some years ago is that back then we had printed magazines supported by millions in ad revenues to pay the way. Then the web wiped out that business model, and giants such as PC Week and Infoworld went scrambling. Some large tech-oriented websites such as Vice have shut down, and I am sure more will follow.
Yes, AI is exciting, and there is a lot of work being done — even by humans — in the field. But it requires real capital and real brainpower, not just sock puppets and a cute dot-com name. Or at least, I hope so. And it requires building trust with your remote employees: the best ones will eventually migrate to companies with more liberal remote policies.

Fighting election misinformation

Last week I wrote about the looming AI bias in the HR field. Here is another report about the potential threats of AI in a different arena. But first, do you know what the states of California, Georgia, Nevada, Oregon, and Washington have in common? Sadly, all of them have election offices that received suspicious letters in the mail last year. This year is already ramping up, and many election workers have received death threats just for trying to do their — usually volunteer — jobs. Many have quit after logging decades of service.

I have been following election misinformation campaigns for several years, including writing about whether the 2020 election was rigged for Avast's blog here. By now you should know that it wasn't. But this latest round of physical threats — many of which have been criminally prosecuted — is especially toxic when fueled by AI misinformation campaigns. The stakes are certainly higher, especially given the number of national races, and CISA has released this set of guidelines.

And the election threats aren't just a domestic problem. This year will see more than 70 elections in 50 countries — many of them where people are voting for their heads of state, including India, Taiwan, Indonesia and others. Taken together, 2024 will see a third of the world's population enter the voting booth. Some countries have seen huge increases in internet users: India's last national election was in 2019, and since then it has added 250 million internet users, thanks to cheap smartphones and mobile data plans. That could spell difficulties for voters who are newly online.

All this comes at a time when social media trust and safety teams have all but disappeared from the landscape; indeed, the very name for these groups will be a curiosity a few years from now. Instead, hate mongers and fear mongers celebrate their attention and unblocked access to the network. (To be fair, Facebook/Meta announced a new effort to fight deepfakes on WhatsApp just after I posted this.)

While the social networks were busily disinvesting in any quality control, more and better AI-laced misinformation campaigns have sprouted, thanks to new tools that combine voices and images with clickbait headlines to draw attention. That is not a good combination. Many of the leading AI tech firms — such as OpenAI and Anthropic — are trying to fill the gap. But it is a lopsided battle.

While it is nice that someone has taken up the cause for truthiness (to use a phrase from that bygone era), I am not sure that giving AI firms this responsibility is going to really work.

An early example happened in the New Hampshire presidential primary, where voters reported receiving deepfake robocalls with President Biden's voice. As a result, the account used for this activity was subsequently banned. Expect things to get worse. Deepfakes such as this have become as easy to craft as a phishing attack (and the two are often combined), and thanks to AI they are getting more realistic. It is only a matter of time before these attacks spill over into influencing the vote.

But deepfakes aren't the sole problem. Garden-variety hacking is a lot easier. Cloudflare reported that from November 2022 to August 2023, it mitigated more than 60,000 daily threats to the US election groups it surveyed, including numerous denial-of-service attacks. That stresses the security defenses of organizations that were never on the forefront of technology, something that CISA and others have tried to help with through various tools and documents, such as the one mentioned at the top of this post. And now certain elements of Congress want to defund CISA just in time for the fall elections. Bad idea.

Contributing to the mess is that the media can't be trusted to provide a safe harbor for election results. Look what happened to the Fox News decision team after it called — correctly — Arizona for Biden back in 2020. Many of its staff were fired for doing a solid job. And while it is great that Jon Stewart is back leading Comedy Central's Monday night coverage, I don't think you are going to see much serious reporting there (although his debut show last week was hysterical and made me wish he were back five days a week).

Of course, it could be worse: we could be voting in Russia, where no one doubts what the outcome will be. The only open question is whether its czar-for-life will get more than 100% of the vote.

The looming AI bias in hiring and staffing decision-making

Remember when people worked at jobs for most of their lives? It was general practice back in the 1950s and 1960s. My dad worked for the same employer for 30 or so years. I recall his concern when I changed jobs after two years out of grad school, warning me that it wouldn’t bode well for my future prospects.

So here I am, ironically now 30-plus years into working for my own business. But today's high-frequency job hopping has also accelerated the number of resumes that flood a hiring manager, which in turn has motivated many vendors to build automated tools to screen them. You might not have heard of the companies in this space, such as HireVue, APTMetrics, Curious Thing, Gloat, Visier, Eightfold and Pymetrics.

Add two things to this trend. First is the rise of quiet quitting, or employees who just put the minimum into their jobs. The concept is old, but the increase is significant. Second, and the bigger problem, is another irony: we now have a very active HR market segment that is fueled by AI-based algorithms. The combination is both frustrating and toxic, as I learned from reading a new book entitled The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now. It should be on your reading list. It is by Hilke Schellmann, a journalism professor at NYU, and it examines the trouble with using AI to make hiring and other staffing decisions. Schellmann takes a deep dive into the four core technologies now being deployed by HR departments around the world to screen and recommend potential new job candidates, along with other AI-based tools that come into play to evaluate employees' performance and inform judgments about raises, promotions, or firings. It is a look at this industry that is both fascinating and scary.

Thanks to digital tools such as LinkedIn, Glassdoor and the like, sending in your resume for an opening has never been easier. Just a few clicks and your resume goes electronically to a hiring manager. Or so you thought. Nowadays, AI is used to automate the process: automated resume screeners, automated social media content analyzers, gamified qualification assessments, and one-way video recordings that are analyzed by facial and tone-of-voice AIs. All of them have issues, aren't completely understood by employers or prospects, rest on spurious assumptions, and can't always quantify the aspects of a potential recruit that would ensure success at a future job.
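To make the spurious-assumptions point concrete, here is a minimal sketch of the kind of naive keyword screener the book critiques. Everything here is hypothetical — the job keywords, the threshold, the matching rule — and it is not any vendor's actual algorithm, which would be proprietary and far more elaborate:

```python
# Illustrative sketch only: a naive keyword-based resume screener.
# The keywords and threshold below are hypothetical examples.

REQUIRED_KEYWORDS = {"python", "kubernetes", "agile"}  # from a hypothetical job posting

def screen_resume(resume_text: str, threshold: float = 0.67) -> bool:
    """Pass the resume only if enough required keywords appear verbatim."""
    words = set(resume_text.lower().split())
    hits = len(REQUIRED_KEYWORDS & words)
    return hits / len(REQUIRED_KEYWORDS) >= threshold

# A candidate who writes "container orchestration" instead of "kubernetes"
# loses a keyword hit, even though the skill is the same.
print(screen_resume("Expert in Python and agile teams, deep Kubernetes experience"))  # True
print(screen_resume("Expert in Python and container orchestration at scale"))        # False
```

The second candidate describes the same skill in different words and never makes it past the filter, which is exactly the failure mode Schellmann documents.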

What drew me into this book was that Schellmann does plenty of hands-on testing of the various AI services, using herself as a potential job seeker or staffer. For example, in one video interview she replies to the set questions in German rather than English, and still receives a high score from the AI.

She covers all sorts of tools, not just ones used to evaluate new hires, but others that fit into the entire HR lifecycle. And the “human” part of HR is becoming less evident as the bots take over. By take over, I don’t mean the Skynet path, but relying on automated solutions does present problems.

She raises this question: “Why are we automating a badly functioning system? In human hiring, almost 50 percent of new employees fail within the first year and a half. If humans have not figured out how to make good hires, why do we think automating this process will magically fix it?” She adds, “An AI skills-matching tool that is based on analyzing résumés won’t understand whether someone is really good at their job.” What about tools that flag teams that have had high turnover? It could be two polar opposite causes: a toxic manager or a tremendous manager that is good at developing talent and encouraging them to leave for greener pastures.

Having run my own freelance writing and speaking business for more than 35 years, I have a somewhat different view of the hiring decision than many people. You could say either that I have faced full-time employment decisions infrequently, or that I face that decision multiple times a year, whenever I get an inquiry from a new client or a previous client now working for a new company. Some editors I have worked with for decades as they have moved from pub to pub, for example. They hire me because they are familiar with my work and value the perspective and analysis I bring to the party. No AI is going to figure that out anytime soon.

One of the tools that I came across in the before-AI times is the DISC assessment, which, like the Myers-Briggs, is a psychological tool that has been around for decades. I wrote about my test when I was attending a conference at Ford Motor Co. back in 2013. They were demonstrating how they use this tool to figure out the type of person most likely to buy a particular car model. Back in 2000, I wrote a somewhat tongue-in-cheek piece about how you can use the Myers-Briggs to match up your personality with that of your computing infrastructure.

But deciding whether someone is an introvert or an extrovert is a well-trod path, with plenty of testing experience accumulated over the decades. These AI-powered tools don't have much of this history, and are based on shaky data sets built on all sorts of assumptions. For example, HireVue's facial analysis algorithm is trained on video interviews with people already employed by the hiring company. That sounds like a good first step, but having done one of those one-sided video interviews — where you are just talking to the camera rather than interacting with an actual human asking the questions — I can say you get no feedback from your interviewer, none of the subtle facial or audio cues that are part of normal human discourse. Eventually, in 2021, the company stopped using both tone-of-voice and facial-based algorithms entirely, claiming that natural language processing had surpassed them.

Another example is counting how often you use first-person pronouns during the interview — I versus we, for example. Is this a proxy for what kind of team player you might be? HireVue says it bases its analysis on thousands of signals such as this, which doesn't make me feel any better about its algorithms. Just because a model has multiple parameters doesn't necessarily make it better or more useful.
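As a toy illustration of how simplistic such a signal can be, here is a hypothetical Python sketch of the "I versus we" feature. This is purely my own illustration of the idea, not HireVue's implementation, which is proprietary:

```python
# Toy version of a first-person-pronoun signal: what fraction of the
# candidate's first-person pronouns are singular rather than plural?
import re

def pronoun_ratio(transcript: str) -> float:
    """Fraction of first-person pronouns that are singular ('I', 'me', 'my')."""
    singular = len(re.findall(r"\b(i|me|my)\b", transcript, re.IGNORECASE))
    plural = len(re.findall(r"\b(we|us|our)\b", transcript, re.IGNORECASE))
    total = singular + plural
    return singular / total if total else 0.0

# Two candidates describing the same project:
print(pronoun_ratio("I built the pipeline and I shipped it on my own."))   # 1.0
print(pronoun_ratio("We built the pipeline and we shipped it together."))  # 0.0
```

Identical work, described with different habits of speech, lands at opposite ends of the scale. Stack thousands of features of this quality on top of each other and you do not automatically get a better model — just a more opaque one.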

Then there is the whole dust-up over overcoming built-in AI bias, something that has been written about for years, going back to when Amazon first unleashed its AI hiring tool and found it selected white men more often. I am not going there in this post, but her treatment runs deep and shows the limitations of using AI, no matter how many variables the vendors try to correlate in their models. What is important, something Mark Cuban touches on frequently in his posts, is that diverse groups of people produce better business results. And that diversity can be defined in various ways: not just race and gender, but also people with disabilities, both mental and physical. The AI modelers have to figure out, as all modelers do, what the connection is between playing a game, or making a video recording, and job performance. You need large and diverse training samples to pull this off, and even then you have to be careful about your own biases in constructing the models. She quotes one source who says, "Technology, in many cases, has enabled the removal of direct accountability, putting distance between human decision-makers and the outcomes of these hiring processes and other HR processes."

Another dimension of the AI personnel assessment problem is the tremendous lack of transparency. Prospects don't know what the AI-fueled tests entail, how they were scored, or whether they were rejected from a job because of a faulty algorithm, bad training data, or some other computational oddity.

Step back and consider the sheer quantity of data that can be collected by an employer: keystrokes on your desktop, website cookies that record the timestamps of your visits, emails, Slack and Teams message traffic, even Fitbit tracking stats. It is very depressing. Do these captured signals reveal anything about your working habits, your job performance, or anything, really? HR folks are relying more and more on AI assistance, and can now monitor just about every digital move an employee makes in the workplace, even when that workplace is the dining room table and the computer is shared by the employee's family. (There are several chapters on this subject in her book.)

This book will make you think about the intersection of AI and HR, and while there is a great deal of innovation happening, there is still much work to be done. As she says, context often gets lost. Her book will provide plenty of context for you to think about.

I find more central office lore in Seattle

I have a thing about the telephone central office (CO). I love spotting them in the wild; they give me some sense of the vast connectedness they represent, the legal wrangling that took place over their real estate, and their place in telecommunications history. That is a lot to pack into a series of structures, which is why I am attracted to them.

I wrote that six years ago and it still holds true. This week I added a new CO to my "collection" of favorite places, one that calls itself a connections museum and occupies the top floors of a working CO in an industrial area of Seattle. It is an interesting place, but the label isn't quite right: it is more an interactive time machine that will take you back over the past 100 years of telecom history. Yes, you will find working models of phones of yesteryear (such as the 1908 wall phone model shown here), but the real treat — especially for this networking geek — is the collection of electromechanical switch fabrics that were once found in every CO on the planet and are now extinct.

In the pre-TCP/IP, analog landline days, every phone had to be connected via a slender pair of copper wires from one's home (or business) to the CO. That is a lot of wire. Once a pair entered the CO premises, it was connected to these huge machines to make and receive phone calls. Much of that wire now sits unused, such as the pair in my own home. I think I used my last landline around 2002 or so.

What is both impressive and hard to comprehend in the Seattle CO is how enormous this equipment is, and how small the current digital switches and IP-based networking gear are by comparison. I have seen numerous mainframe computers, and they are no small objects. But panel frames and crossbar switches loom large and have a distinctly oily smell, which gives away their mechanical nature. Even with all of their moving parts — and there are thousands of them — these beasts worked flawlessly for decades to place our phone calls.

The other thing that becomes clear walking around the Seattle CO is how extensible phone tech was. The phone network connected gear of many different vintages — there were even examples of those quirky 1960s-era video phones, whose capabilities we now carry around in our pockets while thinking nothing of making such calls.

The place is an active test bed for the old phone tech, and numerous volunteers have devoted many hours to resurrecting it into some semblance of operation. That is quite an achievement, because the surviving documentation is incomplete, incomprehensible, or both. Trial and error and patience are important skills for making this stuff come back to life.

Museum docents will take you around the CO, patiently explain what is going on, and show you the process of completing a phone call from one phone to another. There are also phone switchboards that Ernestine would be at home operating, and visitors can do the one-ringy-dingy themselves.

One thing that I had forgotten was the importance of real estate in these COs. Back in the 2000s, when DSL technology was coming into vogue, the local phone companies weren't too happy about having competition for their communications business and tried to stop the DSL vendors from installing gear in the COs. What became obvious as they attempted to create legal roadblocks was that there was plenty of room for the new stuff: as the old crossbar switches were replaced, floor space opened up to hold a couple of 19-inch racks of digital gear. (BTW, that rack standard harks back to the 1890s.)

As I was leaving the CO, a tour group was coming in, dressed in what they called steampunk costumes (it looked more Dickensian to my untrained eye). It seemed very appropriate: people who understand the broad sweep of history and want to recall a bygone era. While I didn't need any change of clothes, I recognized kindred spirits.

Can Movable Type become a useful AI writer’s tool?

Once upon a time, when blogs were just beginning to become A Thing, the company to watch was Six Apart. It made blogging software called Movable Type. Then the world shifted to WordPress, and soon other blogging platforms turned Movable Type into the Asa Hutchinson of that particular market. (What? They are still around? Yes, and they account for about one percent of all blogs.)

Well, Asa no more, because the company has fully embraced AI in a way that even Sports Illustrated (which recently fired its human writers) would envy. If you have never written a book, you can have a ready-made custom outline in a few minutes. All it takes is a prompt and a click. You don't even have to have a fully formed idea, understand the nature of research (either pre- or post-internet), or know how to write word one. (There are other examples on their website if you want to check them out.)

Movable Type's AI creates "10 chapters spanning 150+ pages, and a whopping 35k+ words" (or so they say) of… basically gibberish. They of course characterize it somewhat differently, saying the AI output is "highly specific & well researched content." It isn't: there are no citations or links in the content. The output looks like a solid book-like product with chapters and sub-heads but is mostly vacuous drivel. The company claims it comes tuned to match your writing style, but again, I couldn't find any evidence of that. And while "each chapter opens with a story designed to keep your readers engaged," my interest waned after page 15 or so.

Perhaps this will appeal to some of you, especially those of you who haven't yet written your own roman à clef. Or who are looking to turn your online bons mots into the next blockbuster book. But I don't think so. Writing a book is hard work, and while it is not growing crops or working in a factory, you do have to know what you are doing. The labor involved helps you create a better book, and editing your own work is a learned skill. I don't think AI can provide any shortcuts, other than to produce something subpar.

I have written three books the old-fashioned way: by typing every word into Word. Two of them got published; one got shelved as the market for OS/2 fell into the cellar between the book proposal and the finished manuscript. I got tired of rewriting it (several times!) for the next big movie moment of IBM's beleaguered OS, which never happened. The two published books never made much money for anyone. But I did learn how to write a non-fiction book and, more importantly, how to write an outline that was more of a roadmap, strategy, and structure document. This is not something that you can train AI to do, at least not yet.

When I read a book, I cherish the virtual bond between me and the author, whether I read my go-to mystery fiction or a how-to business epic. I want to bathe in the afterglow of what the author is telling me, through characters, plot points, anecdotes, and stories. That is inherently human, and something that the current AI models can’t (yet) do. While MovableType’s AI is an interesting experiment, I think it is a misplaced one.

Is it time to upgrade my hearing aid?

More than five years ago, I wrote about my journey acquiring my first hearing aid. Now that I have a new insurance plan with hearing benefits, I thought it was time to take another look and see where the latest aids can help with my high-frequency hearing loss and tinnitus.

There are three aspects of your hearing that will motivate you to get an aid. First, you have some kind of hearing loss (for me, as I said, it is the higher frequencies, which is typical for older folks) and you want to hear things better. Second, there is the type of sounds you "hear" with your tinnitus, and whether you want these masked with an aid or some combination of audio processing and masker. Tinnitus can vary by time of day, by whether you have gotten enough sleep, or by stress levels. If this describes you, then ideally you want to be able to adjust the masking technology to give you the greatest comfort. Part of the issue here is that you may not want to become a DIY audiologist or software engineer.
Finally, there is how the aids interact with each other to place you in a sonic environment so you can understand what is going on around you. Given that I am completely deaf in one ear and need only one aid, this isn't relevant for me.
If you haven't read my original blog post from when I first bought my aid, now would be a good time to do that, to remind yourself of the process.

But buying the first aid may seem easy compared with the complicated journey of upgrading one. This is because the hearing aid medical-industrial complex is just that — complicated. And while there are professionals who can be helpful, you first have to know the right questions to ask, then know what the aids can and cannot do, and have a great deal of patience as the hearing-deprived patient. Oh, and be prepared to spend lots of time and money when you do get your aid.

You would think that already having an aid would mean you have already dealt with these issues. But you would be wrong. The replacement market is truly a different ball game. Being human, our hearing changes as we age. And the aid technology marches on, which means your beforetime knowledge is outdated. And now you have a standard — your existing aid — to compare things against, which introduces new complexities.

There is one other factor: you can now purchase aids over the counter. That is fine if you have a simple hearing loss, don't have to muck around with the frequency controls, and don't have a lot of tinnitus. Yes, no, and no for me, so this wasn't an option. The OTC aids generally cost less but don't include much in the way of hand-holding and servicing. This is not like buying a blender: instructions and personal demonstrations are essential to operating them. You might not like the initial fit of the instrument, or you might be confounded by its numerous settings.

The OTC aids don't make price comparisons any easier, either. When you buy an aid from an audiologist, you are actually also paying for a service contract for a period of time, and that contract may not cover all problems, or may have exceptions (like water damage).

As I mentioned previously, for the past five-plus years I have been using one of the Starkey models, and I was generally happy with it. I made an appointment with a different audiologist than the one I had been seeing for the Starkey, just to see how the two approach solving my problems. I give the new audiologist, a woman whom I will call B, props for thoroughness, knowledge, and service. She spent nearly two hours on my first visit and wanted to schedule several follow-up visits. She told me that I am her first tinnitus patient who is not a new hearing aid user. We will get to why that matters shortly.

B has worn aids since childhood, a perspective I liked. She also had some very fancy gear to test an aid's programming, which I liked as well. Think of the device an optometrist uses to determine whether your glasses prescription matches the actual optics. She puts every aid she sells through this device to ensure it is programmed properly.
Why is this important? Modern hearing aids are more software than hardware, and they can be programmed in one of three ways. First, at the factory when they are assembled, where they pick up various automatic sound-processing features (to soften noisy environments, to enhance the frequencies used in human speech, to change the microphone coverage area to the sides or in front of you, and other things).
But one piece of programming is very important to me, and that is figuring out the tinnitus masking sounds, which can be tricky to deal with. Everyone has different "ringing" sounds as part of their tinnitus affliction, some relatively simple (such as mine) and some that vary in frequency, period, and loudness. The aids do some counter-programming — meaning they produce their own sounds — to try to keep your attention away from the tinnitus; at least, that is the working theory of the moment.
Second, the audiologist can change some of the aid's programming that affects its audio processing, and can also set up pre-set conditions that you can access with the buttons on the aid or on its smartphone app. I mentioned to B that one night my wife accidentally slammed a cabinet door — the sound of which, amplified by my new aid, almost made me jump. She told me that she could program the "door slam sound" (yes, this is a thing) to soften it up.
Finally, yes, there is an app for that, and the different vendors do a varying job on their apps. The app — the third piece in this puzzle — is what you fiddle with yourself (assuming you have a smartphone and want to do this). Each aid manufacturer has different models with varying features, which you may or may not need. Tinnitus masking generally is included in the higher-end (and pricier) products, just my luck.
So B recommended that I check out the Resound models. She likes their app, which has a lot of controls, as you can see from the screenshots. This was very obvious when I compared it to the app that controls my old Starkey aid, which doesn't have as many tinnitus pre-set controls available, either to the audiologist or on the app itself.
One of the things that I will be doing over the next week or so is to try the Resound aids out in different sonic environments and see whether they are worth the cost of the new aid, or whether I can still get by with the old aid and a few simple adjustments.
I should mention one other complicating factor. My old aid had regular batteries that needed replacement every week or so. But most of the newer aids have rechargeable batteries that last about a day on a charge and probably need complete replacement every three years. I don’t mind the non-rechargeable kind but you may feel otherwise.
Then there is the matter of insurance. My insurance plan covers $2000 per aid, but this is a deceptive situation, as some audiologists don’t take insurance (such as B) because they don’t want to bother with the reimbursement. But she is very upfront about the services she provides to make sure your aids are the right choice for you, and includes several visits the first year you buy an aid from her.
The insurance issue is a vexing one. For years, my plans had no coverage, so it was easier. There are two problems: first, as I noted, limited reimbursement covers only part of the costs, which still leaves the aids expensive. Second, there is a general lack of transparency about pricing. Each audiologist sets prices independently of others, and as I mentioned, the price bundles in a certain level of service. This makes it hard to shop around. Selling OTC aids was supposed to make things more transparent, but it hasn’t.
So let’s take a step back here. The issue with the aids is what problem you are trying to solve. For example, I would like better Bluetooth fidelity (especially when I am outside, which renders my old aid almost useless), better control over the masker, and more options in general for the smartphone app. Not all aids deliver all of these features. And sometimes you don’t know what is important until you try out the aid and hear it for yourself. Or see the app’s controls and decide whether you want to spend your day fiddling with them.
After spending a few days with my new Resound aid, I decided I would stick with the tried and true Starkey. Much as I would like the latest and greatest tech, I just couldn’t justify the extra bucks. Perhaps when it finally bites the dust that will force my decision, but by then there will be something even more techy.

A brief history of domain squatting

A long time ago at the dawn of the internet era, a tech journalist bought the mcdonalds.com domain name. Actually, “bought” isn’t really correct, because back then in order to obtain a dot com, you merely had to know how to send email to the right destination, and within days, the domain was all yours. That was how I first got my own strom.com domain back in 1993. Free of charge, even. It was the wild west. (Some may say it still is.)

The journalist was Josh Quittner, who was writing a story for Wired magazine about domain squatting, although the practice wasn’t glorified with an actual name back then. Josh noticed that the name wasn’t yet taken, so he tried to do the responsible thing and called a PR person at McD’s to figure out why they weren’t online and hadn’t yet grabbed it. Of course, back then almost no one had gotten their names; Burger King hadn’t yet obtained its own domain name, btw.

The PR person, bless their heart, asked, “Are you finding that the Internet is a big thing?” Yeah, kinda. As he wrote, “There are no rules that would prohibit you from owning a bitchin’ corporate name, trademarked or not.” So he grabbed the domain and refused to turn it over until McDonald’s agreed to provide high-speed Internet access for a public school in Brooklyn. Eventually, the company figured out that they really wanted the domain for their own business, and domain squatting has never been the same since then.

Domain squatting now has a wide and varied subculture. Here is a 2020 report from Unit42 that goes into further detail, for example. It covers homographic attacks (using non-Roman character sets), combosquats (which add subdomains to make names appear more legit), level squats (which use a very long character string, counting on browsers to truncate them and make them more believable), and I am sure many more perfidious techniques.
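To make a couple of those techniques concrete, here is a minimal Python sketch (my own illustration, not from the Unit42 report). The brand name and attacker domain are hypothetical; the homoglyph table is just a handful of the Latin-to-Cyrillic lookalikes such attacks use:

```python
BRAND = "example.com"  # hypothetical brand to impersonate

def combosquat(brand: str, attacker_domain: str) -> str:
    """Combosquat: embed the brand as a subdomain so casual readers see it first."""
    return f"{brand.replace('.', '-')}.login.{attacker_domain}"

# A few Latin -> Cyrillic lookalikes commonly used in homograph attacks.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "p": "\u0440"}

def homograph(domain: str) -> str:
    """Replace each mappable letter with its confusable twin."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in domain)

def looks_like(candidate: str, brand: str) -> bool:
    """Crude detector: map confusables back to Latin and compare."""
    reverse = {v: k for k, v in HOMOGLYPHS.items()}
    normalized = "".join(reverse.get(ch, ch) for ch in candidate)
    return normalized == brand and candidate != brand

fake = homograph(BRAND)
print(combosquat(BRAND, "evil.example"))  # example-com.login.evil.example
print(fake != BRAND, looks_like(fake, BRAND))  # True True
```

The two printed strings render nearly identically in a browser’s address bar, which is the whole point; real detectors use the much larger Unicode confusables tables rather than a four-entry map.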

A complicating factor is that we now have all kinds of new top-level domains like .xyz and .lawyer to contend with, which only increases the threat space that bad actors can occupy in domain impersonation. Josh emailed me today and said, “I figured that with the creation of so many top-level domains the shenanigans around domain-name squatting would abate but it just created loads of new problems. For instance, some scammers pretending to be decrypt.co (my crypto news site) created a mirror site with a very similar name and used it for phishing. They periodically send out email to millions of people claiming to be decrypt and urge people to connect their crypto wallets to collect tokens. Emailing the host site and alerting them to the scam did no good.” Yeah, wild west indeed.

I was reminded of this story when I saw that yet another business had let its domain registration lapse; the domain was purchased by tech consultants who are trying to give it back to its rightful owner. Why do these things happen? One major reason: domain ownership ultimately relies on humans paying attention and renewing at the right times. (I just renewed a bunch of mine, which I had wisely set up years ago to expire on January 1.) Also, in big companies like McDonald’s there may be several different domain owners spread among various departments, especially if a company has been acquired or has created new subsidiaries.
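Since renewals rely on humans paying attention, even a tiny reminder script beats nothing. This sketch uses a hand-entered list of hypothetical domains and expiry dates; a real version would pull the dates from your registrar or from WHOIS/RDAP records:

```python
from datetime import date, timedelta

# Hypothetical inventory: domain -> expiration date (hand-entered for illustration).
DOMAINS = {
    "strom.com": date(2025, 1, 1),
    "example.org": date(2024, 6, 15),
}

def expiring_soon(domains: dict, today: date, window_days: int = 60) -> list:
    """Return the domains whose renewal falls within the warning window."""
    cutoff = today + timedelta(days=window_days)
    return sorted(name for name, exp in domains.items() if exp <= cutoff)

print(expiring_soon(DOMAINS, date(2024, 5, 1)))  # ['example.org']
```

Run it from cron once a week and pipe the output to email, and one class of “oops, the domain lapsed” stories goes away.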

Actually, there is a second reason: greed. Criminals have adopted these squatting techniques to lure victims in. Just a few days ago I thought I was buying some stamps from USPS.com, but was brought to some other domain that looked like the real thing. You can’t be too careful. (And the USPS doesn’t discount its stamps by 40%, which should be a red flag.)

I know it when I see the URL

The phrase in today’s screed is adapted from an infamous 1964 Supreme Court decision, when Potter Stewart was asked to define obscenity. In a report issued today by Stanford researchers, the new phrase (when referring to materials relating to abused children) has to do with recognizing a URL. And if you thought Stewart’s phrase made it hard to create appropriate legal tests, we are in for an even harder time when it comes to figuring out how to prevent this in the age of GenAI and machine learning. Let me explain.

If you are trying to do research into what is called child abuse materials (abbreviated CSAM, and you can figure out the missing word on your own), you have a couple of problems. First, you can’t actually download the materials to your own computer, not unless you work for law enforcement and do the research under conditions akin to those of intelligence operatives in a secured facility (the now-infamous SCIF).

This brings me to my second point. Since you can’t examine the actual images, what you are looking at are bunches of URLs that point to them. URLs are also used instead of the actual images because of copyright restrictions. And that means looking at metadata, which could be in a variety of languages, because let’s face it, CSAM knows no geographic boundaries. The images are found by sending out what are called “crawlers” that examine every web page they can find at a point in time.

Next, and this comes as no surprise to anyone who has spent at least one minute browsing the web, there is a lot of CSAM out there: billions of files as it turns out. The Stanford report found more than three thousand suspected images. Now that doesn’t seem like a lot, but when you consider that they probably didn’t catch most of it (they acknowledge a severe undercounting), it is still somewhat depressing.

Also, we (and by that, I mean most everyone in the world) are too late to try to prevent this stuff from being disseminated. That is a more complicated explanation and has to do with the way the GenAI and mathematical models have been constructed. The optimum time to have done this would have been, oh, two or more years ago, back before AI became popular.

The main villain in this drama is something from the Large-scale Artificial Intelligence Open Network called the LAION-5B data set, which contains 5.85 billion data elements, half of which are in English. This is the data that is being used to train the more popular AI image tools, such as Stable Diffusion, Google’s Imagen, and others.

The Stanford paper lays out the problems in examining LAION and the methodology and tools the researchers used to find the CSAM images. They found that anyone using this data set has thousands of illegal images in their possession. “It is present in several prominent ML training data sets,” they state. While this is a semi-academic research paper, it is notable in that they provide some mitigation measures to remove this material. The various steps are mostly either ineffective or difficult to pull off, especially if your goal is to remove the material entirely from the training data itself. I won’t get into the details here, but there is one conclusion that is most troubling:

The GenAI models are good at creating content, right? This means someone can take a prototype CSAM image and have a model riff on it, creating various versions using, say, the face of the same child in a variety of poses. I am sure that is being done in one forum or another, which makes me sick.

Finally, there is the problem of timing. “if CSAM content was uploaded to a platform in 2018 and then added to a hash database in 2020, that content could potentially stay on the platform undetected for an extended period of time.” You have to update your data, which is certainly being done, but there is no guarantee that your AI is using the most recent data, a problem that has been well documented. The researchers recommend that Hugging Face and other hosts provide better feedback mechanisms for reporting these materials when users find them. UC Berkeley professor Hany Farid wrote about this issue a few years ago, saying these materials “should not be so easily found online. Neither human moderation nor AI is currently able to contend with the spread of harmful content.”

If you use iCloud, make sure it is properly secured — now

A friend told me a tale of woe about someone he knows who had all their Mac Things compromised to the point where they no longer worked. Before I describe the situation, if you use iCloud, do these three things now:

  1. Change your iCloud password now. Pick something unique, complex enough to satisfy all of Apple’s requirements (lower case, upper case, a number and a symbol). For easy typing on phones, I use a series of words with the other adornments. I know changing passwords is a pain. But please do this now. Really. I will wait.
  2. Go to the iCloud security settings page and make sure you are using a two-factor method that isn’t SMS-based (and if you dare, uses passkeys).
  3. Go to your photo collection, and delete pictures of your ID documents, like driver’s license or passport. If you travel (remember travel?), one of the things they tell you is to make copies of your ID in your photo stream. I don’t think that is safe advice now, and will explain later. If you want to keep copies of these documents, make a printed photocopy and keep it in a different place from your actual documents.
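For step 1, my “series of words with the other adornments” approach can be automated. Here is an illustrative sketch (the six-word list is a hypothetical stand-in; use a real diceware list with thousands of entries) that generates a phrase meeting the usual upper/lower/digit/symbol rules:

```python
import secrets
import string

# Tiny stand-in word list for illustration; real lists (e.g., diceware) are far larger.
WORDS = ["coral", "maple", "orbit", "tundra", "velvet", "zephyr"]

def passphrase(n_words: int = 3) -> str:
    """Join random words, capitalize the first, and append a digit and a
    symbol so the result has upper case, lower case, a number, and a symbol."""
    words = [secrets.choice(WORDS) for _ in range(n_words)]
    words[0] = words[0].capitalize()
    digit = secrets.choice(string.digits)
    symbol = secrets.choice("!@#$%")
    return "-".join(words) + digit + symbol

print(passphrase())  # e.g. Maple-orbit-coral7!
```

Note the use of the `secrets` module rather than `random`: the former draws from the operating system’s cryptographic source, which is what you want for anything password-shaped.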

Now, why go through all this? If you don’t know about SIM swapping, take a moment to click on that piece that I wrote a few years ago and learn more about it. Basically, once a criminal knows your cell phone number, they can impersonate you and get your phone number reassigned to their own phone and the fun begins.

What if you don’t use iCloud but use Google’s Account? You should follow a similar path, particularly if you have an Android phone.

Now, why the business of deleting your identity docs? Because once someone has control of your iCloud, they can look through your photo stream, find these documents, and use them to pass the authentication checks needed to recover your other accounts. And if you employ the “fake birthday” dodge (as I do and described here), you will have additional pain and suffering if you have to show your ID and the person you are talking to can’t match it to the fake birthday you set up when you first created your FaceTwitTok account.

Happy holidays folks. Don’t respond to texts from out of the blue. Don’t click on anything in email, even from someone you correspond with. And don’t reuse your passwords and eat your veggies while you are at it too.

My 30-year love affair with TCP/IP

Is it possible to fall in love with a protocol? I mean, really? I know I am a nerd, and I guess this is yet further evidence of my nerd-dom. But to properly tell this story, we have to climb into the Wayback Machine with Mr. Peabody and go back 50 years, to when Vint Cerf and Bob Kahn were working at Stanford and inventing these protocols. I was too young to appreciate the events at the time, but later my life would change drastically as I learned more about TCP/IP and how to get it working in my professional life.

You can read the original 1974 paper here as well as watch an interview with both men that was recorded earlier this year.

In the mid 1990s, I would meet Vint and so began our correspondence that has lasted to this day. I posted an interview with him in 2005 here that is still one of my favorite profiles. This was when he was about to start at Google and when I was running Tom’s Hardware. I asked him to recall the most significant moments of TCP/IP’s development:

  • 1/1/1983 – The cutover on Arpanet to TCP/IP
  • 6/1986 — The beginning of NSFNET
  • 1994 – Netscape supports HTTP over TCP/IP, and the Berkeley BSD 4.2 Unix release ships with support for TCP/IP
  • 2007 – The introduction of the iPhone

That is a pretty broad piece of computing history.

TCP/IP spent its first couple of decades growing up. Few people used it, and those that did were more akin to members of a secret society, the keepers of the flame called Unix. (Unix would evolve into Linux, as well as MacOS, and then into containers.) But then something called the Internet caught hold in the early 1990s. I wrote a blog post not too many years ago about the early tools we had to suffer with during that era to get TCP/IP working on other systems, such as DOS, Windows, and NetWare. It was far from easy, and many businesses had all sorts of pain points getting TCP/IP working properly. BTW, that link also has a hilarious clip about “the internet” that has held up well.

NetWare is actually where my love for the protocols blossomed. Many of you might recall how powerful this early network operating system was, and how it could run multiple protocols with relative ease. Novell saw the importance of TCP/IP and invested heavily in equipment that would bring it to ordinary desktops, and by ordinary I mean the versions of Windows that we had to suffer with back then. Setting up a computer to connect to something else was made a lot easier with NetWare’s TCP/IP support.

But it wasn’t just NetWare: the web is what really turbo-charged TCP/IP. It also took off during the 1990s, going from curiosity to standard practice seemingly overnight. The web really changed how we interacted with information. In my own case, I saw the publications that were making millions of dollars selling printed magazines shrink to a much reduced online form, and editorial staffs drop dozens of people from their mastheads. Now it is rare that a publication has more than a single full-time editor, which is great if you are a freelancer (which I am), but then budgets continue to shrink too, which is not great.

But in spite of these cataclysmic moments, I still say that I love TCP/IP. I don’t blame the protocol for the transformation of my industry. Au contraire, it made my computing life so much easier. Its beauty was its extensibility, its universal connectedness that was useful in so many different situations. And it also enabled so many apps, both then and now. And every app tells another story, which is after all my bread and butter.

This week, I bought a lighting controller that supports TCP/IP, for example. And that brings up another point. Today, we don’t give TCP/IP much attention, because it has been woven into the fabric of our computing systems so well. It is pervasive: you would be hard pressed to name a computer that doesn’t support TCP/IP. And by computer, I mean our smart TVs and other home appliances, our cable modems, our networks, our cars.
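Part of that pervasiveness is how little code it takes to use. Here is a self-contained sketch of a TCP round trip over the loopback interface (the addresses are local stand-ins); the same socket API talks to a lighting controller, a smart TV, or a web server:

```python
import socket
import threading

def echo_server(sock: socket.socket) -> None:
    """Accept one connection and echo back whatever arrives."""
    conn, _ = sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Bind to port 0 so the OS picks any free port, then listen in a thread.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# The client side: connect, send, receive. That's the whole protocol dance.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, TCP/IP")
    reply = client.recv(1024)
print(reply.decode())  # hello, TCP/IP
```

Every buzzy acronym layered on top, from HTTP to QUIC-adjacent fallbacks, ultimately rides on exchanges this simple, which is a big part of why the protocol wove itself into everything.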

Vint wrote me after he read this essay: “TCP/IP has been improved over the years by people like Van Jacobson and David Taht among others. Google introduced QUIC which provides TCP-like functionality with some additional features. But it has certainly been a workhorse for the world wide web and its applications.” Note what he is doing here: giving credit to the other innovators who have built interesting extensions on what he and Kahn came up with 50 years ago. A class act.

So much love to spread around. I count myself lucky to have been present for the last 30 years of TCP/IP’s tenure, and to chronicle its growth and popularity.