Fighting election misinformation

Last week I wrote about the looming AI bias in the HR field. Here is another report about the potential threats of AI in a different arena. But first, do you know what the states of California, Georgia, Nevada, Oregon, and Washington have in common? Sadly, all of them have election offices that received suspicious letters in the mail last year. This year is already ramping up, and many election workers have received death threats just for trying to do their — usually volunteer — jobs. Many have quit after logging decades of service.

I have been following election misinformation campaigns for several years, including writing for Avast’s blog here about whether the 2020 election was rigged. By now you should know that it wasn’t. But this latest round of physical threats — many of which have been criminally prosecuted — is especially toxic when fueled by AI misinformation campaigns. The stakes are certainly higher, especially given the number of national races, which is why CISA has released this set of guidelines.

And the election threats aren’t just a domestic problem. This year will see more than 70 elections in 50 countries — many of them choosing heads of state, including India, Taiwan, and Indonesia. Taken together, 2024 will see a third of the world’s population enter the voting booth. Some countries have seen huge increases in newly online voters: India’s last national election was in 2019, and since then it has added 250 million internet users, thanks to cheap smartphones and mobile data plans. That could spell difficulties for voters who are going online for the first time.

All this comes at a time when social media trust and safety teams have all but disappeared from the landscape; indeed, the very name for these groups will seem like a curiosity a few years from now. Instead, hate mongers and fear mongers celebrate their attention and unblocked access to the networks. (To be fair, Facebook/Meta announced a new effort to fight deepfakes on WhatsApp just after I posted this.)

While the social networks were busily disinvesting in any quality control, more and better AI-laced misinformation campaigns have sprouted, thanks to new tools that can combine voices and images with clickbait headlines that can draw attention. That is not a good combination. Many of the leading AI tech firms — such as OpenAI and Anthropic — are trying to fill the gap. But it is a lopsided battle.

While it is nice that someone has taken up the cause for truthiness (to use a phrase from that bygone era), I am not sure that giving AI firms this responsibility is going to really work.

An early example happened in the New Hampshire presidential primary, where voters reported receiving deepfake robocalls with President Biden’s voice. The account used for this activity was subsequently banned. Expect things to get worse. Deepfakes such as this have become as easy to create as a phishing attack (and the two are often combined), and thanks to AI they are getting more realistic. It is only a matter of time before these attacks spill over into influencing the vote.

But deepfakes aren’t the sole problem. Garden-variety hacking is a lot easier. Cloudflare reported that from November 2022 to August 2023, it mitigated more than 60,000 daily threats to the US election groups it surveyed, including numerous denial-of-service attacks. That stresses the security defenses of organizations that were never at the forefront of technology, something that CISA and others have tried to help with through various tools and documents, such as the one mentioned at the top of this post. And now we have certain elements of Congress that want to defund CISA just in time for the fall elections. Bad idea.

Contributing to the mess is that the media can’t be trusted to provide a safe harbor for election results. Look what happened to the Fox News decision team after it called — correctly — Arizona for Biden back in 2020. Many of its staff were fired for doing a solid job. And while it is great that Jon Stewart is back leading Comedy Central’s Monday night coverage, I don’t think you are going to see much serious reporting there (although his debut show last week was hysterical and made me wish he was back five days a week).

Of course, it could be worse: we could be voting in Russia, where no one doubts what the outcome will be. The only open question is whether its czar-for-life will get more than 100% of the vote.

The looming AI bias in hiring and staffing decision-making

Remember when people worked at jobs for most of their lives? It was general practice back in the 1950s and 1960s. My dad worked for the same employer for 30 or so years. I recall his concern when I changed jobs after two years out of grad school, warning me that it wouldn’t bode well for my future prospects.

So here I am, ironically, now 30-plus years into working for my own business. But all this job hopping has also swelled the number of resumes that flood hiring managers, which in turn has motivated many vendors to offer automated tools to screen them. You might not have heard of the companies in this space, such as HireVue, APTMetrics, Curious Thing, Gloat, Visier, Eightfold and Pymetrics.

Add two things to this trend. First is the rise of quiet quitting, or employees who put in just the minimum at their jobs. The concept is old, but the increase is significant. Second, and the bigger problem, is another irony: we now have a very active HR market segment that is fueled by AI-based algorithms. The combination is both frustrating and toxic, as I learned from reading a new book entitled The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now. It should be on your reading list. It is by Hilke Schellmann, a journalism professor at NYU, and it examines the trouble with using AI to make hiring and other staffing decisions. Schellmann takes a deep dive into the four core technologies now being deployed by HR departments around the world to screen and recommend potential new job candidates, along with other AI-based tools used to evaluate employees’ performance and inform judgments about raises, promotions, or firings. It is a fascinating look at this industry, and a scary one too.

Thanks to digital tools such as LinkedIn, Glassdoor and the like, sending in your resume to apply for an opening has never been easier. Just a few clicks and your resume is sent electronically to a hiring manager. Or so you thought. Nowadays, AI is used to automate the process in four main ways: automated resume screeners, automated social media content analyzers, gamified qualification assessments, and one-way video recordings that are analyzed by facial and tone-of-voice AIs. All of them have issues, aren’t completely understood by employers or prospects, rest on spurious assumptions, and can’t always quantify the aspects of a potential recruit that would predict success in a future job.
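To see why that first category falls short, here is a deliberately naive sketch of keyword-based resume screening (my own illustration, not any vendor’s actual method; the job keywords and resumes are made up):

```python
# A deliberately naive keyword screener, for illustration only. Real products
# are more elaborate, but the basic weakness is the same: scoring is driven by
# surface terms, not by whether someone can actually do the job.

REQUIRED_KEYWORDS = {"python", "kubernetes", "leadership", "agile"}  # hypothetical job posting

def score_resume(text: str) -> float:
    """Return the fraction of required keywords that appear in the resume text."""
    words = {w.strip(".,;:()").lower() for w in text.split()}
    return len(REQUIRED_KEYWORDS & words) / len(REQUIRED_KEYWORDS)

resumes = {
    "candidate_a": "Led a team migrating services to Kubernetes; Python and agile experience.",
    "candidate_b": "Built and ran the same platform for five years and mentored six engineers.",
}

for name, text in resumes.items():
    print(name, round(score_resume(text), 2))
# candidate_a scores 0.75; candidate_b, arguably the stronger hire, scores 0.0
```

Candidate B loses simply for not echoing the posting’s vocabulary, which is the core complaint about these tools.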

What drew me into this book was that Schellmann does plenty of hands-on testing of the various AI services, using herself as a potential job seeker or staffer. For example, in one video interview she answers the set questions in German rather than English, and still receives a high score from the AI.

She covers all sorts of tools, not just ones used to evaluate new hires, but others that fit into the entire HR lifecycle. And the “human” part of HR is becoming less evident as the bots take over. By take over, I don’t mean the Skynet path, but relying on automated solutions does present problems.

She raises this question: “Why are we automating a badly functioning system? In human hiring, almost 50 percent of new employees fail within the first year and a half. If humans have not figured out how to make good hires, why do we think automating this process will magically fix it?” She adds, “An AI skills-matching tool that is based on analyzing résumés won’t understand whether someone is really good at their job.” What about tools that flag teams that have had high turnover? That could have two polar-opposite causes: a toxic manager, or a tremendous one who is good at developing talent and encouraging people to leave for greener pastures.

Having run my own freelance writing and speaking business for more than 35 years, I have a somewhat different view of the hiring decision than many people. You could say either that I have rarely been hired for full-time employment, or that I face that decision multiple times a year, whenever I get an inquiry from a new client or from a previous client who is now working at a new company. Some editors I have worked with for decades as they have moved from pub to pub, for example. They hire me because they are familiar with my work and value the perspective and analysis that I bring to the party. No AI is going to figure that out anytime soon.

One of the tools that I have come across in the before-AI times is the DISC assessment, which, like the Myers-Briggs, is a psychological instrument that has been around for decades. I wrote about taking the test when I attended a conference at Ford Motor Co. back in 2013, where they were demonstrating how they use the tool to figure out the type of person most likely to buy a particular car model. Back in 2000, I wrote a somewhat tongue-in-cheek piece about how you can use the Myers-Briggs to match up your personality with that of your computing infrastructure.

But deciding whether someone is an introvert or an extrovert is a well-trod path, with plenty of testing experience built up over the decades. These AI-powered tools don’t have much of that history and are based on shaky data sets laden with assumptions. For example, HireVue’s facial analysis algorithm was trained on video interviews with people already employed by the hiring company. That sounds like a reasonable first step, but having done one of those one-sided video interviews — basically where you are just talking to the camera and not interacting with an actual human asking the questions — I can say that you get no feedback from your interviewer, none of the subtle facial or vocal cues that are part of normal human discourse. Eventually, in 2021, the company stopped using both tone-of-voice and facial-based algorithms entirely, claiming that natural language processing had surpassed them.

Another example is capturing how you use first-person pronouns during the interview — I versus we, for example. Is this a proxy for what kind of team player you might be? HireVue says it bases its analysis on thousands of questions such as this, which doesn’t make me feel any better about its algorithms. Just because a model has more parameters doesn’t necessarily make it better or more useful.
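To make that concrete, here is a minimal sketch of what such a pronoun-counting feature might look like (hypothetical code of my own, not HireVue’s), which shows how thin a signal it really is:

```python
import re

# Hypothetical "team player" signal: ratio of we/our/us to I/me/my in a transcript.
# A crude proxy at best; word choice says little about actual collaboration.

FIRST_SINGULAR = {"i", "me", "my", "mine"}
FIRST_PLURAL = {"we", "us", "our", "ours"}

def pronoun_ratio(transcript: str) -> float:
    """Share of first-person pronouns that are plural (0.0 = all 'I', 1.0 = all 'we')."""
    words = re.findall(r"[a-z']+", transcript.lower())
    singular = sum(w in FIRST_SINGULAR for w in words)
    plural = sum(w in FIRST_PLURAL for w in words)
    return plural / max(singular + plural, 1)

print(pronoun_ratio("I led the project and we shipped it; my team did great work."))
# about 0.33: one "we" against "I" and "my" -- hardly a measure of teamwork
```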

Then there is the whole dust-up over overcoming built-in AI bias, something that has been written about for years, going back to when Amazon first unleashed its AI hiring tool and found that it selected white men more often. I am not going there in this post, but her treatment runs deep and shows the limitations of using AI, no matter how many variables the vendors try to correlate in their models. What is important, something Mark Cuban touches on frequently in his posts, is that diverse groups of people produce better business results. And that diversity can be defined in various ways: not just by race and gender, but also by mental and physical disabilities. The AI modelers have to figure out — as all modelers do — what the connection is between playing a game, or making a video recording, and actual job performance. You need large and diverse training samples to pull this off, and even then you have to be careful about your own biases in constructing the models. She quotes one source who says, “Technology, in many cases, has enabled the removal of direct accountability, putting distance between human decision-makers and the outcomes of these hiring processes and other HR processes.”
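For what it’s worth, one standard way to check a screening model for this kind of skew (not something from the book, just a common illustration) is the EEOC’s four-fifths rule, which compares selection rates across groups; the numbers below are made up:

```python
# Adverse-impact check using the four-fifths rule: each group's selection rate
# should be at least 80% of the most-selected group's rate. Numbers are made up.

outcomes = {
    # group: (candidates screened in, candidates who applied)
    "group_a": (48, 200),
    "group_b": (22, 180),
}

rates = {group: selected / total for group, (selected, total) in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    verdict = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {verdict}")
# group_b comes out around 0.51, well below the 0.8 threshold
```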

Another dimension of the AI personnel assessment problem is the tremendous lack of transparency. Prospects don’t know what the AI-fueled tests entail, how they were scored, or whether they were rejected from a job because of a faulty algorithm, bad training data, or some other computational oddity.

Step back and consider the sheer quantity of data that an employer can collect: keystrokes on your desktop, website cookies that record the timestamps of your visits, email, Slack and Teams message traffic, even Fitbit tracking stats. It is very depressing. Do these captured signals reveal anything about your working habits, your job performance, or anything, really? HR folks are relying more and more on AI assistance, and they can now monitor just about every digital move an employee makes in the workplace, even when that workplace is the dining room table and the computer is shared with the employee’s family. (There are several chapters on this subject in her book.)

This book will make you think about the intersection of AI and HR, and while there is a great deal of innovation happening, there is still much work to be done. As she says, context often gets lost. Her book will provide plenty of context for you to think about.

CSOonline: How to strengthen your Kubernetes defenses

Kubernetes-focused attacks are on the rise. Here is an overview of the current threats and best practices for securing your clusters. The runaway success of Kubernetes among enterprise software developers has motivated attackers to target these installations with exploits specifically designed to leverage its popularity. Attackers have become better at hiding their malware, evading the often-trivial default security controls, and using common techniques such as privilege escalation and lateral network movement to spread their exploits across enterprise networks. While methods for enforcing Kubernetes security best practices exist, they aren’t universally well known and require specialized knowledge, tools, and tactics that are very different from those used to secure ordinary cloud and virtual machine workloads.

In this post for CSO, I examine the threat landscape, what exploits security vendors are detecting, and ways that enterprises can better harden their Kubernetes installations and defend themselves.
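As a small taste of the kind of best-practice check the article discusses (my own sketch, not taken from it), here is how you might scan a cluster for privileged containers using the official Kubernetes Python client, assuming a working kubeconfig:

```python
# Sketch: flag privileged containers across all namespaces.
# Assumes `pip install kubernetes` and a kubeconfig that can list pods.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() if running inside the cluster
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    for container in pod.spec.containers:
        sc = container.security_context
        if sc and sc.privileged:
            print(f"privileged container: {pod.metadata.namespace}/"
                  f"{pod.metadata.name} ({container.name})")
```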

I find more central office lore in Seattle

I have a thing about the telephone central office (CO). I love spotting them in the wild: they give me some sense of the vast connectedness they represent, the legal wrangling that took place over their real estate, and their place in our telecommunications history. That is a lot to pack into a set of structures, which is why I am attracted to them.

I wrote that six years ago and it still holds true. This week I added a new CO to my “collection” of favorite places, one that calls itself a connections museum and occupies the top floors of a working CO in an industrial area of Seattle. It is an interesting place, but the label isn’t quite right: it is more an interactive time machine that takes you back over the past 100 years of telecom history. Yes, you will find working models of phones of yesteryear (such as the 1908 wall phone model shown here), but the real treat — especially for this networking geek — is the collection of electromechanical switch fabrics that were once found in every CO on the planet and are now extinct.

In the pre-TCP/IP, analog landline days, every phone had to be connected via a slender pair of copper wires from one’s home (or business) to the CO. That is a lot of wire. Once that pair entered the CO premises, it was connected to these huge machines to make and receive phone calls. Much of that wire now sits unused, such as the pair that runs to my own home. I think I used my last landline around 2002 or so.

What is both impressive and hard to comprehend in the Seattle CO is how enormous this equipment is, and how small the current digital switches and IP-based networking gear are by comparison. I have seen numerous mainframe computers and they are no small objects. But panel frames and crossbar switches loom large and have a distinctly oily smell, which gives away their mechanical nature. Even with all of their moving parts — and there are thousands of them — these beasts worked flawlessly for decades to place our phone calls.

The other thing that becomes clear walking around the Seattle CO is how extensible phone tech was. The phone network connected gear of many different vintages — there were even examples of those quirky 1960s-era video phones, whose function we now carry around in our pockets while thinking nothing unusual of making such calls.

The place is an active test bed for the old phone tech, and numerous volunteers have devoted many hours to resurrecting it into some semblance of operation. That is quite an achievement, because the surviving documentation is incomplete or incomprehensible or both. Trial and error and patience are important skills for bringing this stuff back to life.

Museum docents will take you around the CO, patiently explain what is going on, and show you the process of completing a phone call from one phone to another. There are also phone switchboards that Ernestine would be at home operating, and visitors can do the one-ringy-dingy themselves.

One thing that I had forgotten about was the importance of real estate with these COs. Back in the 2000s, when DSL technology was coming into vogue, the local phone companies weren’t too happy about having competition for their communications services and tried to stop the DSL vendors from installing gear in the COs. What became obvious as they attempted to erect legal roadblocks was that there was plenty of room for the new stuff: as the old crossbar switches were replaced, floor space opened up to hold a couple of 19-inch racks of digital gear. (BTW, that rack standard harks back to the 1890s.)

As I was leaving the CO, a tour group was coming in. The group was dressed up in what they called steampunk costumes (they looked more Dickensian to my untrained eye), which seemed very appropriate: people who understand the broad sweep of history and want to recall a bygone era. While I didn’t need any change of clothes, I recognized kindred spirits.

Building an unusual 30-year career in IT at the Catholic Health Association

I had a chance to speak to Janey Brummett, who has spent three decades working in the IT department of the Catholic Health Association, the national leadership organization of the Catholic health ministry, representing the largest nonprofit providers of health care services in the nation. She came to the association as a paralegal who got an early taste for computers, back when PCs were first coming into businesses and she was helping to spec out mainframe systems. “I was the conduit to talk to the programmers back then,” she said. Over the years she worked her way up the IT org chart until retiring this year in a position that most of us would characterize as CIO.

I recall those early years with a lot of fondness, as does Janey. Back then, we were pioneers in building local area networks that used very thick cabling that was expensive to install. Wifi didn’t exist, and PCs had massive 40 MB hard drives — well, they seemed massive at the time. Now you can’t even get that little storage in anything.

Those early LANs ran Novell NetWare and GroupWise, an early collaboration application that handled email, shared calendars, and documents.

The big switch for CHA came in the early 1990s, when it moved from DOS-based desktops to Windows. One major upgrade of its NetWare server turned into an all-nighter due to data migration problems and access rights that didn’t transfer over. “That was a horrible experience,” she recalls.

Now CHA is using Microsoft Copilot and Teams to communicate, and it is developing its own AI-based tools to access a common data platform. “We are building a virtual data analyst that we can query and build charts and collect presentation talking points.” That is a sign of the times to be sure.

Janey remembers supporting a speaker at an annual association meeting in the early 2000s. “The speaker came to me a few minutes before their talk with a virus-infected floppy disk. That was typical of the times, and I sure am glad that systems have gotten a lot more stable and straightforward since then! Nowadays, there is more of a focus on end user tools and it all works really well.” I completely agree.

CHA was an early adopter of the internet, and Janey recalls teaching the first internal classes on how to use it in the mid-1990s. That was the same timeline for me (I started my Web Informant newsletters in the fall of 1995, BTW), and those were pretty exciting times to be sure.

“The pandemic years really changed our operations,” she told me. “Back then, we had no one working remotely whatsoever. But we were fortunate to have put in place the infrastructure to support remote workers and had just started rolling out Teams. We had a lot of resistance before the pandemic, not to mention that less than half of our staff had laptops and we had to get that in place. Now we are almost all remote workers, with two or three days per month that people need to be in the office. Having Teams got us to jump light years ahead to collaborate to where it is second nature.”

How has she managed to stay at the same organization for all this time? “It comes down to constantly learning and innovating. Plus I enjoy what I do and my job is continually changing and evolving. IT should really stand for innovation technology.”

To read more interviews with long-standing IT managers, check out this three-part series that I wrote in the fall of 2022.

Can Movable Type become a useful AI writer’s tool?

Once upon a time, when blogs were just beginning to become A Thing, the company to watch was Six Apart, maker of blogging software called Movable Type. Then the world shifted to WordPress, and soon there were other blogging platforms that turned Movable Type into the Asa Hutchinson of that particular market. (What? They are still around? Yes, and they account for about one percent of all blogs.)

Well, Asa no more, because the company has fully embraced AI in a way that even Sports Illustrated (which recently fired its human writers) would envy. If you have never written a book, you can have a ready-made custom outline in a few minutes. All it takes is a prompt and a click. You don’t even have to have a fully formed idea, understand the nature of research (either pre- or post-internet), or know how to write word one. (There are other examples on their website if you want to check them out.)

MovableType’s AI creates “10 chapters spanning 150+ pages, and a whopping 35k+ words” (or so they say) of… basically gibberish. They of course characterize it somewhat differently, saying its AI output is “highly specific & well researched content.” It isn’t: there are no citations or links in the content. The output looks like a solid book-like product, with chapters and sub-heads, but is mostly vacuous drivel. The company claims it comes tuned to match your writing style, but again, I couldn’t find any evidence of that. And while “each chapter opens with a story designed to keep your readers engaged,” my interest waned after page 15 or so.

Perhaps this will appeal to some of you, especially those of you who haven’t yet written your own roman à clef, or who are looking to turn your online bon mots into the next blockbuster book. But I don’t think so. Writing a book is hard work, and while it is not growing crops or working in a factory, you do have to know what you are doing. The labor involved helps you create a better book, and editing your own work is a learned skill. I don’t think AI can provide any shortcuts, other than to produce something subpar.

I have written three books the old-fashioned way: by typing every word into Word. Two of them got published; one got shelved as the market for OS/2 fell into the cellar after the time of the book proposal. I got tired of rewriting it (several times!) for the next big moment of IBM’s beleaguered OS that never happened. The two published books never made much money for anyone. But I did learn how to write a non-fiction book and, more importantly, how to write an outline that was more of a roadmap, strategy, and structure document. This is not something that you can train AI to do, at least not yet.

When I read a book, I cherish the virtual bond between me and the author, whether I read my go-to mystery fiction or a how-to business epic. I want to bathe in the afterglow of what the author is telling me, through characters, plot points, anecdotes, and stories. That is inherently human, and something that the current AI models can’t (yet) do. While MovableType’s AI is an interesting experiment, I think it is a misplaced one.

Is it time to upgrade my hearing aid?

More than five years ago, I wrote about my journey acquiring my first hearing aid. With a new insurance plan that includes hearing benefits, I thought it was time to take another look and see how the latest aids could help with my high-frequency hearing loss and tinnitus.

Now there are three aspects of your hearing that will motivate you to get an aid. First, you have some kind of hearing loss (for me, as I said, it is the higher frequencies, which is typical for older folks) and you want to hear things better. Second, there is the type of sound you “hear” with your tinnitus, and whether you want it masked with an aid or some combination of audio processing and masker. Tinnitus can vary by time of day, by whether you have gotten enough sleep, or by stress levels. If this describes you, then ideally you want to be able to adjust the masking technology to give you the greatest comfort. Part of the issue here is that you may not want to become a DIY audiologist or software engineer.

Finally, there is how two aids interact with each other to place you in a sonic environment so you can understand what is going on around you. Given that I am completely deaf in one ear and need only one aid, this isn’t relevant for me.

If you haven’t read my original blog post from when I first bought my aid, now would be a good time to do that and remind yourself of the process.

But buying the first aid may seem easy compared with the complicated journey of upgrading it. This is because the hearing aid medical-industrial complex is just that — complicated. And while there are professionals who can be helpful, you first have to know the right questions to ask, then know what the aids can and cannot do, and have a great deal of patience as the hearing-deprived patient. Oh, and be prepared to spend lots of time and money when you do get your aid.

You would think that already having an aid would mean that you have already dealt with these issues. But you would be wrong. The replacement market is truly a different ball game. This is because, being human, our hearing changes as we age. The aid technology also marches on, which means your earlier knowledge is outdated. And the fact that you now have a baseline — your existing aid — to compare things against introduces new complexities.

There is one other factor: you can now purchase aids over the counter. That is fine if you have a simple hearing loss, don’t have to muck around with the frequency controls, and don’t have a lot of tinnitus. Yes, no, and no for me, so this wasn’t an option. The OTC aids generally cost less but don’t include much in the way of hand-holding and servicing. This is not like buying a blender: instructions and personal demonstrations are essential to their operation. You might not like the initial fit of the instrument, or you might be confounded by its numerous settings.

The OTC aids don’t really give you a clean price comparison either. When you buy an aid from an audiologist, you are also paying for a service contract for a period of time, and that contract may not cover all problems or may have exceptions (like water damage).

As I mentioned previously, for the past five-plus years I have been using one of the Starkey models, and I was generally happy with it. I made an appointment with a different audiologist than the one I had been seeing for the Starkey, just to see how the two approached solving my problems. I give the new audiologist, a woman whom I will call B, props for thoroughness, knowledge, and service. She spent nearly two hours on my first visit and wanted to schedule several follow-up visits. She told me that I am her first tinnitus patient who is not a new hearing aid user. We will get to why that matters shortly.

B has worn aids since childhood, a perspective that I liked. She also has some very fancy gear to test an aid’s programming, which I liked as well. Think of the device an optometrist uses to determine whether your glasses prescription matches the actual optics. She puts every aid she sells through this device to ensure it is programmed properly.

Why is this important? Modern hearing aids are more software than hardware and can be programmed in three ways. First, at the factory when they are assembled, they are set up with various automatic sound-processing features (to soften noisy environments, enhance the frequencies used in human speech, change the microphone coverage area to the sides or in front of you, and other things).

One piece of programming is very important to me: figuring out the tinnitus masking sounds, which can be tricky to deal with. Everyone has different “ringing” sounds as part of their tinnitus, some relatively simple (such as mine) and some that vary in frequency, period, and loudness. The aids do some counter-programming — meaning they produce their own sounds — to try to keep your attention away from the tinnitus; at least that is the working theory of the moment.

Second, the audiologist can change some of the programming that affects the audio processing and can set up presets that you access with the buttons on the aid or in its smartphone app. I mentioned to B that one night my wife accidentally slammed a cabinet door — the sound of which, amplified by my aid, almost made me jump. She told me that she could program the “door slam sound” (yes, this is a thing) to soften it.

Third, there is an app for that, and the vendors do a varying job on their apps. The app is the piece you fiddle with yourself (assuming you have a smartphone and want to do this). Each manufacturer has different models with varying features, which you may or may not need. Tinnitus masking is generally included in the higher-end (and pricier) products, just my luck.

So B recommended that I check out the Resound models. She likes their app, which has a lot of controls, as you can see from the screenshots. This was very obvious when I compared it to the app that controls my old Starkey aid, which doesn’t have as many tinnitus presets available either to the audiologist or in the app itself.

One of the things I will be doing over the next week or so is trying the Resound out in different sonic environments to see whether it is worth the cost of a new aid, or whether I can still get by with my old aid and a few simple adjustments.

I should mention one other complicating factor. My old aid had regular batteries that needed replacement every week or so. But most of the newer aids have rechargeable batteries that last about a day on a charge and probably need complete replacement every three years. I don’t mind the non-rechargeable kind but you may feel otherwise.

Then there is the matter of insurance. My insurance plan covers $2,000 per aid, but this is deceptive, as some audiologists (such as B) don’t take insurance because they don’t want to bother with the reimbursement process. She is, however, very upfront about the services she provides to make sure your aids are the right choice for you, and she includes several visits during the first year you buy an aid from her.

The insurance issue is a vexing one. For years my plans had no coverage at all, which in a way made things easier. There are three problems: first, reimbursement covers only part of the costs, which still leaves the aids expensive; second, there is a general lack of transparency about pricing; and third, each audiologist sets prices independently and, as I mentioned, bundles in a certain level of service, which makes it hard to shop around. Selling OTC aids was supposed to make things more transparent, but it hasn’t.

So let’s take a step back here. The issue with the aids is what problem you are trying to solve. For example, I would like better Bluetooth fidelity (especially when I am outside, which renders my old aid almost useless), better control over the masker, and more options in general in the smartphone app. Not all aids deliver all of these features. And sometimes you don’t know what is important until you try out an aid and hear it for yourself, or see the app’s controls and decide whether you want to spend your day fiddling with them.

After spending a few days with the new Resound aid, I decided to stick with the tried-and-true Starkey. Much as I would like the latest and greatest tech, I just couldn’t justify the extra bucks. Perhaps when my old aid finally bites the dust that will force my decision, but by then there will be something even more techy.

A brief history of domain squatting

A long time ago at the dawn of the internet era, a tech journalist bought the mcdonalds.com domain name. Actually, “bought” isn’t really correct, because back then in order to obtain a dot com, you merely had to know how to send email to the right destination, and within days, the domain was all yours. That was how I first got my own strom.com domain back in 1993. Free of charge, even. It was the wild west. (Some may say it still is.)

The journalist was Josh Quittner, who was writing a story for Wired magazine about domain squatting, although the practice wasn’t glorified with an actual name back then. Josh noticed that the name wasn’t yet taken, so he tried to do the responsible thing and called a PR person at McD’s to figure out why the company wasn’t online and hadn’t yet grabbed it. Of course, back then almost no one had registered their names — Burger King hadn’t yet obtained its own domain either, btw.

The PR person, bless their heart, asked, “Are you finding that the Internet is a big thing?” Yeah, kinda. As he wrote, “There are no rules that would prohibit you from owning a bitchin’ corporate name, trademarked or not.” So he grabbed the domain and refused to turn it over until McDonald’s agreed to provide high-speed Internet access for a public school in Brooklyn. Eventually, the company figured out that it really wanted the domain for its own business, and domain squatting has never been the same since.

Domain squatting now has a wide and varied subculture. Here is a 2020 report from Unit 42 that goes into further detail, for example. The techniques include homograph attacks (using non-Roman character sets), combosquats (which add familiar keywords to a brand name to make the domain appear more legit), levelsquats (which bury the real name inside a very long string of subdomains, counting on browsers to truncate the display and make them more believable), and I am sure many more perfidious variations.
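To make those patterns concrete, here is a minimal sketch (mine, not from the Unit 42 report) of how a defender might enumerate and flag a couple of them for a brand worth monitoring; the brand and keyword list are placeholders:

```python
# Sketch: enumerate two common look-alike patterns for a hypothetical brand.
# "example" and the keyword list are placeholders.

BRAND = "example"
KEYWORDS = ["login", "support", "secure", "pay"]  # trust-inspiring combosquat bait

def combosquat_candidates(brand: str) -> list[str]:
    """Brand name combined with extra keywords, the classic combosquat pattern."""
    return [f"{brand}-{kw}.com" for kw in KEYWORDS] + [f"{kw}-{brand}.com" for kw in KEYWORDS]

def looks_like_homograph(domain: str) -> bool:
    """Punycode labels (xn--) often hide non-Roman look-alike characters."""
    return any(label.startswith("xn--") for label in domain.lower().split("."))

print(combosquat_candidates(BRAND)[:3])            # ['example-login.com', 'example-support.com', 'example-secure.com']
print(looks_like_homograph("xn--exmple-cua.com"))  # True -- could render as an accented look-alike
```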

A complicating factor is that we now have all kinds of top-level domains, like .xyz and .lawyer, to contend with, which only increases the threat space that bad actors can occupy for domain impersonation. Josh emailed me today and said, “I figured that with the creation of so many top-level domains the shenanigans around domain-name squatting would abate but it just created loads of new problems. For instance, some scammers pretending to be decrypt.co (my crypto news site) created a mirror site with a very similar name and used it for phishing. They periodically send out email to millions of people claiming to be decrypt and urge people to connect their crypto wallets to collect tokens. Emailing the host site and alerting them to the scam did no good.” Yeah, wild west indeed.

I was reminded of this story when I saw that yet another business had let its domain registration lapse; the domain was bought by tech consultants who are now trying to give it back to its rightful owner. Why do these things happen? One major reason: domain ownership ultimately relies on humans to pay attention and renew at the right times. (I just renewed a bunch of mine, which I had wisely set up years ago to expire on January 1.) Also, in big companies like McDonald’s there may be several different domain owners spread among various departments, especially if a company has been acquired or has created new subsidiaries.
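If you would rather not rely on memory, a short script can do the nagging for you. Here is a minimal sketch that assumes the third-party python-whois package; an RDAP lookup or your registrar’s API would work just as well, and the domain list is a placeholder:

```python
# Sketch: warn about domains that are about to lapse.
# Assumes `pip install python-whois`; the domain list is a placeholder.
from datetime import datetime, timedelta
import whois

MY_DOMAINS = ["example.com", "example.org"]
WARN_WINDOW = timedelta(days=60)

for domain in MY_DOMAINS:
    record = whois.whois(domain)
    expires = record.expiration_date
    if isinstance(expires, list):   # some registries return several dates
        expires = min(expires)
    if expires and expires - datetime.now() < WARN_WINDOW:
        print(f"renew soon: {domain} expires {expires:%Y-%m-%d}")
```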

Actually, there is a second reason: greed. Criminals have adopted these squatting techniques to lure victims in. Just a few days ago I thought I was buying some stamps from USPS.com, but I was brought to some other domain that looked like it. You can’t be too careful. (And the USPS doesn’t discount its stamps by 40%, which should have been a red flag.)

Book review: Micah Lee’s Hacks Leaks and Revelations

There has been a lot written about data leaks and the information contained therein, but few books tell you how to investigate them yourself. That is the subject of the recently published Hacks, Leaks and Revelations.

This is a unique, interesting, and informative book, written by Micah Lee, the director of information security for The Intercept, who has written numerous stories about leaked data over the years, including a dozen articles on the contents of the Snowden NSA files. What makes it unique is that Lee teaches you the skills and techniques he used to investigate these datasets, and readers can follow along and do their own analysis with this data and other collections, such as emails from the far-right group Oath Keepers, materials leaked from the Heritage Foundation, and chat logs from the Russian ransomware group Conti. This is a book for budding data journalists, as well as for infosec specialists who are trying to harden their data infrastructure and prevent future leaks.

Many of these databases can be found on DDoSecrets, the organization that arose from the ashes of WikiLeaks and where Lee is an adviser.

Lee’s book is also unique in that he starts off his journey with ways that readers can protect their own privacy, and that of potential data sources, as well as ways to verify that the data is authentic, something that even many experienced journalists might want to brush up on. “Because so much of source protection is beyond your control, it’s important to focus on the handful of things that aren’t.” This includes deleting records of interviews, cloud-based data, or local browsing history, for example. “You don’t want to end up being a pawn in someone else’s information warfare,” he cautions. He spends time explaining what not to publish and how to redact data, using his own experience with some very sensitive sources.

One of the interesting facts that I never spent much time thinking about before reading Lee’s book is that while it is illegal to break into a website and steal data, it is perfectly legal for anyone to make a copy of that data once it has been made public and to do their own investigation with it.

Another reason to read Lee’s book is that there is so much practical how-to information, explained in simple step-by-step terms that even computer neophytes can quickly follow. Each chapter has a series of exercises, split out by operating system, with directions. A good part of the book dives into the command-line interfaces of Windows, Mac, and Linux, and how to harness the power of these built-in tools.

Along the way you’ll learn Python scripting to automate the various analytical tasks and use some of the custom tools that he and his colleagues have made freely available. Automation — and the resulting data visualization — are both key, because the alternative is a very tedious line-by-line examination of the data. He uses searching the BlueLeaks data (a collection of documents from various law enforcement websites that document misconduct) for the keyword “antifa” as an example, making things very real. He also covers other tools, such as Signal, an encrypted messaging app, and BitTorrent, along with advice on using disk encryption tools and password managers. Lee explains how they work and how he used them in his own data explorations.
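To give a flavor of that kind of automation, here is a rough sketch of my own (not one of Lee’s actual tools) of what a keyword search across a downloaded document dump looks like in Python; the directory name and keyword are placeholders:

```python
# Sketch: grep-style keyword search across a directory of leaked text files.
# The directory and keyword are placeholders; real dumps need sturdier decoding.
from pathlib import Path

DUMP_DIR = Path("leak_sample")   # wherever the dataset was unpacked
KEYWORD = "antifa"

for path in DUMP_DIR.rglob("*"):
    if path.suffix.lower() not in {".txt", ".csv", ".json", ".html"}:
        continue
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        continue
    for lineno, line in enumerate(text.splitlines(), start=1):
        if KEYWORD in line.lower():
            print(f"{path}:{lineno}: {line.strip()[:120]}")
```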

One chapter goes into detail about how to read other people’s email, which is a popular activity with stolen data.

The book ends with a series of case studies taken from his own reporting, showing how he conducted his investigations, what code he wrote, and what he discovered. The cases include leaked neo-Nazi chat logs, the anti-vax misinformation group America’s Frontline Doctors, and videos leaked from the social media site Parler that were used during one of Trump’s impeachment trials. Do you detect a common thread here? These case studies show how hard data analysis is, but they also walk you through Lee’s processes and tools to illustrate their power.

Lee’s book is really the syllabus for a graduate-level course in data journalism, and should be a handy reference for beginners and more experienced readers. If you are a software developer, most of his advice and examples will be familiar. But if you are an ordinary computer user, you can quickly gain a lot of knowledge and see how one tool works with another to build an investigation. As Lee says, “I hope you’ll use your skills to discover and publish secret revelations, and to make a positive impact on the world while you’re at it.”

SiliconANGLE: The changing economics of open-source software

The world of open-source software is about to go through another tectonic change. But unlike earlier changes brought about by corporate acquisitions, this time it’s thanks to the growing series of tech layoffs. The layoffs will certainly change the balance of power between large and small software vendors, and between free and commercial software versions, and the role played by OSS in enterprise software applications could change.

In this post for SiliconANGLE, I talk about why these changes are important and what enterprise software managers should take away from the situation.