Time to move away from Twitter

Yes, I know what it is now known as. When the Muskification began two years ago, I wrote that this was the beginning of its demise. I said then, “Troll Tweeting by your CEO is not a way to set corporate (or national) policy.” How true, even now.

I haven’t posted there since. I still have my account, mainly because I don’t want anyone else with my name to grab it. But I have focused my content promotion efforts over on LinkedIn. This week I give a more coherent reason why you might do the same.

I got a chance to catch up with Sam Whitmore in this short video podcast, where we debate whether PR pros should follow my example. Sam and I go back nearly 40 years, to when we both worked as reporters and editorial managers at PC Week (which has since been unsatisfactorily renamed too). Sam takes the position that PR folks need to stick with Twitter for historical reasons, and because that is where they can get the best coverage results for their clients and keep track of influential press people. I argue that the site is a declining influence, and toxic to anyone’s psyche, let alone their clients’ brand equity.

In January 2023, I wrote a series of suggestions on Twitter’s future, including how hard it would be to do content moderation (well, hard if they actually did it, which they apparently don’t) and how little operational transparency the social media operators have.

Is Sam right, or am I? You be the judge, and feel free to comment here or on LinkedIn if you’d like.

CSOonline: How to pick the best endpoint detection and response solution

Endpoint detection and response (EDR) security software has grown in popularity and effectiveness as it allows security teams to quickly detect and respond to a variety of threats. EDR software offers visibility into endpoint activity in real time, continuously detecting and responding to attacker activity on endpoint devices including mobile phones, workstations, laptops, and servers.
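To make the idea a bit more concrete, here is a toy sketch of the kind of telemetry check an EDR agent performs: enumerate running processes and flag anything that matches a simple detection rule. It assumes the third-party psutil library, and the rule names are my own illustrative inventions; real EDR products rely on kernel-level sensors, behavioral analytics, and far richer rule sets.

```python
# Toy illustration of EDR-style endpoint telemetry: scan running
# processes and flag any that match simple detection rules.
# Assumes the third-party psutil library (pip install psutil).
import psutil

# Hypothetical rules: names and command-line fragments often associated
# with credential theft or living-off-the-land attacks.
SUSPICIOUS_NAMES = {"mimikatz.exe", "psexec.exe"}
SUSPICIOUS_CMDLINE = ["-enc", "Invoke-Expression", "certutil -urlcache"]

def scan_processes() -> list[dict]:
    findings = []
    for proc in psutil.process_iter(["pid", "name", "cmdline"]):
        info = proc.info
        name = (info["name"] or "").lower()
        cmdline = " ".join(info["cmdline"] or [])
        if name in SUSPICIOUS_NAMES or any(
            frag.lower() in cmdline.lower() for frag in SUSPICIOUS_CMDLINE
        ):
            findings.append({"pid": info["pid"], "name": name, "cmdline": cmdline})
    return findings

if __name__ == "__main__":
    for hit in scan_processes():
        print(f"ALERT: pid={hit['pid']} name={hit['name']} cmdline={hit['cmdline']}")
```

A real product does this continuously, streams the events to a central console, and can respond automatically, which is the "response" half of EDR.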

In this buyer’s guide for CSOonline, I explain some of the benefits, trends, and questions to ask before evaluating any products. I also briefly touch upon six of the more popular tools. One of them, Palo Alto Networks’ Cortex XDR, has a dashboard that looks like the screencap below.


How to succeed at social media in this age of outrage

Samuel Stroud, the British blogger behind GiraffeSocial, has posted a column taking a closer look at how TikTok’s algorithm works — at least how he thinks it works. But that isn’t the point of the post for you, dear reader: he has some good advice on how to improve your own social media content, regardless of where it lands and how it is constructed.

Before I get to his suggestions, I should first turn to why I used the word outrage in my hed. This is because a new Tulane University study shows that people are more likely to interact with online content that challenges their views, rather than agrees with them. In other words, they are driven by outrage. This is especially true when it comes to political engagement, which often stems from anger, and fuels a vicious cycle. I realize that this isn’t news to many of you. But do keep this in mind as you read through some of Stroud’s suggestions.

You might still be using Twitter, for all I know, and are about to witness yet another trolling of the service as it turns all user blocks into mutes, which is Yet Another Reason I (continue to) steer clear of the thing. That, and its troller-in-chief. So now is a good time to review your social strategy and make sure all your content creators and social managers are up on the latest research.

Stroud points out several factors to keep track of:

  • Likes, shares and comments: the more engagement a post gets from others, the more it is promoted. This also means you should respond to the comments.
  • Watch time: videos that are watched all the way through get boosted.
  • New followers: posts that generate new follower sign-ups also get boosted.
  • More meta is betta: captions, keywords, hashtags, custom thumbnails — all of these help increase engagement, which means paying attention to these “housekeeping” matters almost as much as the actual content itself.
  • Your history matters: previous interactions with a creator, a type of content, or other trackable habits shape what you are shown.

Now, most of this is common sense, and perhaps something you already knew if you have been using any social media platform anytime over the last couple of decades. But it still is nice to have it all packaged neatly in one place.
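None of us outside these companies knows the real weights, but as a back-of-the-envelope illustration of how the factors above might combine, here is a toy ranking score in Python. The weights and factor names are entirely invented for the sketch; no platform publishes its actual formula.

```python
# Toy illustration of how engagement signals might combine into a
# ranking score. Weights and factor names are invented; no platform
# publishes its real formula.
from dataclasses import dataclass

@dataclass
class PostSignals:
    likes: int
    shares: int
    comments: int
    watch_completion: float   # fraction of the video watched, 0.0-1.0
    new_followers: int
    has_caption_keywords: bool
    viewer_affinity: float    # 0.0-1.0, prior history with this creator

def toy_rank_score(p: PostSignals) -> float:
    score = (
        1.0 * p.likes
        + 3.0 * p.shares              # shares tend to count more than likes
        + 2.0 * p.comments
        + 50.0 * p.watch_completion   # finishing the video is a strong signal
        + 10.0 * p.new_followers
    )
    if p.has_caption_keywords:        # the "housekeeping" metadata bonus
        score *= 1.1
    return score * (0.5 + p.viewer_affinity)  # history with the creator scales it all

print(toy_rank_score(PostSignals(120, 15, 30, 0.9, 4, True, 0.8)))
```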

But here is the thing. The trick with social media success is balancing verisimilitude with outrage. It is a delicate act, particularly if you are trying to promote your business and your brand. And if you are trying to communicate solid information, and not just fuel the outrage fires, then what Stroud mentions should become second nature to your posting strategy.

Time to do an audio audit

I am not a tin-foil-hat kind of person. But last week, I replaced my voicemail greeting (recorded in my own voice) with a synthetic actor’s voice asking callers to leave a message. I will explain the reasoning behind this, and you can decide whether I should now accessorize my future outfits with the hat.

Last month, TechCrunch ran a story about the perils of audio deepfakes and mentioned how the CEO of Wiz, an Israeli cybersecurity firm that I have both visited and covered in the past, had to deal with a deepfake phone call that was sent out to many of its employees. It sounded like the CEO’s voice on the call. Almost. Fortunately, enough people at Wiz were paying attention and realized it was a scam. The call was assembled from snippets of a recorded conference session, and even the most judicious audio editing still can’t be perfect: people at Wiz caught the unnaturalness of the assemblage. The tell had nothing to do with AI and everything to do with human nature. His conference speaking voice is somewhat strained, because he is uncomfortable in front of an audience, and it isn’t his conversational voice.

But it is just a matter of time before the AI overlords figure this stuff out.

AI-based voice impersonations — or deepfakes or whatever you want to call them — have been around for some time. I wrote about this technology for Avast’s blog in 2022 here. The piece mentioned impersonated phone calls from the mayor of Kyiv to several other European politicians. This deepfake timeline begins in 2017 but only goes up to the summer of 2021. Since then, there have been numerous advances in the technology. For example, a team of Microsoft researchers has developed a text-to-speech program called VALL-E that can take a three-second audio sample of your voice and use it in an interactive conversation.

And another research report, written earlier this summer, “involves the automation of phone scams using LLM-powered voice-to-text and text-to-voice systems. Attackers can now craft sophisticated, personalized scams with minimal human oversight, posing an unprecedented challenge to traditional defenses.” One of the paper’s authors, Yisroel Mirsky, wrote that to me recently when I asked about the topic. He posits a “scam automation loop” where this is possible, and his paper shows several ways the guardrails of conversational AI can be easily circumvented, as shown here. I visited his Ben Gurion University lab in Israel back in 2022. There I got to witness a real-time deepfake audio generator. It needed just a sample of a few seconds of my voice, and then I was having a conversation with a synthetic replica of myself. Eerie and creepy, to be sure.

So now you see my paranoia about my voicemail greeting, which is a bit longer than a few seconds. It might be time to do an overall “audio audit,” for lack of a better term, as just another preventive step, especially for your corporate officers.

Still, you might argue that there is quite a lot of recorded audio of my voice that is available online, given that I am a professional speaker and podcaster. Anyone with even poor searching skills — let alone AI — can find copious samples where I drone on about something to do with technology for hours. So why get all hot and bothered about my voicemail greeting?

Mirsky said to me, “I don’t believe any vendors are leading the pack in terms of robust defenses against these kinds of LLM-driven threats. Many are still focusing on detecting deepfake or audio defects, which, in my opinion, is increasingly a losing battle as the generative AI models improve.” So maybe changing my voicemail greeting is like putting one’s finger in a very leaky dike. Or perhaps it is a reminder that we need alternative strategies (dare I say a better CAPTCHA? He has one such proposal in another paper here.) So maybe a change of headgear is called for after all.

CSOonline: Top 5 security mistakes software developers make

Creating and enforcing the best security practices for application development teams isn’t easy. Software developers don’t necessarily write their code with these practices in mind, and as the appdev landscape becomes more complex, securing apps that span cloud computing, containers, and API connections becomes more of a challenge. It is a big problem: security flaws were found in 80% of the applications scanned by Veracode in a recent analysis.

As attacks continue to plague cybersecurity leaders, I compiled a list of five common mistakes software developers make and how they can be prevented, in a piece for CSOonline.

Me and the mainframe

I recently wrote a sponsored blog for VirtualZ Computing, a startup involved in innovative mainframe software. As I was writing the post, I was thinking about the various points in my professional life where I came face-to-CPU with IBM’s Big Iron, as it once was called. (For what passes for comic relief, check out this series of videos about selling mainframes.)

My last job working for a big IT shop came about in the summer of 1984, when I moved across the country to LA to work for an insurance company. The company was a huge customer of IBM mainframes and was just getting into buying PCs for its employees, including mainframe developers and ordinary employees (no one called them end users back then) who wanted to run their own spreadsheets and create their own documents. There were hundreds of people working on and around the mainframe, which was housed in its own inner sanctum of raised-floor rooms. I wrote about this job here; it was interesting because it was the last time I worked in IT before switching careers to tech journalism.

Back in 1984, if I wanted to write a program, I first had to type out a deck of punch cards. This was done at a special station the size of a piece of office furniture. Each card could contain the instructions for a single line of code. If you made a mistake you had to toss the card and start anew. When you had your deck, you would feed it into a specialized card reader that transferred the program to the mainframe and created a “batch job” – meaning my program would run sometime during the middle of the night. I would get my output the next morning, if I was lucky. If I made any typing errors on my cards, the printout would be a cryptic set of error messages, and I would have to fix the errors and try again the next night. Finding that meager output was akin to getting a college rejection letter in the mail – the acceptances came in thick envelopes. Am I dating myself enough here?

Today’s developers are probably laughing at this situation. They have coding environments that immediately flag syntax errors, tools that dynamically stop embedded malware from being run, and all sorts of other fancy tricks. If they have to wait more than 10 milliseconds for this information, they complain about how slow their platform is. Code is put into production in a matter of moments, rather than the months we had to endure back in the day.

Even though I roamed around the three downtown office towers that housed our company’s workers, I don’t remember ever setting foot in our Palais d’mainframe. However, over the years I have been to my share of data centers around the world. One visit involved turning off a mainframe for the Edison Electric Institute in Washington DC in 1993; I wrote about the experience and how Novell NetWare-based apps replaced many of its functions. Another involved moving a data center from a basement (which would periodically flood) into a purpose-built building next door, in 2007. That data center housed souped-up microprocessor-based servers, the beginnings of the massive CPU collections used in today’s z Series mainframes, BTW.

Mainframes had all sorts of IBM gear that required care and feeding, and lots of knowledge that I used to have at my fingertips: I knew my way around IBM’s proprietary Systems Network Architecture protocols and its Token Ring networking, for example. And let’s not forget that they ran programs written in COBOL and connected everything together with proprietary bus-and-tag cables. When I was making the transition to PC Week in the 1980s, IBM was making the (eventually failed) transition to peer-to-peer mainframe networking with a bunch of proprietary products. Are you seeing a trend here?

Speaking of the IBM PC, it was the first product from IBM built with parts made by others, rather than its own stuff. That was a good decision, and it was successful because you could add a graphics card (the first PCs just did text, and monochrome at that), extra memory, or a modem. Or an adapter card that connected to another cabling scheme (coax) and turned the PC into a mainframe terminal. Yes, this was before wireless networks became useful, and you can see why.

Now IBM mainframes — there are some 10,000 of them still in the wild — come with the ability to run Linux and operate across TCP/IP networks, and about a third of them run Linux as their main OS. This is akin to having one foot in the world of distributed cloud computing and one foot back in the dinosaur era. So let’s talk about my client VirtualZ and where they come into this picture.

They created software – mainframe software – that enables distributed applications to access mainframe data sets, using OpenAPI protocols and database connectors. The data stays put on the mainframe but is available to applications that we know and love, such as Salesforce and Tableau. It is a terrific idea, and like the original IBM PC it supports open systems. This makes the mainframe just another cloud-connected computer, and shows that Big Iron is still an exciting and powerful way to go.

Until VirtualZ came along, developers who wanted access to mainframe data had to go through all sorts of contortions to get it — much like what we had to do in the 1980s and 1990s, for that matter. Companies like Snowflake and Fivetran built very successful businesses out of doing these “extract, transform and load” operations into what are now called data warehouses. VirtualZ eliminates these steps, and your data is available in real time, because it never leaves the cozy comfort of the mainframe, with all of its minions and backups and redundant hardware. You don’t have to build a separate warehouse in the cloud, because your mainframe is now cloud-accessible all the time.

I think VirtualZ’s software will usher in a new mainframe era, one that puts us even further from the punch cards. It shows the power and persistence of the mainframe: IBM had the right computer for today’s enterprise data, just not the right context when it was invented. For Big Iron to succeed in today’s digital world, it needs a lot of help from little iron.

The Cloud-Ready Mainframe: Extending Your Data’s Reach and Impact

(This post is sponsored by VirtualZ Computing)

Some of the largest enterprises are finding new uses for their mainframes. And instead of competing with cloud and distributed computing, the mainframe has become a complementary asset that adds new productivity and a level of cost-effective scale to existing data and applications. 

While the cloud does quite well at elastically scaling up resources as application and data demands increase, the mainframe is purpose-built for the largest-scale digital applications. More importantly, it has kept pace as these demands have mushroomed over its 60-year reign, which is why so many large enterprises continue to use them. Having them as part of a distributed enterprise application portfolio can be a significant and savvy use case, and a reason for their role and importance to keep growing.

Estimates suggest that there are about 10,000 mainframes in use today, which may not seem like a lot except that they can be found across the board in more than two-thirds of the Fortune 500. In the past, they used proprietary protocols such as Systems Network Architecture, had applications written in now-obsolete coding languages such as COBOL, and ran on custom CPU hardware. Those days are behind us: the latest mainframes run Linux and TCP/IP across hundreds of multi-core microprocessors.

But even speaking cloud-friendly Linux and TCP/IP doesn’t remove two main problems for mainframe-based data. First, many mainframe COBOL apps are islands unto themselves, isolated from the end-user Java experience, coding pipelines, and programming tools. Breaking this isolation usually means an expensive effort to convert and audit the code.

A second issue has to do with data lakes and data warehouses. These have become popular ways for businesses to spot trends quickly and adjust IT solutions as their customers’ data needs evolve. But the underlying applications typically require near real-time access to existing mainframe data, such as financial transactions, sales and inventory levels, or airline reservations. At the core of any lake or warehouse is a series of “extract, transform and load” operations that move data back and forth between the mainframe and the cloud. These efforts only capture data at a particular moment in time, and they require custom programming to accomplish.

What was needed was an additional step to make mainframes easier for IT managers to integrate with other cloud and distributed computing resources, and that means a new set of software tools. The first step came from initiatives such as IBM’s z/OS Connect, which let distributed applications reach mainframe data. But it continued the custom-programming mindset and didn’t really give distributed applications direct access.

To fully realize the vision of mainframe data as equal cloud nodes required a major makeover, thanks to companies such as VirtualZ Computing. They latched on to the OpenAPI effort, which was previously part of the cloud and distributed world. Using this protocol, they created connectors that made it easier for vendors to access real-time data and integrate with a variety of distributed data products, such as MuleSoft, Tableau, TIBCO, Dell Boomi, Microsoft Power BI, Snowflake and Salesforce. Instead of complex, single-use data transformations, VirtualZ enables real-time read and write access to business applications. This means the mainframe can now become a full-fledged and efficient cloud computer. 
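To make this concrete, here is a minimal sketch of what OpenAPI-style, real-time access to a mainframe data set might look like from a distributed application. The host, path, data set name, and field names are hypothetical illustrations, not VirtualZ’s actual API.

```python
# Minimal sketch: a distributed app reading mainframe records over a
# hypothetical OpenAPI-described REST endpoint, with no ETL step.
# Host, path, dataset name, and token are illustrative only.
import requests

BASE_URL = "https://mainframe.example.com/api/v1"
TOKEN = "replace-with-a-real-credential"

def fetch_inventory(region: str) -> list[dict]:
    """Read current inventory records directly from the mainframe."""
    resp = requests.get(
        f"{BASE_URL}/datasets/INVENTORY.MASTER/records",
        params={"region": region},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["records"]

if __name__ == "__main__":
    for rec in fetch_inventory("midwest")[:5]:
        print(rec)
```

The point of the sketch is the shape of the interaction: the data never leaves the mainframe, and the consuming application treats it like any other cloud API.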

VirtualZ CEO Jeanne Glass says, “Because data stays securely and safely on the mainframe, it is a single source of truth for the customer and still leverages existing mainframe security protocols.” There isn’t any need to convert COBOL code, and no need to do any cumbersome data transformations and extractions.

The net effect is an overall cost reduction since an enterprise isn’t paying for expensive high-resource cloud instances. It makes the business operation more agile, since data is still located in one place and is available at the moment it is needed for a particular application. These uses extend the effective life of a mainframe without having to go through any costly data or process conversions, and do so while reducing risk and complexity. These uses also help solve complex data access and report migration challenges efficiently and at scale, which is key for organizations transitioning to hybrid cloud architectures. And the ultimate result is that one of these hybrid architectures includes the mainframe itself.

Distinguishing between news and propaganda is getting harder to do

Social media personalization has turned the public sphere into an insane asylum, where every person can have their own reality. So said Maria Ressa recently, describing the results of a report from a group of data scientists about the US information ecosystem. The authors are from The Nerve, a group she founded.

I wrote about Ressa when she won the Nobel Peace Prize back in 2021 for her work running the Philippine online news site Rappler. She continues to innovate and afflict the comfortable, as the saying goes. She spoke earlier this month at an event at Columbia University where she covered the report’s findings. The irony of the location wasn’t lost on me: this is the same place where students camped out, driven by various misinformation campaigns.

One of the more interesting elements of the report is a new seven-layer model (no, not that OSI one) that tracks how the online world manipulates us. It starts with social media incentives, which are designed around promoting more outrage and less actual news. This in turn fuels real-world violence, which is then amplified by the efforts of authoritarian-run nations that target Americans and polarize the public sphere even further. The next layer turns info ops into info warfare, feeding more outrage and conflict. The final layer is our elections, aided by the lack of real news and the absence of any general agreement on facts.

Their report is a chilling account of the state of things today, to be sure. And thanks to fewer “trust and safety” staff watching the feeds, greater use of AI in online searches by Google and Microsoft, and Facebook truncating actual news in its social feeds and as a result referring less traffic to online news sites, we have a mess on our hands. News media now shares a shrinking piece of the attention pie with independent creators. The result is that “Americans will have fewer facts to go by, less news on their social media feeds, and more outrage, fear, and hate.” This week it has reached a fever pitch, and I wish I could just turn it all off.

The report focuses on three issues that have divided us, both generationally and politically: immigration, abortion, and the Israel/Hamas war. It takes a very data-driven approach. For example, #FreePalestine hashtag views on TikTok outnumber #StandWithIsrael views by 446M to 16M, and on Facebook and Twitter the ratios are 18-to-1 and 32-to-1, respectively. The measurement periods for each network vary, but you get the point.

The report has several conclusions. First, personalized social media content has formed echo chambers, fed by hyper-partisan sources, that blur news and propaganda. Journalism and source vetting are becoming rarer, and local TV news is being remade as it competes with cable outrage channels. As more of our youth engage further with social media, they become more vulnerable to purpose-fed disinformation and manipulation, and less able to distinguish between news and propaganda. And this generational divide continues to widen as the years pass.

Remember when the saying went, if you aren’t paying for the service, you are the product? That seems so naïve now. Social media is now a tool of geopolitics, and gone are the trust and safety teams that once tried to filter the most egregious posts. And as more websites deploy AI-based chatbots, you don’t even know if you are talking to humans. This just continues the worsening of internet platforms that Cory Doctorow wrote about almost two years ago (he used a more colorful term).

In her address to the Nobel committee back in 2021, Ressa said, “Without facts, you can’t have truth. Without truth, you can’t have trust. Without trust, we have no shared reality and no democracy.”

Book review: GenAI for Dummies by Pam Baker

Pam Baker has written a very useful resource for AI beginners and experts alike. Don’t let the “Dummies” title fool you into thinking otherwise. This is also a book that is hard to get your arms around – in that respect it mirrors what GenAI itself is like. Think of it as a practical tutorial on how to incorporate GenAI into your working life to make you a more productive and potent human. It is also not a book that you can read in some linear front-to-back sense: there are far too many tips, tricks, strategies, and things to think about as you move through your AI journey. But it is a book that is absolutely essential, especially if you have been frustrated trying to learn how to better use AI.

Underlying it all is Baker’s understanding of the winning formula for using GenAI: the output from the computer sounds like a human, but to be really effective, the human must think like a machine and tell GenAI what it wants with better prompt engineering. (She spends an entire chapter on that subject, with lots of practical suggestions that combine the right mix of clarity, context and creativity. You will find there is a lot more depth to this than you think.) “You must provide the vision, the passion, and the impetus in your prompts,” Baker writes. Part of that exploration is understanding how best to collaborate with GenAI. To that end, she recommends starting with a human team working together as moderators in crafting prompts and refining the results from the GenAI tool. “The more information the AI has, the more tailored and sophisticated the outputs will be,” she writes.
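As a small illustration of the clarity-context-creativity mix Baker describes, here is what a structured prompt might look like when sent through the OpenAI Python client. The model name, role framing, and prompt wording are my own illustrative choices, not examples taken from the book.

```python
# Illustrative only: a structured prompt that supplies vision, context,
# and constraints rather than a bare question. Model name and wording
# are my own assumptions, not taken from Baker's book.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = """You are an editor for a B2B technology newsletter.
Context: the audience is IT managers evaluating endpoint security tools.
Task: draft a three-sentence summary of why watch time matters for video posts.
Constraints: plain language, no hype, cite no statistics you are not given."""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```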

Baker used this strategy to create this very book, which was the first such effort for its publisher, Wiley. She says it took about half the time to write, compared with the other technology books she has written. This gives the book a certain verisimilitude and street cred. That doesn’t mean ripping the output and setting it in type: that would have been a disaster. Instead, she used AI to hone her research and find sources, then went to those citations to find out whether they really existed, adding to her own knowledge along the way. “It really sped up the research I needed to do in the early drafts,” she told me. “I still used it to polish the text in my voice. And you still need to draft in chunks and be strategic about what you share with the models that have a public internet connection, because I didn’t want my book to be incorporated into some model.” All of this is to say that you should use AI to figure out what you do best, which will narrow down the most appropriate tools to eliminate the more tedious parts.

Baker makes the point that instead of wasting effort on trying to use GenAI to automate jobs and cut costs, we should use it to create rather than eliminate. It is good advice, and her book is chock full of other suggestions on how you can find the sweet spot for your own creative expressions. She has an intriguing section on how to lie to the model to help improve your own productivity, what she calls a “programming correction.” The flip side of this is also important, and she has several suggestions on how to prevent models from generating false information.

She catalogs the various GenAI vendors and their tools by how they craft text, audio, and visual outputs, and then summarizes several popular uses, such as generating photorealistic artworks from text descriptions, some of which she has included in this book. She also explodes several AI myths, such as the ideas that AI will take over the world or lead to massive unemployment. She has several recommendations on how to stay on top of AI developments, including continuously upskilling your knowledge and tools, becoming more agile to embrace changes in the field, having the right infrastructure in place to run the models, and keeping on top of the ethical considerations for its use.

By way of context, I have known Baker for decades. We tried to figure out when we first began working together, and both our memories failed us. Over time, one of us has worked for the other on numerous projects, websites, and publications. She is an instructor for LinkedIn Learning and has written numerous books, including another “Dummies” book on ChatGPT.

Behind the scenes at my local board of elections

If you have concerns about whether our elections will be free and fair, I suggest you take some time to visit your local elections board, see for yourself how they operate, ask your questions, and air your concerns. That is what I did, and I will tell you about my field trip in a moment. I came away thinking the folks who staff this office are the kind of public servants who deserve our respect for doing a very difficult job, and doing it with humor, grace, and a professionalism that usually goes unrecognized. Instead, these folks are vilified and targeted by conspiracies that have no place in our society.

First, some background. I wrote in August 2020 about the various election security technologies being planned for the 2020 election here, and followed up with another blog in December 2020 reporting that those elections were carried out successfully and accurately, along with another blog last August about further election security insights gleaned from the Black Hat trade show.

A few weeks ago, I was attending a local infosec conference in town and got to hear Eric Fey, who is one of the directors of the St. Louis County election board. He spoke about ways they are securing the upcoming election. The county is the largest by population in the state, home to close to a million people.

He offered to give anyone at the conference a tour of his offices to see firsthand how they work and what they do to run a safe, secure and accurate election.

So naturally I took him up on it, and we spent an hour walking around the office while he answered my numerous questions. Now, he is a very busy man, especially this time of year, but I was impressed that a) he made good on his offer, and b) he was so generous with his time with just an interested citizen who didn’t even live in his jurisdiction. (I live in nearby St. Louis City, which has its own government and elections board.)

There are about fifty people in the elections board offices, split evenly between Democrats and Republicans. Wait, what? You must declare your affiliation? Yes. That is the way the Missouri elections boards are run. Not every county is big enough to have an elections board: some of the smaller counties have a single county clerk running things. And Fey is the Democratic director. He introduced me to his Republican counterpart.

Part of the “fairness” aspect of our elections is that both parties must collaborate on how they are run, how the votes are tabulated, and how ballots are processed, all within the various and ever-changing election laws of each state. We went into the tabulation room, which wasn’t being used. There were about a dozen computers that would be fired up a few days before the election, when the staff are allowed to start tabulating the absentee and mail ballots. These computers run special software from Hart InterCivic, one of the election providers that the state has approved, and they are never connected to the internet. Okay, but what about the results? Fey says, “We use brand-new USBs for a single transaction. We generate a report from the tabulation software and then load that report on the USB. That USB is then taken to an internet-connected computer and the results are uploaded. That particular USB is then never used again in the tabulation room.”

Speaking of which, when the room is filled with workers, the door is secured by two digital locks and must be opened in coordination. Think of how nuclear missile silos are manned: in this case though, as you can see in the above photo, a Democrat must enter their passcode on their lock, and a Republican must enter their different passcode on their lock.

The Hart PCs have a hardware MFA key, are password protected, and have separate passwords for the two parties. What happens when they need new software? The county must drive them, in one of its own vehicles, to Hart’s Austin offices, where they are updated with both parties present at all times. This establishes a chain of custody and ensures the machines aren’t tampered with.

The elections board office is attached to a huge warehouse filled to the brim with several items: the voting machines that will be deployed to each polling place, of course, and the tablets used by poll workers (as shown here) to scan voters’ IDs (typically driver’s licenses) and identify which ballot they need to use. Those ballots are printed on demand, which is a good thing because that process eliminates a lot of the human error that in the past resulted in voters getting the wrong ballots. And loads of paper: the board is required to keep the last election’s ballots stored there. There are also commercial batteries and spare parts for all the hardware, because on election day staff travel around to keep everything up and running. Why batteries? In case of a power failure at a polling place. Don’t laugh – it has happened.

Getting the right combination of polling places is more art than science, because the county has limited control over private buildings. One YMCA decided it didn’t want to be a polling place this year, and Fey’s staff found a nearby elementary school instead. Public buildings can’t decline their selection.

One thing Fey mentioned that I hadn’t thought about is how complex our ballots typically are. We vote for dozens of down-ballot races, propositions, and the like. In many countries, voters are just picking one or two candidates. We have a lot of democracy to deal with, and we shouldn’t take it for granted.

So how about ensuring that everyone who votes is legally entitled to vote? They have this covered, but basically it boils down to checking a new registration against a series of federal and other databases that indicate whether someone is a citizen, whether they live where they say they live, whether they are a felon, and whether they are deceased. These various checks convinced me that there aren’t groups of people who are trying to cast illegal votes, or bad actors who are harvesting dead voters. Fey and I spent some time going through potential edge cases and I was impressed that he has this covered. After all, he has been doing this for years and knows stuff. There have been instances where green card holders registered by mistake (they are allowed to vote in some Maryland and California local elections, but not here in Missouri) and then called the elections board to remove themselves from the voting rolls. They realize that a false registration can get them imprisoned or deported, so the stakes are high.

Let’s talk for a minute about accuracy. How are the votes tabulated? There are several ways. In Missouri, everyone votes using paper ballots. Processing them isn’t typically a problem, because as I said they are freshly printed at the polling place and then immediately scanned in. This is how we can report our results within an hour of the polls closing. The ballots are collected and bagged, along with a cell phone to track their location, and then a pair of drivers (one D, one R) heads back to the office. Fey said there was one case where a car was in an accident, and the central war room tracking them called before the drivers had a chance to dial 911. They take their chain of custody seriously on election night.

If you opt for mail-in ballots, though, ballot quality becomes an issue. Out of the hundreds of thousands of ballots the county office received in 2020, about four thousand or so looked like someone had tried to light them on fire. Each of these crispy ballots had to be copied onto new paper forms so they could be scanned. Why so many? Well, it wasn’t some bizarre protest — it turns out that many folks were microwaving their ballots because of Covid and sanitation worries. It was just another day of challenges for the elections board, but they took it in stride.

The paper ballots are then put through a series of audits. First, the actual number of ballots is counted by machine to make sure the totals match up. They had one ballot that was marked with two votes, with one crossed out, so the team located the ballot, saw that the voter had changed their mind, and corrected the totals. That is the level of detail the elections board brings to the final count. They also pick random groups of ballots to ensure that the votes match what was recorded.

As you can see, they do their job, and I think they do it very well. If you are thinking about your own field trip, ballotpedia.org is a great resource for more details about how your state runs its elections, where and how to vote, and contacts at your local election agency.