Book review: A Hacker’s Mind by Bruce Schneier

I have known Bruce Schneier for many years, and I met him most recently just after he gave one of the keynotes at this year’s RSA show. The keynote extends the thinking in his most recent book, A Hacker’s Mind, which he wrote last year and which was published this past winter. (I reviewed some of his earlier works in a blog for Avast here.)

Even if you are new to Schneier, not interested in coding, and aren’t all that technical, you should read his book because he sets out how hacking works in our everyday lives.

He chronicles how hacks pervade our society. You will hear about the Double Irish with a Dutch Sandwich (the scheme Google, Apple and others have used to hack the tax code and avoid paying US taxes), the exploits of the Pudding Guy (who hacked American Airlines’ frequent-flyer program by purchasing thousands of pudding cups to obtain elite status), and the moment in 1951 when the St. Louis Browns baseball team hacked the rules by hiring a 3’7″ batter. There are less celebrated hacks, such as when investment firm Goldman Sachs owned a quarter of the total US aluminum supply back in the 2010s to control its spot price. What was their hack? They moved the metal around several Chicago-area warehouses each day, because the spot price depends on the time the material is delivered. Clever, right?

Then there are numerous legislative and political hacks, such as the infamous voter literacy tests of the 1950s before the Civil Rights laws were passed. Schneier calls them “devilishly designed, selectively administered, and capriciously judged.”

“Our cognitive systems have also evolved over time,” he says, showing how they can be easily hacked, such as with agreements and contracts. This is because they can’t be made completely airtight, and we don’t really need that anyway: just the appearance of complete trust is usually enough for most purposes.

A good portion of his book concerns technology hacks, of course. He goes into detail about how Facebook’s and YouTube’s algorithms are geared toward polarizing viewers, and how the companies not only knew this but deliberately ignored the issue to optimize profits. The last chapters touch on AI, which he categorically says “will be used to hack us, and AI systems themselves will become the hackers,” finding vulnerabilities in various social, economic and political systems. He makes a case for a hacking governance system that should be put in place, something that isn’t yet on the radar but should be.

“The more you can incorporate fundamental security principles into your systems design, the more secure you will be from hacking. Hacking is a balancing act. On the one hand, it is an engine of innovation. On the other, it subverts systems.” The trick is figuring out how to tip that balance.

Book review: The Revenge List

I liked the conceit of this murder-mystery novel by Hannah Mary McKinnon, The Revenge List: the central character attends an anger-management support group and makes a list of people who have wronged her in the past and whom she should forgive. Trouble is, the list falls into the wrong hands, and the people on it start having grave accidents. The mystery is who is doing these dastardly deeds, and what it has to do with the character’s flaws, which are many. The action takes place in and around Portland, Maine, and the supporting cast is engaging and just quirky enough to sustain the plot points. The book makes you question your own attitude toward forgiveness and how we resolve issues with our past connections. The family dynamics are also very true to life, which adds to the novel’s credibility and complexity. Highly recommended.

SiliconANGLE: Infostealers get more lethal

The class of malware called infostealers continues to evolve into a more lethal threat. Infostealers are software that steals sensitive data from a victim’s computer: typically login details, browser cookies, saved credit cards and other financial information. Unfortunately, criminals continue to enhance this malware genre, and two new reports released this week document their latest efforts. I describe what is new and how to recognize this attack method in my latest post for SiliconANGLE.

SiliconANGLE: CIOs’ relationship with AI is complicated, but they have hopes for a promising future

Artificial intelligence — its value, risks and utility in enterprise scenarios — not surprisingly dominated the discussion at this week’s MIT CIO Symposium, one of the year’s biggest gatherings of senior information technology executives. In this post for SiliconANGLE, Paul Gillin and I review what some of the CIO panelists revealed about the state of their domains, and their relationship with AI tools.

SiliconANGLE: We need more breach transparency, but a lot of obstacles are in the way

The U.K.’s National Cyber Security Centre last week posted a joint blog with the Information Commissioner’s Office about the need for better cybersecurity breach transparency. They’re concerned about unreported incidents, in particular ransomware cases, which are getting more dangerous, more prevalent and more costly. The situation creates a vicious cycle: “If attacks are covered up, the criminals enjoy greater success, and more attacks take place,” they wrote in the post.

In this analysis for SiliconANGLE, I look at the obstacles standing in the way of better breach transparency and what can be done about them.

SiliconANGLE: AI-based chatbots can help improve customer support – if they’re done right

Most of us have been interacting with customer support agents for years. It can be a frustrating experience: oftentimes the agent knows less than we do about the product or service, and calls are dropped or bounced among agents. About two years ago, I had such a bad experience with AT&T Inc.’s customer support that I ended up cancelling my cell and internet service with the company.

But now there are artificial intelligence chatbots and chat programs that are supposed to make our lives better. With all the attention focused on ChatGPT and other AI-based chatbots, a new long-term research study has found that AI can indeed improve support, but only under carefully controlled circumstances. In this post for SiliconANGLE, I dive into what the researchers found and make some recommendations on how to deploy AI more effectively in customer support situations.

Invicti blog: Ask an MSSP about DAST for your web application security

When evaluating managed security service providers (MSSPs), companies should make sure that web application security is part of the offering – and that a quality DAST solution is on hand to provide regular and scalable security testing. SMBs should evaluate potential providers based on whether they offer modern solutions and services for dynamic application security testing (DAST), as I wrote for the Invicti blog this month.

SiliconANGLE: As cloud computing gets more complex, so does protecting it. Here’s how to make sense of the market

Whether companies keep their workloads in the cloud or repatriate them to on-premises or colocated servers, they still need to protect them, and the market for that protection is suddenly undergoing some major changes. Until the past year or so, cloud-native application protection platforms, or CNAPPs for short, were all the rage. Last year, I reviewed several of them for CSOonline here. But securing cloud assets will require a multi-pronged approach and careful analysis of the organization’s cloud infrastructure and data collections. Yes, different tools and tactics will be required. But the lessons learned from on-premises security resources will point the way toward what to do in the cloud. More of my analysis can be found in this piece for SiliconANGLE.

SiliconANGLE: The chief trust officer was once the next hot job on executive row. Not anymore.

We seem to be in a trust deficit these days. Breaches, especially among security tech companies, continue apace. Ransomware attacks have now spread into data-hostage events. The dark web is getting larger and darker, with enormous tranches of new private data readily available for sale and criminal abuse. And we have social media to thank for fueling the fires of outrage, letting us self-select the worldview of our social graph based on our own opinions.

In this story for SiliconANGLE, I discuss the decline of digital trust and tie it to a new ISACA survey and a new effort by the Linux Foundation to try to document and improve things.

The art of mathematical modeling

All this chatter about ChatGPT and large language models interests me, but from a slightly different perspective. You see, back in those pre-PC days when I was in grad school at Stanford, I was building mathematical models as part of getting my degree in Operations Research. You might not be familiar with this degree, but basically it involved applying various mathematical techniques to solving real-world problems. OR got its start trying to find German submarines and aircraft in WWII, and then became popular for all sorts of industrial and commercial applications after the war. To a newly minted math undergrad, the field had a lot of appeal, and at its heart was building the right model.

Model building may bring up all sorts of memories of putting together plastic replicas of cars and ships and planes that one would buy in hobby stores. But the math models were a lot less tangible and required some careful thinking about the equations you chose and the assumptions you made, especially about the initial data set that you would use to train the model.

Does this sound familiar? Yes, but then and now couldn’t be more different.

For my class, I recall the problems that we had to solve each week weren’t easy. One week we had to build a model to figure out which school in Palo Alto we would recommend closing, given declining enrollment across the district, a very touchy subject then and now. Another week we were looking at revising the standards for breast cancer screening: at what age and under what circumstances do you make these recommendations? These problems could take tens of hours to come up with a working (or not) model.

I spoke with Adam Borison, a former Stanford Engineering colleague who was involved in my math modeling class: “The problems we were addressing in the 1970s were dealing with novel situations and figuring out what to do rather than what we had to know. They were built around judgment, not around data. Tasks like forecasting natural gas prices. There was a lot of talk about how to structure and draw conclusions from Bayesian belief nets, which pre-dated the computing era. These techniques have been around for decades, but the big difference with today’s models is the huge increment in computing power and storage capacity that we have available. That is why today’s models are more data-heavy, taking advantage of heuristics.”
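The Bayesian belief nets Borison mentions boil down to chained applications of Bayes’ rule: start with a prior judgment, then revise it as evidence arrives. A minimal sketch in Python, with entirely hypothetical numbers (the gas-price scenario and probabilities are my invention for illustration, not his):

```python
# Toy Bayesian update of the sort belief nets chain together.
# Prior judgment: will natural gas prices rise? Revise after
# observing a cold-winter forecast. All numbers are hypothetical.
p_rise = 0.3                 # prior P(prices rise)
p_cold_given_rise = 0.8      # P(cold forecast | prices rise)
p_cold_given_no_rise = 0.4   # P(cold forecast | prices don't rise)

# Total probability of the evidence
p_cold = p_cold_given_rise * p_rise + p_cold_given_no_rise * (1 - p_rise)

# Bayes' rule: P(rise | cold) = P(cold | rise) * P(rise) / P(cold)
p_rise_given_cold = p_cold_given_rise * p_rise / p_cold
print(round(p_rise_given_cold, 3))  # prints 0.462
```

The cold-winter forecast raises the judged probability of a price rise from 0.3 to about 0.46; a belief net simply repeats this kind of update across many linked variables.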

Things started to change in the 1990s when Microsoft Excel introduced its Solver feature, which allowed you to run linear programming models. This was a big step, because before that we had to write the code ourselves, a painful and specialized process that was the basic foundation of my grad school classes. (On the Stanford OR faculty when I was there were George Dantzig and Gerald Lieberman, the two people who invented the basic techniques.) My first LP models were written on punched cards, which made them difficult to debug and change; a single typo would prevent a program from running. Once Excel became a basic building block of modeling, other tools followed, such as Tableau, which was designed from the ground up for data analysis and visualization. That was another big step, because sometimes the visualizations showed flaws in your model or suggested different approaches.
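For readers who never met Solver: a linear program is just a linear objective plus linear constraints, and what once took punched cards now takes a few lines. A minimal sketch using SciPy’s linprog, with hypothetical numbers chosen purely for illustration:

```python
# A tiny linear program of the kind Excel's Solver handles:
# maximize 3x + 5y subject to x + 2y <= 14, 3x - y >= 0, x - y <= 2.
from scipy.optimize import linprog

# linprog minimizes, so negate the coefficients to maximize 3x + 5y
c = [-3, -5]

# Constraints in A_ub @ [x, y] <= b_ub form
A_ub = [
    [1, 2],    # x + 2y <= 14
    [-3, 1],   # 3x - y >= 0  rewritten as  -3x + y <= 0
    [1, -1],   # x - y <= 2
]
b_ub = [14, 0, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimum at x=6, y=4 with objective value 38
```

The comments carry the model; the solver does the arithmetic — which is roughly the shift Solver brought to spreadsheet users in the 1990s.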

Another step forward for modeling was the era of big data; one example is the Kaggle data science contests. These have been around for more than a decade and have done a lot to stimulate interest in the modeling field. Participants are challenged to build models for a variety of commercial and social causes, such as research toward Parkinson’s treatments. Call it the gamification of modeling, something that was unthinkable back in the 1970s.

But now we have the rise of the chatbots, which have put math models front and center, for good and for bad. Borison and I are both somewhat hesitant about these kinds of models, because they aren’t necessarily about the analysis of data. Back in my Stanford days, we could fit all of our training data on a single sheet of paper, and that is probably being generous. With cloud storage, you can have a gazillion bytes of data that a model can consume in a matter of milliseconds, but getting a feel for that amount of data is tough to do. “Even using ChatGPT, you still have to develop engineering principles for your model,” says Borison. “And that is a very hard problem. The chatbots seem particularly well suited to the modern fail-fast ethos, where a modeler tries to quickly simulate something and then revises it many times.” But that doesn’t mean you have to be good at analysis; you just have to make educated guesses or craft the right prompts. A class in the “art of chatbot prompt crafting” doesn’t quite have the same ring to it.

Who knows where we will end up with these latest models? It is certainly a far cry from finding the German subs in the North Atlantic, or optimizing the shortest path for a delivery route, or the other things that OR folks did back in the day.