The changing world of the engineer c.1900

I have been reading David McCullough’s books on the Wright brothers and the building of the Brooklyn Bridge. Both give a very vivid picture of what the life of an engineer was like more than a century ago, a life very different from the one we know today. What fascinates me is how both the Wright brothers and the Roeblings (first John, then his son Washington, the engineers behind the bridge) built things back in the day. Let’s look into their toolsets, their work habits, and their thinking processes.

First and foremost in both stories was the power of observation. Wilbur Wright spent countless hours watching how birds flew, and then tried to figure out a collection of materials that could mimic them. Within a decade people were building airplanes out of paper and wood, what we would consider mere toys today. But some of those early calculations enabled us to build 747s and SR-71s that fly fast, are made of very advanced materials, and are anything but toys, to be sure.

Second was understanding your materials. The Wright flyer worked because it was extremely light and flexible. The Brooklyn Bridge worked because it was heavier than previous bridges, heavy enough to withstand and distribute its loads properly. The bridge is still in great shape more than 100 years later. We tear down lesser structures after a decade.

Washington Roebling spent his days watching his bridge being built from a nearby house. He was severely injured by a bad case of the bends before anyone knew what that was. Perhaps this could be the first attributed case of remote work, although the distance was covered using a telescope rather than a VPN, Slack, and email. His father was also injured on the job in a ferry accident and died shortly thereafter. All four men got in the middle of things and spent lots of hands-on time to refine their calculations, their drawings, and their builds.

About those calculations: we are talking about basic math, using pencil and paper. We tend to forget how easy it is to revise things now that we have powerful computers that can instantly spot grammatical or coding errors and even suggest changes as we type. Back in those days, it was a lot more work and often required starting from scratch.

The slide rule was about as fancy as things got back then, something that I used when I first began my college education. When I went to grad school in the late 1970s, computers were still the size of rooms. Look at the evolution of IBM, from making those roomfuls of computers to changing the desktop world with its PC business, which was eventually sold to a Chinese company. Now IBM is a software and services company.

The first airplanes and bridges were built in the era before electricity. If you ever have an opportunity to visit the Detroit area, you should see the actual bicycle shop that the Wrights used to machine their parts. You might not recognize it as a machine shop, because it ran on steam power, with long leather belts that turned the equipment. Now we think nothing of plugging something into the wall, and complain if the cord isn’t long enough. (You might remember my post about the invention of the electric light bulb and other wonders on display at the Henry Ford Museum.)

Engineers are taught how to solve problems. What is interesting about the stories in both books is how the context of the problems is explained in clear language, with gripping narratives about the various lives involved and the decisions made. You are there with the Wrights on a desolate barrier island as they struggle to figure things out, or inside the bridge piers or watching the cables being strung across the river. They are tales that have stood the test of time.

One reason is that both these books (as well as a third one on the building of the Panama Canal) are extraordinarily well researched and well written. I really enjoyed watching this interview McCullough did with Librarian of Congress James Billington about another of his books, the first part of which is devoted to his writing tips.

Avast blog: The latest privacy legal environment is getting interesting

California’s privacy laws have now been in effect for more than two years, and we are beginning to see the consequences. Earlier this month, the California Attorney General’s office released a summary of situations where various businesses were cited, and in some cases fined, for violations. It is an interesting report, notable for both the depth and breadth of its cases.

The CalAG is casting a wide net, and in my blog for Avast I discuss what happened there and how the privacy legal situation is evolving elsewhere. I also offer some words of advice to keep your business from getting caught up in any potential legal action.

Avast blog: The rise of ransomware and what can be done about it

A new report by John Sakellariadis for the Atlantic Council takes a deeper dive into the rise of ransomware over the past decade and is worth reading by managers looking to understand this marketplace. In my latest blog for Avast, I explore the reasons for ransomware’s rise — such as more targeted attacks, inept crypto management, and failed federal policies — as well as the measures necessary to start investing in a more secure future.

Building a better surgical robot

I have learned over the years that doctors who are digital natives, or at least comfortable with the technologies that I use (email and the web), are the doctors I want treating me. In the past, whenever I looked for treatment, I followed a different path to choose my doctor, looking for someone older with loads of experience who had seen plenty of patients.

But older experience isn’t necessarily relevant anymore, and as I age that is also pushing the envelope of what “older” really means. The older docs got their medical education more than 30 years ago, when there were different treatment modalities, different standards of medical care, and computers were the size of rooms. This is why I went with my urologist, someone closer to my kids’ age than mine, when two years ago I had my prostate removed surgically. The operation went well, and was done with the daVinci robotic device made by Intuitive Surgical. My surgeon, Eric Kim, did touch me during the surgery — to open and close me up and to position the robotic arm. The rest of the time he was using the robot.

The robots have some big advantages over manual methods: patients spend less time in the operating room, lose less blood, have much smaller incisions, and have shorter hospital stays. Kim estimates that less than five percent of all prostates are removed by manual methods anymore. He has done more than 100 surgeries using the latest model of the daVinci, one that requires a single incision. Many of his patients who had surgery with this model of the robot — myself included — were able to go home within a few hours.

The robots have another direct benefit: “The doctor has instant feedback from an ultrasound or heart-lung machine without taking their eyes off of the procedure and operating field in progress,” said David Powell, the principal design engineer for Intuitive.

Being interested in technology, I have learned that these robots have evolved from using 3D standard-definition stereo vision to today’s dual-console, multi-window 3D high-definition systems. These units can be found in hundreds of hospitals around the world and are used to perform numerous urology, thoracic, ENT and gynecologic laparoscopic surgical procedures.

The company has worked with Xilinx for two decades, upgrading their Virtex and Spartan FPGA video processing chipsets to make the views seen by their human surgeons more helpful and more precise. Plus, the better video setup means less eye strain for the surgeon, and the ability to train new staff members.

“Xilinx’ embedded processor architecture has led to a major revolution for us in terms of our subsequent platform designs,” said Powell. The current daVinci models employ dozens of these chipsets and benefit from their programmability, as well as a more scalable and distributed architecture. This means that many new capabilities can be introduced with an in-field firmware upgrade, rather than by swapping out major hardware components. All of this results in more uptime and increased robot usage, amortizing the costs over more surgeries. Our hospital has seven of the machines — both older and newer models — that are busy at most times. “It is rare for a robot to sit unused,” said Kim.

It is easy to recognize the newer electronics, because the original daVinci models used a collection of thick custom cables to connect their various components, cables that were failure-prone and required frequent repairs. The current version uses a single fiber optic line to deliver eight channels of full 1080i HD video and is more reliable. Sadly, I wasn’t able to see the machine that did the deed on my prostate, but I’m glad that I got the benefits!

New AMD website: Performance intensive computing

AMD and Supermicro have asked me to help them build this new website that focuses on higher-end computing services. Topics include building computing clusters, taking advantage of GPUs to increase capacity and performance, and highlighting various case studies where this equipment plays a key role, such as with scientific computing and for academic research.

The Goldberg variations: Every Good Spy Deserves Favor

If you didn’t attend the RSA conference this year, you might have missed this presentation about a rather interesting application of codes and ciphers. I call it the Goldberg variations, and those of you that are musically inclined might get the references in my post’s title.

The RSA conference usually has one of these sessions, where the organizers like to highlight a moment in history when people played an important role in cryptography. This year’s moment was a presentation by Merryl Goldberg, who in 1985 was part of a Klezmer band that toured the Soviet Union. (Today she is a music professor at CalArts.) In reality, the band members were spies on a mission to help identify Jews and get them out of the country. Her hosts were fellow musicians who were part of the Phantom Orchestra, so named because it was composed of dissidents who were being watched and harassed by the KGB. At the end of this post, I will link to another presentation that highlighted a code-breaking pioneer of the past.

The band members knew that their bags would be searched, because let’s face it, in 1985 the USSR wasn’t high on the tourist trail and travel blogging hadn’t yet been invented as a profession. Goldberg came up with a very clever coding scheme that used musical notation to document her efforts and smuggle information out of the country to help the dissidents eventually escape. Musical note names span the letters A to G, so Goldberg assigned the remaining letters of the alphabet to notes in the 12-tone chromatic scale using sharps and flats. She added other notations to make the pieces look like real music, enough to pass casual inspection.
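Her exact table wasn’t spelled out in the talk, but a minimal sketch of the idea in Python might look like this (the octave-cycling and the specific accidental assignments are my own assumptions, not her actual scheme):

```python
# Hypothetical sketch of a letter-to-note substitution cipher in the spirit
# of Goldberg's scheme. Letters A-G map to their natural notes; the rest of
# the alphabet cycles through the five sharps, bumping the octave on each
# wrap-around. Her real mapping was almost certainly different.

import string

NATURALS = list("ABCDEFG")
SHARPS = ["A#", "C#", "D#", "F#", "G#"]  # the five chromatic accidentals

def build_table():
    table = {letter: f"{letter}4" for letter in NATURALS}
    rest = [c for c in string.ascii_uppercase if c not in NATURALS]
    for i, letter in enumerate(rest):
        note = SHARPS[i % len(SHARPS)]
        octave = 4 + i // len(SHARPS)
        table[letter] = f"{note}{octave}"
    return table

def encode(message, table):
    # Drop anything that isn't a letter, just as spaces would vanish in a score.
    return [table[c] for c in message.upper() if c in table]

if __name__ == "__main__":
    print(encode("refusenik", build_table()))
    # ['A#6', 'E4', 'F4', 'F#6', 'C#6', 'E4', 'C#5', 'C#4', 'F#4']
```

The point isn’t the particular mapping; it’s that a sequence of notes can carry text while still looking like a plausible, if odd, composition.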

When the band arrived in Moscow, they were detained. Goldberg describes how one customs official went through her music book sheet by sheet. The coded messages were disguised as separate compositions. If you could sight-read music, you would quickly figure out that the notes didn’t make much sense — though if you were a Philip Glass fan, you might think it was just more modern music. Anyway, her steganography worked: the band was able to enter the country, play their music, identify the dissidents, and complete their mission. The trip wasn’t without drama: the Americans were tailed by the KGB and eventually expelled, as documented in this news clipping:

Newspaper clipping: “4 Americans expelled after Soviet meeting”

Music “is a way to find space when you’re dealing with bad guys and bad things” and “gives you that energy inside your brain,” she said at the conference. I find it an interesting story, and I’m glad Goldberg was around to remind us that EGBDF is a useful mnemonic for more than musical notation.

If you enjoyed this little moment of history, you might want to take in a documentary that will air next month on PBS about the life of Elizebeth Friedman. She paved the way for the modern era of codes and ciphers nearly 100 years ago, and I believe a skit featuring actors portraying her and her husband was presented at RSA’s 2005 conference.

When it comes to infosec, don’t be Twitter

Peiter “Mudge” Zatko worked directly for Jack Dorsey during 2021 to improve Twitter’s security posture. Last month he filed a 200+-page complaint documenting many of the company’s internal security bad practices. The document was widely distributed around various DC agencies, including the FTC, the SEC, and the DoJ. A redacted version was also sent to Congress. This smaller document was shared with the Washington Post and made public online. You can watch his interview with CNN’s Donie O’Sullivan here.

Mudge was fired in January 2022 for what Twitter called poor performance and ineffective leadership, which seems suspect given that wording and what I know about his professionalism, knowledge, and sincerity. (Mudge was one of the L0pht hackers who famously testified before Congress in 1998, a story that I covered for IBM’s blog here.) Still, I am not sure why it took many months before he circulated his document and obtained legal protection. Whether you agree with Mudge or not, the complaints make an interesting outline of what not to do if you want to provide the best possible internal security.

Excessive access rights. Mudge said too many of Twitter’s staffers (he claimed about half of total staff) had access to sensitive information. Twitter had more than 40 security incidents in 2020 and 60 in 2021, most of which were related to poor access controls. Its production environment was accessible to everyone and not properly logged. This isn’t anything new: Twitter was hacked by a teen back in 2020 through a simple social engineering trick. Inappropriate access control is a very common problem, and an entire class of privileged access management tools has grown up that can be used to audit permissions.

Poor board communication of security posture. Mudge alleges that his research wasn’t properly communicated to Twitter’s board of directors as early as February 2021, and that he was told to prepare documents for the board’s risk committee that were “false and misleading.” The proportion of incidents attributed to access control failures was also misrepresented to the board.

Poor compliance regimens. Security issues were brought up to compliance officials during Mudge’s tenure, but only after he was fired did he hear from these officials, which is documented in the complaint. Compliance is a two-headed process: you need to first figure out what private data you collect, and then ensure that it is properly protected. It appears Twitter did neither of these things, and the failures persisted over many years, creating what Mudge called “long overdue security and privacy challenges.” Mudge documented that Twitter never was in compliance with a 2011 FTC consent decree to fix its handling of private data, document its internal systems, and develop an appropriate software development plan. He also stated that the progress of all of these elements was misrepresented to both the board and the FTC in subsequent filings.

Sloppy offboarding. Do you terminate access rights when employees leave? Twitter didn’t: a software engineer demonstrated that 18 months after his departure, he was still able to access code repositories on GitHub. His access was removed only after this was made public.

Where are the bots? An entire section of Mudge’s report is devoted to “lying about bots to Elon Musk.” Note the timing of this: Mudge was fired months before Musk began his mission to acquire the company. Mudge claims Twitter provided false and misleading information to Musk, saying they couldn’t accurately report on the number of bots on their platform, and citing various SEC reporting requirements. Mudge’s claim was that if this number were ever measured properly, it would harm Twitter’s image and stock valuation. Twitter executives also received bonuses not for cutting spam and bots but for growing user numbers.

Lack of updated systems. Half of the staff PCs had software and security updates and patching disabled, along with other risky configurations such as disabled endpoint firewalls or enabled remote desktop software. This is also a common problem, and Twitter had no corporate view of the security posture across its computer population. Nor was this situation ever properly communicated to Twitter’s board, which was originally told that 92% of the PCs were secured.

Lack of any insider threat detection or response. Mudge discovered that more than 1,500 daily failed logins were never investigated. Many employees had copies of data from production systems. Both are wide open invitations for hackers to compromise those systems.
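Detection at this basic level doesn’t require exotic tooling. As a rough illustration (the log format below is entirely hypothetical, not anything Twitter used), even a small script that rolls up failed logins per day and flags unusual volume would surface the problem:

```python
# Minimal sketch of failed-login monitoring. Assumes a hypothetical CSV log
# with one "timestamp,user,result" line per authentication attempt.

import csv
from collections import Counter

THRESHOLD = 1000  # daily failed-login count that should trigger a review

def failed_logins_per_day(path):
    counts = Counter()
    with open(path, newline="") as f:
        for timestamp, user, result in csv.reader(f):
            if result == "FAIL":
                counts[timestamp.split("T")[0]] += 1  # keep just the date
    return counts

if __name__ == "__main__":
    for day, count in sorted(failed_logins_per_day("auth_log.csv").items()):
        if count > THRESHOLD:
            print(f"{day}: {count} failed logins -- investigate")
```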

Lack of any backup regimen. No employee PCs were ever backed up. At one point, Twitter had implemented a backup system, but it was never tested and didn’t work properly.

Twitter has claimed it has made security improvements since Mudge left the company. Mudge points to this blog post from September 2020 (during his tenure at Twitter), co-written by CEO Parag Agrawal, as being filled with false claims that his document illuminates at length. Clearly, from what I know, Twitter has a long way to go to improve its security practices. Hopefully, some of the items mentioned above are under control in your own business.

Mudge testifies before the Senate Judiciary Committee today; you can watch the stream.

Watch that API!

In the last couple of weeks I have seen business relationships sour over bad software security. The two examples I want to put forward for discussion are the MailChimp breach that affected Digital Ocean and the Twilio breach that put Signal accounts at risk.

Both breaches had larger consequences. In Digital Ocean’s case, MailChimp’s slow response (it took two days) was one of the reasons for switching mailing list providers. Signal had 1,900 customer accounts that were at risk, and it is still using Twilio. Twilio’s breach response has also been criticized in this blog post, and the breach has spilled over elsewhere: Cloudflare announced that 76 of its employees had experienced a similar attack in the same time frame but didn’t fall for it.

What is happening here is a warning sign for every business. This isn’t just a software supply chain issue but a more subtle situation about how you use someone else’s software tools in your daily operations. And if basic services such as mailing lists and phone number verification are at risk, what about the more complex pieces of your software stack?

Here are a few tips. If you use Signal, go to Signal Settings > Account > Registration Lock on your phone and make sure it is enabled. This will prevent these kinds of compromises in the future. Also update your phone to the latest Signal version. Take a moment to explore your other third-party software providers and ensure that your APIs have been set up with the most secure authentication options possible. This includes cloud storage containers: the latest cloud-native security report from Sysdig found that 73% of cloud accounts contained exposed Amazon S3 buckets with no authentication whatsoever.
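On the S3 side, one quick check is to audit your own buckets for a missing public access block, which is one of the misconfigurations behind those exposed-bucket statistics. A minimal sketch using boto3 might look like this (it assumes AWS credentials are already configured; the bucket names come from whatever is in your account):

```python
# Flag S3 buckets that have no public access block, or only a partial one.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(config.values()):
            print(f"{name}: public access block only partially enabled: {config}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured at all")
        else:
            raise
```

It won’t catch every exposure path (bucket policies and object ACLs still need a look), but it is a five-minute start.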

Avast blog: A tale of two breaches: Comparing Twilio and Slack’s responses

We recently learned about major security breaches at two tech companies, Twilio and Slack. The manner in which these two organizations responded is instructive, and since both of them published statements explaining what happened, it’s interesting to observe the differences in their communication, along with some lessons learned for your own breach planning and response. You can read more about the situations in my latest blog post for Avast.