Book review: Long Island Compromise

I am of two minds about this novel, which chronicles a fictional Jewish family on the north shore of Long Island and how they devolve after the father is kidnapped for a week. The three children are tracked as they grow up into dysfunctional adults with addiction problems, marital problems, and various other issues in trying to cope with their father’s ordeal. The Long Island Compromise of the title is really a devil’s bargain: living in one of the wealthiest suburbs in America after escaping the Holocaust and after dealing with numerous antisemitic people, places, and circumstances. Having grown up on Long Island’s south shore and raised my daughter on the north shore in a community that mirrors the one described in fictional terms in the novel, I found the story resonated with me. The excesses that come with the family’s wealth, and with trying to out-Jew their neighbors, are all too real.

So is their reaction to the father’s kidnapping, which manifests itself in different ways in each family member. Some choose avoidance: steering clear of “any reference to a thing that could later be a trigger to discuss The Thing” — the kidnapping — which is a very apt way to describe grief and the fragility of those who are grieving.

So what is there not to like about this book? It isn’t that it cuts too close to home. It isn’t that its scenes of BDSM, drug abuse, or numerous hooker and mystic encounters are (as I imagine) too realistic. The descriptions are sometimes so filled with irony and accuracy that I would often pause while reading to let them sink in, though they could be hard to take for some readers. For those of you who grew up in suburbia, or who are Jewish, this could be entertaining, poignant, or both. Certainly, the way families confront their destinies and their future potential is laid bare here in a way that I haven’t seen very often, and it feels quite genuine.

The novel is based on an actual kidnapping that happened in the 1970s. You can read about it here.

The end of the floppy disk era

An article in this week’s New York Times decries the end of the floppy disk: its use as a medium for submitting Japanese government reports has finally been replaced with online transfers. I read the piece with a mixture of sadness and amusement. The floppy was a big deal — it originated on IBM’s big iron and became the basic fuel of the PC revolution.

Before we had PCs, in the late 1970s, we had the first dedicated word processor machines coming into offices. I came of professional age when these huge beasts, often built into office furniture, arrived on the scene. They were the domain of the typing pool of secretaries who would transform hand-written drafts into typed documents. These word processors had printers and ran off 8″ floppies that held mere kilobytes of text files. Those larger disks remained a part of America’s nuclear control bunkers up until 2019 or so.

But back to the 1980s. Then IBM (and to some extent Apple) changed all that with the introduction of 5″ versions that were attached to their PCs. Actually, they measured five and a quarter inches. Within a few years, they became “double-sided” disks, holding a huge 360 KB of files. To give you an idea of this vast quantity of storage, you could save dozens of files on a single disk. But things were moving fast in those early days of the PC — soon we had hard-shell 3.5 inch floppies (the “floppy” label remained, even though the construction changed) that could hold more than a megabyte of data. Just imagine: today’s smart watches, let alone just about any other smart home device, can hold gigabytes of data.

You would be hard-pressed to find a computing device that has less capacity these days. And that is a good thing, because today’s files — especially video and audio — occupy those gigabytes. But I just checked: a 5,000-word MS Word file — just text — is only 35 KB, so things haven’t changed all that much in the text department.

The double-sided label sticks in my mind because of this anecdote. The scene was a downtown office in LA, where I worked for the IT department of a large insurance company in the mid-1980s. We occupied three office towers that spanned several blocks, and part of the challenge of being in IT was that you spent a lot of time — a lot for that era, anyway — going around the complex debugging users’ problems. If the problem wasn’t urgent, we would often tell users to send us a copy of their disk via interoffice mail and we would take a look at it. Soon after one such call, I got the envelope. Inside were two sheets of paper: the user had placed his floppy disk on the glass bed of his Xerox copier and sent me the printouts. But this was a user who was paying attention: he noticed the “double-sided” designation on the disk, so he flipped it over and made a copy of the back of the disk too.

The dual-floppy drive PC was a staple for many years: one drive was used to run your software, the other to store your data. The software disks were also copy-protected, which made them hard for IT folks to back up. I remember going over to our head of IT’s home one weekend to try to fix a problem he had with the copy-protected version of Lotus 1-2-3, the defining spreadsheet of the day.

Those were fun times to be in the world of PCs. The scene shifts to downtown Boston, at the offices of PC Week, back in early 1987. I had left the insurance company and taken a job with the publication. A few months into the job, I had gotten a question from a colleague who was having trouble with his PC, the original dual floppy-drive IBM model. I went over to his desk and tried to access his files, only to hear the disk drive grind away — not a sound that you want to hear. I flipped open the drive door and removed the offending disk. My colleague looked on with curiosity. “Those come out?” he exclaimed. No one at the publication had bothered to tell him that was the case, and he had been using the same physical disk for months, erasing and creating files until the plastic was so worn out that you could almost see through it. I showed him our supply cabinet where he could stock up on spare floppies.

Apple was the first company to sell computers sans floppies in 1998, and other PC makers soon eliminated them. Storage on USB drives and networks made them obsolete. Sony stopped selling the blank disks in 2011, but they lived on in Japan until now.

Floppies were trouble, to be sure. But they were secure: we didn’t have to worry about our data being transmitted across the world for everyone to see. And while their storage capacity was minuscule — especially by today’s standards —  it was sufficient to launch a thousand different companies.

Self-promotions dep’t

Speaking of other things that have lived on in Japan, I recently wrote about the Interop show network and its storied history. I interviewed many of the folks who created and maintained these networks over the years and explain why Interop was an innovative show, both then and now.

CSOonline: CISOs must move quickly to resolve Kaspersky software ban

In June, the US government enacted new restrictions on Kaspersky’s customers, sanctioning 12 of its executives and prohibiting further sales of its software and services. The regulations augment existing bans on the use of its software by US federal agencies, which began several years ago and have spread to similar bans by agencies in places such as Lithuania and the Netherlands.

The action was a coordinated effort by both the Commerce and Treasury departments, based on national security risks stemming from any potential cooperation with Russian intelligence agents.

You can read my analysis for CSO here, including what IT managers need to do if they are still using Kaspersky’s software tools.

CSOonline: What prevents SMBs from adopting SSO

A new report by the Cybersecurity and Infrastructure Security Agency (CISA) is the latest research to point out the “Barriers to Single Sign-On (SSO) Adoption for Small and Medium-Sized Businesses” – which is the report’s title. While the listed reasons aren’t new or even unexpected, it is a good summary of the steep climb that many SMBs have in implementing SSO. CISA convened a series of focus groups of various stakeholders, including the SSO vendors and their SMB customers and channel providers, along with network auditors.

CISA’s report cites several reasons why SSO hasn’t been deployed by smaller organizations, including greater administrative implementation burdens, lack of technical know-how within SMB IT departments, and incomplete support documentation. You can read my analysis about the report in CSOonline here.

The storied history of the Interop Shownet

This is the story of how a group of very dedicated people came together at the dawn of the internet era to build something special, something unique and memorable. It was eventually known as the Interop Shownet, and it was created in September 1988 at the third Interoperability conference, held in Santa Clara, California. What follows is the story of its evolution, and specifically how the show network became a powerful tool that moved the internet from a mostly government-sponsored research project to a network that would support commercial businesses and be used by millions of ordinary people in their daily lives.

But before we dive into what happened then, we must turn back the clock a couple of years.

In August 1986, a few very motivated people got the bright idea to teach others how to implement the early internet protocols. This first conference, called the TCP/IP Vendors Workshop, was held in Monterey, California and was by invitation only. (Agenda photocopy) Speakers included Vint Cerf (who was at MCI), Jon Postel (the RFC Editor, who worked on the ARPANET before playing a key role in internet administration), and Paul Mockapetris and Bob Braden (both at USC-ISI).

Two subsequent conferences were held the following March in Monterey and then that December in Crystal City, Virginia; both were called the TCP/IP Interoperability Conference. All three were unusual events for several reasons: first, the presenters and instructors were the actual engineers who developed the earliest internet protocols. They were also there to impart knowledge rather than sell products – mainly because few commercial products had yet been invented.

By September of 1988, the format of the conference had changed and expanded beyond lectures to a more practical proving ground. The event was renamed once again, and so Interop – and its show network — was born. The mission was still to teach internet technologies and protocols, but for the first time the event was used to test and demonstrate various internet communications devices on an active computer network. That show used a variety of Ethernet cables to connect 51 exhibitors together, with T1 links to NASA Ames Research Center and NSFNet in Ann Arbor, Michigan.

The network diagram of the 2024 Tokyo Shownet in all of its complexity

The Interop conference quickly grew into a worldwide series of events (1), with multiple shows held in different cities, attended by tens of thousands of visitors and featuring more than a thousand connected booths. In those early years, the largest shows were held in Tokyo, which began in 1994 and continued annually (with a pause because of the pandemic), with the latest show held this past summer. This year’s show spanned over 200 vendors’ booths and drew about 40,000 visitors on each of its three days. The Tokyo Interop is also where the Shownet has not only survived but thrived, and it continues to innovate and demonstrate internet interoperability to this very day. Many products had their world or Japanese debuts at various Tokyo Interops, including Cisco’s XDR and 8608 Router and NTT’s Open APN.

The name of Interop’s network also evolved over time. It was called variously the Show and Telnet, the Interop Net and eventually the Shownet, which is how we’ll refer to it in this article.

My own Interop journey with Network Computing magazine

Before I get into the evolution of Interop and the role and history of its Shownet, I should first mention my own personal journey with Interop. If we jump back to 1990, I was in the process of creating the first issue of Network Computing magazine for CMP Media. Our first issue was going to debut that fall at the Interop show, the second one held at the San Jose Convention Center.

The publisher and I both thought this was the best place for our debut for several reasons. First, our magazine was designed for similar motivations — to demonstrate what worked in the new field of computer networking. We had designed our publication around a series of laboratories that had the same equipment found in a typical corporate office, including wide-area links, and a mixture of PC DOS, Macintoshes, Unix and even a DEC minicomputer connected together. Second, we wanted to make a big splash, and our salespeople were already showing prototype issues ahead of the show to entice advertisers to sign up. Finally, Network Computing’s booth would be connected to the Shownet and the greater internet, just like many of the exhibitors who were trying out some product for the very first time.

One other feature about Network Computing set us apart from other business trade magazines at the time: each bylined article would contain the email address of the author, so that readers could contact them with questions and comments. I wanted to use the domain cmp.com and set up an actual internet presence, but alas I was overruled by management, so we ended up using a gateway maintained by one of the departments at UCLA, where a couple of our editors were housed. While authors’ email contacts are now common, it was a radical notion at the time.

The earliest days of Interop

The Interop show in the late 1980s was markedly different from other trade shows of its era. At the time, trade shows with networked booths were non-existent. By way of perspective, up until that point there were two kinds of conferences. One was focused on the trade show, with high-priced show floors and fancy exhibits; there, exhibitors were forced to “pay to play,” meaning that if they bought booth space, they could secure a speaking slot at the associated conference. The other was a more staid affair, a gathering of the engineers and actual implementers. Interop was a notable early example of bridging the two: it looked like a trade show but was more of a conference, all in the guise of getting better commercial products out into the marketplace. It helped that it had its roots in those early TCP/IP conferences.

“You could see the internet in a room thanks to the Shownet, with hundreds of nodes talking to each other. That was unique for its time,” said Carl Malamud. “The Shownet was the most complex internet installation you could do at any moment of time.” Malamud would play several key roles in the development of various early internet-based projects, including running the first internet-based radio station, and was a Shownet volunteer in 1991.

That complexity has been true from the moment the Shownet was first conceived to the present day. Many of the internet’s protocols — both in its earliest years and up to the present era — were debugged over the Shownet: volunteers recall testing NetBIOS over TCP/IP, 10BaseT Ethernet, SNA over TCP/IP, FDDI, SNMP, IPv6, various versions of segment routing, and numerous others. That extensive protocol catalog is a testament to the influence and effectiveness of the Shownet, and to how enduring a concept it has been over the course of internet history. Steve Hultquist, who was part of the early NOC teams, remembers that the very first version of 3Com’s 100BaseT switch — with “serial number 1” — was installed on the Shownet.

The organization behind Interop, Advanced Computing Environments, and their booth at the 1989 show

The force behind Interop was Dan Lynch, who passed away earlier this year. Lynch foresaw the commercial internet and designed Interop to hasten its adoption. He based Interop on a series of earlier efforts to bring TCP/IP vendors together: the proto-Interop shows of the mid-1980s, which were more “plugfests” or “Connectathons” where vendors would try out their products. The main difference was that those efforts mostly involved proprietary protocols, whereas Interop ran on open standards.

He told Sharon Fisher in November 1987: “There are millions of PCs out there and they’re starting to get networked in meaningful ways, not just in little printer-sharing networks.” Part of his vision — and that of the people he recruited — was the notion of interoperability, which could be used as a selling point and as an alternative to the single-vendor proprietary networks from IBM, Digital Equipment Corporation and others that were common in that era. Larry Lang, who worked for many years at Cisco, said, “The reassurance that it was okay to give up having ‘one throat to choke’ came from confidence that the equipment was interoperable. It is hard to remember a time when that was a worry, but it sure was.”

Part of Lynch’s vision was ensuring that proving interoperability came down to a very simple litmus test: did the product being exhibited work as advertised in a real-world situation? That seems like common sense, but doing so in a trade show context was a relatively rare idea. And while it was a simple question, the answer was usually anything but simple, and sometimes the reasons why a product didn’t work — or didn’t work all the time — were what made the Shownet a powerful product improvement tool. That is just as true today as it was back then. The more realistic the Shownet, the more often it would expose the special circumstances that would bring out the bugs and other implementation problems.

It would prove to be a potent and enduring vision.

What does interoperability mean?

The notion of interoperability seems like common sense now — and indeed it is the default position for most of the current networking world. However, in the early days of the internet it was fraught with problems, both in terms of larger-scale implementations and smaller issues that would prevent products from working reliably. One of Sharon Fisher’s articles in Computerworld in 1991 (2) speaks about TCP/IP this way: “The astounding thing is not how gracefully it performs but that it performs at all. TCP/IP is not for everyone.” Times certainly have changed in the 33 years since that was written. Today internet connectivity, using the TCP/IP protocols, is a given in any computing product, from smart watches to the largest mainframe computers.

Those early implementation differences plagued large and small vendors alike, and required a meeting of the minds where the protocol specifications weren’t exacting enough to ensure success, or where bugs took time to resolve. Enter the Interop conference. As a reminder, in those early days the popular IP applications were based on FTP, SMTP, and SNMP. The web was still being invented and far away from being the de facto smash hit that it is today. Video conferencing and streaming didn’t exist. Telephones still ran on non-IP networks.

One of the early casualties was the Open Systems Interconnection series of protocols. It was precisely the interoperability among TCP/IP products and “the failure of OSI to effectively demonstrate interoperability in the early 1990s that was the final nail in its coffin,” said Brian Lloyd, who worked at Telebit at the time. There are other stories of the defeat of OSI, such as this one in the IEEE Spectrum. (3)

The relationship among the Shownet, the conference tutorials, and the NOC

To accomplish Lynch’s vision, Interop wasn’t just the Shownet but the interaction of the Shownet with two other elements that became force multipliers in the quest for interoperability: the tutorials that were given preceding the opening of the show floor, and the Network Operations Center (NOC) team that ran the network itself. All three had an important synergy in promoting the actual practice of interoperability among different vendors’ products: not just demonstrating what worked with what, but proving out protocol mismatches or programming errors so that new equipment could be made to interoperate.

Dave Crocker, who authored many RFCs and served on the Interop program committee that selected speakers in the 1990s, called out this tripartite structure of Shownet, NOC and conference as a major strength of Interop. “Interop was able to contrast the technologies of the internet with the interoperability of non-internet technologies, such as IBM’s Systems Network Architecture. It had very pragmatic implications and wasn’t just promoting marketing speak.”

The tutorials (and by way of disclosure, I taught a few of them during those early years, as well as serving with Crocker on the program committee) were given by many of the engineers who developed those early protocols, techniques, and other pieces of internet technology, so that others could learn how best to implement them. Here is where Interop contained its secret sauce: the tutorials were given by the people who contributed to the underlying protocols and code, in some cases code so new it was changing over the duration of the show itself. “It was only after getting to Interop that we found out how few options were actually used by most implementations, and only then did we have access to the larger internet and various versions of Unix computers,” said Lloyd. “It was real bleeding edge stuff back then and the place to go for product testing and to see how marketers and engineers would work together.”

And the NOC was a real one, like what could be found at a large corporation, monitoring the network for anomalies and used to debug various implementations leading up to the show’s opening moments. “It was unusual for its time,” said Fisher in another article, in Infoworld. (4) “The NOC team was infamous in the trade press for its tours and the time members took to explain things to us,” she said. Malamud recalls that the NOC had a strict “no suits” policy, meaning that its denizens were engineers who rolled up their sleeves and got stuff done.

All of this happened with very few paid staff: most of the people behind the Shownet and NOC were volunteers who came back, show after show, to work on setting things up and then taking them down after the show ended. That was, and to some extent still is, a very high-pressure environment: imagine wiring up a large convention center and connecting all of its conference rooms with a variety of network cabling.

One of the more infamous moments of Interop was the Internet Toaster, created by John Romkey (5) originally for the 1990 show. “I wanted to get people thinking of SNMP not just as getting variables, but for control applications, a wider vision. So we had an SNMP controlled toaster. If you put bread in the toaster, and set a variable in SNMP, the toaster would start toasting. There was a whole MIB written up for it, including how you wanted the toast, and whether it was a bagel or Wonder Bread. I ended up with lots and lots of bread in my garage. It got a lot of attention, but I don’t think that managing your kitchen through SNMP is very practical today.“
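To make the “control application” idea concrete, here is a minimal sketch of what setting those SNMP variables from a management station might have looked like, written with the Python pysnmp library’s hlapi interface. The OIDs, hostname, and community string are invented for illustration; Romkey’s actual toaster MIB isn’t reproduced here.

```python
# Hypothetical sketch only: Romkey's actual toaster MIB isn't reproduced in
# this article, so the OIDs, hostname, and community string here are made up.
from pysnmp.hlapi import (
    setCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, Integer32,
)

TOASTER_DONENESS_OID = "1.3.6.1.4.1.99999.1.2.0"  # imagined doneness knob (1=light .. 5=dark)
TOASTER_CONTROL_OID = "1.3.6.1.4.1.99999.1.1.0"   # imagined "start toasting" switch

# One SNMP SET request: choose a doneness level, then flip the control variable to 1.
error_indication, error_status, error_index, var_binds = next(
    setCmd(
        SnmpEngine(),
        CommunityData("public"),                           # SNMPv1/v2c community string
        UdpTransportTarget(("toaster.example.com", 161)),  # the toaster's SNMP agent
        ContextData(),
        ObjectType(ObjectIdentity(TOASTER_DONENESS_OID), Integer32(3)),
        ObjectType(ObjectIdentity(TOASTER_CONTROL_OID), Integer32(1)),
    )
)

if error_indication or error_status:
    print("SNMP set failed:", error_indication or error_status.prettyPrint())
else:
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))
```

The larger point Romkey was making still applies: once a device exposes its state as MIB variables, any generic SNMP manager can read and write them, whether the device is a router or a kitchen appliance.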

Dave Buerger, who was an early tech journalist at CMP, remembers Interop as having “a strange sense of awe unfolding for everyone as we glimpsed the possibilities of global connectivity. Exhibits on the show floor were more experiments in connecting their booths to the rest of the world.”

Construction of the earliest Shownets (before 1993)

To say that Lynch was very persuasive is perhaps a big understatement. He convinced people who were quirky, unruly, or difficult to work with to spend lots of time pulling things together. “Dan allowed us to do stuff that the usual convention wouldn’t normally allow, and managed people that weren’t used to being managed,” said Malamud.

Peter de Vries was one of the earliest Shownet builders, back when he was working at Wollongong, one of the early internet vendors. He ended up working for Interop for three years before opening up the West Coast office for FTP Software. He remembers Lynch “dragging people kicking and screaming into using the internet back in the late 1980s and early 1990s.” “But he was a fun guy to work for, and he had an unusual management style where he didn’t issue demands but convinced you to do something through more subtle suggestions, so by the time you did it you were convinced you had the original idea.”

These volunteers would essentially be working year-round, especially once the calendar was filled with multiple shows per year. Back in those early years, the convention centers didn’t care about cabling, and hadn’t yet figured out that a more permanent physical networking plant could be an asset for attracting future meetings. “We were often the first show to hang cables from the ceiling, and it wasn’t easy to do,” said Malamud, who chronicled the 1991 Shownet assembly. (6) The first Interop shows made use of thick Ethernet cables that required a great deal of finesse to work with. de Vries recalls they had to pass wires through expansion joints and other existing holes in the walls and floors, wires that didn’t easily bend around corners. “Each network tap took at least ten minutes of careful drilling to attach to this thick cable.” He has many fond memories of Lynch: “My goal was to try to get everyone to use TCP/IP, but Dan took it to the next step and showed that TCP could be a useful tool, something better than a fax. He was a real visionary.”

Ethernet — in all of its variations over the years, including early implementations of 10BaseT and 100-megabit speeds — wasn’t the only cabling choice for Interop: the show would expand to fiber and Token Ring cabling as part of its mission. Brian Chee was one of those volunteers and remembers having to re-terminate 150 different fiber strands across the high catwalks of the convention spaces. “We even had to terminate the fiber on the roof of the convention center to connect it to the Las Vegas Hilton across the street,” he said.

Getting all that cabling up in the air wasn’t an easy task either. Patrick Mahan worked on the San Francisco show in 1992 and recalls that he and other networking volunteers were paired with the union electrical workers on a series of scissor lifts. “We needed multiple lifts operating in tandem to raise them, and we had just started hanging a 100-foot length when a loud air-horn goes off and each lift immediately starts descending to the floor, because it was time for the mandatory union 15-minute break. It took about three minutes before the cable bundles started breaking apart and crashing to the cement floor. You could hear the glass in the fiber cables breaking!”

The Vegas climate made installations difficult, especially when outside temperatures reached 110 degrees F and the convention halls were not yet air-conditioned. “Many convention centers don’t turn on their air conditioning until the night before the show begins, so it was a particularly harsh environment,” said Glenn Evans, who worked as both a volunteer and an employee of Interop during the late 1990s and early 2000s. “Vegas in May is very dry, and static electricity is a big issue. We fried several switch ports inadvertently and spent long nights adding static filters to avoid this because some shows had more than 1,200 connections across their networks.” Evans emphasized that the install teams relied on “redneck engineering to come up with creative solutions, and it didn’t have to be perfect, just had to work for five days.”

The cabling had to be laid out three times for each Shownet. The volunteers would have access to the convention center for a day months before any actual show. They would lay out the first cable segments and add connectors, then roll them up and store them in a warehouse. Then before the show there would be a “hot staging” event where the cables were connected to their equipment racks and tested. Finally, several nights before the show began was the real deployment at the convention center, which would span several 24-hour days before the actual opening. Those long nights were epic: de Vries recalls falling asleep in the middle of one night at the top of a 15-foot ladder, only to be gently awakened by the only other person in the convention center at the time. “Those installations nearly killed me!”

Many of the participants during those early years were motivated by a sense of common purpose, that their efforts were directly contributing to the internet and its usefulness. “I loved that we could help the overall industry get stuff right,” said Hultquist. “They were some of the smartest people that I have ever worked with and were constantly pushing the envelope to try to deploy all sorts of emerging technologies.”

But the physical plant was just one issue: once the cabling was in place, the real work of getting equipment up and running across these networks was challenging. In those early Interops, equipment was often at the bleeding edge, and engineers would make daily or even minute-by-minute changes to their protocol stacks and application code.

James van Bokkelen was the president of FTP Software and recalls seeing the Shownet in 1988 crash while running BSD v4.3, thanks to a buggy version of one TCP/IP command. Turns out the bug was present in Cisco’s routers that were used on the Shownet. “It took a few minutes of scampering before everything was in place and we got Shownet back online,” he said. Scampering indeed: the volunteers had to compare notes, debug their code, and reboot equipment often located at different ends of the convention floor.

“We were getting alpha software releases during the show. This network created an environment where people had to fix things in real time in real production environments,” said Hultquist. “Wellfleet, 3Com and Cisco were all sending us router firmware updates so their gear could interoperate with each other. I loved that we could help the overall industry get stuff right.”

At one of the 1991 Interops, “FDDI completely melted down,” said Merike Kaeo, who at the time was working for Cisco in charge of their booth and also volunteering in the NOC. “There was some obscure bug where a router reboot wasn’t enough, you had to reset the FDDI interface adapter separately. It didn’t take all that long to get things running, thankfully.”

Some of the problems were far more mundane, such as using equipment with NiCad batteries that had a very short shelf life. Chee recalls one Fluke engineering director who got tired of trying to get these replaced with Lithium-ion batteries: “He would send his team up to the rafters with network test equipment that had very short battery life; they were quickly replaced in their newer products.”

As more Shownets were brought up over the years, they had built-in redundant — and segregated — links. “We all played a part to make sure that after 1991, we would have a stable portion that would run reliably and put any untested equipment on another network that wouldn’t bring the working network down once the show started,” said Bill Kelly, who worked for Cisco in its early days.

de Vries said the first couple of Interop Shownets had less than 10 miles of cabling; by 1991, according to Malamud, that had grown to more than 35 miles, connecting a series of ribs, each one running down an aisle of the convention floor or some other well-defined geographic area. “Each rib had both Ethernet and Token Ring, connected to an equipment rack with various routers,” said Malamud. “There were two backbones that connected 50 different subnets, one based on FDDI and the other on Ethernet, which in turn were connected via T-1 lines to NASA Ames Research Center and Bay Area internet points.”

Kelly, who worked on the Shownet NOC while he was at Cisco, developed a three-stage model that covers a product’s lifecycle. “The first stage is using the IETF RFCs to try to make something work. Then the second stage is when a vendor is late to market and must figure out how to play nice with the incumbents and the standards. The third stage is mostly commodity products, and everything works as advertised.”

The middle years (1994-1999)

The internet – and Interop – were both growing quickly during this period. New internet protocols and RFCs were being created frequently, and applications – and dot-com businesses – sprang up without any business plans, let alone initial paying customers. There were new venues each year in Europe, plus shows in Sydney, Singapore, and Sao Paulo. Some years had as many as seven or eight different shows, each with its own Shownet that needed to be customized for the particular exhibit halls in these cities.

Let’s return to 1986 for a moment. This is when Novell began its own series of trade shows, called NetWorld, to serve its growing NetWare community. By 1994, these had grown, and that is when Novell and Interop merged their shows, calling them NetWorld+Interop. This moniker held until 2004, when Interop was purchased by a series of technical media companies. “Shownet didn’t change much after the Novell merger,” said Hultquist. “We could accommodate their stuff at the edge, but it didn’t impact the core network.” For the Shownet team, NetWare was just another protocol to interoperate across. Despite Novell’s influence, during these years TCP/IP became a networking standard. So did the cabling that made up the Shownet: “In the mid-1990s, a lot of the cable plant could be reused from show to show, with a standard set of 29-strand multimode fiber with quick connectors and 48 strands of copper Cat 5 for the ribs,” said Evans.

TCP/IP evolved too: by the end of the 1990s, the protocols and Ethernet hardware became a commodity and were both factory-installed in millions of endpoint devices. “The internet was becoming more standardized, and Interop became less of an experiment and more of a technology demonstration,” said Evans.

Nevertheless, vendors tried to differentiate themselves with quirky exhibits, pushing the envelope of connectivity. One stunt happened during the 1995 Interop at the Broadcom booth, which demonstrated Ethernet signals over barbed wire. “The wires were ugly and rusty and had nasty little barbs all over them,” according to one description written years later. (7)

By 1999, the Shownet had split into two separate parts: the live production network connecting the exhibit booths, called the InteropNet, and the InteropNet Labs, used for connecting new products. Back then, this included VoIP, VPNs and other “hot technologies,” according to a post by Tim Greene in CNN. (7) This was because of several market forces. First, more and more conferences began promoting the idea of internet connectivity for both attendees and vendor participants. “As that reality dawned on people, the Interop Shownet became an increasingly useless anachronism,” said Lang, who was part of the team building Cisco’s support for FDDI at that time. “As our competition became Wellfleet rather than IBM, why would we want to participate in an expensive and time-consuming display that suggested complete equivalence among all the products?”

Hultquist was quoted in that CNN piece saying that attendees “won’t know whether a piece of equipment really worked because of the demands placed on them by more experimental or untested products.”

A second issue had to do with striking a balance between established vendors and newcomers. Kelly remembers the relationship between Cisco and Interop as being “complicated because we were the market leader and if we just donated equipment without any technical support we ran the risk of outsiders misconfiguring the devices. Interop was also used to dealing with small engineering groups and not pesky marketing types that wanted to know the value of participating in the show.” Plus, long-running contributors to the original Shownets often got a jump on developing new gear and interacting with products that weren’t yet on the market.

The new millennium of Interop

Interop continued to grow in the new millennium, and there were two notable events affecting the Shownet. The day the towers fell in New York was also the start of the fall Interop Atlanta show back in 2001. Many of the Shownet volunteers recall how quickly their network became the main delivery vehicle for news and video feeds to those attendees who were stuck in Atlanta, since all flights were grounded for the next several days. Brian Chee remembers that “within minutes of the disaster we maxed out the twin OC-12 WAN connections into the Shownet. We brought up streaming video of CNN Headline News over IP multicast that cut our wide-area traffic substantially, at the same time being an impressive demonstration of that technology.”

But then a few years later another event happened. “The day the Slammer virus hit, back in 2003, we had just gone into production across the Shownet. That virus hurt our network throughput just enough that all our monitoring devices were useless,” said Chee. “But the NOC team was able to characterize the problem within a few minutes, and we were able to use air-gapped consoles to reset routers and filter out the virus-infected packets.” That is as real-world as it gets, and it is an example of how the Shownet proved its worth, time and time again.

“The coolest thing we got out of working at Interop is that technology doesn’t happen without the people, and the people involved were some of the hardest working and smartest people that you’ll ever meet. They checked their egos at the door, and solved problems jointly,” said Evans. “It was run like a democratic dictatorship, where everyone had a say.”

I didn’t attend this summer’s Tokyo Interop, but have asked for reports that will be added when this article runs later this year.

Acknowledgements. This is a draft of an upcoming article in the Internet Protocol Journal. This article wouldn’t be possible without the help of numerous volunteers and staff at Interops of the past, including:

Bill Alderson, Karl Auerbach, Dave Buerger, Brian Chee, Dave Crocker, Peter J.L. de Vries, Glenn Evans, Sharon Fisher, Connie Fleenor, Steve Hultquist, Merike Kaeo, Bill Kelly, Larry Lang, Brian Lloyd, Carl Malamud, Jim Martin, Naoki Matsuhira, Ryota Roy Motobayashi, and James van Bokkelen.

We are posting it here for comments, and would urge those of you who have been inadvertently left out of the narrative to either post your comments here or send them to me privately. Thanks for your help!

Footnotes:

  1. http://motobayashi.net/interop/, Ryota Motobayashi has kept track of each show on his website, which also includes network diagrams and dates and places.
  2. https://books.google.gr/books?id=cJdcbp0zqnkC&lpg=PA97&dq=%22sharon%20fisher%22%20%22dan%20lynch%22&pg=PA97#v=onepage&q=%22sharon%20fisher%22%20%22dan%20lynch%22&f=false, Sharon Fisher, Computerworld, October 7, 1991.
  3. https://spectrum.ieee.org/osi-the-internet-that-wasnt
  4. https://books.google.com/books?id=szsEAAAAMBAJ&pg=PT17&lpg=PT17&dq=%22sharon+fisher%22+%22interoperability+conference%22&source=bl&ots=ATqq8ZwL6P&sig=ACfU3U14wJEWFjEMHzw0dpz8e6isz5HBow&hl=en&sa=X&ved=2ahUKEwirwt2nkoqEAxUejYkEHe0RDkEQ6AF6BAgMEAM#v=onepage&q=%22sharon%20fisher%22%20%22interoperability%20conference%22&f=false, Sharon Fisher, Infoworld, October 10, 1988.
  5. https://web.archive.org/web/20110807173518/http://aboba.drizzlehosting.com/internaut/pc-ip.html
  6. https://museum.media.org/eti/RoundOne01.html
  7. http://www.cnn.com/TECH/computing/9905/11/nplusi1.idg/index.html, Tim Greene, CNN, May 11, 1999.

Big if true: creating bespoke online realities is dangerous

Jack Posobiec, Mike Benz, Justine Sacco, Samara Duplessis. If you have never heard of any of these people, this post might be illuminating about how online conspiracies are created and thrive. It is based on a new book, “Invisible Rulers: The People Who Turn Lies into Reality,” by Renée DiResta, a computer science researcher whom I have followed over many years. DiResta has been involved in debunking various memes, such as Pizzagate, “stolen” elections, anti-vaxxers, Wayfair selling kids inside their filing cabinets, and numerous other cabals. It is now quite possible to mass-produce unreality.

Her book describes the toxic mixture of influencers, algorithms, and crowd responses used to construct various intricate and believable online conspiracies. She calls this unholy trinity a bespoke reality, a self-reinforcing mechanism that has been constructed over the years and has caused a lot of pain and suffering for unsuspecting people. “Platforms have imbued crowds with new qualities. They are no longer fleeting and local but persistent and global,” she writes. She herself has been the target of a few internet mobs, getting sued, doxxed, misquoted and more. Earlier this summer, she lost her job at the Stanford Internet Observatory, a research outfit she ran with Alex Stamos, who left last year. That link describes what SIO will become without their leadership, and it is debatable if the operation still really exists.

Clearly, “it is not a good time to be in the content moderation industry,” said 404 Media’s Jason Koebler. Trust and safety moderation teams are all but disbanded, and big consulting contracts to comb through the millions of toxic posts on various social networks aren’t being renewed. Facebook announced earlier this year that it was shutting down CrowdTangle, its major research tool, to be replaced by something that may or may not actually be useful. We all know what happened over at Twitter when it was bought by a billionaire man-boy, such as the repricing of access to the Twitter APIs: what used to be free back in the Before Times now costs $42,000 a month. And new research from CheckMyAds indicates that advertisers there are returning, only this time being shoehorned into the replies below posts, including posts that violate the platform’s own content rules about hate speech.


It seems all social media have adopted a model of toxic influencer-as-a-service. “What matters is keeping fans engaged, aggrieved and subscribed,” says DiResta. She talks about how the influencer is not just telling the story but becomes part of the story itself. Influencers can adopt one of several roles or personas: the Entertainer, the Explainer, the Bestie, Idols, and Gurus. There are also Generals, who keep the mob in a lather; Reflexive Contrarians, a particular type of explainer who tells you why everything you know is wrong; Propagandists; and the Perpetually Aggrieved. This last type has a solid understanding of how platform algorithms amplify their content, yet can also evade moderation efforts, crying “censorship” whenever they run afoul of them.

No matter what type of influencer one is, the real measure of success is amassing a large enough audience to become, like Enron, “too big to cancel.” At that point, truth and interest become relative, and almost irrelevant, in what she calls the Fantasy Industrial Complex, a cinematic universe that is no different from the comics.

But the cinematic universe has to have its villains to succeed. If you create an online service that focuses on a particular self-selected audience (say Parler as an example), you lose the ability to fight the others, and your perpetual complaints don’t land. “There is no opportunity to spin up an aggrievement fest over being wrongfully moderated,” she writes. By design, you can’t own your enemies. So sad.

The title of this post — “big if true” — refers to what influencers say in their rush to publish some content. “Experts may wait to be sure of something,” says DiResta. “But not influencers. And if this turns out to be false? Oh, well, they were just sharing their opinion and just asking questions.” Trolling is fun, and quite profitable, it turns out. And it almost doesn’t matter if the statements actually advance a cause or prove anything. “The point is the fight. Winning, in fact, negatively impacts the influencer because resolution would reduce the potential for future monetizable content,” she writes.

This has several implications. We are no longer in the arena of freedom of speech: instead, we debate the freedom of reach. It isn’t about hosting content on a particular platform, but how it is promoted and packaged. We aren’t talking about the marketplace of ideas, but the way those ideas are manipulated.

DiResta’s book should be required reading for all PR and marketing people. The last portion of her book has some very concrete suggestions on how to turn down the toxicity and try to return to a bespoke world that actually has some basis in truth. If you don’t want to read it, I suggest watching the middle third or so of her interview with Quentin Hardy. And maybe re-evaluate your social media presence. “If we want virtual town squares” in our online world, she says, “we have to act like the people on them are our actual neighbors.”

CSOonline: Pegasus can target government and military officials

The controversial spyware Pegasus and its operator, the Israeli NSO Group, are once again in the news. Last week, in documents filed in a lawsuit between NSO and WhatsApp, NSO admitted that any of its clients can target anyone with its spyware, including government or military officials, because their jobs make them inherently legitimate intelligence targets. The lawsuit began in October 2019.

NSO has in the past been very circumspect about who is infected with their spyware, which uses so-called “zero-click” methods meaning that a potential target doesn’t have to click on anything to activate the software. It can access call and message logs, remotely enable the camera and microphone and track the phone’s location, all without any notification to the phone’s owner.

I put the suit in the context of NSO’s and Pegasus’s checkered past in my latest piece for CSOonline.

How Russia is exploiting Telegram for war funding and news coverage

While lots of focus is on TikTok, I would argue that many of us are missing the influence and role played by the messaging network Telegram. In this post, I explain why it could be a bigger threat to the online world.

Last fall, I wrote a post for SiliconAngle about how social media accounts are being used by pro-Russia misinformation groups. This was based on a report by Reset sponsored by the EU. One of the findings from that report is that Telegram is very permissive in allowing hateful content and propaganda. A new report last week from the Atlantic Council’s Digital Forensic Research Lab takes a deeper dive into how Telegram has been a communications kingpin for Russia’s war, and how effective and pervasive it is. The social network is not only being used for misinformation purposes, but also to recruit mercenaries, fund purchases of tactical equipment and medical supplies, and serve as a primary source for Russian TV war coverage. The council calls it a digital front and another battlefield in the Ukraine conflict.

What surprised me was the huge audience that Russian Telegram has: an estimated 30M monthly active users, billions of views, and a cozy relationship with various Russian state-sponsored traditional TV channels. There are even channels run by the NY Times and Washington Post that were created to get around website and other internet content blocks.

By now, most of us are familiar with the term “catch and kill” as it applies to media outlets buying stories that are never intended to run. Similarly, pro-Russia Telegram channels are paid not to mention specific people or companies.

My analysis for Avast’s blog about the data privacy of various messaging networks from early 2021 shows that Telegram isn’t as anonymous as many people first thought. The council’s report confirms this, finding government crackdowns on supposedly anonymous Telegram channels, with real-world consequences of arrest and prison terms for those channels that take anti-government positions. Even so, there are many Telegram channels that continue to be critical of government policies and operations, such as those supporting last summer’s failed Wagner mutiny. “While Telegram positions itself as a censorship-free platform, the available evidence demonstrates how the service is not a completely safe place for critics of the war,” they wrote. Wagner’s head Yevgeny Prigozhin discovered this firsthand and died after initially declaring his mutinous intentions on Telegram.

Some of Russia’s military bloggers offer occasional criticism of the war, which adds to their credibility and popularity. “Users see their efforts as trustworthy and balanced, especially when compared to state media resources,” the council’s report notes. That is not only insidious but dangerous, especially as many posts are widely shared and get millions of views.

As I mentioned earlier, many of the Telegram accounts openly ask for donations, providing bank account numbers and crypto wallet addresses, mostly in Bitcoin and Tether (ironically, a cryptocurrency that is tied to the US dollar). The funds collected have been significant, equivalent to millions of dollars. The accounts are also used for recruiting fighters and coordinating hacktivism efforts such as DDoS attacks on Ukrainian targets, including civilian infrastructure, government data centers, and banks. Ironically, Telegram is also used to help Russians avoid the draft, with all sorts of tips and strategies on how to emigrate out of the country.

The final irony is that Telegram was created by two Russian brothers to get around government censorship, and was blocked by the government for several years. The brothers now live in Dubai and the Russian government has decided to leverage the network to amplify its propaganda and complement its communications.

The arrival of the digital guillotine

Our online cancel culture took another step deeper into the morass and miasma, showing how sophisticated, toxic, and partisan it has become. The 2024 version now comes with a new label, the digitine (for digital guillotine), meaning cutting off discussion of the other side, boycotting the companies that have taken opposing positions, and taking the fight to a worldwide audience.
Remember the Facebook ad boycott of the summer of 2020? That seems so naive now. Here are a few ways things have gotten worse:
  1. Much of the digitine can be traced to the division over two controversial wars, which has ratcheted up the hyper-political volume. Either you are for Hamas (as incredible as that sounds) or for Israel. Pro-Russia or pro-Ukraine. There is no middle ground.
  2. It is religious. Jew vs. Arab. Or more accurately, pro-Jew vs. anti-Jew. Bring out those dusty antisemitic tropes and re-quote the Protocols of the Elders of Zion. They aren’t so dusty after all. This has created all sorts of secondary corporate spillover effects. Say your company announces support for one or the other side. That triggers all sorts of boycotts and protests (as an example, what is happening in Malaysia — a Muslim-majority country — with complaints about Starbucks’ support of Israel). It takes apart our global village.
  3. It is directed at celebs and influencers, not the digital platforms themselves as the 2020 Facebook ad boycott was. That makes it easier to digest, to put on placards, to gather media coverage. The viral nature of these clips, gassed up by social networks, feeds into the outrage machinery, which brings these campaigns quickly to millions.
  4. Speaking of which — the dis/misinformation tooling has gotten better. Thanks, AI. Who needs Russian human-based troll factories when you can generate the memes faster with GPU-laden computers? This is aided and abetted by the easy manufacture of deepfaked celebs who are “captured” espousing one thing or another. (Scarlett will appear before the House cybersecurity committee later this year, oh boy!)

Sure, you can silence the folks on your feed who are caught up in these campaigns. Or leave the worst offending platforms (such as the one that uses a single letter). But these are like using a band-aid to stop arterial bleeding.

The latest threat to ecommerce: crackdowns by the US Customs and Border Protection

If you want to ship illegal goods into the US, you might think sending them via air freight is probably the worst way to get them into the country. You would be wrong. Tens of thousands of tons of shipments enter our air freight ports every day, and the vast majority of them receive no inspection whatsoever.

In the past, the US Customs and Border Protection (CBP) agency has made it easier, particularly for smaller-volume shippers, to send their stuff here without having to pay any duties or tariffs, under what is called an Entry Type 86 exception. This means that if the value of the items is less than $800 per buyer per day that the shipment arrives, nothing is owed. Last year a billion such packages came into the US, with many coming from two Chinese shippers, Temu and Shein.

But criminals are clever, at least initially. Many of them have taken advantage of Type 86 exemptions to ship drug precursor chemicals, raw textiles, and other things, knowing that their cargo won’t be touched as it moves through the ports. Well, that situation has changed, and now CBP is checking things more carefully. As you might imagine, given the tonnage that goes through our ports, this is slowing things down considerably. The stricter scrutiny has had results: CBP has suspended customs brokers from the Type 86 program and seized many illegal shipments.

There are several downstream problems that could happen. First, expect delays on your favorite Amazon package that isn’t in their own warehouse and has to come from overseas. Cargo flights will be delayed or cancelled when the warehouse ports fill up with yet-to-be-inspected merchandise. Second, criminals will undoubtedly migrate to maritime shipments, which don’t get much in the way of inspection either. Third, major shippers will probably shift to consolidating orders and shipping to their own warehouses. All this means longer shipping times and these delays could result in higher prices to the ultimate consumer. All of this turmoil could spell trouble for legit ecommerce businesses that rely on predictable shipments of their goods, which is ironic when you think about it.