The Hidden AI Arms Race

Something remarkable is happening in plain sight, and most people—including the incumbents—aren’t paying attention. While everyone debates whether AI will replace jobs or transform industries, no one is talking about how AI is already systematically displacing the productivity tools we’ve used for decades.

This blog is an experiment in using AI tools. And while normally I write everything you read here manually, today’s entry is 95% fabricated by machines. The following is based on an actual conversation between myself and Bob Matsuoka about the stealth transformation reshaping how we work—and why the tech giants are missing this opportunity. Bob publishes his HyperDev blog, a technical publication focused on practical applications of agentic coding technologies. Drawing on his experience as former Head of Product Engineering at TripAdvisor and CTO of Citymaps, he provides hands-on reviews and strategic insights for developers navigating AI-powered development tools. He currently serves as fractional CTO for multiple companies while actively building with the technologies he covers. We last saw Bob’s work two decades ago, when he penned a piece on why the relatively new iPhone was a big deal — for its Clock app.

Bob and I spoke earlier in the week; he recorded our meeting with Granola, a note-taking app. He then copied the transcript into a Claude thread that he keeps for this purpose. While that was being processed, he set up another Claude project to extract context from my previous blog posts. We each then edited a shared Google Doc to clean up our respective dialog, spending about 30 minutes apiece on that work, not counting the hour we spent talking to each other.

Bob: You know, that VisiCalc comparison keeps rattling around in my head. But here’s the thing—what I’m seeing isn’t just a better spreadsheet. Six months ago, if I needed something done, I’d think “Gmail” or “Excel.” Now? Claude is my first stop for everything.

David: So when you say “go first”—you’re talking about bypassing the usual suspects entirely? Gmail, Calendar, the whole productivity stack that we’ve all been living in for the past decade-plus? That’s a pretty fundamental shift in user behavior.

Bob: This local MCP bridge I’ve got running—it connects to 55 different tools. Apple contacts, Google Calendar, Gmail. I can read emails, write responses, schedule meetings, all from what amounts to an AI dashboard. The thing just works.
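(For the curious, here is a minimal sketch of what one tool in such a bridge can look like, using the open-source Python MCP SDK. The server name and the stubbed calendar lookup are my illustrative assumptions, not Bob’s actual setup.)

```python
# A minimal sketch of one tool in a local MCP bridge, assuming the
# official Python MCP SDK. The server name and the stubbed calendar
# data are hypothetical; a real bridge would call Google or Apple APIs.
from datetime import date

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-bridge")  # hypothetical server name

@mcp.tool()
def todays_events(calendar: str = "work") -> list[str]:
    """Return today's events from a (stubbed) calendar backend."""
    return [f"{date.today()}: 10am standup ({calendar} calendar)"]

if __name__ == "__main__":
    mcp.run()  # an MCP client such as Claude Desktop launches this
```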

David: Interesting. So we’re talking about the AI interface becoming your primary operating environment. I remember when browsers started feeling like the “real” desktop for most people—that was maybe 20 years ago? This sounds like the next evolution of that shift, but faster and more comprehensive.

Bob: Pretty much. And here’s what caught me off guard—Anthropic and OpenAI didn’t make any big announcements about this. They just shipped it. Last week, Anthropic quietly expanded retrieval-augmented generation for projects. You can dump a thousand documents into a project folder now and it retrieves across all of them. No press release. No marketing campaign. They’re moving so fast they don’t bother promoting major features.

David: The velocity here is what really gets my attention. I covered Google’s office suite launch back in the day—that was a multi-year enterprise sales campaign with migration consultants, pilot programs, the whole nine yards. These AI companies? They’re just shipping features like it’s continuous deployment. No marketing blitz, no sales engineering team, no six-month enterprise evaluation cycles. It’s almost like they don’t even realize they’re disrupting a multi-billion dollar market. Their AI tools gain new functions every day.

Bob: Exactly that. But the underlying architecture is different this time. Let me explain—last month I had this travel client who needed a pricing model. Old me would have built some monster Excel sheet with formulas and maybe Visual Basic scripts. Instead, I uploaded the raw CSV data to a custom GPT, had it write Python code using NumPy, and boom. I had an interactive model where the client could ask “What’s my margin on a five-day trip to Miami during peak season?” No spreadsheet. Just answers.
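(Here is a hedged sketch of the kind of model Bob describes, assuming a hypothetical trips.csv with destination, days, season, price, and cost columns. Bob mentions NumPy; I use pandas here for brevity, which builds on it. The column names and margin formula are illustrative, not his client’s actual data.)

```python
# A sketch of a pricing model an AI might generate from raw CSV data.
# Assumes a hypothetical trips.csv with columns:
#   destination, days, season, price, cost
import pandas as pd

trips = pd.read_csv("trips.csv")

def margin(destination: str, days: int, season: str) -> float:
    """Average margin, as a fraction of price, for matching trips."""
    rows = trips[
        (trips.destination == destination)
        & (trips.days == days)
        & (trips.season == season)
    ]
    return float(((rows.price - rows.cost) / rows.price).mean())

# "What's my margin on a five-day trip to Miami during peak season?"
print(f"{margin('Miami', 5, 'peak'):.1%}")
```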

David: Rather than handing clients a spreadsheet with 47 tabs and saying “good luck figuring this out,” you gave them something that actually responds to questions. That’s a fundamentally different interaction model from what we’ve been calling productivity software for the past couple decades.

Bob: And they never saw a spreadsheet. They just got answers. That’s the shift—from tools that require expertise to interfaces that provide insight.

The Pattern David Has Seen Before

David: I’ve covered enough of these platform transitions to recognize the pattern by now. IBM dominated when mainframes were everything—I remember those room-sized monsters that cost more than most houses. Microsoft crushed them with personal computers (which seemed crazy at the time, if you think about it). Google ate Microsoft’s lunch with web-based productivity tools. Each time I watch it happen, I think “this is unprecedented”—but the script keeps repeating itself with different players.

Bob: What’s the pattern you see repeating?

David: It’s the same story every cycle—the incumbent gets comfortable with their revenue streams and loses their edge. IBM couldn’t imagine computing without those room-sized mainframes (which, let’s be honest, still generate fantastic margins). Microsoft initially dismissed web-based productivity as “toys” when Google Docs launched. Now Google’s painted themselves into a corner with the advertising model—everything has to generate data for ad targeting, which fundamentally conflicts with what users actually want from productivity tools. They can’t see past their golden goose to imagine what work looks like when it’s not subsidized by surveillance capitalism.

Bob: That’s exactly what’s happening. But look, there’s another layer here. Google Office replaced desktop software with web apps—same basic concept, different delivery. But AI tools? They’re reasoning engines. They can process context, maintain state across tasks, generate stuff that didn’t exist before. That’s not an upgrade. That’s a different category.

David: Can you walk me through a specific example? I’m trying to understand how this “collaborative intelligence” differs from, say, really sophisticated autocomplete or code suggestion tools. Where’s the line between automation and genuine collaboration?

Bob: I built this MCP Gateway project with pretty open-ended instructions—basically told it “solve problems however you think makes sense.” The thing that gets me is how it discovers its own solutions. Like, I asked it to set up a reminder for me, expecting it would use my Google Tasks tool. But because I said “reminder,” it went off and found the macOS Reminders app on its own, wrote AppleScript code to access it, and created the reminder there. I never told it about the Reminders app. It just discovered that path. That’s what I mean by collaborative intelligence—not that it’s actually intelligent, but it’s genuinely good at discovering new ways to solve problems. It doesn’t just follow patterns; it finds solutions I wouldn’t have thought of.
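(The path it found is easy to reproduce by hand. Below is a sketch of the same trick: driving the macOS Reminders app through AppleScript via osascript. The reminder text is illustrative.)

```python
# A sketch of the route the agent discovered: creating a reminder in
# the macOS Reminders app by shelling out to osascript with a small
# AppleScript snippet. macOS only; the reminder text is illustrative.
import subprocess

def create_reminder(text: str) -> None:
    script = (
        'tell application "Reminders" to make new reminder '
        f'with properties {{name:"{text}"}}'
    )
    subprocess.run(["osascript", "-e", script], check=True)

create_reminder("Send Bob the edited transcript")
```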

David: That’s helpful context, but help me connect this back to the productivity software question. How does understanding—excuse me, analyzing—code architecture translate to replacing the basic office tools that most knowledge workers live in every day? Excel, Google Docs, PowerPoint presentations?

Bob: The AI doesn’t just format code or fix syntax errors. I’ve configured it to analyze what I’m trying to build. It refactors entire architectures, suggests performance optimizations, debugs problems I didn’t know existed. That’s not a better text editor—that’s collaborative intelligence.

Why the Incumbents Are Missing It

David: Here’s what puzzles me about this whole situation—Google’s got more money than they know what to do with, they’ve practically got a monopoly on search, they hired half the world’s AI researchers, they invented the transformer architecture that makes all this possible. They certainly had plenty of AI-themed announcements at their I/O conference recently, so they are building stuff. But they should own this space, not just be another player.

Bob: Two things. First, they’re trapped by their own success. Their business model depends on keeping people in their ecosystem, showing ads, and collecting data. But AI-first productivity doesn’t fit that model. When I ask Claude to analyze my calendar and suggest optimizations, the last thing I want is ads mixed into that analysis. Second, they’re thinking about AI as a feature to bolt onto existing tools. Google’s putting AI into Docs and Sheets. Microsoft’s adding Copilot to Office. That’s like adding internet features to desktop software in 1995. They’re missing that the entire interface paradigm is changing.

David: Yes, there’s also the organizational antibody problem that I’ve documented at pretty much every large tech company I’ve covered over the decades. You’ve got thousands of engineers working on incremental improvements to Gmail, Docs, Sheets—products that generate billions in revenue and support entire divisions. Walk into a meeting and suggest “let’s cannibalize all this for something completely different” and watch how fast you get politely but firmly shown the door. (Even if you’re absolutely right about the technology direction.) The incentive structures just don’t support that kind of self-disruption, especially when the current products are still growing and profitable.

And that brings us back to the deployment velocity problem. These AI companies are shipping major new capabilities weekly, sometimes multiple times per week. Google’s enterprise software division? They’re operating on quarterly release cycles if you’re lucky. That’s not just a technology gap—that’s a fundamental cultural and organizational chasm that’s very difficult to bridge in large organizations.

Bob: Plus, they’re not constrained by existing user expectations. When Anthropic ships a new capability, users adapt their workflows to take advantage of it. When Google changes something in Gmail, users complain that their familiar interface looks different.

The Enterprise Implications

David: Let’s shift focus to enterprise implications, because this is where things get really messy. I’ve been getting calls from CIOs and IT directors who are completely caught off guard by this bottom-up adoption pattern. Their employees are using these AI tools for mission-critical work—and reporting dramatic productivity gains—but IT has zero visibility into data governance, security policies, or compliance implications. It’s shadow IT on steroids, and frankly, most organizations aren’t equipped to handle it. This will sound familiar to those of us who started using PCs back in the 1980s.

Bob: That’s the other thing that’s different about this transition. Previous platform shifts were top-down. IT departments evaluated productivity suites, negotiated enterprise contracts, managed rollouts. But AI tools are getting adopted bottom-up by individual workers who see immediate gains. I know developers using Claude or Cursor for all their coding because it makes them probably 3x more productive. They’re not waiting for company approval. They’re just using the tools and dealing with governance later.

David: Shadow IT on steroids—and I say this as someone who’s been tracking unauthorized technology adoption in enterprises since people were sneaking spreadsheets into offices in the 1990s and Dropbox accounts onto corporate networks back in 2008. The scale and speed of this AI tool adoption is unlike anything I’ve documented before.

This puts IT departments into an impossible situation: they can try to block these tools and essentially tell their organization “we’d rather be less productive, thank you very much.” Or they can try to manage technologies they don’t understand well enough to create appropriate governance policies. I’ve watched CIOs try to navigate this, and honestly, it’s not pretty. The usual enterprise software playbook—pilot programs, vendor evaluations, compliance frameworks—doesn’t work when your employees are already using these tools daily and seeing immediate benefits.

Bob: Here’s what the AI companies figured out. They’re offering enterprise versions with security features, compliance controls, audit trails. But the value proposition isn’t “replace your existing tools.” It’s “make your people probably more effective at their jobs.” Hard to argue with that.

David: What’s your timeline prediction here? Because in all my years covering enterprise software transitions—and I’ve tracked everything from mainframe migrations to cloud adoption—they usually take forever. Even cloud-based email adoption still took most enterprises three to five years to complete. But this feels fundamentally different from anything I’ve documented before.

Bob: Based on what I’m seeing with my clients? Eighteen months, maybe less. When I can build sophisticated data analysis in three hours that used to take weeks, I’m not going back. And once you experience that kind of leverage, everything else feels like working with oven mitts on.

What Comes Next

David: What should people in the industry be watching for? What are the early warning signs that this shift is accelerating?

Bob: Multi-agent systems. Right now, most people use AI tools one at a time—Claude for writing, Cursor for coding, maybe Perplexity for research. (In fact, both Anthropic and OpenAI are already using multi-agent systems in their desktop tools.) But I’m starting to see orchestration tools that coordinate multiple AI agents on complex tasks. That’s when this really explodes, because you’re not just replacing individual productivity tools. You’re replacing entire workflows.
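(To make the orchestration idea concrete, here is a toy sketch of the pattern Bob describes: a coordinator that chains specialist “agents,” with plain Python functions standing in for what would really be separate model calls.)

```python
# A toy sketch of multi-agent orchestration. The two "agents" are
# plain functions standing in for separate model calls; a real
# orchestrator would dispatch each step to a different AI service.
def research_agent(topic: str) -> str:
    # Stand-in for a research-focused model or tool call
    return f"collected notes on {topic}"

def writing_agent(notes: str) -> str:
    # Stand-in for a drafting-focused model call
    return f"draft memo based on: {notes}"

def orchestrate(topic: str) -> str:
    """Coordinate the agents: research first, then write."""
    notes = research_agent(topic)
    return writing_agent(notes)

print(orchestrate("AI-native productivity tools"))
```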

David: And what about the incumbent giants? Google, Microsoft—what’s their play here, assuming they wake up before it’s too late?

Bob: They need to pick a lane. They can survive losing the innovation high ground—IBM did, (old) Microsoft did. Both companies are more profitable now than when they dominated. But if they want to remain relevant for the next generation of workers, they need to build AI-native productivity platforms, not bolt AI features onto legacy products. The talent migration tells the story—I’m seeing engineers leave Google and Microsoft for AI companies, not because of money but because that’s where the interesting problems are.

That’s the real signal right there. When the smartest people in your industry start working elsewhere, you’re no longer the future. You’re just the past that happens to be profitable.

The Bottom Line

As our conversation wound down, I kept thinking about my own observation that this transition feels both familiar and unprecedented. The pattern of disruption—incumbent complacency, technological shift, talent migration—follows a predictable script. But the speed and scope of change feels fundamentally different.

“We’re not just watching productivity software get replaced,” I found myself reflecting as we wrapped up. “We’re watching the entire concept of ‘software’ evolve into something more like collaborative intelligence.”

The implications extend far beyond which productivity suite you use. We’re looking at a fundamental shift in how humans and machines work together, happening largely under the radar while everyone debates the bigger questions about AI’s future.

The companies that recognize this shift early—and adapt their workflows, their hiring, their entire approach to knowledge work—will have a massive advantage. Those that don’t may find themselves wondering how they went from industry leaders to legacy providers in the span of a few quarters.

The arms race is real. It’s happening now. And the winners aren’t necessarily the companies with the biggest marketing budgets or the most enterprise sales reps. They’re the ones building the tools that knowledge workers reach for first every morning.

How one PR firm has exploited AI agents

AI has certainly taken plenty of mindshare as of late. 10Fold is a Bay Area PR firm that has been using it for more than 18 months, developing its own AI agents and other routines to make its staff more productive, provide more focused service to its mostly high-tech clients, and analyze approaches to acquiring new clients. I recently spoke to Susan Thomas, their CEO.

“Our first AI app was developed out of sheer desperation.” They had an army of interns to sort through the “coverage,” or press clips, about their clients and competitors. “The bigger the companies we followed, the more people we needed,” she said. They did more than just count clips: they went deeper to look at keywords, sentiment, and the messaging used in the clips, and they also examined the quality of the articles and measured their engagement. Their agent learned how to do all of this, saving the time of three interns in total. It was developed by an outside firm at a cost of $80,000. “Now we don’t have to hire this army of interns to do this analysis.”
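(As a rough illustration of the clip-analysis work the agent took over, here is a toy sketch that counts keyword mentions and scores crude sentiment per clip. The keyword and sentiment word lists are my assumptions, not 10Fold’s actual system.)

```python
# A toy sketch of press-clip analysis: keyword mentions plus a crude
# sentiment score. Word lists are illustrative assumptions; a real
# agent would use an LLM or a trained sentiment model instead.
KEYWORDS = {"zero trust", "ransomware", "cloud"}   # assumed client terms
POSITIVE = {"praised", "innovative", "leading"}
NEGATIVE = {"breach", "flaw", "lawsuit"}

def analyze_clip(text: str) -> dict:
    lowered = text.lower()
    return {
        "keywords": [k for k in KEYWORDS if k in lowered],
        "sentiment": sum(w in lowered for w in POSITIVE)
                   - sum(w in lowered for w in NEGATIVE),
    }

print(analyze_clip("Acme's zero trust rollout was praised by analysts."))
```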

They ultimately paid for ChatGPT licenses. These, along with Gemini, proved their utility in what she calls the “discovery phase,” when they are approached by a prospective client. The manual process would take two or more hours; the chatbots took seconds. They would get all sorts of intel: venture funding, employee counts, office locations, key analyst relationships and media coverage. “We also figured out how to find the prospect’s current PR agency, another long slog that was reduced to a few seconds,” she said.

When they first began using chatbots, they got some immediate benefit, but it still took a series of eight or more separate prompts to do all of this research. Now they have an agent that consolidates it all.
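(A consolidated discovery agent can be as simple as folding those eight prompts into one structured request. Here is a hedged sketch using the standard OpenAI Python client; the model choice and prompt wording are my assumptions, not 10Fold’s actual agent.)

```python
# A sketch of consolidating multiple discovery prompts into a single
# structured request. Uses the standard OpenAI chat completions API;
# the model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ASPECTS = [
    "venture funding", "employee count", "office locations",
    "key analyst relationships", "recent media coverage",
    "current PR agency",
]

def discovery_brief(company: str) -> str:
    prompt = (
        f"Research the company {company}. For each item below, "
        "give a short answer with sources:\n"
        + "\n".join(f"- {a}" for a in ASPECTS)
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(discovery_brief("Example Robotics Inc."))
```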

They wrote another AI agent that would research whether their ideas had been already used by the prospect. This has bled into having another AI agent analyze contributed article ideas to see if they are unique.

Thomas also uses AI to review her emails, catch typos and fix any style variations. She has uploaded their corporate handbook to make it easier to query about policies without having to read through the entire document.
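(Querying an uploaded handbook is essentially retrieval. As a toy illustration, here is a naive keyword-overlap retriever; a real setup like hers relies on embeddings and an LLM, and the handbook snippets below are invented.)

```python
# A toy sketch of handbook Q&A via naive retrieval: rank handbook
# sections by keyword overlap with the question. Real chatbot
# projects use embeddings plus an LLM; these snippets are invented.
def best_section(question: str, sections: dict[str, str]) -> str:
    q = set(question.lower().split())
    return max(
        sections,
        key=lambda title: len(q & set(sections[title].lower().split())),
    )

handbook = {
    "PTO policy": "Employees accrue fifteen days of paid time off per year.",
    "Expenses": "Submit receipts within thirty days for reimbursement.",
}
print(best_section("How many days of paid time off do I get?", handbook))
```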

The net result is that the majority of her staff is now AI-fluent. “AI is making our media campaigns more successful, and making our reports more interesting. We aren’t writing the same materials over and over again, and our client retention is solid.”

What is most significant is that Thomas wrote four of these agents herself, each taking 10 or so minutes to code up. To me this demonstrates their power, and I recall when spreadsheets were first coming into corporations back in the early 1980s. AI agents are having a similar effect.


Time to pare down your mobile app portfolio

When iPhones and Android devices were first introduced, I recall the excitement. We would download apps willy-nilly, and many of them we would use maybe twice before souring on their bad or frustrating UX. The excitement was everywhere, and back in 2009, I attended the final presentations of a Washington University computer science class on how to develop new iOS apps. The class is still being taught today, and while 15 years may seem like a lifetime, we are still dealing with basic issues of app security and data privacy. With all the buzz surrounding DeepSeek this week comes the inevitable analysis by NowSecure of the major security and privacy flaws in its iOS app.

Ruh-oh. Danger Will Robinson! (Insert your favorite meme here.)


So much for app excitement. I have come full circle: When I got my latest iPhone last year, I spent some time doing the opposite: paring down my apps to the barest minimum.

It is time to take another close look at your app portfolio, and I suggest you spend part of your weekend doing some careful home screen editing. Now, I wasn’t one of the many millions (or so it seems) of folks who downloaded DeepSeek, or who freaked out when TikTok went down for a few hours and rushed to download Another Chinese Social Media App in its place.

But still. We should use the privacy abuses found in DeepSeek’s app as a teachable moment.

Your phone is the gateway to your life, to your electronic soul. It is also a major security sinkhole. It has become a major gateway for phishing attacks, because often we are scrolling around and not paying attention to what we are doing, especially when we get an “emergency” text or email.

But let’s talk about our apps. If you read the entire NowSecure report, you will see that you should run away from using the DeepSeek app. It sends your data across the intertubes unencrypted. When it does use encryption, it uses older methods that are easily compromised, and it hardcodes its keys in the app, making your data easy to read. It also hoovers up enough device fingerprinting info to track your movements. And its terms of service say quite plainly that all this information is sent to Chinese servers. Thanks, but no thanks.

Why did I initially pare down my apps last year? I did it for a combination of reasons. First, it seemed like a good time to review all those cute icons and cut the ones that were clogging my home screens. I really wanted to get down to a single screen, but settled for two screens full of apps. Also, I wasn’t comfortable with the level of private detail that the bad apps were sending to their corporate overlords, or to data brokers, or to both.

To make it easier for your Great App Cull, I suggest the divide and conquer approach. I divided my apps into four categories:

Type 1 apps were those I knew had major privacy problems, such as Facebook’s Messenger, Twitter, and Google Meet and Maps. I am sure there were others that don’t immediately come to mind. You can debate whether the privacy concerns are real or not, but I think most of us would agree that DeepSeek would definitely fall into this bucket.

Type 2 were apps so poorly designed that I would be better off using the web versions, such as the T-Mobile and Instacart apps and several banking apps.

Type 3 were apps that I had downloaded for some specific task, such as attending a conference, or that I used maybe once or twice, such as the Bluesky app or the Ring camera app. These were also poorly designed.

Type 4 were apps no longer relevant to my life, such as the one controlling the Ecobee thermostat in a place I no longer lived, or the collection of VPN apps I had tested for CNN and no longer used.

I am sure that years from now DeepSeek’s app will be a case study of what not to do to write secure mobile apps. This is why many countries and agencies have already banned its use on government-owned devices and why there is a bill before our Congress to do so.

CSOonline: Python administrator moves to improve software security

The administrators of the Python Package Index (PyPI) have begun an effort to improve the security of the hundreds of thousands of software packages listed there. The goal of the effort, which began last year, is to identify and stop malware-laced packages from proliferating across the open-source community that contributes and consumes Python software.

The effort, called Project Quarantine, is described in a blog post by Mike Fiedler, the sole administrator responsible for PyPI security. The project allows PyPI administrators and a select group of developers to mark a project as potentially harmful and prevent it from being easily installed by users, avoiding further harm.

In my blog post for CSOonline, I describe this effort and how it came about.

How IT can learn from Target and Walmart

With all the holiday shopping happening around now, you have probably visited the websites of Target and Walmart, and maybe that prime Seattle company too. What you probably haven’t visited are two subsidiary sites of the first two companies that aren’t selling anything, but are packed with useful knowledge for IT operations and application developers. This comes as a surprise because:

  • they both contain a surprising amount of solid IT information that, while focused on the retail sector, has broader implications for other business contexts
  • they deal with many issues at the forefront of innovation (such as open source and AI), not something normally associated with either company
  • both sites are a curious mixture of open source tool walkthroughs, management insights, and software architecture and design
  • many of the posts on both sites are very technical deep dives into how they actually use the software tools, again not something you would ordinarily expect to find from these two sources

Let’s take a closer look. One post on Target’s site is by Adam Hollenbeck, an engineering manager. He wrote about their IT culture: “If creating an inclusive environment as a leader is easy for you, please share your magic with others. The perfect environment is a challenge to create but should always be our north star as leaders.” Mark Cuban often opines on this subject. Another post goes into detail about a file analysis tool that was developed internally and released as open source. It has a user-friendly interface specifically designed to visualize files, their characteristics, and how they interconnect.

Walmart’s Global Tech blog site goes very heavy into its AI usage. “AI is eliminating silos that developed over time as our dev teams grew,” Andrew Budd wrote in one post, and GenAI chatbot solutions have been rolled out to optimize Walmart’s Developer Experience, a central tool repository. There are also posts about other AI and open source projects, along with a regular cyber report about recent developments in that arena. This is the sort of thing you might find on FOSSForce.com or TheNewStack, both news sites.

Another Walmart article, posted on LinkedIn, addresses how AI is changing the online shopping experience this season with more personalized suggestions and predictive content (does this sound familiar from another online site?), and mentions how all Sam’s Club stores have the “just walk out” technology that was first pioneered by Amazon. (I wrote about my 2021 experience here.)

One other point: both of these tech sub-sites are hard to find. Neither tech.target.com (not to be confused with techtarget.com) nor tech.walmart.com is linked from its company’s home page. “I’m not sure these pages should be linked from the home pages,” said Danielle Cooley, a UX expert whom I have known for decades. “As cool as this stuff is for people like you and me and your readers, it’s not going to rise to home page level importance for a company with millions of ecommerce visitors per day.” But she cautions that finding these sites could be an issue. “I did a quick google of ‘programming jobs target’ and ‘cybersecurity jobs target’ and still didn’t get a direct link to tech.target.com, so they aren’t aiming at job openings. But also, the person interested in cybersecurity will not also be the person interested in an AI shopping assistant, for example.” And even when visitors do land on these pages, they might still go away frustrated, because the content covers a lot of ground without necessarily addressing their specific interest.

You’ll notice that I haven’t said much about Amazon here. It really isn’t fair to compare these two tech sites to what Amazon is doing, given Amazon’s depth in all sorts of tech knowledge. And to be honest, in my extended family, we tend to shop more at Amazon than at either Target or Walmart. But it is nice to know that both Target and Walmart are putting this content out there. I welcome your own thoughts about their efforts.

CSOonline: Top 5 security mistakes software developers make

Creating and enforcing the best security practices for application development teams isn’t easy. Software developers don’t necessarily write their code with these in mind, and as the appdev landscape becomes more complex, securing apps becomes more challenging across cloud computing, containers, and API connections. It is a big problem: security flaws were found in 80% of the applications scanned by Veracode in a recent analysis.

As attacks continue to plague cybersecurity leaders, I compiled a list of five common mistakes by software developers and how they can be prevented for a piece for CSOonline.

The Cloud-Ready Mainframe: Extending Your Data’s Reach and Impact

(This post is sponsored by VirtualZ Computing)

Some of the largest enterprises are finding new uses for their mainframes. And instead of competing with cloud and distributed computing, the mainframe has become a complementary asset that adds new productivity and a level of cost-effective scale to existing data and applications. 

While the cloud does quite well at elastically scaling up resources as application and data demands increase, the mainframe is purpose-built for the largest-scale digital applications. More importantly, it has kept pace as these demands have mushroomed over its 60-year reign, which is why so many large enterprises continue to use them. Having mainframes as part of a distributed enterprise application portfolio can be a significant and savvy move, and a reason for their growing future role and importance.

Estimates suggest that there are about 10,000 mainframes in use today, which may not seem like a lot except that they can be found across the board in more than two-thirds of Fortune 500 companies. In the past, they used proprietary protocols such as Systems Network Architecture, had applications written in now-obsolete coding languages such as COBOL, and ran on custom CPU hardware. Those days are behind us: instead, the latest mainframes run Linux and TCP/IP across hundreds of multi-core microprocessors.

But even speaking cloud-friendly Linux and TCP/IP doesn’t remove two main problems for mainframe-based data. First, many mainframe COBOL apps are their own islands, isolated from modern coding pipelines, programming tools, and the end-user Java experience. Breaking this isolation usually means an expensive effort to convert and audit the code.

A second issue has to do with data lakes and data warehouses. These applications have become popular ways for businesses to spot trends quickly and adjust IT solutions as their customers’ data needs evolve. But the underlying applications typically require near real-time access to existing mainframe data, such as financial transactions, sales and inventory levels, or airline reservations. At the core of any lake or warehouse is a series of “extract, transform and load” (ETL) operations that move data back and forth between the mainframe and the cloud. These efforts transform data only at a particular moment in time, and they require custom programming to accomplish.
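(To make the ETL point concrete, here is a minimal sketch of one such point-in-time cycle, assuming the mainframe side has already exported a transactions.csv and using SQLite as a stand-in warehouse. The file, column, and table names are illustrative.)

```python
# A minimal sketch of one point-in-time ETL cycle between a mainframe
# export and a warehouse. File, column, and table names are
# illustrative; SQLite stands in for a real data warehouse.
import sqlite3

import pandas as pd

# Extract: a snapshot exported from the mainframe
df = pd.read_csv("transactions.csv", parse_dates=["posted_at"])

# Transform: aggregate to the shape the warehouse expects
daily = df.groupby(df["posted_at"].dt.date)["amount"].sum().reset_index()
daily.columns = ["day", "total_amount"]
daily["day"] = daily["day"].astype(str)  # store dates as ISO strings

# Load: replace the warehouse table with this snapshot
with sqlite3.connect("warehouse.db") as con:
    daily.to_sql("daily_sales", con, if_exists="replace", index=False)
```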

What was needed was an additional step to make mainframes easier for IT managers to integrate with other cloud and distributed computing resources, and that means a new set of software tools. The first step came from initiatives such as IBM’s z/OS Connect, which enabled distributed applications to access mainframe data. But it continued the mindset of a custom programming effort and didn’t really give distributed applications direct access.

To fully realize the vision of mainframe data as equal cloud nodes required a major makeover, thanks to companies such as VirtualZ Computing. They latched on to the OpenAPI effort, which was previously part of the cloud and distributed world. Using this protocol, they created connectors that made it easier for vendors to access real-time data and integrate with a variety of distributed data products, such as MuleSoft, Tableau, TIBCO, Dell Boomi, Microsoft Power BI, Snowflake and Salesforce. Instead of complex, single-use data transformations, VirtualZ enables real-time read and write access to business applications. This means the mainframe can now become a full-fledged and efficient cloud computer. 
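(From the distributed side, that access looks like plain HTTP against an OpenAPI-described endpoint. The sketch below is hypothetical; the host, path, and response field are my inventions, not VirtualZ’s actual interface.)

```python
# A sketch of OpenAPI-style, real-time access to mainframe data from
# a distributed app. The host, path, and response field are
# hypothetical, not VirtualZ's actual API.
import requests

BASE = "https://mainframe-gateway.example.com/api/v1"  # hypothetical

def inventory_level(sku: str) -> int:
    """Read a live inventory figure held on the mainframe."""
    resp = requests.get(f"{BASE}/inventory/{sku}", timeout=10)
    resp.raise_for_status()
    return resp.json()["on_hand"]  # assumed response field

print(inventory_level("SKU-12345"))
```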

VirtualZ CEO Jeanne Glass says, “Because data stays securely and safely on the mainframe, it is a single source of truth for the customer and still leverages existing mainframe security protocols.” There isn’t any need to convert COBOL code, and no need to do any cumbersome data transformations and extractions.

The net effect is an overall cost reduction since an enterprise isn’t paying for expensive high-resource cloud instances. It makes the business operation more agile, since data is still located in one place and is available at the moment it is needed for a particular application. These uses extend the effective life of a mainframe without having to go through any costly data or process conversions, and do so while reducing risk and complexity. These uses also help solve complex data access and report migration challenges efficiently and at scale, which is key for organizations transitioning to hybrid cloud architectures. And the ultimate result is that one of these hybrid architectures includes the mainframe itself.

CSOonline: Third-party software supply chain threats continue to plague CISOs

The latest software library compromise, of an obscure but popular file compression tool called XZ Utils, shows how critical these third-party components can be in keeping enterprises safe and secure. The supply chain issue is now forever baked into the way modern software is written and revised. Apps are refined daily or even hourly with new code, which makes it more of a challenge for security software to identify and fix any coding errors quickly. It means older, more manual error-checking methods are doomed to fall behind and let vulnerabilities slip through.

These library compromises represent a new front for security managers, especially since they combine three separate trends: a rise in third-party supply-chain attacks, hiding malware inside the complexity of open-source software tools, and using third-party libraries as another potential exploit vector of generative AI software models and tools. I unpack these issues for my latest post for CSOonline here.

Using Fortnite for actual warfare

What do B-52s and a Chinese soccer stadium have in common? Both are using Epic Games’ Unreal Engine to create digital twins to help with their designs. Now, you might think a software gaming engine would be a stretch for retrofitting the real engines on a 60-plus-year-old bomber, but that is exactly what Boeing is doing. The 3D visualization environment makes it easier to design and provides faster feedback to meet the needs of the next generation of military pilots.

This being the military, the notion of “faster” is a matter of degree. The goal is for Boeing to replace the eight Pratt and Whitney engines on each of 60-some planes, as well as update cockpit controls, displays and other avionics. And the target date? Sometime in 2037. So check back with me then.

Speaking of schedules, let’s look at what is happening with that Xi’an stadium. I wrote about the soccer stadium back in July 2022 and how the architects were able to create a digital twin of the stadium to visualize seating sight lines and how various building elements would be constructed. It is still under construction, but you can see a fantastic building taking shape in this video. However slowly the thing is being built, it will probably be finished before 2037, or even before 2027.

Usually, when we talk about building digital twins, we mean taking a company’s data and making it accessible to all sorts of analytical tools. Think of companies like Snowflake, for example, and what they do. But the gaming engines offer another way: duplicate all the various systems digitally, then test different configurations by literally putting a real bomber pilot in a virtual cockpit to see if the controls are in the right place, or whether the fancy new hardware and software systems provide the right information to a pilot. If you look at the cockpit of another Boeing plane, the iconic 747 (now mostly retired), you see a lot of analog gauges and physical levers and switches.

Now look at the 777 cockpit — see the difference? Everything is on a screen.


It is ironic in a way: we are using video gaming software to reproduce the real world by placing more screens in front of the people that are depicted in the games. A true Ender’s Game scenario, if you will.

SiliconANGLE: Security threats of AI large language models are mounting, spurring efforts to fix them

A new report on the security of artificial intelligence large language models, including OpenAI LP’s ChatGPT, documents a series of poor application development decisions that weaken protections for enterprise data privacy and security. The report is just one of many recent examples of mounting evidence of security problems with LLMs, demonstrating the difficulty of mitigating these threats. I take a deeper dive into a few different sources and suggest ways to mitigate the threats of these tools in my post for SiliconANGLE here.