Using Fortnite for actual warfare

What do B-52s and a Chinese soccer stadium have in common? Both are using Epic Games’ Unreal Engine to create digital twins to help with their designs. Now, you might think a gaming engine would be a stretch for retrofitting the real engines on a 60-plus-year-old bomber, but that is exactly what Boeing is doing. The 3D visualization environment makes it easier to design the upgrades and to get faster feedback to meet the needs of the next generation of military pilots.

This being the military, the notion of “faster” is a matter of degree. The goal is for Boeing to replace the eight Pratt and Whitney engines on each of 60-some planes, as well as update cockpit controls, displays and other avionics. And the target date? Sometime in 2037. So check back with me then.

Speaking of schedules, let’s look at what is happening with that Xi’an stadium. I wrote about the soccer stadium back in July 2022 and how the architects were able to create a digital twin of the stadium to visualize seating sight lines and how various building elements would be constructed. It is still under construction, but you can see a fantastic building taking shape in this video. However slowly the thing is being built, it will probably be finished before 2037, or even before 2027.

Usually, when we talk about building digital twins, we mean taking a company’s data and making it accessible to all sorts of analytical tools. Think of companies like Snowflake, for example, and what they do. But gaming engines offer another way to duplicate all the various systems digitally and then test different configurations, by putting a real bomber pilot in a virtual cockpit to see whether the controls are in the right place, or whether the fancy new hardware and software systems can provide the right information to the pilot. If you look at the cockpit of another Boeing plane, the iconic 747 (now mostly retired), you see a lot of analog gauges and physical levers and switches.

Now look at the 777 cockpit — see the difference? Everything is on a screen.

[Image: Boeing 777 cockpit]

It is ironic in a way: we are using video gaming software to reproduce the real world by placing more screens in front of the very people who are depicted in the games. A true Ender’s Game scenario, if you will.

SiliconANGLE: Security threats of AI large language models are mounting, spurring efforts to fix them

A new report on the security of artificial intelligence large language models, including OpenAI LP’s ChatGPT, reveals a series of poor application development decisions that weaken enterprise data privacy and security. The report is just one of many recent examples of mounting evidence of security problems with LLMs, demonstrating the difficulty of mitigating these threats. I take a deeper dive into a few different sources and suggest ways to mitigate the threats of these tools in my post for SiliconANGLE here.

 

SiliconANGLE: Google’s Web Environment Integrity project raises a lot of concerns

Earlier last month, four engineers from Google LLC posted a new open-source project on GitHub called “Web Environment Integrity.” The WEI project ignited all sorts of criticism about its privacy implications and concerns that Google wasn’t being forthcoming about its real purpose.

Remember the problems with web cookies? WEI takes them to a new level. I tell you why in my latest piece here.

 

SiliconANGLE: Apps under attack: New federal report suggests ways to improve software code pipeline security

The National Security Agency and the Cybersecurity and Infrastructure Security Agency late last month issued an advisory memo to help improve defenses in application development software supply chains — and there’s a lot of room for improvement.

Called Defending Continuous Integration/Continuous Delivery (CI/CD) Pipelines, the joint memo describes the various deployment risks and ways attackers can leverage these pipelines. I describe their recommendations and the issues with defending these pipelines in my latest blog for SiliconANGLE.

SiliconANGLE: Databases then and now: the rise of the digital twin

When I first started in IT, back in the Mainframe Dark Ages, we had hulking big databases that ran on IBM’s Customer Information Control System, with applications written in COBOL. These mainframes ran on a complex collection of hardware and operating systems that was owned lock, stock and bus-and-tag barrel by IBM. The average age of the code was measured in decades, and code changes were measured in months. The databases contained millions of transactions, and the data was always out of date because this was a batch system: new data was uploaded every night.

Contrast that to today’s typical database setup. Data is current to the second, code is changed hourly, and the nature of what constitutes a transaction has changed significantly to something that is now called a “digital twin,” which I explain in my latest post for SiliconANGLE here.

Code is written in dozens of higher-level languages with odd names you may never have heard of, and it runs on a combination of cloud and on-premises equipment built from loads of microprocessors and open-source products that can be purchased from hundreds of suppliers.

It really is remarkable that all of these changes have happened within the span of a little more than 35 years. You can read more in my post.

 

 

SiliconANGLE: Cloud conundrum: The changing balance of microservices and monolithic applications

The cloud computing debate isn’t just about migrating to the cloud, but about how cloud apps are constructed. Today’s landscape has gotten a lot more complicated, with virtual machines, cloud computing, microservices and containers. The modern developer has almost too many choices and has to balance the various tradeoffs among those architectures. I examine how to pick the right mix of these technologies, what I call the cloud conundrum, in my latest analysis for SiliconANGLE.

 

Invicti blog: Ask an MSSP about DAST for your web application security

When evaluating managed security service providers (MSSPs), companies should make sure that web application security is part of the offering – and that a quality dynamic application security testing (DAST) solution is on hand to provide regular and scalable security testing. SMBs should evaluate potential providers based on whether they offer modern DAST solutions and services, as I wrote for the Invicti blog this month.

CSOonline: What is the Traffic Light Protocol and how it works to share threat data

Traffic Light Protocol (TLP) was created to facilitate greater sharing of potentially sensitive threat information within an organization or business and to enable more effective collaboration among security defenders, system administrators, security managers and researchers. In this piece for CSOonline, I explain the origins of the protocol, how it is used by defenders, and what IT and security managers should do to make use of it in their daily operations.
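If you haven’t worked with TLP before, the labels themselves are simple. Here is a minimal sketch in Python, assuming the TLP 2.0 label definitions published by FIRST; the dictionary and helper function are purely illustrative and not part of any official TLP tooling.

```python
# A minimal sketch mapping TLP 2.0 labels (as defined by FIRST) to their
# intended sharing scope. Illustrative only, not official tooling.

TLP_SCOPES = {
    "TLP:RED": "Individual recipients only; no further disclosure.",
    "TLP:AMBER+STRICT": "Recipient's organization only, on a need-to-know basis.",
    "TLP:AMBER": "Recipient's organization and its clients, on a need-to-know basis.",
    "TLP:GREEN": "The recipient's wider community, but not public channels.",
    "TLP:CLEAR": "No restrictions; may be shared publicly.",
}


def sharing_scope(label: str) -> str:
    """Return the sharing scope for a TLP label, defaulting to most restrictive."""
    return TLP_SCOPES.get(
        label.strip().upper(),
        "Unknown label; treat as TLP:RED until the sender clarifies.",
    )


if __name__ == "__main__":
    print(sharing_scope("tlp:amber"))  # organization and clients, need-to-know
```

The point is that the label travels with the information itself, so a recipient always knows how far it can be passed along.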

Wreaking Havoc on cybersecurity

A new malware framework has been identified by cybersecurity researchers. While it hasn’t yet been widely used, it is causing some concern. Fittingly, it has been named Havoc.

Why worry about it if it is a niche case? Because of the sophistication of its methods and the collection of tools and techniques it uses (shown in the diagram above, from Zscaler). It doesn’t bode well for the digital world. Right now it has been observed targeting government networks.

Havoc is a command-and-control (C2) framework, meaning it is used to direct the progress of an attack. There are several C2 frameworks used by bad actors, including Manjusaka, Covenant, Merlin, Empire and the commercial Cobalt Strike (this last one is used by both attackers and red-team researchers). Havoc is able to bypass Windows Defender on the most current version of Windows 11 (at least until Microsoft figures out the problem, then releases a patch, then gets us to install it). It can also employ various evasion and obfuscation techniques.

One reason for concern is how it works. Researchers at ReversingLabs “do not believe it poses any risk to development organizations at this point. However, its discovery underscores the growing risk of malicious packages lurking in open source repositories like npm, PyPi and GitHub.” Translated into English, this means that Havoc could become the basis of future software supply chain attacks.

In addition, the malware disables the Event Tracing for Windows (ETW) process. ETW is used to log various system events, so shutting it down is another way for the malware to hide its presence. Because ETW can legitimately be turned on or off as needed for debugging, this action by itself isn’t suspicious.

One of the common techniques is for the malware to go to sleep once it reaches a potential target PC. This makes it harder to detect, because defender teams may be able to track when a piece of malware entered their systems but won’t necessarily notice when it wakes up and resumes its work. Another obfuscation technique is to hide or otherwise encrypt its source code. For proprietary applications this is to be expected, but for open-source apps the underlying code should be easily viewable. However, this last technique is bare bones, according to the researchers, and easily spotted. The open-source packages that were initially infected with Havoc have since been cleansed (at least for now). Still, it is an appropriate warning for software DevOps groups to remain vigilant and to be on the lookout for supply chain irregularities.

One way this is being done is called static code analysis, where the code in question is run through various parsing algorithms to check for errors. What is new is using ChatGPT-like products to do the analysis for you, and here is one paper that shows how one was used to find code defects. While the AI caught 85 vulnerabilities in 129 sample files (which the author called “shockingly good”), it isn’t perfect and is more a complement to human code review and traditional code analysis tools.
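To make the idea concrete, here is a minimal sketch of that kind of LLM-assisted review, assuming the OpenAI Python client (openai>=1.0), a gpt-4-class model and a hypothetical file name; it illustrates the general approach, not the specific setup used in the paper.

```python
# A minimal sketch of LLM-assisted code review, assuming the OpenAI Python
# client (openai>=1.0) and an API key in the OPENAI_API_KEY environment
# variable. The model name, prompt wording and file name are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def review_source(path: str) -> str:
    """Ask a ChatGPT-class model to flag likely defects in one source file."""
    with open(path, "r", encoding="utf-8") as f:
        code = f.read()

    response = client.chat.completions.create(
        model="gpt-4",  # any capable chat model; this choice is an assumption
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a code reviewer. List likely bugs and security "
                    "vulnerabilities, each with a line reference and a "
                    "one-sentence explanation."
                ),
            },
            {"role": "user", "content": code},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical file to review; substitute any source file you have handy.
    print(review_source("sample_app.py"))
```

As the paper’s results suggest, output like this is a starting point for a human reviewer rather than a verdict, and it works best alongside traditional static analysis tools.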