SiliconANGLE: Security threats of AI large language models are mounting, spurring efforts to fix them

A new report on the security of artificial intelligence large language models, including OpenAI LP’s ChatGPT, documents a series of poor application development decisions that create weaknesses in protecting enterprise data privacy and security. The report is just one of many recent examples of mounting evidence of security problems with LLMs, demonstrating the difficulty of mitigating these threats. I take a deeper dive into a few different sources and suggest ways to mitigate the threats of these tools in my post for SiliconANGLE here.

SiliconANGLE: Google’s Web Environment Integrity project raises a lot of concerns

Early last month, four engineers from Google LLC posted a new open-source project on GitHub called “Web Environment Integrity.” The WEI project ignited all sorts of criticism over its privacy implications and concerns that Google wasn’t being upfront about its real purpose.

Remember the problems with web cookies? WEI takes this to a new level. I tell you why in my latest piece here.

SiliconANGLE: Apps under attack: New federal report suggests ways to improve software code pipeline security

The National Security Agency and the Cybersecurity and Infrastructure Security Agency late last month issued an advisory memo to help improve defenses in application development software supply chains — and there’s a lot of room for improvement.

Called “Defending Continuous Integration/Continuous Delivery (CI/CD) Pipelines,” the joint memo describes the various deployment risks and the ways attackers can leverage these pipelines. I describe their recommendations and the issues with defending these pipelines in my latest blog for SiliconANGLE.

SiliconANGLE: Databases then and now: the rise of the digital twin

When I first started in IT, back in the Mainframe Dark Ages, we had hulking big databases that ran on IBM’s Customer Information Control System, written in COBOL. These mainframes ran on a complex collection of hardware and operating systems that was owned lock, stock and bus-and-tag barrel by IBM. The average age of the code was measured in decades, and code changes were measured in months. The databases contained millions of transactions, and the data was always out of date because these were batch systems: new data was uploaded every night.

Contrast that to today’s typical database setup. Data is current to the second, code is changed hourly, and the nature of what constitutes a transaction has changed significantly to something that is now called a “digital twin,” which I explain in my latest post for SiliconANGLE here.

Code is now written in dozens of higher-level languages with odd names you may never have heard of, and it runs on a combination of cloud and on-premises equipment built from loads of microprocessors and open-source products that can be purchased from hundreds of suppliers.

It really is remarkable that all these changes have happened within the span of a little more than 35 years. You can read more in my post.

SiliconANGLE: Cloud conundrum: The changing balance of microservices and monolithic applications

The cloud computing debate isn’t just about migrating to the cloud but about how the cloud app is constructed. Today’s landscape has gotten a lot more complicated, with virtual machines, cloud computing, microservices and containers. The modern developer has almost too many choices and has to balance the various tradeoffs among those architectures. I examine how to pick the right mix of these technologies, what I call the cloud conundrum, in my latest analysis for SiliconANGLE.

Invicti blog: Ask an MSSP about DAST for your web application security

When evaluating managed security service providers (MSSPs), companies should make sure that web application security is part of the offering – and that a quality dynamic application security testing (DAST) solution is on hand to provide regular and scalable security testing. SMBs should evaluate potential providers based on whether they offer modern DAST solutions and services, as I wrote for the Invicti blog this month.

CSOonline: What is the Traffic Light Protocol and how it works to share threat data

Traffic Light Protocol (TLP) was created to facilitate greater sharing of potentially sensitive threat information within an organization or business and to enable more effective collaboration among security defenders, system administrators, security managers and researchers. In this piece for CSOonline, I explain the origins of the protocol, how it is used by defenders, and what IT and security managers should do to make use of it in their daily operations.
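
To make the labels concrete, here is a minimal sketch of the TLP 2.0 markings and the sharing boundary each one implies, as might be used to tag reports in internal tooling. The helper function and audience names are my own illustration, not part of the protocol or any standard library.

```python
from enum import Enum

class TLP(Enum):
    """TLP 2.0 labels and the sharing boundary each implies."""
    RED = "Named recipients only; no further disclosure."
    AMBER_STRICT = "Your own organization only."
    AMBER = "Your organization and its clients, on a need-to-know basis."
    GREEN = "Your wider community, but not public channels."
    CLEAR = "No restriction; may be shared publicly."

# Audiences, from narrowest to broadest, that each label permits.
ALLOWED = {
    TLP.RED:          {"named-recipients"},
    TLP.AMBER_STRICT: {"named-recipients", "organization"},
    TLP.AMBER:        {"named-recipients", "organization", "clients"},
    TLP.GREEN:        {"named-recipients", "organization", "clients", "community"},
    TLP.CLEAR:        {"named-recipients", "organization", "clients", "community", "public"},
}

def may_share(label: TLP, audience: str) -> bool:
    """Rough gate: is this audience inside the label's sharing boundary?"""
    return audience in ALLOWED[label]

# Example: an AMBER-labeled report stops at clients.
print(may_share(TLP.AMBER, "clients"))  # True
print(may_share(TLP.AMBER, "public"))   # False
```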

Wreaking Havoc on cybersecurity

A new malware method has been identified by cybersecurity researchers. While it hasn’t yet been widely used, it is causing some concern. Fittingly, it has been named Havoc.

Why worry about it if it is a niche case? Because of the sophistication of its methods and the collection of tools and techniques it uses, which Zscaler researchers have documented. That doesn’t bode well for the digital world. So far it has been observed targeting government networks.

Havoc is a command and control (C2) framework, meaning it is used to control the progress of an attack. There are several C2 frameworks used by bad actors, including Manjusaka, Covenant, Merlin, Empire and the commercial Cobalt Strike (this last one is used by both attackers and red-team researchers). Havoc is able to bypass Microsoft Defender on the most current version of Windows 11 (at least until Microsoft figures out the problem, releases a patch, then gets us to install it), and it employs various evasion and obfuscation techniques.

One reason for concern is how it works. Researchers at ReversingLabs “do not believe it poses any risk to development organizations at this point. However, its discovery underscores the growing risk of malicious packages lurking in open source repositories like npm, PyPi and GitHub.” In plain English, this means that Havoc could become the basis of future software supply chain attacks.

In addition, the malware disables the Event Tracing for Windows (ETW) process. ETW is used to log various system events, so shutting it down is another way for the malware to hide its presence. Because ETW can legitimately be turned on and off for debugging, this action by itself isn’t suspicious.
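
For readers who want to see what checking for this looks like, here is a minimal sketch of one way a defender might test whether ETW logging has been patched out of the current process, a common way malware suppresses ETW. It assumes a 64-bit Python on Windows, and the byte signatures are illustrative; real endpoint tools do far more than this.

```python
# Windows-only sketch: read the first bytes of ntdll!EtwEventWrite in this
# process and flag the classic "patched to return immediately" signatures.
import ctypes

ntdll = ctypes.WinDLL("ntdll")

# Resolve the in-memory address of the live EtwEventWrite function.
addr = ctypes.cast(ntdll.EtwEventWrite, ctypes.c_void_p).value

# Read the first eight bytes of the function's entry point.
prologue = bytes((ctypes.c_ubyte * 8).from_address(addr))

# A lone RET (0xC3) or "xor eax, eax; ret" (0x33 0xC0 0xC3) at the entry
# point is a common sign the function was patched to log nothing.
if prologue[0] == 0xC3 or prologue[:3] == b"\x33\xc0\xc3":
    print("EtwEventWrite looks patched out:", prologue.hex())
else:
    print("EtwEventWrite prologue looks intact:", prologue.hex())
```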

One of the common techniques is for the malware to go to sleep once it reaches a potential target PC. This makes it harder to detect, because defense teams may be able to track when malware entered their systems but won’t necessarily notice when it wakes up to do further work. Another obfuscation technique is to hide or otherwise encrypt its source code. For proprietary applications this is to be expected, but for open-source apps the underlying code should be easily viewable. However, this last technique is bare bones, according to the researchers, and easily spotted. The open-source packages that were initially infected with Havoc have since been cleansed (at least for now). Still, it is an appropriate warning for software DevOps groups to remain vigilant and to be on the lookout for supply chain irregularities.

One way this is being done is with static code analysis, where the code in question is run through various parsing algorithms to check for errors. What is new is using ChatGPT-like products to do the analysis for you, and here is one paper that shows how it was used to find code defects. While the AI caught 85 vulnerabilities in 129 sample files (results the author called “shockingly good”), it isn’t perfect and is more a complement to human code review and traditional code analysis tools.
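
For the curious, here is a minimal sketch of what that kind of LLM-assisted review can look like in practice. It is not the paper’s method: the model name, prompt and file name below are my own illustrative assumptions, and the output should be treated as hints for a human reviewer to verify, not as a verdict.

```python
# Illustrative sketch: ask an LLM-backed API to act as a security reviewer
# for one source file. The model name and prompt are assumptions, not the
# paper's setup; expects OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def review_source(path: str) -> str:
    with open(path, "r", encoding="utf-8") as f:
        code = f.read()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model will do
        messages=[
            {"role": "system",
             "content": "You are a security code reviewer. List likely "
                        "vulnerabilities with line numbers and severity."},
            {"role": "user", "content": code},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical file name; point this at any source file to be reviewed.
    print(review_source("example.c"))
```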

Skynet as evil chatbot

When we first thought about the plausible future of a real Skynet, many of us assumed it would take the form of a mainframe or room-sized computer firing death rays to eliminate us puny humans. But now the concept has taken a much more insidious form: a chatbot? (See “Building the Real Skynet” over at The New Stack.)

Don’t laugh. It could happen. AI-based chatbots have gotten so good that they are being used in all sorts of clever ways: to write poems, songs and TV scripts, answer trivia questions and even write computer code. An earlier version was great at penning Twitter-ready misinformation.

The latest version is called ChatGPT, which was created by OpenAI and is based on its autocomplete text generator GPT-3.5. One author turned it loose on writing a story pitch. Yikes!

The first skirmish happened recently over at Stack Overflow, a website used by coders to find answers to common programming problems. Trouble is, ChatGPT’s answers are so good that they seem right at first blush, but on further analysis they are often wrong. Conspiracy theories abound. But for now, Stack Overflow has banned the bot from its forums. “ChatGPT makes it too easy for users to generate responses and flood the site with answers that seem correct at first glance but are often wrong on close examination,” according to this post over on The Verge. The site has been flooded with thousands of bot-generated answers, making it difficult for moderators to sift through them.

It may be time to welcome our new AI-based overlords.