SiliconANGLE: Databases then and now: the rise of the digital twin

When I first started in IT, back in the Mainframe Dark Ages, we had hulking big databases that ran on IBM’s Customer Information Control System (CICS), with applications written in COBOL. These mainframes ran on a complex collection of hardware and operating systems that was owned lock, stock, and bus-and-tag barrel by IBM. The average age of the code was measured in decades, and code changes were measured in months. The databases contained millions of transactions, and the data was always out of date because it was a batch system: new data was uploaded every night.

Contrast that to today’s typical database setup. Data is current to the second, code is changed hourly, and the nature of what constitutes a transaction has changed significantly to something that is now called a “digital twin,” which I explain in my latest post for SiliconANGLE here.

Code is written in dozens of higher-level languages that have odd names that you may never have heard of, and this code runs on a combination of cloud and on-premises equipment that uses loads of microprocessors and open source products that can be purchased from hundreds of suppliers.

It really is remarkable that all of these changes have happened within the span of a little more than 35 years. You can read more in my post.


SiliconANGLE: Cloud conundrum: The changing balance of microservices and monolithic applications

The cloud computing debate isn’t just about migrating to the cloud, but also about how cloud apps are constructed. Today’s landscape has gotten a lot more complicated, with virtual machines, cloud computing, microservices and containers. The modern developer has almost too many choices and has to balance the tradeoffs among these architectures. I examine how to pick the right mix of technologies for your cloud apps, what I call the cloud conundrum, in my latest analysis for SiliconANGLE.


Invicti blog: Ask an MSSP about DAST for your web application security

When evaluating managed security service providers (MSSPs), companies should make sure that web application security is part of the offering – and that a quality DAST solution is on hand to provide regular and scalable security testing. SMBs should evaluate potential providers based on whether they offer modern solutions and services for dynamic application security testing (DAST), as I wrote for the Invicti blog this month.

CSOonline: What is the Traffic Light Protocol and how it works to share threat data

Traffic Light Protocol (TLP) was created to facilitate greater sharing of potentially sensitive threat information within an organization or business and to enable more effective collaboration among security defenders, system administrators, security managers and researchers. In this piece for CSOonline, I explain the origins of the protocol, how it is used by defenders, and what IT and security managers should do to make use of it in their daily operations.
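The heart of TLP is a small set of labels that tell a recipient how widely a piece of threat data may be shared. As a rough illustration, the version 1.0 labels can be sketched as a lookup table; the `may_share` helper and the audience names here are my own illustrative shorthand, not part of any official TLP tooling (note also that TLP 2.0 renames WHITE to CLEAR):

```python
# Sketch of TLP 1.0 sharing scopes as a lookup table. Label meanings follow
# FIRST's published TLP definitions; the helper and audience names are
# illustrative only. (TLP 2.0 renames TLP:WHITE to TLP:CLEAR.)

TLP_SCOPES = {
    "TLP:RED":   {"named_recipients"},                      # eyes-only, no further sharing
    "TLP:AMBER": {"named_recipients", "own_organization"},  # need-to-know within the org
    "TLP:GREEN": {"named_recipients", "own_organization", "community"},  # wider community
    "TLP:WHITE": {"named_recipients", "own_organization", "community", "public"},  # unrestricted
}

def may_share(label: str, audience: str) -> bool:
    """Return True if data carrying this TLP label may be shared with the audience."""
    return audience in TLP_SCOPES.get(label.upper(), set())
```

The point of the protocol is exactly this simplicity: a sender tags the data once, and every recipient can mechanically decide whether a given redistribution is allowed.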

Wreaking Havoc on cybersecurity

A new malware method has been identified by cybersecurity researchers. While it hasn’t yet been widely used, it is causing some concern. Ironically, it has been named Havoc.

Why worry about it if it is a niche case? Because of the sophistication of its methods and the collection of tools and techniques it uses (shown in the diagram above, from Zscaler). That doesn’t bode well for the digital world. So far it has been observed targeting government networks.

Havoc is a command-and-control (C2) framework, meaning it is used to direct the progress of an attack. Several C2 frameworks are used by bad actors, including Manjusaka, Covenant, Merlin, Empire and the commercial Cobalt Strike (this last one is used by both attackers and red-team researchers). Havoc is able to bypass Microsoft Defender on the most current version of Windows 11 (at least until Microsoft figures out the problem, releases a patch, and gets us to install it), and it employs various evasion and obfuscation techniques.

One reason for concern is how it works. Researchers at ReversingLabs say they “do not believe it poses any risk to development organizations at this point. However, its discovery underscores the growing risk of malicious packages lurking in open source repositories like npm, PyPI and GitHub.” Translated into English, this means that Havoc could become the basis of future software supply chain attacks.

In addition, the malware disables the Event Tracing for Windows (ETW) process. ETW is used to log various system events, so disabling it is another way for the malware to hide its presence. Because ETW can legitimately be turned on or off as needed for debugging, this action by itself isn’t suspicious.

One common technique is for the malware to go to sleep once it reaches a potential target PC. This makes it harder to detect: defense teams may be able to track when a piece of malware entered their systems, but they won’t necessarily notice when it wakes up to do further work. Another obfuscation technique is to hide or otherwise encrypt its source code. For proprietary applications this is to be expected, but for open-source apps the underlying code should be easily viewable. However, this last technique is bare bones, according to the researchers, and easily spotted. The open source packages that were initially infected with Havoc have since been cleansed (at least for now). Still, it is an appropriate warning for software devops groups to remain vigilant and to be on the lookout for supply chain irregularities.

One way this is being done is called static code analysis, where the code in question is run through various parsing algorithms to check for errors. What is new is using ChatGPT-like products to do the analysis for you; one recent paper shows how this was used to find code defects. While the AI caught 85 vulnerabilities in 129 sample files (which the author called “shockingly good”), it isn’t perfect and is better seen as a complement to human code review and traditional code analysis tools.
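To make the "parsing algorithms" idea concrete, here is a toy static analyzer built on Python’s standard-library `ast` module. It walks a program’s syntax tree and flags two well-known defect patterns; the defect choices and the `find_defects` function are my own illustration, far simpler than what commercial tools or the AI-assisted review described above can do, but the parse-then-inspect approach is the same:

```python
import ast

def find_defects(source: str) -> list:
    """Toy static analyzer: flag bare 'except:' clauses and calls to eval()."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # A bare "except:" swallows every exception, hiding real errors.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append("line %d: bare except hides errors" % node.lineno)
        # eval() on untrusted input is a classic code-injection defect.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append("line %d: eval() on untrusted input" % node.lineno)
    return findings

sample = """
try:
    result = eval(user_input)
except:
    pass
"""
print(find_defects(sample))
```

Running this on the sample snippet reports both defects with their line numbers, which is essentially what a linter like pylint or a SAST product does at much greater depth.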

Skynet as evil chatbot

Building the Real Skynet - The New Stack

When we first thought about the plausible future of a real Skynet, many of us assumed it would take the form of a mainframe or room-sized computer that would be firing death rays to eliminate us puny humans. But now the concept has taken a much more insidious form as — a chatbot?

Don’t laugh. It could happen. AI-based chatbots have gotten so good that they are being used in clever ways: to write poems, songs, and TV scripts, to answer trivia questions, and even to write computer code. An earlier version was great at penning Twitter-ready misinformation.

The latest version is called ChatGPT, which was created by OpenAI and is based on its autocomplete text generator GPT-3.5. One author turned it loose on writing a story pitch. Yikes!

The first skirmish happened recently over at Stack Overflow, a website coders use to find answers to common programming problems. Trouble is, ChatGPT’s answers are so good that at first blush they seem right, but upon further analysis they are often wrong. Conspiracy theories abound. For now, Stack Overflow has banned the bot from its forums. “ChatGPT makes it too easy for users to generate responses and flood the site with answers that seem correct at first glance but are often wrong on close examination,” according to this post on The Verge. The site has been flooded with thousands of bot-generated answers, making it difficult for moderators to sift through them.

It may be time to welcome our new AI-based overlords.

Network World: Lessons learned from the Atlassian network outage

Last month, software tools vendor Atlassian suffered a major network outage that lasted two weeks and affected more than 400 of its over 200,000 customers. It is rare for a vendor hit with such a massive and public outage to make the effort to thoughtfully piece together what happened and why, and also to provide a roadmap that others can learn from.

In a post on their blog last week, they describe their existing IT infrastructure in careful detail, point out the deficiencies in their disaster recovery program and how they intend to fix its shortcomings to prevent future outages, and lay out the timelines, workflows and process improvements they plan to make. I wrote an op-ed for Network World that gleans four takeaways for network and IT managers.

Avast blog: Yandex is causing serious data privacy concerns for mobile users

Private data could be collected from thousands of Android and iOS apps, according to security researchers. The issue revolves around Yandex, the leading search engine in Russia, and how this data might be available to Russian state agencies. In addition to being a search portal, Yandex makes an SDK called AppMetrica, which does app usage analytics and marketing and is similar to Google’s Firebase. The SDK has been incorporated into more than 52,000 different apps, including games and messaging apps.

In this post for Avast’s blog, I provide details about the problems with this SDK and things to watch out for when you download your next app.

Infoworld: How to evaluate software asset management tools

The vulnerabilities of the Apache Log4j logging package—and the attacks they’ve drawn—have made one thing very clear: if you haven’t yet implemented a software inventory across your enterprise, now is the time to start evaluating and implementing such tools. These aren’t new — I recall testing one of the earlier products, LANDesk (now part of Ivanti), back in the early 1990s. In this post for Infoworld, I go into detail about Ivanti and four other leading tools from Atlassian, ServiceNow (shown above), ManageEngine and Spiceworks: why these tools are needed in modern software development organizations, how you should go about evaluating them, what their notable features are, and what they will cost.