When it comes to the future relationship between artificial intelligence (AI) and cybersecurity, the first thing most of us think of is the Terminator “Skynet” scenario, where machines take over the world and inflict harm on the human race. But one security professional has a rosier view, and suggests that AI needs to be viewed across a wider landscape, both in terms of understanding how it will influence cybersecurity and how IT can use AI to plan its future security technology purchases.
Earlier this year, Dudu Mimran gave a speech and wrote a subsequent blog post for the OECD Forum. Mimran is the CTO at Deutsche Telekom Research Labs. I interviewed him at his offices in Beersheva, Israel, during a recent visit, and continued the conversation through a series of email questions.
“While the threat of cyberattacks powered by AI is increasingly likely, I am less concerned in the short and mid-term about machines making up their minds and being able to harm people,” he says. Instead, “our lives are becoming more and more dependent on technology, and this will be exploited by adversaries long before we have conscious machines. Nevertheless, today most of the attackers’ goals can be attained without the sophistication of AI, and that is why we don’t see a big new wave of AI-based attacks.”
In his speech, he outlines four time horizons for AI and security:
- Short-term hyper-personalization, where algorithms are getting to know us better than we know ourselves,
- Medium-term disruptions that are based on various focused automation efforts,
- Long-term pervasive autonomous machines, such as driverless cars, and
- Very long-term situations such as malicious Skynet-type situations.
One of AI's biggest potential benefits is its impact on malware attribution. If you know your attacker and can respond quickly, “the chances you will be hitting back your true adversary are higher if you can react in real-time,” he says.
However, “attribution is a field of cybersecurity which suffers from under-investment because it lacks commercial viability,” he says in his OECD speech. Certainly this is a well-known problem, because researchers have to check so many variables, such as the natural (non-code) language the malware is written in, the cultural or political references it contains, which code fragments mimic existing malware structures, and other factors. A recent post on Cisco's Talos blog about trying to figure out who was behind the Olympic Destroyer malware, which hit the 2018 Winter Olympics, is a case in point about the difficulty of attribution.
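To see why attribution is so hard, consider a toy version of just one of the weak signals researchers weigh: string artifacts shared with previously catalogued malware families. The sketch below (all family names and strings are invented for illustration) ranks candidate families by Jaccard similarity of their string sets; note that every one of these signals can be deliberately forged by an attacker, which is exactly what made Olympic Destroyer so confounding.

```python
# Illustrative only: a toy attribution heuristic. Real attribution weighs
# many weak signals (natural language in strings, cultural references,
# reused code fragments); this sketch approximates just one of them --
# shared string artifacts -- using Jaccard set similarity.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union|, or 0.0 for two empty sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented "families" with invented string artifacts, purely for illustration.
KNOWN_FAMILIES = {
    "FamilyA": {"wiper.dll", "ru-RU", "psexec", "$admin$"},
    "FamilyB": {"miner.bin", "zh-CN", "xmrig", "pool.example"},
}

def rank_families(sample_strings: set) -> list:
    """Rank known families by string overlap -- a weak signal, easily forged."""
    scores = {fam: jaccard(sample_strings, strings)
              for fam, strings in KNOWN_FAMILIES.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

sample = {"wiper.dll", "psexec", "en-US"}
print(rank_families(sample))  # FamilyA ranks first on this toy data (score 0.4)
```

A false-flag operation only needs to plant the right strings to push the wrong family to the top of this ranking, which is why no single signal, however sophisticated the model consuming it, settles attribution on its own.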
Mimran suggests two ways policymakers can improve attribution. The first is to support and build a joint global threat intelligence network that can track threats across different geographies and include participation by both business and government researchers.
The second is to fund ongoing research that would improve attribution while preserving data privacy. “Attribution is a distributed problem, spanning across different technology stacks, systems and organizations, and these central entities can help weave such a thread,” he said. He is hopeful, pointing in particular to new security startups focused on these collaboration ideas and to an initiative among the largest European banks to collaborate on shared threat intelligence.
The data privacy element is an important consideration. Mimran wrote in his own blog last year: “High amounts of personal data distributed across different vendors residing on their central systems can increase our exposure and create green field opportunities for attackers to abuse and exploit us in unimaginable ways.”
One solution to the privacy issue is some form of blockchain-based innovation. Mimran mentioned ForgeRock and other identity startups that have recently received funding. “The challenge for these companies is integration with the rest of the world. Identity is mostly embedded deep into online services and products, and creating an external neutral entity that will enable the same smooth experience with all the services out there is a significant challenge.”
AI also has applications for other cyber defensive tactics. “We do see an initial effort of AI used as an automation tool in the SOC, but these are just preliminary and somewhat premature,” he says. However, caution is advised, particularly when vendors oversell their tools and claim they are AI-based. One product reviewer for CSO Online makes a point of distinguishing between products that have rules-based detection engines and genuine AI, “because many vendors with hundreds of rules feel they have accomplished some sort of near version of AI.” That isn’t true: as the reviewer points out, merely verifying an existing malware signature isn’t AI but pattern matching.
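The reviewer's distinction is easy to demonstrate. The sketch below (a deliberately minimal, hypothetical "signature engine" with made-up sample bytes) shows why hash-based signature checking is pure pattern matching: it flags only byte sequences it has already catalogued, and a one-byte variant sails through, with no learning or generalization anywhere in the loop.

```python
import hashlib

# Hypothetical minimal "signature engine" -- pure pattern matching, not AI.
# It can only flag payloads whose exact hashes it has already catalogued.

def sha256_hex(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

class SignatureScanner:
    def __init__(self, known_samples):
        # Store exact-match signatures of previously seen malware samples.
        self.known_hashes = {sha256_hex(s) for s in known_samples}

    def is_known_malware(self, payload: bytes) -> bool:
        # A lookup table of hashes: no model, no features, no generalization.
        return sha256_hex(payload) in self.known_hashes

# An invented "catalogued" sample and a trivially mutated variant:
sample = b"MALICIOUS-PAYLOAD-v1"
variant = b"MALICIOUS-PAYLOAD-v2"  # one byte changed

scanner = SignatureScanner([sample])
print(scanner.is_known_malware(sample))   # True  -- exact signature match
print(scanner.is_known_malware(variant))  # False -- the engine learned nothing
```

A thousand such signatures, or a thousand hand-written rules, amount to the same thing at scale: the engine never infers anything it wasn't explicitly told, which is precisely the gap between rules-based detection and what vendors imply when they label such products "AI."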
Mimran also mentioned the threat that IoT botnets have become. “The problem of IoT botnets touches on many loose ends in the way technology is built today, and there is no silver bullet for that. The best way to tackle botnets is when cooperation emerges between the hosts of the bots, the communication or services providers which tunnel the bots’ traffic, and law enforcement.”