Avast blog: Using AI as an offensive cyber weapon

The rise of offensive AI

AI is a double-edged sword. It has enabled software tools that automate tasks such as prediction, information retrieval, and media synthesis, and these have strengthened a range of cyber defenses. However, attackers have also used AI to improve their malicious campaigns: for example, to poison the datasets that ML models are trained on, or to steal login credentials (think AI-assisted keylogging). I recently spent some time at the newly created Offensive AI Research Lab run by Dr. Yisroel Mirsky. The lab is part of the research effort at Ben Gurion University in Beersheva, Israel. Mirsky is part of a team that published a report entitled “The Threat of Offensive AI to Organizations”. The lab’s report and survey show the broad range of activities, both harmful and beneficial, that offensive AI makes possible.
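To make the data-poisoning idea concrete, here is a minimal sketch of a label-flipping attack on a toy classifier. Everything here (the nearest-centroid model, the cluster parameters, the flip rate) is illustrative and not taken from the report: an attacker who can tamper with training labels drags a class centroid toward the wrong cluster, degrading the trained model.

```python
# Illustrative label-flipping data-poisoning sketch (hypothetical toy model,
# not from the Offensive AI Lab report). The "model" is a 1-D nearest-centroid
# classifier; the attacker corrupts training labels before training.
import random

random.seed(0)  # deterministic for the demo

def make_data(n=400):
    # Two 1-D Gaussian clusters: class 0 around -2, class 1 around +2.
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = random.gauss(-2.0 if label == 0 else 2.0, 1.0)
        data.append((x, label))
    return data

def train(data):
    # "Training" = compute the mean (centroid) of each labeled class.
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / counts[y] for y in (0, 1)}

def accuracy(centroids, data):
    # Predict the class whose centroid is nearest to x.
    correct = sum(
        1 for x, y in data
        if min(centroids, key=lambda c: abs(x - centroids[c])) == y
    )
    return correct / len(data)

train_set, test_set = make_data(400), make_data(200)
clean_model = train(train_set)

# Poisoning step: the attacker relabels half of the class-0 training
# points as class 1, dragging the class-1 centroid toward the wrong cluster.
poisoned = [
    (x, 1) if y == 0 and random.random() < 0.5 else (x, y)
    for x, y in train_set
]
poisoned_model = train(poisoned)

clean_acc = accuracy(clean_model, test_set)
poisoned_acc = accuracy(poisoned_model, test_set)
print("clean class-1 centroid:   ", round(clean_model[1], 2))
print("poisoned class-1 centroid:", round(poisoned_model[1], 2))
print("clean accuracy:   ", round(clean_acc, 2))
print("poisoned accuracy:", round(poisoned_acc, 2))
```

Real attacks target far larger models and subtler corruptions, but the mechanism is the same: control over training data becomes control over the model's behavior.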

You can read my latest post for Avast’s blog here.
