AI is a double-edged sword. It has enabled software tools that automate tasks such as prediction, information retrieval, and media synthesis, and these have been used to improve various cyber defensive measures. However, attackers have also used AI to improve their malicious campaigns. For example, attackers can poison ML models by tampering with their training datasets, or use AI to steal login credentials (through automated keylogging, for example). I recently spent some time at the newly created Offensive AI Research Lab run by Dr. Yisroel Mirsky. The lab is part of Ben-Gurion University of the Negev in Beersheba, Israel. Mirsky is part of a team that published a report entitled “The Threat of Offensive AI to Organizations”. The lab’s report and survey show the broad range of activities (both negative and positive) made possible through offensive AI.
This article shows how dangerous deepfake audio can be: with as little as a three-second recording of someone’s voice, a bad actor can produce a very realistic cloned call.