Last year I was fortunate enough to attend the CyberSecAI conference in Prague in person, a unique blend of academic and business researchers and practitioners working in both the cybersecurity and AI fields. This year the conference went completely virtual. I covered most of the sessions through live tweets and wrote two blog posts that are now up on Avast’s website:
- Creating and weaponizing deep fakes. Dr. Hany Farid of UC Berkeley spoke about their evolution, the four different types of fakes, and ways that we can try to solve the challenges they pose. I found his analysis intriguing, and his use of deliberately faked images of popular figures brought home how sophisticated an AI algorithm is needed to flag them definitively.
- Understanding bias in AI algorithms. A blue-ribbon panel of experts discussed how to reduce AI algorithmic bias. Should we hold machines to higher standards than we hold ourselves? The panel was moderated by venture capitalist Samir Kumar, the managing director of Microsoft’s internal venture fund M12, and included:
- Noel Sharkey, a retired professor at the University of Sheffield (UK) who is actively involved in various AI ventures,
- Celeste Fralick, the Chief Data Scientist at McAfee and an AI researcher,
- Sandra Wachter, an associate professor at the University of Oxford (UK) and a legal scholar, and
- Rajarshi Gupta, a VP at Avast and head of their AI and Network Security practice areas.
Part of the problem with defining bias lies in separating correlation from causation, a point that came up several times during the discussion.