Abuses and Misuses of AI: Prevention vs. Reaction with Cristian Canton
As AI becomes more ubiquitous and part of almost every aspect of our lives, professional and personal, it is necessary to consider its potentially harmful aspects: from the exploitation of AI weaknesses for nefarious purposes (e.g. adversarial attacks against classifiers) to abuses of otherwise benign technologies (e.g. deepfakes used to spread misinformation). Reactively addressing these misuses and abuses of AI after they have already occurred has proven costly in many dimensions (human, economic, etc.); hence a more preventive approach emerges as an alternative. In this talk, we will walk through some of these adversarial AI scenarios and show, with examples, how a red-team mentality may be a viable strategy.
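To make the first scenario mentioned above concrete, here is a minimal sketch of an adversarial attack against a classifier, using the Fast Gradient Sign Method (FGSM). The model choice, epsilon value, and helper name are illustrative assumptions, not material from the talk itself.

```python
# Sketch of an FGSM adversarial attack: nudge an input image in the direction
# that increases the classifier's loss, producing a perturbation that is nearly
# invisible to humans but can flip the predicted label.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, true_label, epsilon=0.03):
    """Perturb `image` (1x3xHxW, pixel values in [0, 1]) to induce a misclassification."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the sign of the gradient (maximizing loss), then clamp to valid pixels.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage (hypothetical tensors): x is a preprocessed image, y its correct class index.
# x_adv = fgsm_attack(x, torch.tensor([y]))
# model(x_adv).argmax(1) will often differ from y, even though x_adv looks unchanged.
```

For clarity the sketch omits the usual ImageNet mean/std normalization; a red team exercise would probe such attacks systematically before deployment rather than after an incident.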
Cristian Canton is a Research Manager on the AI Integrity Team at Facebook AI. He currently supports the AI Red Team, which focuses on understanding weaknesses and vulnerabilities arising from the use (or misuse) of AI in areas such as misinformation and election interference. From 2012 to 2016, he was at Microsoft Research in Redmond (USA) and Cambridge (UK); from 2009 to 2012, he was at Vicon (Oxford), applying computer vision to visual effects production for the film industry. He received his PhD and MS from the Technical University of Catalonia (Barcelona) and completed his MS thesis at EPFL (Switzerland), working on computer vision topics.