Adversarial Machine Learning with Ian Goodfellow
Most machine learning algorithms involve optimizing a single set of parameters to decrease a single cost function. In adversarial machine learning, two or more "players" each adapt their own parameters to decrease their own cost, in competition with the other players. In some adversarial machine learning algorithms, the algorithm designer contrives this competition between two machine learning models in order to produce a beneficial side effect. For example, the generative adversarial networks framework involves a contrived conflict between a generator network and a discriminator network that results in the generator learning to produce realistic data samples. In other contexts, adversarial machine learning models a real conflict, for example, between spam detectors and spammers. In general, moving machine learning from optimization and a single cost to game theory and multiple costs has led to new insights in many application areas.
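The contrived competition described above can be sketched in a few lines. The following toy example (an assumption for illustration, not code from the talk) pits a linear generator g(z) = w·z + b against a logistic discriminator d(x) = sigmoid(a·x + c) on one-dimensional data drawn from N(4, 1); all parameter names, learning rates, and the non-saturating generator loss are choices made for this sketch.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative assumptions throughout):
# - real data comes from a Gaussian with mean 4,
# - the generator g(z) = w*z + b maps standard-normal noise to samples,
# - the discriminator d(x) = sigmoid(a*x + c) scores "probability real".
# Each player takes a gradient step on its own cost, in competition.

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

w, b = 1.0, 0.0          # generator parameters (starts far from the data)
a, c = 0.0, 0.0          # discriminator parameters
lr_d, lr_g, batch = 0.1, 0.05, 128

for step in range(3000):
    real = rng.normal(4.0, 1.0, batch)   # samples from the real distribution
    z = rng.normal(0.0, 1.0, batch)      # generator noise
    fake = w * z + b                     # generated samples

    # Discriminator step: descend -[log d(real) + log(1 - d(fake))].
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    grad_a = np.mean(-(1.0 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1.0 - d_real) + d_fake)
    a -= lr_d * grad_a
    c -= lr_d * grad_c

    # Generator step: descend -log d(fake), the non-saturating loss.
    d_fake = sigmoid(a * fake + c)
    dloss_dfake = -(1.0 - d_fake) * a    # chain rule through the discriminator
    w -= lr_g * np.mean(dloss_dfake * z)
    b -= lr_g * np.mean(dloss_dfake)

# After training, the generator's output mean (b, since E[z] = 0)
# should have drifted toward the real mean of 4.
print(round(b, 2))
```

Neither player minimizes a shared objective: the discriminator's cost rewards separating real from fake, and the generator's cost rewards fooling the discriminator, which is what pulls the generated distribution toward the data. This is the shift from single-cost optimization to game theory that the abstract describes.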
Ian Goodfellow
Ian Goodfellow is a staff research scientist at Google Brain, where he leads a group of researchers studying adversarial techniques in AI. He developed the first defenses against adversarial examples, was among the first to study the security and privacy of neural networks, and helped to popularize the field of machine learning security and privacy. He is the lead author of the MIT Press textbook Deep Learning (www.deeplearningbook.org). Previously, Ian worked at OpenAI and Willow Garage, studied with Andrew Ng and Gary Bradski at Stanford University, and studied with Yoshua Bengio and Aaron Courville at Université de Montréal. In 2017, Ian was listed among MIT Technology Review’s “35 Innovators Under 35,” recognizing his invention of generative adversarial networks.