Current deep-learning benchmarks focus on generalization within the same distribution as the training data. However, real-world applications require generalization to new, unseen scenarios, domains, and tasks. I'll present key ingredients that I believe are critical to achieving this, including (1) compositional systems with modular and interpretable components; (2) unsupervised learning to discover new concepts; (3) feedback mechanisms for robust inference; and (4) causal discovery and inference that capture underlying relationships and invariances. Domain knowledge and structure can help enable learning in these challenging settings. This talk is beginner-friendly and will give a high-level overview of these challenges.
Anima Anandkumar is a Bren Professor at Caltech and Director of ML Research at NVIDIA. She was previously a Principal Scientist at Amazon Web Services. She has received several honors, including the Alfred P. Sloan Research Fellowship, the NSF CAREER Award, Young Investigator Awards from the DoD, and faculty fellowships from Microsoft, Google, Facebook, and Adobe. She is part of the World Economic Forum's Expert Network. She is passionate about designing principled AI algorithms and applying them in interdisciplinary applications. Her research focuses on unsupervised AI, optimization, and tensor methods.