View Our Recent TechTalk
"Reproducing 150 Research Papers and Testing Them in the Real World: Challenges and Solutions" with Grigori Fursin
After completing the MILEPOST project in 2009, I opened the cTuning.org portal and released into the public domain all my research code, data sets, experimental results, and machine learning (ML) models for our self-optimizing compiler. My goal was to continue this research and development as a community effort while crowdsourcing ML training across diverse programs, data sets, compilers, and platforms provided by volunteers. Unfortunately, this project quickly stalled after we struggled to run experiments and reproduce results across rapidly evolving systems in the real world.
This experience motivated me to introduce artifact evaluation at several ACM conferences, including CGO, PPoPP, and ASPLOS, and to learn how to reproduce 150+ research papers. In this talk, I will present the numerous challenges we faced during artifact evaluation and possible solutions. I will also describe the Collective Knowledge framework (CK), developed to automate this tedious process and bring DevOps and FAIR principles to research.
The CK concept is to decompose research projects into reusable micro-services that expose characteristics, optimizations, and SW/HW dependencies of all sub-components in a unified way via a common API and extensible meta descriptions. Portable workflows assembled from such plug & play components allow researchers and practitioners to automatically build, test, benchmark, optimize, and co-design novel algorithms across continuously changing software and hardware. Furthermore, the best results can be continuously collected in public or private repositories together with negative results, unexpected behavior, and mispredictions for collaborative analysis and improvement.
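As a rough illustration of this decomposition idea, the sketch below shows plug-and-play components that expose their characteristics and dependencies through extensible meta descriptions and are driven via a single common API. All names, fields, and numbers here are hypothetical and purely illustrative; they are not CK's actual interface.

```python
# Hypothetical sketch: research components as plug-and-play micro-services
# with a shared entry point and machine-readable meta descriptions.
# Names and fields are illustrative, not the real CK API.

def access(components, request):
    """Common entry point: dispatch an action to a named component."""
    component = components[request["component"]]
    action = component["actions"][request["action"]]
    return action(request.get("params", {}))

# Each component bundles its actions with a meta description
# (characteristics, SW/HW dependencies) so tools can discover it
# and assemble portable workflows automatically.
benchmark = {
    "meta": {
        "characteristics": ["execution_time"],
        "dependencies": {"compiler": "any C compiler"},
    },
    "actions": {
        # Dummy measurement standing in for a real benchmark run.
        "run": lambda params: {
            "return": 0,
            "execution_time": 0.042 * params.get("repeat", 1),
        },
    },
}

components = {"benchmark": benchmark}
result = access(components, {"component": "benchmark",
                             "action": "run",
                             "params": {"repeat": 2}})
print(result["execution_time"])
```

Because every component answers the same `access` call and describes itself in its meta, a workflow engine can chain such components (build, run, profile, optimize) without hard-coding knowledge of each one.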
I will also present the cKnowledge.io platform for sharing portable, customizable, and reusable CK workflows from reproduced papers that can be quickly validated by the community and deployed in production. I will conclude with several practical use cases of the CK technology to improve reproducibility in ML and systems research and to accelerate real-world deployment of efficient deep learning systems from the cloud to the edge, in collaboration with General Motors, Arm, IBM, Intel, Amazon, TomTom, the Raspberry Pi Foundation, ACM, MLCommons, and MLPerf.
ACM award winners, leading researchers, industry veterans, thought leaders, and innovators address today's and tomorrow's hottest topics and issues in computing for busy practitioners, as well as educators, students, and researchers. Check out our archive of these ACM TechTalks, free for members and non-members alike.
Talks from some of the leading visionaries and bleeding-edge researchers in AI/ML: Fei-Fei Li on visual intelligence in computers and ImageNet; Eric Horvitz on AI solutions in the open world; and Tom Mitchell on using ML to study how the brain creates and represents language.
View the recent ACM TechTalk, "Democratizing AI: Creating Cognitive AI Assistants with No Coding," presented on Tuesday, April 13, at 1:00 PM ET/10:00 AM PT by Michelle Zhou, CEO of Juji, Inc., ACM Distinguished Member, and Editor-in-Chief of ACM Transactions on Interactive Intelligent Systems (TiiS). Wenxi Chen, AI Software Engineer at Juji, Inc., moderated the question-and-answer session. Continue the discussion on ACM's Discourse Page.
An Industry Perspective on What We Should Be Teaching Our Next Generation of Software Practitioners in the Universities
View the recent ACM TechTalk, "An Industry Perspective on What We Should Be Teaching Our Next Generation of Software Practitioners in the Universities," presented by author Paul E. McMahon, Principal Consultant at PEM Systems. Will Tracz, Lockheed Martin Fellow Emeritus and member of the ACM Professional Development Committee, moderated the question-and-answer session. Continue the discussion on ACM's Discourse Page.