ML fairness is a critical consideration in machine learning development. As we build machine learning models intended for a global and diverse user base, we must also ensure that the outcomes are inclusive of that user base. Truly addressing fairness requires designing and developing products with diverse stakeholders in the room and a deep understanding of the impacts and factors at play. There are, however, approaches that can help in the evaluation and mitigation of common fairness concerns. In this talk, I will present a few lessons Google has learned through our products and research, and share some of the approaches developers can take to evaluate and mitigate fairness concerns. Lastly, I will touch on the importance of explainability in addressing fairness concerns, and the tools and techniques available in this space.
Tulsee Doshi is the product lead for Google’s ML fairness effort, where she leads the development of Google-wide resources and best practices for building more inclusive and diverse products. Previously, Tulsee worked on the YouTube recommendations team. She studied Symbolic Systems and Computer Science at Stanford University.