Unpredictable Black Boxes are Terrible Interfaces with Maneesh Agrawala
Modern generative AI models are capable of producing surprisingly high-quality text, images, video, and even program code. Yet the models are black boxes, making it impossible for users to build a mental model of how the AI works. Users have no way to predict how the black box transmutes input controls (e.g., natural language prompts) into the output text, images, video, or code. Instead, users have to repeatedly create a prompt, apply the model to produce a result, and then adjust the prompt and try again, until a suitable result is achieved. In this talk I'll assert that such unpredictable black boxes are terrible interfaces, and that they always will be until we can identify ways to explain how they work. I'll also argue that the ambiguity of natural language and a lack of shared semantics between AI models and human users are partly to blame. Finally, I'll suggest some approaches for improving the interfaces to these AI models.
Maneesh Agrawala Bio
Maneesh Agrawala is the Forest Baskett Professor of Computer Science and Director of the Brown Institute for Media Innovation at Stanford University. He works on computer graphics, human-computer interaction, and visualization. His focus is on investigating how cognitive design principles can be used to improve the effectiveness of audio/visual media. The goals of this work are to discover the design principles and then instantiate them in both interactive and automated design tools. Honors include an Okawa Foundation Research Grant (2006), an Alfred P. Sloan Foundation Fellowship (2007), an NSF CAREER Award (2007), a SIGGRAPH Significant New Researcher Award (2008), a MacArthur Foundation Fellowship (2009), an Allen Distinguished Investigator Award (2014), and induction into the SIGCHI Academy (2021). He was named an ACM Fellow in 2022.