University of Chicago Prof. Chenhao Tan gave a talk at the Center for Human-Computer Interaction + Design on Thursday that focused on a new way to improve decision making — harnessing the power of explainable AI.
Tan is not advocating for a world where AI makes all of our decisions. Instead, he said he uses AI as a complement to human decision making, placing humans at the center.
This approach to artificial intelligence is called human-centered AI, and it is the focus of Tan and his team’s work at the Chicago Human+AI Lab.
“I want to build AI where the goal is to help humans,” Tan said. “We need to take into account human psychology, human goals and human values into the process of building AI.”
This was the central theme of Tan’s talk, titled “Towards Human-Centered AI: How to Generate Useful Explanations for Human-AI Decision Making.” About 25 people gathered in the Frances Searle building to hear Tan present his team’s research at the Technology and Social Behavior Ph.D. program’s winter colloquium.
Tan began his presentation by drawing connections between human decision making and AI’s predictive process.
“Decisions can be thought of as prediction problems,” Tan said.
He gave the example of cancer diagnosis as a prediction problem. When a doctor decides if a medical scan shows cancer, they’re actually making a prediction about whether it’s cancer based on their past experience, Tan explained.
This is exactly what predictive AI models are trained to do — analyze data to make various predictions, or decisions, according to Microsoft.
Understanding an AI model’s algorithmic rationale, which Tan called “explanations,” can allow us to find a model’s faults and learn from its reasoning to become better decision makers ourselves, he said. These explanations, however, must build on knowledge of human intuition to truly help people make better decisions, he added.
“Humans are important,” he said. “There’s not enough work on understanding humans, and understanding humans is a prerequisite for building human-centered AI.”
The TSB program offers a joint Ph.D. in computer science and communication. The program places an emphasis on understanding the impact of technology in social contexts, according to its website, with students engaging in hands-on research starting their first year.
Sachita Nishal, a fourth-year Ph.D. candidate in technology and social behavior who designs AI systems for journalists, said the talk was an opportunity to meet and learn from people across different disciplines.
“The talk makes a strong case for relying on human intuition when generating explanations, and I think that’s a really important takeaway,” Nishal said. “When designing for (reporters), trying to understand what their intuitions are about what is newsworthy and significant and worth covering is important.”
Tan’s interdisciplinary approach to human-centered AI was part of the reason the TSB department invited him to speak at the colloquium, said communication studies Prof. Nicholas Diakopoulos, director of graduate studies for the TSB program.
“I think the intersection with (human-computer interaction) and AI is just this really, really exciting space, and a lot of people want to know about it,” Diakopoulos said.
Email: [email protected]
Related Stories:
— University Libraries offers workshop for students on generative artificial intelligence
— Cognitive Science Program hosts panel on effects of AI in politics
— University of Chicago Prof. Sarah Sebo presents robot-human interaction research