Northwestern University and Evanston's Only Daily News Source Since 1881

The Daily Northwestern



Cognitive Science Program hosts panel on effects of AI in politics

Illustration by Shveta Shah
University of Wisconsin-Madison philosophy Prof. Annette Zimmermann said it would not be beneficial to pause AI deployment in the U.S. because it would harm competition with foreign adversaries.

In response to the widespread adoption of generative artificial intelligence tools in recent months, Northwestern’s Cognitive Science Program hosted an online panel discussion on the effects of Large Language Models on politics and civic engagement Tuesday.

The multidisciplinary panel featured scholars with expertise spanning political science, philosophy and computer science.

University of Massachusetts-Lowell political science Prof. Emma Rodman said she was concerned about how those who hold political power could use AI tools to maintain control.

However, Rodman said LLMs could also empower people to articulate their views and give them confidence to express their opinions in the political sphere.

“I think people could use these models as conversational partners to help develop their own voices,” Rodman said. “I think the models are also very good at helping to prompt creative thinking — helping people to think outside of their boxes.”

University of Washington computer science Prof. Amy Zhang said she is hopeful about the potential for a complementary relationship between researchers — who are better at fact-checking and establishing relationships between variables — and LLMs, which excel at summarizing text.

University of Wisconsin-Madison philosophy Prof. Annette Zimmermann said LLMs can contribute to citizens’ capacity to think creatively about important issues, but they can also spread misinformation and disinformation through their mistakes, known as “hallucinations.”

“Thinking about accelerating misinformation and disinformation — the reason why generative AI tools specifically would accelerate these problems is just that human users over-trust these tools much more than other forms of technology,” Zimmermann told The Daily.

Zimmermann said there is a danger that AI tools will replicate discriminatory views and biases in the future. To combat this, they said it’s important to incorporate diverse views in the accountability process for LLMs.

Student representatives from the NU Political Union, the Responsible AI Student Organization and the Cog Sci Club asked questions of the panelists. 

SESP sophomore Kiran Bhat, an executive board member of the Cog Sci Club, said the club represents many varying interests in the field. She attended the meeting on behalf of the group.

“My biggest takeaway was that there’s really a need for complexity and nuance when we’re talking about Large Language Models and AI,” Bhat said. “I’m always interested in accountability and how those issues will come into play when we’re designing models that are humongous and can have pretty serious impacts.”

Bhat said she appreciated Zimmermann’s emphasis on incorporating diverse voices into discussions on LLMs and AI in general.

Zimmermann said it is important to discuss AI as part of the democratic process so citizens can apply their own values to their voting decisions on AI regulation measures.

“I think we need to get over this AI exceptionalism that is going on and empower ourselves as ordinary citizens to make value-based judgments about generative AI,” Zimmermann said.

Comparing AI regulation to climate change, they said citizens can make informed decisions on political issues based on their values, even when they are not experts on a topic.

Rodman said the multidisciplinary aspect of the panel discussion should be applied to future discussions on AI.

“You cannot grapple with the multifaceted problems and opportunities that this technology offers unless you have a lot of people in the room bringing a lot of different perspectives,” Rodman said. “The stakes are so high that to try to talk about these issues without bringing all of our knowledge resources to bear would be very foolish.”

Samantha Powers contributed reporting.

Email: [email protected]

X: @IsaiahStei27

Related Stories:

Northwestern’s AI Club looks towards the future

The Office of the Provost hosts new Generative AI conversation series

Buffett Institute and McCormick host second annual Joint Conference on AI and National Security
