The Buffett Institute for Global Affairs held a symposium on artificial intelligence and geopolitics Thursday to explore the implications of AI technologies for national security.
McCormick Prof. V.S. Subrahmanian and Ireland’s University College Cork Prof. Barry O’Sullivan moderated several panels on AI’s role in national security, economics and governance.
Subrahmanian has almost 20 years of experience researching the fields of counterterrorism and AI. O’Sullivan called him a “trailblazer” in the areas of security and democracy and said he was among the first to look at AI and elections.
Still, Subrahmanian said he wanted the information in his panel to remain understandable to people regardless of their AI knowledge, adding that he hoped to train people to recognize AI issues before they occur.
First-year engineering management graduate student Ankesh Pandey said he found the Buffett Institute’s symposium important because the field of geopolitics is particularly vulnerable to the adverse effects of malicious AI usage.
Pandey is an aspiring entrepreneur and has worked in the engineering field since 2013. He said he is interested in how technology shapes politics, and the symposium would better acquaint him with the skills he needs to find success in entrepreneurship.
Nicole Yost, a consultant IT project manager, said she attended the symposium to learn more about recent developments in AI technology. Yost said she recently started an AI organization for people of all ages and backgrounds to have authentic conversations about AI.
“I’m learning great stuff,” Yost said. “It’s just been an interesting day and one that I’m going to share with my group of AI-curious coworkers.”
Lectures ranged from conversations on the effects of AI in economics to discussions of how malign operatives could use AI to influence foreign elections.
In his panel titled “AI, Deepfakes & Malign Ops,” Subrahmanian invited the audience to take a straw poll on whether they believed a certain photo, video or audio clip was generated with AI.
He then used the results to illustrate the comparative efficacy of the Global Online Deepfake Detection System (GODDS), a technology created by members of the Northwestern Security & AI Lab (NSAIL), which Subrahmanian directs.
GODDS is a free system that allows verified journalists to upload content and receive an expert opinion on whether an artifact is likely to be a deepfake. Subrahmanian cited The New York Times, Al Jazeera and the BBC as examples of high-profile institutions that have used the software.
“In cybersecurity, there’s an adage that the bad guys only have to be right once, but the good guys have to be right all of the time,” Subrahmanian said.
He added that, more than ever, cybersecurity is shifting toward what he called a “GPU war,” a term he used to describe how processing power will ultimately be the decisive factor in determining the victor of cybersecurity conflicts.
“These tools are neither good nor evil,” Subrahmanian said. “It just depends how you use them.”
Email: [email protected]
Related Stories:
—Northwestern Security and AI Lab continues to explore relationship between cybersecurity, AI
—‘A tale of two domains’: McCormick professor discusses AI policymaking
—Anne-Marie Slaughter discusses international law at Buffett Institute