Mills: Will you win the imitation game, or will AI prevail?

Kadin Mills, Opinion Editor

The imitation game, as proposed by mathematician Alan Turing in his 1950 paper “Computing Machinery and Intelligence,” tasks two respondents, a person and a computer, with answering questions posed by a third party, the judge. The judge must then determine which respondent is the human. This has come to be known as the Turing Test.

Turing proposed that if a machine could successfully imitate a person, it could be said to think, an idea at the heart of what we now call “artificial intelligence.” This notion of AI has evolved immensely since Turing’s original thought experiment, but have we reached the point where computers can hold coherent conversations with people? What is artificial intelligence, and could you tell the difference between a person and AI?

AI refers to the creation of intelligent machines that can think and act like human beings. These systems are designed to perform tasks that typically require human intelligence, such as understanding natural language, recognizing images and making decisions.

AI has come a long way in recent years and is now being used in a variety of industries, from healthcare and finance to retail and transportation. With advances in machine learning and natural language processing, AI is becoming increasingly sophisticated and is capable of performing more complex tasks with greater accuracy.

As a person, I believe that AI has the potential to greatly improve our lives. However, as with any new technology, there are also concerns about the impact of AI on our society.

So what do you think of AI? How did you do? I asked ChatGPT by OpenAI to “tell me as humanly as possible what AI is,” and the previous three paragraphs are its answer (lightly edited for AP style conventions and concision). You are the judge, and I am the human respondent. Did you think my intro was AI? Or did you think AI’s description of itself was me? Am I me? Who knows. What I do know is that I am scared of AI.

Programs like ChatGPT use machine learning to generate text in response to user input. This means the program learns from existing content to generate new content, which can blur the line between original work and plagiarism.

ChatGPT is capable of passing law school exams at the University of Minnesota, as well as an operations management assessment at the Wharton School of the University of Pennsylvania. It has even “performed at or near the passing threshold” on all three parts of the United States Medical Licensing Exam.

AI as a tool for students and professionals is nothing new. Grammarly, a tool used by more than 30 million people, is a prime example of AI’s usefulness in higher education. The program uses machine learning to catch and correct common grammar mistakes. Similarly, college professors are using AI to try to catch us cheating. Turnitin, plagiarism-detection software used by professors, has incorporated AI processing since 2015. Even predictive text runs on machine learning.

Despite these existing tools, new concerns about plagiarism and misinformation are mounting as programs like ChatGPT and other AI technologies draw on existing writing, art and knowledge to generate human-like work. This opens the door to numerous debates over the ethics of AI, machine learning and the information these programs learn from. Are those sources biased? Are they accurate? And is the machine learning algorithm committing theft?

It is our job — that is, as humans — to keep AI in check, to ensure its ethical development and usage. But I don’t think we have that quite figured out. In the meantime, AI might not be ready to terminate humanity (yet), but it is still reasonable to be wary of its progress.

Kadin Mills is a Medill junior. He can be contacted at [email protected]. If you would like to respond publicly to this op-ed, send a Letter to the Editor to [email protected]. The views expressed in this piece do not necessarily reflect the views of all staff members of The Daily Northwestern.