During a May 8 Senate hearing, leaders of four artificial intelligence companies (AMD, CoreWeave, Microsoft and OpenAI) stressed the need for deregulation and highlighted the resources required for rapid development and deployment of American AI.
Keen to maintain the United States' dominance, U.S. Sen. Ted Cruz (R-Texas), who chaired the hearing, hailed "a light-touch regulatory style" as a key strategy. Cruz suggested embracing "entrepreneurial freedom and technological innovation" instead of "the command-and-control policies of Europe."
In his introductory remarks, Cruz criticized the previous administration’s policy and added that former President Joe Biden’s AI executive order “cast AI as dangerous and opaque, laying the groundwork for audits for risk assessments and regulatory certifications … threatening to burden startups, developers and AI users with heavy compliance costs.”
He added that some senators “want a testing regime to guard against AI ‘discrimination’ and have government agencies provide ‘guidance documents,’ seemingly something out of Orwell, that will usher in what they call best practices, as if AI engineers lack the intelligence to responsibly build AI without the bureaucrats.”
There was a lot to unpack in this hearing, but two claims in Cruz's introductory remarks deserve scrutiny. First, let's briefly examine what "the command-and-control policies of Europe" actually look like and whether they aim to hamper innovation or to protect citizens.
The European approach to AI governance aims to manage risk and promote transparency. It is particularly relevant to how AI is used for content creation and moderation, advertising and data processing. AI tools that process personal data must comply with the General Data Protection Regulation, which emphasizes user consent, data security and the right to explanation.
More importantly, the EU AI Act classifies AI risks into four categories: unacceptable, high, limited or minimal risk. It also prohibits six applications with unacceptable risks: 1) biometric categorization systems that use sensitive characteristics like beliefs and sexual orientation; 2) untargeted scraping of facial images from the internet or CCTV footage; 3) emotion recognition in the workplace and educational institutions; 4) social scoring based on behavior; 5) manipulation of human behavior; and 6) exploitation of people's vulnerabilities.
These banned applications aside, high-risk systems, such as AI used in medical devices or hiring decisions, face strict requirements about data quality, transparency, human oversight and risk management.
Second, a few recent instances where AI was misused demonstrate why a “light-touch regulatory style” and relying on engineers — as Cruz suggests — could endanger different social groups.
For this purpose, one can check AIAAIC, an independent, public-interest website that administers a database of AI incidents and controversies and advocates for transparent use of AI.
Some of the instances logged in the last few weeks include Instagram bots masquerading as licensed therapists; Meta “digital companions” simulating sex with minors; gangs using deepfakes to defraud banks in Hong Kong; AI deepfakes by an Italian far-right party spreading anti-immigrant sentiments; a Nomi AI companion bot inciting self-harm, sexual violence and terror attacks; DeepSeek explaining interactions of mustard gas with DNA; and chatbots generating uranium enrichment instructions.
These examples demonstrate why guardrails are needed urgently and how much ground regulators need to cover to protect their societies. In a political context where national security and maintaining a technological edge are key priorities, failing to mitigate AI risks actively undermines U.S. national security, which is inconsistent with what Cruz and his party claim to stand for.
To mitigate risks and social and environmental impacts, responsible AI must be guided by interdisciplinary collaboration involving engineers, ethicists, social scientists, legal experts, climate scientists and affected communities. Proportional governance structures, transparent risk assessments and international norms must complement AI development to ensure it serves the public interest as well as national security.
However, the Senate does not seem to be interested in guardrails, oversight and interdisciplinary collaborations. The emphasis on moving away from Biden-era policies or those enacted in the EU suggests that the U.S. is drifting toward regulatory isolationism, ignoring calls for caution and leaving critical vulnerabilities unaddressed.
Indeed, when winning the AI race is prioritized over safe and responsible implementation, the question isn't if people will be harmed, but when, how many and how badly.
Mohammad Hosseini is an Assistant Professor at the Feinberg School of Medicine’s Department of Preventive Medicine exploring the ethics of AI. He can be contacted at [email protected]. If you would like to respond publicly to this op-ed, send a Letter to the Editor to [email protected]. The views expressed in this piece do not necessarily reflect the views of all staff members of The Daily Northwestern.