A U.S. congressman is advocating for a dedicated federal agency to regulate the use of artificial intelligence, warning of a dystopian future in which AIs make key decisions and autonomous weapons roam America.
Rep. Ted Lieu (D-CA) authored an opinion piece in The New York Times on Monday, arguing that AI has emerged as a powerful tool that can be used to benefit humanity or to deceive it, and worse. In fact, the example he cited to make that point, the dystopian scenario described above, wasn't written by Lieu at all, but by ChatGPT, the AI chatbot developed by OpenAI. (OpenAI received a multibillion-dollar investment from Microsoft on Monday, amid user reports that a paid version of ChatGPT is on the way.)
Lieu, who earned a B.S. in computer science from Stanford, noted that AI is now present in everything from smart speakers to Google Maps. But where AI fails, people can be hurt: The op-ed points out that a driver blamed Tesla's self-driving mode for an eight-car pileup on the San Francisco Bay Bridge.
According to Lieu, Congress is simply incapable of passing legislation that can regulate AI: The technology moves too fast, and legislators lack the necessary knowledge to set laws and guidelines. Instead, “[w]hat we need is a dedicated agency to regulate AI,” Lieu wrote. “An agency is nimbler than the legislative process, is staffed with experts and can reverse its decisions if it makes an error. Creating such an agency will be a difficult and huge undertaking because AI is complicated and still not well understood.”
Lieu cited existing agencies, such as the Food and Drug Administration (FDA), as evidence that the government can regulate new and emerging technologies. But, he said, the process could not happen overnight. Instead, Lieu recommends first creating a non-partisan AI Commission to advise on how such a federal agency could be formed, what it should regulate, and what standards should apply.
The National Institute of Standards and Technology has already published an AI Risk Management Framework, a non-binding document that Lieu proposes the government build upon, adding compliance mechanisms. "We may not need to regulate the AI in a smart toaster, but we should regulate it in an autonomous car that can go over 100 miles per hour," Lieu wrote.
Already, ChatGPT has taken a run at the bar exam, answering 50.3 percent of questions correctly. (A score of 68 percent is required to pass.) Artists, meanwhile, are concerned that AI-generated art may threaten their commissions. Is it possible Lieu is hoping to regulate AI before it comes for his job, too?
Author: Mark Hachman, Senior Editor
As PCWorld's senior editor, Mark focuses on Microsoft news and chip technology, among other beats. He previously wrote for PCMag, BYTE, Slashdot, eWEEK, and ReadWrite.