On Tuesday, in a congressional hearing on regulating artificial intelligence that featured the CEO of the company behind ChatGPT, lawmakers and AI experts hinted at the kind of regulatory regime they would like to see govern the rapidly evolving space.
The hearing focused largely on the impacts of general-purpose AI on society at large, including the risks of leveraging the technology to advance misinformation campaigns during elections or to manipulate or anticipate public opinion, as well as the risks the technology poses to children.
Some of the testimony touched on what a regulatory regime with oversight over AI might look like and whether existing agencies are sufficient for the job, questions that would shape the compliance burdens financial institutions face when they leverage AI for consumer-facing purposes like credit underwriting.
OpenAI CEO Sam Altman headlined the hearing, held by the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. In part of his remarks, he agreed with an idea pitched by Sen. Richard Blumenthal that would bring to AI models an approach that has gained prominence in other spaces, including cybersecurity: ingredient lists.
"Should we consider independent testing labs to provide scorecards and nutrition labels — or the equivalent of nutrition labels — packaging that indicates to people whether or not the content can be trusted, what the ingredients are, and what the garbage going in may be because it could result in garbage going out?" Blumenthal asked Altman.
"I think that's a great idea," Altman said. He added that companies should disclose test results run on the model before releasing it to help people identify weaknesses and strengths. "I'm excited for a world where companies publish — with the models — information about how they behave, where the inaccuracies are, and independent agencies or companies provide that as well."
Altman later said the list of companies with the capability to create such models is relatively small because of the computational resources required to train them. As such, he said, "there needs to be incredible scrutiny on us and our competitors."
The subcommittee also invited Gary Marcus, a New York University researcher who studies artificial intelligence.
Marcus named the Federal Trade Commission and Federal Communications Commission as agencies that are able to respond today to abuses of AI technology, but he said he sees the need for "a cabinet-level organization" to coordinate efforts across the government.
"The number of risks is large," Marcus said. "The amount of information to keep up on is so much, I think we need a lot of technical expertise. I think we need a lot of coordination of these efforts."
The third panelist, IBM's chief privacy and trust officer Christina Montgomery, disagreed, saying in response to a question from Sen. Lindsey Graham that no new agency is needed.
"Do you agree with me [that] the simplest way and the most effective way is [to] have an agency that is more nimble and smarter than Congress, which should be easy to create, overlooking what you do?" Graham asked the panel.
Altman and Marcus each agreed. Later in the hearing, Altman expanded on his answer.
"I would form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards," Altman said.
When Graham turned to Montgomery to ask about creating a new agency, she said she would "have some nuances" and "build on what we have in place already today."
"We don't have an agency that regulates the technology," Montgomery said.
"So should we have one?" Graham asked.
"I don't think so," Montgomery replied.
Montgomery tempered her answer later in an exchange with Sen. Cory Booker.
"You can envision that we can try to work on two different ways," Booker said. "A specific — like we have in cars: EPA, NHTSA, the Federal Motor Car Carrier Safety Administration, all of these things — you can imagine something specific that is, as Mr. Marcus points out, a nimble agency that could monitor other things, you can imagine the need for something like that, correct?"
"Oh, absolutely," Montgomery replied.
"So just for the record, then: In addition to trying to regulate with what we have now, you would encourage Congress and my colleague, Senator Welsh, to move forward with trying to figure out the right tailored agency to deal with what we know and perhaps things that might come up in the future," Booker said.
"I would encourage Congress to make sure it understands the technology has the skills and resources in place to impose regulatory requirements on the uses of the technology and to understand emerging risks as well," Montgomery replied. "So, yes."
Throughout the hearing, Montgomery consistently returned to IBM's proposal of regulating specific uses of artificial intelligence, an approach that differs from attempts at regulating general-purpose artificial intelligence itself.
In a 2020 blog post laying out the company's vision for so-called "precision regulation," the co-directors of IBM's policy lab outlined five policy imperatives. Chief among them is that AI providers should disclose the ingredients that go into their models (data sources, training methods, and the like), particularly in high-risk cases like lending decisions.
Another key policy imperative from that proposal is explaining to users how the AI makes decisions — a notoriously difficult problem in the area — and testing models for bias.
"Owners should also be responsible for ensuring use of their AI systems is aligned with anti-discrimination laws, as well as statutes addressing safety, privacy, financial disclosure, consumer protection, employment, and other sensitive contexts," the blog post reads.
IBM has since reiterated this guidance.
"IBM urges Congress to adopt a precision regulation approach to AI," Montgomery said. "This means establishing rules to govern the deployment of AI and specific use cases, not regulating the technology itself."
Montgomery invoked the examples of chatbots and systems that support decisions on credit, highlighting the different impacts the two have and the different regulations each would need.
"In precision regulation, the more stringent regulation should be applied to the use cases with the greatest risk," Montgomery said.
Sen. Mazie Hirono criticized IBM's pitch for use-specific regulations, pointing out that a general-purpose AI can help users with anything from telling a joke to aiding an election misinformation scheme.
"The vastness of AI and the complexities involved, I think, would require more than looking at the use of it," Hirono said. "And I think that, based on what I'm hearing today, we're probably going to need to do a heck of a lot more than to focus on what AI is being used for."
Both Republicans and Democrats on the committee, including Graham and Booker, voiced support during the hearing for a new agency charged with regulating artificial intelligence. Blumenthal, the chair of the subcommittee, expressed hesitation.
"You can create 10 new agencies, but if you don't give them the resources — and I'm talking not just about dollars, I'm talking about scientific expertise — you guys will run circles around it," Blumenthal said. "There's some real hard decision making, as Montgomery has alluded to, about how to frame the rules to fit the risks."