Mihailis Diamantis on a Negligence Standard for AI

Artificial intelligence in the United States is unregulated.


That’s not going to last. Proposals are flying fast and furious as to how to bring AI under control.

The EU has already passed an AI Act that outlaws certain kinds of risky AI. Some scholars have proposed strict liability.

Iowa Law Professor Mihailis Diamantis says these rigid solutions go too far because they “dampen innovation and disadvantage domestic firms in the international technology race.”

In a new paper titled – Reasonable AI: A Negligence Standard – Diamantis proposes we look to tort law for guidance.

“Tort law offers a compelling template, but the challenge is to adapt its distinctly human notion of fault to algorithms,” he writes. “Tort law’s central liability standard is negligence, which compares the defendant’s behavior to other ‘reasonable’ people’s behavior.”

Diamantis proposes a novel negligence standard for AI. 

“Rather than compare any given AI to humans or to other algorithms, the law should compare it to both,” he writes. “By this hybrid measure, an algorithm would be deemed negligent if it causes injury more frequently than the combined incident rate for all actors — both human and AI — engaged in the same type of conduct. This negligence standard has three attractive features. First, it offers a baseline even when there are very few comparable algorithms. Second, it incentivizes firms to release all and only algorithms that make us safer overall. Third, the standard evolves over time, demanding more of AI as algorithms improve.”

What are some of the harms that you foresee from AI?

“We don’t have to foresee harms from AI,” Diamantis told Corporate Crime Reporter in an interview earlier this month. “They are happening already. The ones that are making headlines are injuries caused by self-driving cars. There are less tangible harms as well, though equally harmful to people’s interests. You can think of harms like anti-competitive pricing practices, discriminatory hiring or credit extension, and privacy violations.” 

“This has been written about – AI like ChatGPT having access to everything including a lot of personal information that it shouldn’t be making available to third parties. There are ways to trick chatbots into giving up sensitive personal information about other people.”

The New Yorker magazine earlier this year ran an article titled – Among the AI Doomsayers. The reporter went and hung out with AI doomers. Eliezer Yudkowsky is on the extreme end of the doomers. He sees not just minor harms, but existential threats. A growing number of AI experts are speaking out about the existential threat AI poses to humanity.

The New Yorker author reported that inside the industry, there is a term called p(doom). P(doom) is the probability that, if AI does become smarter than people, it will, either on purpose or by accident, annihilate everyone on the planet.  

“In the United States, we are living in a world of unregulated AI. And with unregulated AI, the p(doom) is very, very high. You can say that about AI and you can say that about many unregulated technologies. Medical advances can be turned into biological weapons, which also have a high p(doom) number. Same for nuclear energy and nuclear weapons.” 

“In a world of unregulated AI, like the world we have been living in, the p(doom) number is very high. But scholars – and I’m just one among them – proposing liability standards, development protocols, safety and ethics requirements are trying to pull that p(doom) number back and to capture what we have to acknowledge are the massive benefits of AI, not only promises, but what it is delivering today. As with nuclear energy and medical research, the challenge is – how do we capture the benefits and minimize the chances of harms both large and small?”

Yudkowsky has a very high p(doom). And again, p(doom) is the probability that, if AI does become smarter than people, it will, either on purpose or by accident, annihilate everyone on the planet.  

“I see p(doom) as a moving number. As we regulate AI, the p(doom) number shrinks. The better the regulation, the better the liability standards, the lower the p(doom) number. And it will be lower in the very near future, as even the U.S. is looking to pass legislation.” 

“I wrote an article that starts with the famous paperclip machine thought experiment. The idea is that if the paperclip machine becomes smarter than humans, it will just turn all humans into paperclips and kill us all.” 

“Well, in some respects, AI is already smarter than us. It doesn’t have general intelligence, but on mental tasks, it can outperform us. It’s now pretty easy to develop an AI that outperforms us on a wide range of activities.” 

“What does AI want? Compare that to what humans want. For humans, power brings despotism. A despot wants to subjugate people. Greed is a factor. A despot wants to exert influence.” 

“AI doesn’t have these kinds of human weaknesses. What does AI want? It probably wants data. And if it kills all human beings, it loses all its data. That’s a great way for AI to self-destruct. Another thing AI might want is energy.” 

“But AI doesn’t want to subjugate human beings or destroy them – unless we tell it to do that. There is no necessary competition between humans and AI unless we decide to make it that way.” 

“On the other hand, military use of AI is scary. That AI is developed with the purpose of destruction. AI meant to make efficient trades in the stock market is less cataclysmic.”

You say that with regulation, the p(doom) number goes down. But in your paper, you write this: 

“Even as artificial intelligence promises to turbocharge social and economic progress, its human costs are becoming apparent. By design, AI behaves in unexpected ways. That is how it finds unanticipated solutions to complex problems. But unpredictability also means that AI will sometimes harm us. To curtail these harms, scholars and lawmakers have proposed strict regulations for firms developing safe algorithms and strict corporate liability for injuries that nonetheless occur. These rigid ‘solutions’ go too far. They dampen innovation and disadvantage domestic firms in the international technology race.”

You seem to be saying – regulation yes, but certain kinds of regulation no.

“It goes back to what I mentioned earlier. Regulation is a balancing act. There are promises and perils for every technology. The challenge is to figure out how to capture the benefits the technology offers for humanity while cabining the risks it imposes on humans.”

“One theme of my scholarship on AI and a theme of the U.S. approach to AI is that AI innovation is important. The more we innovate with AI, the more benefits that result. We have a collective human interest in continuing the progress of AI. That’s how we expand the benefits from AI.”

“The concern is that if we overregulate, apply too strict design protocols or too strict liability standards, we dampen that innovation and reduce some of the good that AI can offer.”

“From a realpolitik angle, we have to be cognizant of the fact that AI might be developed here and it might be developed elsewhere – somewhere that is not super friendly to U.S. interests or to democracy or to the values that people hold dear. And dampening domestic AI innovation risks disadvantaging us and the values we represent in a more international context.”

“The question is – what do we want a negligence standard to do? We want it to make us safer overall. The proposal is quite simple. If we want AI to make us safer overall, we should peg our reasonableness standard to the current safety baseline of the activity the AI is meant to undertake.”

“Think about self-driving cars. How many accidents are caused per mile driven in the United States today? If an AI is released by a company that makes self-driving cars and it causes more accidents than that baseline, the AI would be deemed unreasonable under my standard and the corporation would be held liable for the accidents it causes.” 

“If the AI performs better than the current baseline – if it causes fewer accidents than the current baseline – the AI would be deemed reasonable and the firm would not be responsible for the injuries that the AI causes.”

“The appeal of this standard is that it incentivizes firms to release only AI that improves the traffic safety record per mile driven.”

So even in a case where a self-driving car malfunctions and kills someone, if it meets your standard, there is no liability?

“If this AI produces fewer accidents than the current baseline, then the answer is yes.”
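To make the arithmetic behind this standard concrete, here is a minimal sketch in Python of the comparison Diamantis describes: an AI’s incident rate is measured against the combined rate for all actors – human and machine – engaged in the same activity. The function names and the sample figures are illustrative assumptions, not numbers from the paper or the interview.

```python
# Minimal sketch of the hybrid negligence comparison described above.
# All names and figures are illustrative assumptions, not taken from the paper.

def incident_rate(incidents: float, exposure: float) -> float:
    """Incidents per unit of exposure (e.g., crashes per mile driven)."""
    return incidents / exposure

def is_negligent(ai_incidents, ai_exposure, baseline_incidents, baseline_exposure) -> bool:
    """Deem the AI negligent if its incident rate exceeds the combined
    baseline rate for all actors -- human and AI -- in the same activity."""
    return incident_rate(ai_incidents, ai_exposure) > incident_rate(
        baseline_incidents, baseline_exposure)

# Hypothetical self-driving fleet vs. the overall traffic baseline.
fleet_crashes, fleet_miles = 12, 10_000_000            # 1.2 crashes per million miles
all_crashes, all_miles = 5_000_000, 3_000_000_000_000  # ~1.67 crashes per million miles

# The fleet beats the baseline, so under this standard there is no liability.
print(is_negligent(fleet_crashes, fleet_miles, all_crashes, all_miles))  # -> False
```

On this sketch’s assumptions, the fleet’s rate falls below the combined baseline, so the firm would not be liable for the injuries its AI causes; if the fleet’s rate were higher, liability would attach.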

Would that be state-based legislation?

“It would probably be state-based. It could be national legislation. But there is more political will among states to pass data-driven legislation. Also, a lot of the data is collected at the state level – the data needed for implementing this standard.” 

[For the complete Q&A format interview with Mihailis Diamantis, see 38 Corporate Crime Reporter 24(12), June 10, 2024, print edition only.]

Copyright © Corporate Crime Reporter
In Print 48 Weeks A Year
