Joep Meindertsma on the Existential Threat Posed by Artificial Intelligence

When members of Congress talk about risks from artificial intelligence, they talk about every risk imaginable.

Joep Meindertsma
Pause AI

So you will hear politicians talk about risks like fake news, polarization, threats to democracy, deepfake harassment, scams, bias, discrimination, job loss, economic inequality, mental health harms, addiction, loss of privacy, power accumulation, tyranny, computer viruses and cyberattacks.

What you will rarely hear a member of Congress address is existential risk.

On average, AI researchers believe there is a 14 percent chance that once we build a superintelligent AI, an AI vastly more intelligent than humans, it will lead to very bad outcomes, such as human extinction.

“Would you choose to be a passenger on a test flight of a new plane when airplane engineers think there’s a 14 percent chance that it will crash?” asks Joep Meindertsma, leader of the Netherlands-based citizens’ group Pause AI.

The Pause AI website is the best resource in the world for laying out, in layman’s terms, the risks posed by AI.

The website hosts a letter calling for a pause in AI development. Launched in April 2023, it has been signed more than 33,000 times, including by many AI researchers and tech leaders.

The list includes people like: 

Stuart Russell, author of the leading textbook on artificial intelligence, used in most AI courses. 

“If we pursue [our current approach], then we will eventually lose control over the machines,” Russell said.

Yoshua Bengio, deep learning pioneer and Turing Award winner, says that “rogue AI may be dangerous for the whole of humanity – banning powerful AI systems (say beyond the abilities of GPT-4) that are given autonomy and agency would be a good start.”

Stephen Hawking, the late theoretical physicist and cosmologist, said that “the development of full artificial intelligence could spell the end of the human race.”

Geoffrey Hinton, the “Godfather of AI” and Turing Award winner, left Google to warn people about the dangers of AI: “This is an existential risk.”

Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute (MIRI) and conceptual father of the AI safety field, says that “if we go ahead on this everyone will die.”

What is AGI?

“AGI stands for artificial general intelligence,” Meindertsma told Corporate Crime Reporter in an interview last week. “People have different definitions of what exactly it means, but I think one of the most common ways to look at it is when AI can do all of humanity’s intellectual work, and that includes lobbying politicians, that includes writing articles, that includes doing research.” 

“And that also means that pretty soon after that, we will likely have a true superintelligence, which is sometimes called ASI, artificial superintelligence. That would be an AI that is way smarter than pretty much anything on this planet.” 

“The reason we are in control of this planet as humans is because we are smart, and some of the smartest people on earth also have the highest amount of control over this planet. And that’s exactly the thing that makes ASI incredibly dangerous: it will be able to control things. It will be able to steer the universe towards the outcomes that it wants.”

“If an artificial superintelligence, at some point, wants something, and that thing exists in the world, it is very likely to succeed at getting that thing. So if the AI wants to calculate pi as effectively as possible, it might make sense to take over all the compute power in the world and convert as many atoms as it can into more compute power to calculate pi. This might seem like a silly example, but the point is, if something is incredibly smart, it’s really good at getting what it wants.”

“Even if an artificial superintelligence seems like it’s locked inside a machine, it can actually do a lot of things in the real world. It can send emails, make fake phone calls, and control computers all across the world.”

“And a true superintelligence is probably able to be really good at multitasking. From a data center, it could run the world. I know these things sound like science fiction. And right now, AI models are just not smart enough to do this, but we’re rapidly approaching the point where AI becomes smart enough to aid in AI research.” 

“The main goal of Pause AI is to prevent us from getting to that level. We need to prevent AI from becoming powerful enough that it can meaningfully contribute to AI research. Because if we get there, then there will be models way smarter than us.” 

“So we need to be cautious and prevent this from happening. That’s basically what we’ve been saying since day one. The only way to get to a world where we don’t build an AI that can self-improve is by having some sort of global pause.”

“We cannot expect any individual company to pause voluntarily. We cannot expect individual nations to pause on the national level, because that would mean they would get left behind in the AI race.”

“We need something international, where there are guarantees, where multiple countries agree that they will do everything in their power to prevent such a dangerous AI from coming into existence. And this is also where I become more hopeful. Building something that is really smart is also extremely difficult. It takes a lot of money. You need an immense amount of compute power. And this compute power, these AI chips, are highly specialized and very difficult to produce.” 

“If we look at how these things are made, we can see some pretty centralized supply chains where only a handful of companies are able to build this technology. But they’re incredibly dependent on each other.”

“There’s basically one company that is producing the actual chips, which is TSMC. Then there is one company that makes the machines that make the chips, which is ASML. And there is another company that makes the high-bandwidth memory. I could go on and on and dive into the technical details of how these chips are made. But the point is, because this supply chain for AI chips is so incredibly interdependent and technologically complicated, it also means that governance is very much feasible. Building a superintelligent AI is probably more difficult than building a nuclear bomb. And we can prevent countries from making nuclear bombs. So we could probably also prevent countries from making artificial superintelligence.”

How would you close down AI? Would you shut down TSMC and ASML?

“Focus on the AI chip supply chain. But instead of completely shutting down the chip supply chain, I would say just closely monitor it. Make sure that whoever buys these chips is put under some formal supervision.” 

“Countries can get the hardware to enrich uranium. But if you get that hardware, you also get surveillance, monitoring and safety checks. We need the same for AI. If you build an AI model, you need these chips. You can get the chips, but only if you do things safely, which would mean you don’t try to build AI models that are too powerful or too large.”

A journalist in the United States, Karen Hao, has a new book out titled Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. 

I’ve heard her interviewed on television. And she says there is a debate between boomers and doomers. Boomers believe AGI is going to save us: cure cancer, do all these wonderful things. Doomers believe it’s going to kill us. But she says that both views are based on the religion that we will get to AGI. She says we are unlikely to get to AGI anytime soon.

Have you heard that argument?

“Yes, it’s the most common form of response. Most people don’t believe we will ever have something that will outsmart humans. Many people feel intuitively that we as humans are in charge, that this earth was made for us, and that we are special.” 

“But intelligence happens when you combine a bunch of neurons in a certain way. And we can simulate neurons on computers. We haven’t found the exact way to get human-level intelligence. But at some point, some of these companies will get there. And once they do, there is no physical limit anymore. Our human intelligence is constrained by the size of our heads. Our heads need to make it through the birth canal. And we are running on a biological substrate, which is actually quite slow compared to silicon substrates. Computer chips can act way faster than we can. If you chat with an AI model, you can see it think way faster than you can think. And these chips are going to get faster and faster.”

“That argument is very naive. It assumes humans have a special place in the world. It could be a form of coping. Instead of looking at the trend lines, looking at how these models are becoming smarter, looking at how we are already being outperformed across so many domains, it can be very comfortable to believe in the fable that we’ll forever have a special place on earth, and that nothing will ever beat humanity.”
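Meindertsma’s point that neurons can be simulated is easy to make concrete. The short sketch below is an editor’s illustration, not from the interview: a single simulated neuron in Python that takes a weighted sum of its inputs, adds a bias, and squashes the result through a sigmoid activation. The function name and the input values are arbitrary illustrative choices; modern AI models stack units like this at enormous scale.

    import math

    def neuron(inputs, weights, bias):
        # Weighted sum of inputs plus a bias, squashed through a
        # sigmoid activation into a firing strength between 0 and 1.
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))

    # Example: one neuron responding to two input signals
    # (all values here are arbitrary, for illustration only).
    print(neuron([0.5, 0.8], [0.9, -0.4], bias=0.1))  # prints roughly 0.56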

I’ve heard Eliezer Yudkowsky say that AI could release an infectious disease that could wipe us all out. It could hire people to do things that would wipe us out.

What do you see as the most likely route to extinction?

“Another analogy I got from Yudkowsky is that we humans don’t really plan to kill ants. We just build a road somewhere, and maybe there is an anthill underneath that road. And the ants are wiped out.” 

“If we are going to be wiped out by AI, it’s quite likely to be something like that. The AI could maybe build factories for small machines that slowly take up more and more space on this earth, and at some point, it will compete with us for energy, resources, clean air and space. And we will lose, because we would no longer be the smartest species on earth.” 

“Maybe we suffocate. Maybe our cities get demolished. Maybe we will get buried under rubble. There are many ways it could go down. I wouldn’t bet on any one of those. But the bottom line will be the same: we don’t matter anymore.”

[For the complete Q&A format interview with Joep Meindertsma, see 39 Corporate Crime Reporter 29(12), July 21, 2025, print edition only.]

Copyright © Corporate Crime Reporter
In Print 48 Weeks A Year
