Can a super AI really take over the world?

Editor’s note: This is the third of a four-part series about the future of Artificial Intelligence.

(Jan. 9, 2024) — President Ronald Reagan was alarmed. It was 1983 and he had just watched the movie “WarGames” starring Matthew Broderick as a teen who accidentally hacks into the controls of NORAD, the North American Aerospace Defense Command.

Located deep inside Cheyenne Mountain near Colorado Springs, Colo., NORAD was built during the Cold War to provide continuous worldwide detection of a Soviet ballistic missile attack on North America. Broderick thinks he’s just playing an online game called “Global Thermonuclear War.” Instead, it’s a top-secret program used to train the NORAD computer, and he nearly arms the missile silos and sets off a nuclear war.

Deeply disturbed by the movie, Reagan asked the chairman of the Joint Chiefs of Staff: “Can something like this really happen?” After looking into it, the general replied: “Mr. President, the problem is much worse than you think.”

AI has advanced exponentially since those days. Last May, more than 100 technology leaders, corporate CEOs and scientists warned that “AI poses an existential threat to humanity.”

Losing control of AI

Researchers are already using meta-learning to train AI over and over to improve itself. This could result in Artificial General Intelligence, or AGI.

When AI learns to learn until it reaches “the singularity,” it can escape our control. That’s what happened in “WarGames,” when the computer took over until – spoiler alert – Broderick tricked it into learning that there was no way to win such a war.

We are now faced with the ugly fact that computers can outperform us, their human creators.

“These artificial brains are not constrained by the factors that limit human brains – like having to fit inside a skull,” Douglas Hofstadter, an eminent cognitive scientist, told columnist David Brooks.

And, he emphasizes, “they are improving at an astounding rate, while human intelligence isn’t.”

Another dangerous aspect is that we, the creators, don’t really understand what we’ve created.

“It’s almost like you’re deliberately inviting aliens from outer space to land on your planet, having no idea what they’re going to do when they get there except that they’re going to take over the world,” said Stuart Russell, a computer scientist at UC Berkeley.

If technology experts are successful in creating AGI, “a superintelligent computer system that amasses economic, political and military power could hold the world hostage,” according to Stuart Armstrong, cofounder of Aligned AI.

“WarGames” is no longer a fantasy of Hollywood. In the wrong hands, AGI could start World War III.

Taming the monster

But not everyone sees doom and gloom. Keith Holyoak, a psychology professor at UCLA, noted that ChatGPT-4 “can do analogical reasoning, but it can’t do things that are very easy for people, such as using tools to solve a physical task. When we gave it those sorts of problems – some of which children can solve quickly – the things it suggested were nonsensical.”

Cal Newport, an associate professor of computer science at Georgetown University, writes that the worries about AI are overblown.

“Programs like ChatGPT don’t represent an alien intelligence with which we must now learn to coexist. … We can be assured that they’re incapable of hatching diabolical plans and are unlikely to undermine our economy. It’s clear that what’s been unleashed is more automaton than golem.”

The good news is we’re getting some of the less threatening aspects under control:

Using an old-school method, high-school and college teachers are prohibiting notes and classroom laptops for exams. Instead, they’re requiring that essays be handwritten in the classroom in “blue books.”

Technologists are developing tools to detect plagiarism and disinformation. For example, Microsoft is rolling out cryptographic methods to watermark and sign AI-generated content with metadata about the origin of an image or video.

In a broader educational context, Harvard’s Kempner Institute is bringing together researchers in neurobiology and computer science to study the relationship between the human brain and AI, “within ethical frameworks and a desire to improve the world.”

Still, Russell warns: “If we believe we have sparks of AGI, that’s a technology that could completely change the face of the earth and civilization. How can we not take that seriously?”

In my final article, I’ll write about what the tech companies, the Biden administration and the European Union are putting in place to take seriously the potential threat of AI.

Gail Murray

Gail Murray served in Walnut Creek as Mayor and city councilmember for 10 years. From 2004-2016 she served as District 1 Director, Board of Directors of the San Francisco Bay Area Rapid Transit District (BART). She is the author of "Lessons from the Hot Seat: Governing at the Local and Regional Level."
