Will global efforts to manage AI come too late?

Editor’s note: This is the final article in a four-part series about the future of Artificial Intelligence.

(Jan. 13, 2024) — Are we the frog sitting in tepid water, dazzled by all the wonderful innovations AI is offering us?

As we struggle with how to tame AI, will we, like the frog in the fable, be unaware that the water is starting to boil – and suddenly AI escapes our control? Will it evolve into a superintelligent computer that holds the world hostage?

The academic researchers known as the godfathers of AI, along with top executives of Google, Microsoft and OpenAI, and Chinese and Russian scientists, all signed a statement declaring: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Bill Gates and Elon Musk have also weighed in on its potential risks.

Geoffrey Hinton, an AI pioneer and one of those signers, told TV journalist Fareed Zakaria that since the United States, China and Russia all agree that AI can be an existential threat to humanity, “they should be able to agree that they don’t want to let it then wipe us all out.”

But Marc Andreessen, venture capitalist and co-creator of the Mosaic internet browser, believes these fears are overblown. He says AI is a program controlled by humans and doesn’t have goals of its own.

Controlling AI through legislation

The existential threat is nothing less than humans being subjugated or even wiped out by an out-of-control AI. Can we tame it before it’s too late? Or should we rejoice in all the good that can come of AI while understanding its downsides and taking steps to mitigate them? The answer is fraught with difficulty.

An existential threat requires global cooperation. In the meantime, political bodies are tackling more immediate issues where they can exert some control: how to combat misinformation and bias, protect artistic rights and privacy, and ensure transparency and accountability.

The European Union is the current leader in developing regulations for AI. In June, the European Parliament passed its draft of the AI Act, intended to give users confidence in AI systems. Parliament’s priority is to have AI systems overseen by people rather than by automation. The draft includes a series of enforceable restrictions, such as:

  • Curtail the use of most facial recognition software.
  • Require chatbot systems to disclose that the content was created by AI.
  • Publish summaries of copyrighted material used to train AI systems.
  • Safeguard against creating illegal content.
  • Conduct risk assessments before allowing AI to operate infrastructure, such as energy and water.
  • Ban classifying people based on their behavior, status or personal characteristics.
  • Assess bias in AI programs used for education and employment.

U.S. regulating pieces of the puzzle

The U.S. approach is expected to be less prescriptive. In a meeting in July with President Joe Biden, seven major companies in AI development made voluntary commitments to new standards for safety, security and trust.

Even though they compete, Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI agreed to safeguards, such as testing products for security risks and using watermarks so consumers can spot AI-generated material. But the rules are not enforceable and can be interpreted differently by each company.

Several states have passed laws regulating AI, and more have proposed similar bills. Last November, California amended its Consumer Privacy Act to add limitations on data retention, data sharing and the use of sensitive personal information. Twelve creative organizations representing artists and authors banded together to propose that the U.S. Copyright Act “be amended so that it is a violation to intentionally remove ‘copyright management information’ from a copyrighted work without permission of the copyright owner, whether or not it can be proven that it was knowingly done to induce or enable infringement.”

Unlike the EU’s broad approach, proposals like these by states and organizations may result in a decentralized patchwork of laws in the United States regulating various aspects of AI use.

Developing a united front

Some believe the situation is so urgent that they have called for a worldwide agreement on regulating AI, patterned after the 1996 Comprehensive Nuclear-Test-Ban Treaty. But given the turmoil in the world right now, a global AI agreement seems highly unlikely.

However, the G7 Summit in Hiroshima in May offered a hopeful step. Leaders from the U.S., UK, France, Germany, Japan, Canada, Italy and the EU established the Hiroshima AI Process. They agreed to develop AI governance and technical standards as well as methods to address misinformation.

A working group was tasked with coming up with an action plan in coordination with the Organization for Economic Cooperation and Development, a forum of 38 democracies.

Several key questions stand in the way of the United States – and the world community – taming AI:

  • Who will be the regulator?
  • Will it be the heavy hand of government, which doesn’t understand the field as well as the tech companies do?
  • Should we instead trust Big Tech? These companies have competing interests, such as making a profit, being top in the field and capturing a return on their enormous investments.
  • If it’s an independent entity, who appoints the regulator? Who does it report to? Who controls it?

Regulation vs. innovation

Big companies fear that regulations will hamstring their ability to compete worldwide. China has announced its ambition to become the global leader in AI research by 2030. Will it or other non-Western countries grab the biggest piece of the market while we impose regulations on ourselves?

There’s also concern that regulation will stifle innovation. AI is out there; we can’t go back. A pause while we evaluate its risks could erode U.S. leadership in this field.

There is an ongoing debate over how much of the models and data used to train AI should be public. This year Meta released LLaMA, short for Large Language Model Meta AI, and Abu Dhabi’s Technology Innovation Institute released Falcon; both are language models that anyone can download and use.

The Seattle-based Allen Institute for AI wants to democratize AI by opening up the models even further. The Mozilla Foundation, a nonprofit that supports the internet as an open public resource, agrees; it worries about the concentration of AI technology – and the economic power that comes with it – in Big Tech.

But the tech companies say that making transparent how their models are trained would risk criminals hijacking the technology to scam the public or engage in highly dangerous behavior. They maintain that keeping some of the research and data private actually protects the public.

The EU, the U.S. and the G7 all recognize the need to set boundaries on the development of AI. How we go further to confront the larger existential risk remains unaddressed.

But we had best heed what Vladimir Putin asserted in 2017:

“Artificial intelligence is the future not only of Russia, but of all humankind. Whoever becomes the leader in this sphere will become the ruler of the world.”

Gail Murray

Gail Murray served as mayor and city councilmember of Walnut Creek for 10 years. From 2004 to 2016 she served as District 1 Director on the Board of Directors of the San Francisco Bay Area Rapid Transit District (BART). She is the author of "Lessons from the Hot Seat: Governing at the Local and Regional Level."
