AI impacts everything from politics and business to the environment

Editor’s note: This is the second of a four-part series about the future of Artificial Intelligence.

(Jan. 5, 2024) — I believe the most disturbing application of AI is its effect on our elections and democracy.

The same technology that allowed Paul McCartney to use John Lennon’s voice from an old demo tape to create a new Beatles song is already being used to mislead voters and impersonate candidates.

Examples include a doctored video of President Joe Biden appearing to give a speech attacking transgender people and AI-generated images of children supposedly learning Satanism in libraries.

After CNN aired its May 10 Town Hall with Donald Trump, he posted a video on Truth Social that had been manipulated to mislead. The footage had been swapped with that of an October 2021 Town Hall with President Biden, making it appear that Anderson Cooper’s outrage about the Trump Town Hall was directed not at Trump but at Biden.

According to David Klepper and Ali Swenson of the Bay Area News Group, AI will “soon allow anyone to create fake images, video and audio that are realistic enough to fool voters and perhaps sway an election.”

AI replacing workers

Another concern is AI’s impact on the labor market. Harvard Business School professor Joseph Fuller says that AI tools will lead to the eventual elimination of many jobs and the restructuring of many others.

“The effect will be particularly acute among knowledge workers – those whose jobs have traditionally been defined as non-routine cognitive work,” he told writer Kristen Senz. “Many people in such roles have been insulated from automation and globalization. That is about to change.”

His colleague, assistant professor Iavor Bojinov, counters that “automating these tasks will enable knowledge workers to concentrate on value-adding activities where human expertise is indispensable.”

But Chon Tang, a general partner at UC Berkeley’s startup accelerator, points out another problem. “What happens to that 20% or 50% or 70% of the population that is economically of less value than a machine?” he asked Ethan Baron of the Bay Area News Group.

Addressing errors and biases

Even if Bojinov’s enthusiasm for AI is justified, the bad news is that AI makes mistakes.

My daughter is COO at the consulting firm Fountainworks, which specializes in public sector strategy sessions for cities and nonprofits. She recently received an email that stated: “We believe that businesses like yours play a vital role in raising awareness about the importance of staying hydrated while also addressing environmental concerns related to single-use plastic bottles.”

Fountainworks consults with stakeholders on leadership challenges, not on people’s drinking habits; presumably the AI saw “Fountain” in the firm’s name and guessed wrong.

AI also has implicit bias. Because ChatGPT was trained on billions of words, it can reinforce social inequities. The voices of women and people of color are underrepresented, since much of ChatGPT’s training data was drawn from internet forums where men were dominant.

In addition, because offensive, racist and anti-Semitic language is swept up in the compiled data, bad actors can misuse it.

Artists are raising concerns about AI, as well. A lawsuit by an artist, painter and cartoonist in U.S. District Court in San Francisco alleges that a UK-based image-generation company “violated the rights of millions of artists by training its software on more than five billion copyrighted images scraped from the internet without permission or compensation.”

One of the key demands in last summer’s writers’ strike was that AI be used only as a tool for research or script ideas – not to replace the writers. Other lawsuits over intellectual property are likely to follow from musicians, authors and actors.

And what about our own privacy? We were never asked for permission before AI scooped up our texts and emails.

Environmental issues

Finally, there’s the energy use. The New Yorker quotes an article in Science for the People that says training an AI engine consumes enormous amounts of electricity, emitting tons of carbon.

“While a human being is responsible for five tons of CO2 per year, training a large neural LM (language model) costs 284 tons,” Sue Halpern wrote. “It’s only going to get worse. The computing power required to train the largest models has grown 300,000 times in six years.”

Once again, I asked ChatGPT to summarize concerns about AI:

“As AI systems become more complex and sophisticated, there is a concern regarding their lack of transparency and accountability. Black-box algorithms that make decisions without clear explanations can lead to distrust and bias. Developing transparent AI systems and establishing mechanisms to ensure accountability and fairness are critical to building trust in AI technologies.”

In my final two articles, I’ll examine “The Ugly” aspects of AI and how we might address them.

Gail Murray

Gail Murray served in Walnut Creek as Mayor and city councilmember for 10 years. From 2004-2016 she served as District 1 Director, Board of Directors of the San Francisco Bay Area Rapid Transit District (BART). She is the author of "Lessons from the Hot Seat: Governing at the Local and Regional Level."
