Bomb explodes in downtown Concord, and other lies AI told me

Graphic generated by ChatGPT. There was no bomb in downtown Concord. This doesn’t even look like Concord.

CONCORD, CA (July 2, 2025) — First off, as you may have guessed, there was no bomb that exploded in downtown Concord. And yes, the headline was designed to try to trick you into clicking on this story, which is actually about how Artificial Intelligence is lying to you. But the truth is, AI didn’t write the headline, I did. And I am a human. You believe me, right?

You may be fuming right now, annoyed that I tricked you into clicking on a story that wasn’t about the destruction you hoped for. Not that you wanted the bomb to be real, but you wanted the information the headline promised. You went to the trouble of clicking the link and reading the first sentence, and you didn’t get what you were expecting. Now you are mad, which means we can both agree that lying is bad.

And AI is lying to you.

Now, AI experts (and AI itself) will tell you that AI doesn’t lie. If you ask Gemini or ChatGPT, they will say the same thing: AI doesn’t have the ability to lie because it doesn’t have the motives, beliefs, or desires of a human, nor does it have the intent to lie. It’s only a lie if it’s deliberate. Right?

Instead, when AI produces false information, AI creators call it a “hallucination,” one that arises from the way AI gathers and outputs information. If you ask it, AI will admit that it can make mistakes. It doesn’t know that it’s wrong to make up quotes, or even to make up the person saying the quote, so it’s not lying. It’s just dispensing wrong information with no motive. AI doesn’t know the information is wrong. That’s just the way it does things. An author named Devansh, writing on Medium.com in “Why ChatGPT lies,” says that if you boil it down, “ChatGPT is fancy autocomplete.”

But it’s fun and harmless

This is a totally real photo. (Pete Cruz photo)

Why am I telling you this totally true information? Here’s my experience. I’m no AI expert, nor am I a scientist or programmer. But I’ve been using AI (both ChatGPT and Gemini) to help explore the rules of a complicated tabletop role-playing game. The kind with weird dice.

AI seems great for this. The rules are all online and available to AI, so rather than reading hundreds of pages of rules and online discussion about the rules myself, I can ask questions of ChatGPT and it can sift through all the data and help me understand how a specific aspect of the game works.

But as I said earlier, even when dealing with something that seems as straightforward as the rules of a game, AI lies. Mostly it tells you accurate information, but sprinkled in with the truth are lies. And when I called Gemini on the lies, it insisted that I had “identified a very common point of confusion and a nuance in how these rules are presented and implemented!” (exclamation point generated by AI).

It went on to explain that the false rule it told me about is not in the main section of the rules but is found elsewhere. “This is a weakness of how that particular [rule set] organizes some information,” it told me. I searched extensively but could not find the false information AI insisted was real. “However, the rule is there” (italics generated by AI), it told me, saying I just needed to dig into the specific entry.

I dug. I dug deep, but this rule definitely did not exist. In fairness, after I complained that it was wrong, Gemini did note that it’s helpful to verify its output against the core rules. However, it concluded that the false rules it gave me were “definitely a thing, even if they’re a bit tucked away in the text” (they weren’t).

Truth bomb

So, here’s the thing that false headlines and AI-generated information have in common: we, as humans, are kind of programmed to believe that they’re true. We see a headline as we scroll through our feed, and we assume a story about a bomb must be true, so we click on it. We see AI’s eloquent, polished output, and we assume, because it’s a computer that doesn’t know how to lie, that this information will be true, especially when it feels like we’re talking to a human and that human insists that what it’s saying is “definitely a thing.” We want to believe these things, so we click on the headline and are disappointed. Or we add the false information into a school report and flunk the assignment. Or we lose a game because we thought the rules worked differently.

Even though its creators are still working out its many bugs, AI has come to the masses. The ability of AI to absorb and organize huge amounts of data can be valuable, but currently, it’s only as valuable as the human who can evaluate whether the information is a lie (sorry, a hallucination).

For folks who aren’t particularly analytical or knowledgeable, or who are too lazy to check the facts, AI is dangerous. And for people who want to spread lies, it’s even worse because AI gives them a way to present their false information in a way that seems true, or at least worth considering, when it might just be that person’s…hallucinations.

That’s just my opinion, by the way. With AI now available to everyone, it will quickly become indispensable to some, just as the smartphone and social media became part of our lives. The people who will make money from it want that. Sure, it seems fun and helpful, but thanks to AI, will it soon be impossible to tell whether anything we read is true? Will truth become something we each have to decide for ourselves?

Possibly.

Unfortunately, this entire article was a lie that AI told. I’m not a human. This story was generated entirely by AI.

Just kidding. Only the previous paragraph (and the first part of the headline) were lies. You believe me, right? It’s definitely a thing.
