Responses from LLMs are not facts

by xd1936 on 10/29/25, 9:40 PM with 184 comments
by stavros on 10/29/25, 11:54 PM

> Responses from Large Language Models like ChatGPT, Claude, or Gemini are not facts.
> They’re predicting what words are most likely to come next in a sequence.

I wish we'd move away from these reductive statements that sound like they mean something but are actually a non-sequitur. "Articles on Wikipedia are not facts. They're variations in magnetic flux on a platter transferred over the network".

Yeah, that doesn't make them not facts, though. The LLM should simply cite its sources, and so should Wikipedia, a human, or a dog, otherwise I'm not believing any of them. Especially the human.

by rlayton2 on 10/29/25, 11:56 PM

Even in small companies, it's important to discuss what the expectations around AI are. In the absence of any further requirements (e.g., assuming privacy and regulatory compliance are not major issues), it can be as simple as clearly saying: "You can use AI, but you are ultimately responsible for what you deliver. It is expected you verify the data, test the code, and otherwise validate the responses."

Something as simple as that sets an expectation without being overbearing to start with.

by foxfired on 10/29/25, 11:36 PM

We used to say "stop copying and pasting from stackoverflow without reading it first". Everything changed, yet everything is the same.

by gus_massa on 10/30/25, 12:02 AM

> Imagine someone who has read thousands of books, but doesn’t remember where they read what.

That sounds like me! Well, I probably read only a hundred, but I also mostly forgot the sources. I can hallucinate a source, like "there is (probably) a Schaum book about that".

by geocrasher on 10/30/25, 12:28 AM

LLMs follow the old adage of "Garbage In, Garbage Out". LLMs work great for things that are well documented and understood.

If you use LLMs to understand things that are poorly understood in general, you're going to get poor information because the source was poor. Garbage in, garbage out.

They are also terrible at understanding context unless you specify everything quite explicitly. In the tech support world, we get people arguing about a recommended course of action because ChatGPT said it should be something else. And it should, in the context for which the answer was originally given. But in proprietary systems that are largely undocumented (publicly), they fall apart fast.

by ixxie on 10/30/25, 1:27 AM

I know it's disturbing to many, but there is something nice about the post-truth moment: it feels like more people are actually questioning things more than when I grew up in the 90s/00s.

I think we need to shift towards a socionormative understanding of knowledge; as Rorty put it: "a fact is just something we can't be bothered to argue about". I agree with him that talking about truth isn't so useful for moving our culture forward.

We should be talking about how to negotiate the diverse vocabularies of discursive communities as they increasingly clash in our globalized culture. Dialectical exclusion is the cultural catastrophe of the day.

by mr3martinis on 10/29/25, 11:05 PM

Bosses love it when you call them foolish.

by 9x39 on 10/30/25, 12:06 AM

"Can you share the chat so we can look at it together?"

Asking for the receipts so you can figure out where they put their thumb on the scale is more illuminating.

by lionkor on 11/11/25, 5:21 PM

It's quite easy to just go to the source quoted by the LLM, read it or skim it, and quote that.

by paulcole on 10/30/25, 12:44 AM

I remember the guy who created Comic Sans said, “If you love Comic Sans you don’t know anything about typography and should get a new hobby. If you hate Comic Sans you also don’t know anything about typography and should get a new hobby.”

I feel like this applies to AI as well.

by Aeolun on 10/29/25, 11:59 PM

I think my main response to these messages is: “If ChatGPT is more trustworthy than me, the expert you hired, what do you have me for?”

I can spend hours refuting and explaining why what ChatGPT told you doesn’t apply in our situation, or you can… just trust me?

by hamasho on 10/30/25, 12:46 AM

Remind me of this useful URL when the boss says "we need a carousel".

https://shouldiuseacarousel.com/

by yellow_postit on 10/30/25, 1:49 AM

This feels like getting taught in school not to cite Wikipedia when the actual digital literacy challenge is deeper: learning where the info comes from and thinking critically.

by Brendinooo on 10/30/25, 2:01 AM

I mostly just think this is a bad response to a real problem.

Attitude problems aside[0], if you lead with "Responses from Large Language Models like ChatGPT, Claude, or Gemini are not facts", most people are probably going to respond with some form of "well it said this one fact and I know for sure that one was right" and move on from this unconvinced of anything.

I'm not sure what a better approach is though. Honestly, "Don’t copy-paste something that a chatbot said and send it to someone as if that’s authoritative" feels like a better starting point. Another comment in this thread about asking to share the prompt and demonstrating how it can be manipulated could help. Or talking about LLM bias. I dunno.

P.S. what's up with those form inputs/submits for the good/bad uses?

[0]: "lmgtfy" or even "rtfm" seemed to fade after the novelty wore off and everyone realized it was a jerk move to use terms like that. Or maybe it's a generational thing?

by purplecats on 10/29/25, 10:57 PM

Ironically, it would be more valuable if this cited each of its claims rather than just "further reading". "But Stopcitingai Said"

by mmis1000 on 10/31/25, 4:02 AM

Sometimes I wonder why people ask AIs to validate their output by performing tool searches, but don't do the search themselves, or even open the references the AI provided.

In any case, whether a real person or an AI does the task for you, it's the user who submits the result who is responsible for it. Why do people think it's bad to do this with a real human but suddenly OK to do it with an AI?

by tim333 on 10/30/25, 8:02 AM

They can be facts. Sometimes they get things right, sometimes wrong, like other sources.

Now "Responses from LLMs are not facts" is not a fact. I guess it provides a kind of meta illustration of not believing everything you read. A more subtle problem is they are poor as citations since they can change rapidly, unlike saying, say, last week's NYT said so-and-so.

by ffsm8 on 10/30/25, 9:16 AM

I personally like to call it "asking the oracle" https://en.wikipedia.org/wiki/Oracle

Because it makes it clear that this might as well be entirely made-up BS, just like the oracles were frequently producing.

by alexey-salmin on 10/30/25, 9:41 AM

I usually reply with this image: https://www.reddit.com/r/MemeTemplatesOfficial/comments/lhwy...

by SilverElfin on 10/29/25, 11:57 PM

This snarky site may make you feel smart but really there’s no reason to cite and trust anything, and AI isn’t much worse than alternatives. Even peer review isn’t the guarantee you think it is. AI is often right as well and we should keep that in mind.

by codyswann on 10/30/25, 9:49 AM

No. Facts are facts. Opinions are opinions. And statements of fact are unverified facts.

I wish people would start understanding the difference.

"Ice cream is cold" is an opinion.

"Ice cream melts at 50 degrees Fahrenheit" is a statement of fact.

by GaryBluto on 10/30/25, 5:20 PM

While I somewhat agree with the sentiment, I find these sorts of sites obnoxiously lecturing, and they won't be read by the target audience anyway; it's just too snarky.

by mcv on 10/30/25, 7:14 AM

ChatGPT told me I function two pay scales higher than I'm being paid. I think that's going to be my response when someone cites ChatGPT as an authority to me.

by themafia on 10/29/25, 11:51 PM

If your boss says this to you, quit.

by bgwalter on 10/30/25, 12:04 AM

Alternatively, give the same prompt to another model and get a completely different answer. Sometimes the opposite. Or give the same prompt to the same model after its latest fine tuning and get a completely different answer. Or warm up the model with leading prompts and get a different answer.

These things are just addictive toys, nothing more.

by dude250711 on 10/29/25, 11:28 PM

Sweet of you to think LLM consumers read things.

by nerder92 on 10/30/25, 12:11 AM

Is this the lmgtfy of the AI era?

by Rileyen on 10/31/25, 3:02 AM

Just because AI sounds convincing does not mean it is telling the truth. I have believed something it said before because it sounded right, only to find out later that it was completely wrong. These days I never rely only on what AI says and always check the original source. The most reliable way I have found to use it is to treat it like a smarter search engine.

by aspbee555 on 10/30/25, 4:34 AM

It really can't be that difficult to have an LLM consult a known factual reference before giving an answer. It's really good at figuring out what you want and what to say; checking references isn't far off from that.
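
A minimal sketch of that retrieve-then-answer idea, assuming a hypothetical llm() helper and a toy fact table standing in for a real index or search tool (none of this is any vendor's actual API):

    def llm(prompt: str) -> str:
        """Placeholder for a call to whatever model you use (hypothetical)."""
        raise NotImplementedError("wire this up to a real model")

    TRUSTED_FACTS = {
        # Toy knowledge base; a real system would query a search index.
        "boiling point of water at sea level": "100 degrees Celsius (212 F)",
    }

    def answer_with_reference(question: str) -> str:
        # 1. Ask the model to name the fact it needs, not to state it.
        key = llm(f"Which key in my fact list best matches: {question}? "
                  "Reply with the key only.").strip()
        # 2. Fetch the fact from a source the reader already trusts.
        fact = TRUSTED_FACTS.get(key)
        if fact is None:
            return "No trusted reference found; not going to guess."
        # 3. Have the model phrase the answer using only the retrieved fact.
        return llm(f"Answer '{question}' using only this fact, and cite it: {fact}")

The point of the split is that the model is only ever trusted to route and phrase; the fact itself always comes from the lookup step.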

by xd1936 on 10/29/25, 9:40 PM

A simple static webpage, inspired by motherfuckingwebsite.com, comicsanscriminal.com, etc.

by analog8374 on 10/30/25, 1:59 AM

When you mostly get your facts secondhand from utter strangers (as most of us do), any statement made clearly and confidently enough is indistinguishable from fact.

by ninetyninenine on 10/30/25, 2:59 AM

Who on the face of the earth doesn’t know this? Scroll through the comments, and if you find one person who is like “oh shit, really?? I had no idea!”, then you’re hallucinating.

This page isn’t designed to be informative it’s designed as self affirmation to people who really hate AI.

AI is not fully reliable, but it’s reliable enough to use as a tool. Still, there are tons of people who hate it and want to inform others it’s bad, even though the world already knows. They see this and are like “yeah, I’m right.”

by ProofHouse on 10/30/25, 12:46 AM

Tell me you know nothing about AI without telling me you know nothing about AI

by gnarlouse on 10/30/25, 12:27 AM

This is so passive aggressive it’s fireable—or at the very least unpromotable.

by echelon on 10/29/25, 11:51 PM

LLMs are still better than the trash-filled waste bin Google Search has become.

by exasperaited on 10/30/25, 12:18 AM

No, don’t do this. It’s as bad as the “no hello” thing.

If it bothers you when people do the “chatgpt said” thing (and it should), put your concerns into your own words. Or at least respond with an article in the news that you can discuss with that person.

Responding with one of these sites is just as worthless and devoid of interpersonal investment as responding with AI. Don’t be that person.