Even in small companies, it's important to discuss what the expectations around AI are. In the absence of any further requirements (i.e., assuming privacy, regulatory constraints, etc. are not major issues), it can be as simple as clearly saying: "You can use AI, but you are ultimately responsible for what you deliver. It is expected that you verify the data, test the code, and otherwise validate the responses."
Something as simple as that gives an expectation, without being overbearing to start with.
We used to say "stop copying and pasting from stackoverflow without reading it first". Everything changed, yet everything is the same.
> Imagine someone who has read thousands of books, but doesn’t remember where they read what.
That sounds like me! Well, I've probably read only a hundred, but I've also mostly forgotten the sources. I can hallucinate a source, like "there is (probably) a Schaum book about that".
LLMs follow the old adage of "Garbage In, Garbage Out". LLMs work great for things that are well documented and understood.
If you use LLMs to understand things that are poorly understood in general, you're going to get poor information because the source was poor. Garbage in, garbage out.
They are also terrible at understanding context unless you specify everything quite explicitly. In the tech support world, we get people arguing about a recommended course of action because ChatGPT said it should be something else. And it should, in the context for which the answer was originally given. But in proprietary systems that are largely undocumented (publicly) they fall apart fast.
I know it's disturbing to many, but there is something nice about the post-truth moment: it feels like more people are actually questioning things than when I grew up in the 90s/00s.
I think we need to shift towards a socionormative understanding of knowledge; as Rorty put it: "a fact is just something we can't be bothered to argue about". I agree with him that talking about truth isn't so useful for moving our culture forward.
We should be talking about how to negotiate the diverse vocabularies of discursive communities as they increasingly clash in our globalized culture. Dialectical exclusion is the cultural catastrophe of the day.
Bosses love it when you call them foolish.
"Can you share the chat so we can look at it together?"
Asking for the receipts so you can figure out where they put their thumb on the scale is more illuminating.
It's quite easy to just go to the source quoted by the LLM, read it or skim it, and quote that.
I remember the guy who created Comic Sans said, “If you love Comic Sans you don’t know anything about typography and should get a new hobby. If you hate Comic Sans you also don’t know anything about typography and should get a new hobby.”
I feel like this applies to AI as well.
I think my main response to these messages is: “If ChatGPT is more trustworthy than me, the expert you hired, what do you have me for?”
I can spend hours refuting and explaining why what ChatGPT told you doesn’t apply in our situation, or you can… just trust me?
Remind me of this useful URL when my boss says "we need a carousel".
This feels like getting taught in school not to cite Wikipedia, when the actual digital literacy challenge is deeper: learning where the info comes from and how to think critically.
I mostly just think this is a bad response to a real problem.
Attitude problems aside[0], if you lead with "Responses from Large Language Models like ChatGPT, Claude, or Gemini are not facts", most people are probably going to respond with some form of "well it said this one fact and I know for sure that one was right" and move on from this unconvinced of anything.
I'm not sure what a better approach is though. Honestly, "Don’t copy-paste something that a chatbot said and send it to someone as if that’s authoritative" feels like a better starting point. Another comment in this thread about asking to share the prompt and demonstrating how it can be manipulated could help. Or talking about LLM bias. I dunno.
P.S. what's up with those form inputs/submits for the good/bad uses?
[0]: "lmgtfy" or even "rtfm" seemed to fade after the novelty wore off and everyone realized it was a jerk move to use terms like that. Or maybe it's a generational thing?
Ironically, this would be more valuable if it cited each of its claims rather than just offering "further reading". "But Stopcitingai said..."
Sometimes I wonder why people ask AIs to validate their own output by doing a tool search, but won't do the search themselves or even open the reference the AI provided.
In any case, whether a real person or an AI does the task for you, the user who submits the result is responsible for it. Why do people think it's bad to do this with a real human but suddenly OK to do it with an AI?
They can be facts. Sometimes they get things right, sometimes wrong, like other sources.
Now, "Responses from LLMs are not facts" is not a fact. I guess it provides a kind of meta illustration of not believing everything you read. A more subtle problem is that they are poor as citations because they can change rapidly, unlike saying, say, last week's NYT said so-and-so.
I personally like to call it "asking the oracle" https://en.wikipedia.org/wiki/Oracle
Because it makes it clear that this might as well be entirely made-up BS, just like what the oracles frequently produced.
I usually reply with this image: https://www.reddit.com/r/MemeTemplatesOfficial/comments/lhwy...
This snarky site may make you feel smart but really there’s no reason to cite and trust anything, and AI isn’t much worse than alternatives. Even peer review isn’t the guarantee you think it is. AI is often right as well and we should keep that in mind.
No. Facts are facts. Opinions are opinions. And statements of fact are unverified facts.
I wish people would start understanding the difference.
"Ice cream is cold" is an opinion.
"Ice cream melts at 50 degrees Fahrenheit" is a statement of fact.
While I somewhat agree with the sentiment, I find these sorts of sites obnoxiously preachy, and they won't be read by the target audience anyway; it's just too snarky.
ChatGPT told me I function two pay scales higher than I'm being paid. I think that's going to be my response when someone cites ChatGPT as an authority to me.
If your boss says this to you, quit.
Alternatively, give the same prompt to another model and get a completely different answer. Sometimes the opposite. Or give the same prompt to the same model after its latest fine tuning and get a completely different answer. Or warm up the model with leading prompts and get a different answer.
These things are just addictive toys, nothing more.
Sweet of you to think LLM consumers read things.
Is this the lmgtfy of the AI era?
Just because AI sounds convincing does not mean it is telling the truth. I have believed something it said before because it sounded right, only to find out later that it was completely wrong. These days I never rely only on what AI says and always check the original source. The most reliable way I have found to use it is to treat it like a smarter search engine.
It really can't be that difficult to have an LLM reference a known factual source before giving an answer. It's really good at figuring out what you want and what to say; it's not far off from checking references too. Something like the sketch below is roughly what I mean.
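A minimal retrieve-then-answer sketch of that idea: look up a passage from a trusted corpus first, then ask the model to answer only from that passage and cite it. Note that `search_docs` and `call_llm` here are hypothetical placeholders (stubbed with dummy data), not any particular library's API.

```python
# Minimal retrieve-then-answer sketch. Both helpers below are hypothetical
# stand-ins: swap in a real search index and a real LLM client to try it.

def search_docs(query: str) -> list[dict]:
    """Pretend lookup against a trusted, known corpus (stubbed here)."""
    return [{"source": "internal-handbook.md#backups",
             "text": "Backups run nightly at 02:00 UTC and are retained for 30 days."}]

def call_llm(prompt: str) -> str:
    """Stand-in for an actual model call (stubbed here)."""
    return "Backups are kept for 30 days. [internal-handbook.md#backups]"

def grounded_answer(question: str) -> str:
    # Retrieve first, then constrain the model to the retrieved passages.
    passages = search_docs(question)
    context = "\n".join(f"[{p['source']}] {p['text']}" for p in passages)
    prompt = (
        "Answer ONLY from the passages below and cite the source tag.\n"
        f"Passages:\n{context}\n\n"
        f"Question: {question}\n"
        "If the passages don't contain the answer, say you don't know."
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(grounded_answer("How long are backups retained?"))
```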
A simple static webpage, inspired by motherfuckingwebsite.com, comicsanscriminal.com, etc.
When you mostly get your facts secondhand from utter strangers (as most of us do), any statement made clearly and confidently enough is indistinguishable from fact.
Who on the face of the earth doesn't know this? Scroll through the comments, and if you find one person who is like "oh shit, really?? I had no idea!", then you're hallucinating.
This page isn't designed to be informative; it's designed as self-affirmation for people who really hate AI.
AI is not fully reliable, but it's reliable enough to use as a tool. Still, there are tons of people who hate it and want to inform others that it's bad, even though the world already knows. They see this and are like "yeah, I'm right."
Tell me you know nothing about AI without telling me you know nothing about AI
This is so passive-aggressive it's fireable, or at the very least unpromotable.
LLMs are still better than the trash-filled waste bin Google Search has become.
No, don’t do this. It’s as bad as the “no hello” thing.
If it bothers you when people do the “chatgpt said” thing (and it should), put your concerns into your own words. Or at least respond with an article in the news that you can discuss with that person.
Responding with one of these sites is just as worthless and devoid of interpersonal investment as responding with AI. Don’t be that person.
> Responses from Large Language Models like ChatGPT, Claude, or Gemini are not facts.
> They're predicting what words are most likely to come next in a sequence.
I wish we'd move away from these reductive statements that sound like they mean something but are actually a non-sequitur. "Articles on Wikipedia are not facts. They're variations in magnetic flux on a platter transferred over the network".
Yeah, that doesn't make them not facts, though. The LLM should simply cite its sources, and so should Wikipedia, a human, or a dog; otherwise I'm not believing any of them. Especially the human.