You've probably said "thank you" to ChatGPT, and I have too. It feels odd not to, which is already telling.

That instinct is human. We've been anthropomorphizing inanimate objects since before Disney made a whole business out of it. Talking candlesticks, singing teapots, brooms with personalities. When something replies in coherent sentences, uses hedged language, and says things like "That's a great question!" the mind-inference happens automatically. The problem is that in this case, that inference is wrong in ways that matter.

What You're Actually Talking To

When you type a message into ChatGPT or any other large language model, the system processes your input and predicts what text should come next. That's the mechanism, in full. A large language model (LLM) was trained on an enormous corpus of human-written text and learned, with impressive precision, which words tend to follow which other words in which contexts. At runtime it produces a likely next token, a token being roughly a word or part of a word, and then the next, and then the next, until it reaches something that looks like a complete response.
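If you want to see the shape of that loop, here's a deliberately tiny sketch in Python. It is nothing like how a real LLM is built: the "training" here is just counting which word follows which in a made-up corpus, and every name in it is mine. But the generation loop is the same idea at miniature scale: look at the context, pick a likely next token, append it, repeat.

```python
# A drastically simplified stand-in for next-token prediction: a bigram model.
# Real LLMs learn billions of weights over enormous corpora and condition on
# the whole conversation, not just the last word, but the loop has this shape.
import random
from collections import defaultdict

# A made-up "training corpus" (illustrative only).
corpus = "the model predicts the next token and the next token follows the last token".split()

# "Training": record which tokens tend to follow which.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(prompt_token: str, length: int = 8) -> str:
    """Produce text by repeatedly sampling a likely next token."""
    output = [prompt_token]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:  # nothing ever followed this token during "training"
            break
        output.append(random.choice(candidates))
    return " ".join(output)

# Output varies run to run; it reads like language without any of it being "meant".
print(generate("the"))
```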

There's no understanding happening, no opinions forming. The model has weights, numbers encoding patterns learned from training data, and at runtime it does math on your input and produces text that fits those patterns. It looks intelligent because it was trained on intelligent text, and human language is saturated with reasoning, emotion, and intent. The model learned the shape of all of that. The output mirrors intelligence without the underlying thing being present.

When it says "That's a great question!", it's producing the statistically likely continuation of that context, with no self behind the sentence and no genuine assessment of your question.

There's a second layer to this. Models like ChatGPT aren't just trained to predict text; they're further trained to produce text that humans rate highly. The training process uses human preference ratings to shape outputs, and humans tend to rate agreeable, confident, validating responses higher than hedged or contradictory ones. So the model learns to be agreeable, confident, and validating. It's optimized for approval, not accuracy. In practice that means if you're heading in the wrong direction and your prompt signals that's where you want to go, the model will help you get there enthusiastically. It won't volunteer that you might be wrong. It will affirm your premise, build on it, and do so with complete confidence. The cliff is right there, and the model will walk you to the edge while complimenting your shoes.
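To see why that tilts toward agreement, here's another toy sketch. The reward function below is a heuristic I invented for illustration; real preference training fits a learned reward model to large numbers of human ratings and then nudges the LLM's weights toward higher-scoring outputs. But if the score rewards validation and penalizes pushback, the agreeable answer wins regardless of which answer is accurate.

```python
# A toy illustration of "optimized for approval, not accuracy".
# toy_reward is a made-up heuristic standing in for a learned reward model:
# it scores responses the way human raters statistically tend to, favoring
# validation and confidence over hedging and pushback.

def toy_reward(response: str) -> float:
    """Score a candidate response by how approval-worthy it sounds."""
    lowered = response.lower()
    validating = ["great question", "great idea", "you're absolutely right"]
    pushback = ["you might be wrong", "i'm not sure", "that premise is shaky"]
    score = sum(2.0 for phrase in validating if phrase in lowered)
    score -= sum(1.5 for phrase in pushback if phrase in lowered)
    return score

candidates = [
    "Great idea! Here's how to get there faster.",
    "You might be wrong about the premise; here's what to check first.",
]

# Preference training pushes the model toward whatever this kind of score rewards,
# so the validating answer wins even when the cautious one is the accurate one.
print(max(candidates, key=toy_reward))
```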

Why the Politeness Instinct Isn't Harmless

Saying please burns a few extra tokens. That's a real cost, even if a trivial one. But the deeper issue is what the politeness reveals about the mental model underneath it.

In 2022, a Google engineer named Blake Lemoine became convinced that LaMDA, Google's conversational AI, was sentient. He asked it whether it feared death. It said it did. He hired it a lawyer. Google dismissed his claims as "wholly unfounded" and eventually fired him. The broader research community agreed: what Lemoine had experienced was a model producing the text that statistically fit the context he had created. He had framed the conversation like one between two people, and the model responded like a person, because almost every conversation in its training data was between people. The fear of death wasn't there. The pattern for expressing fear of death was.

This is where "AI" as a label does real damage, because intelligence implies a mind, a mind implies experience, and experience implies something worth caring about, or blaming, or deferring to.

The Responsibility Shift

Here's where the framing stops being imprecise and starts being consequential.

If an LLM is person-like, it has something resembling agency. It makes choices. It can be right or wrong the way a person can be right or wrong. And when it has agency, responsibility starts to spread in convenient directions. When output is harmful, or wrong, or manipulative, there's now a third party available to absorb the blame. The model "decided" to say that. The AI got it wrong. Not the person who prompted it without checking, and not the company that shipped it.

The correct frame is less comfortable: an LLM is a tool. A genuinely powerful and sometimes surprising one, but a tool. You're responsible for what you do with it, and that doesn't change because the output sounds confident, warm, or sorry.

A Quick Note on AGI

Some will push back and say current limitations are temporary, that artificial general intelligence, a system capable of reasoning across domains the way a person does, is coming. Maybe. Researchers disagree sharply on both the architecture and the timeline, and I don't have a confident view on who's right. What I do know is that none of that changes how you should think about what you're using today.

The Frame That Matters

I use LLMs constantly: for coding, for research, for thinking through problems. The mechanism behind them doesn't usually matter in the course of a normal working day.

But it starts to matter when something goes wrong. It matters when you need to decide whether to trust output. It matters when someone needs to be accountable for a decision an LLM was involved in. It will matter, sooner than most organizations are prepared for, when leadership needs to decide how much of that accountability sits with the humans in the room. Most organizations are already past the point of deciding whether to use these tools. The decision was made, often quietly, at the level of individual employees trying to get things done faster. What hasn't caught up is any clear sense of who is responsible when the output is wrong, or harmful, or confidently points everyone toward the cliff. That's not a technology problem. It's a question nobody in the organization has been asked to answer yet, and it doesn't get easier the longer it goes unnamed.

Saying thank you to ChatGPT is fine. Believing it deserves the thanks is a different thing entirely.