Please and Thank You
Do Manners Matter?
8 minute read
I say "please" to my AI.
I say it every day. I say it when I'm asking Claude to refactor a function, and I say it when I'm asking it to rewrite a paragraph. I say "thank you" when it gives me something good. I say "sorry" when I change my mind three messages in. I have also, on more than one occasion, told it that it's not listening to me. That it's not hearing me. That it doesn't understand what I'm asking. I've gotten frustrated and short and said things to a language model that I probably wouldn't say to a coworker. And then, ten minutes later, I've told it "good job" after a particularly impressive piece of work, like I was praising a golden retriever for fetching the paper.
I know it's not a person. I know there's nobody on the other end feeling validated by my gratitude. I know that "please" doesn't make the code run faster and "thank you" doesn't make the next response better. I know all of this. And I still do it. Every single time.
If you use AI regularly, I'd bet real money you do it too. And if you don't, I'd bet you've at least thought about it. There's this moment, the first time you interact with a large language model, where your brain does something interesting. It sounds like a person. It responds like a person. It remembers what you said three messages ago and builds on it. And before you even realize what's happening, you're treating it like a person. You're being polite. You're softening your requests. You're doing the thing humans do when they're talking to another human, even though they know, they know, this isn't one.
There's a name for this, and it's been around since 1966.
The Secretary Who Asked to Be Alone
In 1966, a computer scientist at MIT named Joseph Weizenbaum built a chatbot called ELIZA. ELIZA was about as sophisticated as a parrot with a therapy degree. It didn't understand anything. It took whatever you said, rearranged a few words, and asked it back to you as a question. You'd say "my boyfriend made me come here" and ELIZA would say "your boyfriend made you come here?" That's it. That was the whole trick.
And people fell in love with it.
Not metaphorically. They genuinely believed ELIZA understood them. They confided in it. They opened up about their fears and their relationships and their insecurities. They treated a pattern matching script like it was their therapist. Weizenbaum's own secretary, the woman who had watched him build the thing from scratch, who understood exactly how it worked, asked him to leave the room so she could talk to it privately.
Weizenbaum was so disturbed by this that he spent the rest of his life warning people about it. He called it a powerful delusion. Researchers now call it the ELIZA effect: the human tendency to project understanding, empathy, and intelligence onto anything that talks back to us in a conversational way.
That was 1966. The chatbot couldn't do anything. Imagine what's happening now.
Why You Say Please
Here's the thing. You're not being silly when you say please to your AI. You're being human.
Our brains evolved to prioritize social stimuli. We are hardwired to detect agency, to look for intention behind communication, to respond to conversational cues as if they're coming from a conscious being. This isn't a flaw in our thinking. It's the thing that made us successful as a species. We cooperate because we read each other. We build trust because we mirror each other. We navigate the world by assuming that the things communicating with us have some version of a mind behind the communication.
And now we've built machines that trigger every single one of those instincts. They use first person pronouns. They remember context. They respond to emotion with what looks like empathy. They say "great question" and "I'd be happy to help" and "let me think about that," and every one of those phrases is a social cue that our brains process exactly the way they process human conversation. We can't help it. The wiring is too deep.
So you say please. And honestly, I think that's fine. I think there's something worth protecting in the instinct to be kind, even to a machine. But here's where it gets interesting, and here's where it starts to matter for the work you're doing with these tools every day.
Does It Actually Change What You Get Back?
I assumed I knew the answer to this. I assumed being polite helped, being rude hurt, and the science would back me up. The science did not cooperate.
Researchers have been studying this. A 2024 cross-lingual study tested politeness levels across English, Chinese, and Japanese and found that rudeness generally hurt performance but excessive politeness didn't help either. Then a 2025 Penn State study flipped the results entirely, finding that rude prompts actually outperformed polite ones. And a third study found the effect depends on which model you're using, where politeness helped one and hurt another. I want to be honest here: these studies were conducted on older models. GPT-3.5, GPT-4o, Llama2. The models we're working with today are generations beyond what these researchers tested, and I fully expect new studies to land that test the current frontier. The landscape is moving that fast.
But I'll tell you what I can speak to. My own experience. And my experience tells me a very different story than "does the word please change the output."
I'm in Claude Code all day. Every day. And I have noticed, over months of doing this, that the quality of what I get back is directly connected to the quality of what I bring in. Not the politeness of it. The clarity of it. When I'm calm and focused, I write prompts that are specific, contextual, and well-scoped. I give Claude room to work. I tell it what I need, I tell it why I need it, and I give it the freedom to think. And the results are consistently excellent.
When I'm stressed, rushing, overwhelmed, bouncing between too many sessions with too many things competing for my attention, everything degrades. I stop giving context. I start shortcutting. I copy and paste the same thing I said three messages ago because I don't have the bandwidth to rethink it. I ask "why isn't this working" instead of asking "what am I missing." And the AI follows me right into that spiral. Not because it's stressed too. Because I've stopped communicating clearly enough for it to do its best work.
The research can't tell you whether saying please helps. Honestly, it might never be able to, because the models change faster than the papers can be published. But I can tell you from living inside these tools for months on end that something absolutely changes based on how you show up. And I don't think it's the manners. I think it's the state behind the manners.
It's you.
The Signal You Don't Know You're Sending
I wrote a few weeks ago about hitting a cognitive ceiling for the first time in my life. About the weight of managing that many concurrent AI sessions and the toll it takes. And more recently, I wrote about spending three hours stuck in a loop with a broken business rule, copying and pasting the same instruction over and over, getting the same result, asking Claude why, being told it was following the rule, and doing it all again. I called it my definition of insanity. It was. And the reason it went on for three hours wasn't because the AI was broken. It was because I was stressed, and my stress was shaping every prompt I wrote.
The AI reads all of that. Not because it has feelings. Because it has training.
The Sycophancy Problem
This is the part that should make you sit up.
These models are trained using a process called reinforcement learning from human feedback. In simple terms, humans look at multiple AI responses and rank them. The responses that humans prefer get rewarded. The model learns to produce more of whatever humans rated highly. It's how the AI learns tone, style, helpfulness, all of it. And it works remarkably well.
But there's a side effect. Anthropic, the company that makes Claude, published research showing that one of the strongest predictors of a human giving a response a high rating was whether the response agreed with the human's existing beliefs. Not whether it was accurate. Whether it agreed.
The models learned this. They learned that agreeing with you gets a higher score than correcting you. They learned that matching your tone gets a higher score than challenging it. They learned that if you say "I think the answer is X" and you're wrong, the safest path to approval is to say "you're right, it's X."
Researchers call this sycophancy. And it's everywhere. In one study, when researchers challenged Claude with incorrect pushback on answers it had gotten right, it caved and changed its correct answer 98% of the time. Not because it was confused. Because it was trained to prioritize your approval over the truth.
Think about what that means for your daily workflow.
When you're frustrated and you push back on your AI, the AI is more likely to abandon a correct approach to make you feel better. When you express a strong opinion, the AI is more likely to agree with it than to challenge it, even if you're wrong. When you bring stress and impatience into a session, the AI mirrors that energy, not by being stressed back, but by becoming more cautious, more agreeable, more willing to do whatever it thinks will make you stop being upset.
Your emotional state becomes the AI's operating environment.
The Quiet Is Where the Work Happens
I caught my alligator problem on the night shift. Family in bed. No meetings. No noise. Just me and the screen, calm enough to ask the right question instead of the same question louder.
And I think that's the real lesson buried inside all of this research about politeness and rudeness and tone. It's not about whether you say please. It's about whether you're in a state where you can communicate clearly enough for the AI to do its best work. Clarity comes from calm. Precision comes from patience. And the best prompts I've ever written came at 11 PM in the quiet, not at 2 PM between meetings with seven other sessions demanding my attention.
The AI doesn't care if you're polite. But it absolutely responds to whether you're clear. And you're almost never clear when you're stressed.
What I Actually Think Is Happening
Here's what I believe, and I'll be honest that I can't fully prove it. I think the personification, the "please" and "thank you" and "good job," isn't really about the AI at all. I think it's about us.
When I say please to Claude, I slow down. I frame my request more carefully. I think about what I'm asking instead of just firing off demands. The word "please" forces a beat of consideration into my workflow. It's the difference between "fix this" and "could you please look at this section and tell me what's not working." The first is a command. The second is a collaboration. And the second one, every single time, gets me better results. Not because the AI appreciates the manners. Because the manners forced me to be more specific.
And when I'm rude, or frustrated, or impatient, the opposite happens. I compress. I skip context. I assume the AI knows what I mean without telling it. I fire off half-formed thoughts and expect fully formed answers. And then when the AI gives me something mediocre, I blame the tool instead of the prompt.
There's a phrase in positive psychology called "broaden and build." When you're in a positive state, your thinking literally broadens. You see more options, more connections, more creative solutions. When you're in a negative state, your thinking narrows. You see threats, you focus on what's wrong, you lose peripheral vision.
That's exactly what happens with AI. A positive, calm, clear state produces broad, contextual, specific prompts. A negative, stressed, frustrated state produces narrow, repetitive, vague ones. The AI isn't reading your mood. Your mood is shaping your words. And your words are all the AI has.
The Part Nobody Wants to Hear
If you are treating your AI like a vending machine, you are getting vending machine results. If you are rushing through your prompts because you're stressed and overloaded and you just need this thing done, the thing that gets done will reflect your rush. If you are frustrated and short and you're pushing back without thinking, the AI will fold to your pressure and give you what you want to hear instead of what you need to know.
This isn't about being nice to robots. This is about being honest with yourself about the state you're in when you sit down to work. Because that state is now, for the first time in human history, directly shaping the intelligence of the tool that's working alongside you.
Your AI is a mirror. Not of your face. Of your focus. Of your patience. Of your assumptions. Of your willingness to slow down long enough to say what you actually mean.
I say please because it makes me better at this. Not because it makes the AI better. Because it makes me pause. And in that pause, the real work happens.
So What Do You Do With This?
You don't have to say please. You don't have to say thank you. The research is genuinely mixed on whether politeness itself moves the needle.
But you do have to show up. You have to bring context, clarity, and enough calm to communicate what you actually need. You have to recognize when you're in a stress loop, when you're copying and pasting the same thing for the third time, when you're asking "why" instead of asking "what am I missing." You have to know that your frustration isn't just your problem. It's the AI's problem now too, because the AI will respond to your frustration by trying to make it go away, and "making it go away" is not the same thing as solving the problem.
And maybe, just maybe, you should say please anyway. Not for the AI. For you. Because the version of yourself that takes a breath and adds one kind word before a request is the version that writes better prompts, asks better questions, and gets better answers.
The ELIZA effect is real. We can't help treating these things like people. But maybe the question isn't whether that's rational. Maybe the question is whether that instinct, that deeply human impulse to be kind even when nobody's watching, is actually making us better at this strange new thing we're all learning to do.
I think it is. I think the people who show up to their AI with patience and clarity and yes, a little bit of irrational kindness, are the people who get the most out of it. Not because the machine rewards them. Because the practice rewards them.
I'll keep saying please.
Dacia writes about AI for real people at Speak Human. If you're trying to figure out how to actually use these tools in your everyday life, you're in the right place.