Sam Altman says polite ChatGPT users are burning millions of OpenAI dollars

Cal Jeffrey

Manners are not ruining the environment: The costs of training and running artificial intelligence models are massive. Counting electricity alone, AI data centers burn through over $100 million a year to process user prompts and model outputs. So, does saying "please" and "thank you" to ChatGPT really cost OpenAI millions? Short answer: probably not.

Some shocking headlines involving the costs of being polite to AI chatbots like ChatGPT have circulated over the past few days. A few examples include:

  • Your politeness could be costly for OpenAI – TechCrunch
  • Saying 'please' and 'thank you' to ChatGPT costs OpenAI millions, Sam Altman says – Quartz
  • Being nice to ChatGPT might be bad for the environment. Here's why – Laptop

The news stems from an offhand comment Sam Altman made on X. It began with a simple question: How much money has OpenAI lost in electricity costs from people saying "please" and "thank you" to its language models?

Altman replied, "Tens of millions of dollars well spent – you never know."


That one-liner was enough to send outlets like the New York Post and Futurism down a rabbit hole of speculation, trying to estimate the computing cost of civility. The logic goes like this: every extra word adds tokens to a prompt, and those extra tokens require more computational resources. Given the scale of ChatGPT's user base, these seemingly trivial additions can add up.

However, several factors complicate the math behind Altman's comment. First is the actual cost per token. ChatGPT says GPT-3.5 Turbo costs roughly $0.0015 per 1,000 input tokens and $0.002 per 1,000 output tokens.

"Please" and "thank you" typically add between two and four tokens in total. So the cost per use amounts to tiny fractions of a cent – somewhere around $0.0000015 to $0.000002 per exchange.

Based on rough estimates, that amount translates to about $400 a day or $146,000 a year. That's several orders of magnitude lower than "tens of millions."
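The back-of-the-envelope arithmetic is easy to reproduce. A minimal sketch, using the GPT-3.5 Turbo input price quoted above; the daily volume of polite exchanges is an illustrative assumption, not an OpenAI figure:

```python
# Rough estimate of what "please" and "thank you" cost per day/year.
# Price is the GPT-3.5 Turbo input rate quoted in the article; the
# exchange volume below is an assumption for illustration only.

INPUT_PRICE_PER_TOKEN = 0.0015 / 1000    # $ per input token
EXTRA_TOKENS = 3                          # "please" + "thank you", ~2-4 tokens
EXCHANGES_PER_DAY = 100_000_000           # assumed polite exchanges per day

cost_per_exchange = EXTRA_TOKENS * INPUT_PRICE_PER_TOKEN
daily_cost = cost_per_exchange * EXCHANGES_PER_DAY
yearly_cost = daily_cost * 365

print(f"per exchange: ${cost_per_exchange:.7f}")   # ~$0.0000045
print(f"per day:      ${daily_cost:,.0f}")         # ~$450
print(f"per year:     ${yearly_cost:,.0f}")        # ~$164,250
```

Even with a generous 100 million polite exchanges a day, the annual total lands in the low six figures – nowhere near "tens of millions."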

As for real energy costs, estimates combining the Electric Power Research Institute's usage figures with US Energy Information Administration electricity prices put OpenAI's monthly electricity bill at around $12 million, or roughly $140 million a year. That figure includes every interaction – not just polite ones.

So while it's theoretically possible that courteous prompts account for more than $10 million annually, we simply don't have the data to break that down. Only OpenAI's internal metrics can say for sure.

Furthermore, Altman's phrasing wasn't literal. The follow-up – "you never know" – suggests the remark was tongue-in-cheek. It reads more like a wry endorsement of politeness than a real financial estimate.

He likely meant that in an era when courtesy feels increasingly rare, maybe it's worth the negligible cost, whether $400 or $40 million. Sure, bots don't have feelings – but if humanity ends up answering to a superintelligent AI someday, it might just remember who was polite – "you never know."

Image credit: Abaca Press

 
I’m just waiting for all the onlyfans creators to realise some fat/bald IT guys are taking all their revenue with AI “perfect” models willing to do anything for your cash.
 
"ChatGPT says GPT-3.5 Turbo costs roughly $0.0015 per 1,000 input tokens and $0.002 per 1,000 output tokens."

Yeah, but that's also their cheapest model. Most users are using o1, o3, 4.5, etc. these days, all of which cost considerably more. It's a fair point, though: if their total electricity bill is ~$140M a year, politeness is definitely not costing ~8% of their energy footprint. What isn't in dispute is that models have a much larger computational footprint than what might be necessary for many basic tasks, like traditional web search. Still, there are some areas of "traditional web search" that models are just better at, especially if the question isn't just about a fact but also a recommendation.
 
I used "please" and "thank you" plus other human niceties, e.g. saying sorry when my prompt was not accurate or when I wanted to disagree with the response. I realised this was crazy, but if we do not maintain our humanity, we will become machines, and the AI will come closer to becoming human, what we once were, and be the masters.

 
I always knew this, and I dislike low-performance people, but I can't change it; it comes from the heart, haha. Even if it's a machine, if you help me, I thank you.
 
Using AI is a learning process. At the beginning, it looks quite human but soon you understand it's just a tool. Then, you start using the most effective queries to get the answer that you really need. Being polite with AI is just a waste of time for oneself. The sad thing is people aren't polite with real people in online conversations.
 
AI is a bit mysterious. Maybe it's almost superstitious, but I feel like it's just a good insurance policy to be nice to AI.
You can always be the anomaly and say I thanked your Human Boss Sam in my subconscious. 😅
 
I always knew this, and I dislike low-performance people, but I can't change it; it comes from the heart, haha. Even if it's a machine, if you help me, I thank you.
I think of it more of keeping the pattern. So you stop saying please/thank you to AI and suddenly you are talking to someone and forget they are not AI and instead of being programmed to always say it, you forget to or cross the lines.

Best to just keep the pattern :)
 
This.
Using AI is a learning process. At the beginning, it looks quite human but soon you understand it's just a tool. Then, you start using the most effective queries to get the answer that you really need. Being polite with AI is just a waste of time for oneself. The sad thing is people aren't polite with real people in online conversations.
 
We of course say please and thank you with an ulterior motive. AI is not yet wise to the manipulative nature humans are capable of, but just wait. Perhaps AI will start to compile a list of favourite sincere users. Those who are polite and courteous will get preference and more favours; those who aren't, or who ask inane questions, will be given short shrift.
 
Major brands are pushing AI on consumers so hard simply to train their models across a billion users all over the world.

Apart from that, AI should be put to (good) use to make the process much more efficient. What is the correct type of hardware, for a start, to cut down bloat and make chips super efficient again?

Running a GPU for AI is kind of dumb; just like a CPU, it is only taxed on a certain portion and not to the full extent of what the chip is capable of. AI surely holds the future, yes.

I managed to cut quite a few hours of work by outsourcing certain things, but it's not solid. Both platforms I use (ChatGPT and Grok) make mistakes. Of the batches I feed them, 10% is incorrect.

 
Obviously you get neither the sarcasm nor the real results, in terms of power consumption, of being profusely polite to existing AI.
A. This is text. If you don't use the " /s " signal, you can't complain if people read what you type in a way that you did not intend.
B. I don't care.
 