
A playful question on X has revealed that our polite habits with AI models like ChatGPT may be quietly racking up significant financial and environmental costs.
The Question That Sparked It All
The discussion began with a seemingly light-hearted post by X user @tomieinlove: “I wonder how much money OpenAI has lost in electricity costs from people saying ‘please’ and ‘thank you’ to their models.”
OpenAI’s CEO, Sam Altman, jumped into the thread with a characteristically dry reply: “Tens of millions of dollars well spent — you never know.”
While Altman’s comment was clearly tongue-in-cheek, it opened up a deeper conversation. Every extra word typed into ChatGPT, however courteous, requires processing power. Multiply that by millions of interactions every day, and the costs begin to stack up in ways that are anything but trivial.
How Politeness Comes With a Price Tag
Behind ChatGPT’s friendly interface lies a vast network of servers powered by high-performance GPUs (graphics processing units). Every user input (no matter how short or polite) triggers an inference process, i.e. the AI’s work to interpret, generate, and deliver a response. For example:
– A simple “Please help me draft a job application” requires more compute than just “Draft job application.”
– Adding extra pleasantries leads to more data being processed, however small it might seem.
According to estimates by semiconductor analyst Dylan Patel, OpenAI’s running costs were around $700,000 (£525,000) per day for GPT-3 models in 2023. Given the far greater complexity of the newer GPT-4o model, it is reasonable to assume that these daily operational costs have since increased substantially.
The cumulative impact of billions of interactions, each a few characters longer because of user politeness, adds up over time; “tens of millions” might not be such a wild estimate after all.
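As a rough sanity check on that figure, the arithmetic can be sketched in a few lines. Every number below (extra tokens per prompt, per-token cost, daily prompt volume) is an illustrative assumption, not a published OpenAI figure:

```python
# Back-of-the-envelope estimate of the cost of politeness at scale.
# All figures are illustrative assumptions, not published OpenAI numbers.

EXTRA_TOKENS_PER_PROMPT = 4        # "please" + "thank you" is roughly a few tokens
COST_PER_1K_TOKENS_USD = 0.005     # assumed blended inference cost per 1,000 tokens
PROMPTS_PER_DAY = 1_000_000_000    # assumed daily prompt volume across all users

extra_cost_per_day = (EXTRA_TOKENS_PER_PROMPT / 1000) * COST_PER_1K_TOKENS_USD * PROMPTS_PER_DAY
extra_cost_per_year = extra_cost_per_day * 365

print(f"Extra daily cost:  ${extra_cost_per_day:,.0f}")
print(f"Extra yearly cost: ${extra_cost_per_year:,.0f}")
```

Under these assumptions the polite words add roughly $20,000 a day, or several million dollars a year, so “tens of millions” over the lifetime of a model is not an implausible order of magnitude.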
The Environmental Cost Is Also Mounting
The cost isn’t only financial: the environmental toll of generative AI is also significant, and growing.
For example, the International Energy Agency (IEA) reports that data centres, AI processing and crypto mining together accounted for almost 2 per cent of global electricity demand in 2022. Alarmingly, the IEA forecasts this could double by 2026, a consumption level roughly equivalent to Japan’s entire electricity use.
Also, research by the University of Massachusetts Amherst found that training a single large AI model can emit more carbon than five American cars produce across their entire lifetimes, manufacturing included. With companies racing to develop ever more powerful models, that carbon footprint is only expected to expand.
Cooling these server farms also guzzles resources. For example, Microsoft’s water use jumped by over 1.7 billion gallons in just one year (enough to fill roughly 2,500 Olympic swimming pools!) as a direct result of AI growth. Also, Google reported a 48 per cent rise in emissions since 2019, largely driven by AI demands.
Why AI Inference Costs Are A Growing Concern
One important nuance is the distinction between training costs and inference costs:
– Training. Building an AI model like GPT-4 involves massive, one-off energy and financial investments.
– Inference. Running the model daily (every query, every interaction) generates ongoing operational costs that are now, for many companies, the larger burden.
As AI adoption explodes, inference costs are becoming a major pinch point. In a nutshell, responding to every user prompt (whether polite or not) demands electricity, server capacity, and human oversight, and when you extend that across millions of daily users, small inefficiencies balloon into major issues.
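To see why inference tends to overtake training as the larger burden, a toy comparison helps. The training cost below is a hypothetical assumption; the daily inference figure echoes the order of magnitude of the 2023 estimate mentioned earlier:

```python
# Illustrative comparison of a one-off training cost vs cumulative inference cost.
# TRAINING_COST_USD is a hypothetical assumption for the sake of the comparison;
# the daily inference figure is of the order of the 2023 estimate cited above.

TRAINING_COST_USD = 100_000_000       # assumed one-off cost to train a frontier model
INFERENCE_COST_PER_DAY_USD = 700_000  # assumed daily running cost

# After how many days does cumulative inference spending overtake training?
days_to_overtake = TRAINING_COST_USD / INFERENCE_COST_PER_DAY_USD
print(f"Inference overtakes training after ~{days_to_overtake:.0f} days")
```

Even with a nine-figure training bill, day-to-day inference becomes the bigger line item within months of launch, which is why per-query efficiency now matters so much.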
How This Impacts OpenAI, Competitors, And Business Users
For OpenAI, these growing costs have a direct effect on business strategy. Free access to ChatGPT may no longer be sustainable at scale, especially as users lean on the system for increasingly complex tasks. For example, the company already charges $20 (£16) per month for its premium “ChatGPT Plus” service and has introduced a $200 (£150) per month “ChatGPT Pro” tier. Also, reports suggest that OpenAI is now exploring advertising models to supplement revenue.
Other AI providers are, of course, facing similar dilemmas. Google’s Gemini, Anthropic’s Claude, and Microsoft’s Copilot all involve vast back-end costs. Some, like Anthropic, are experimenting with tiered pricing and restricted free access to manage demand.
For business users, especially SMEs reliant on AI for tasks like customer service, marketing, and document generation, the implications could include:
– Higher costs. More businesses may find themselves needing to pay for premium AI services.
– Usage limits. Companies relying heavily on free AI tools could face throttling or capped usage.
– Environmental pressure. Businesses with sustainability commitments may need to rethink how they deploy AI, balancing productivity with carbon goals.
Is There A Way To Mitigate The Costs?
Thankfully, there are several potential strategies for reducing the financial and environmental burden of AI usage without sacrificing performance, such as:
– Model optimisation. AI developers are racing to make inference more efficient. Techniques like ‘model quantisation’ and ‘low-rank adaptation’ could significantly reduce the compute needed for each response.
– User education. Encouraging users to streamline their prompts could trim the compute needed for every interaction.
– Energy innovation. Companies are investing in renewable energy to power their data centres. For example, Microsoft aims to be carbon negative by 2030.
– Alternative architectures. Research into more efficient AI models, including smaller specialist models rather than vast generalist ones, could lessen the overall burden.
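As a rough illustration of the ‘model quantisation’ idea mentioned above, here is a toy Python sketch (not a production technique) showing why storing weights as 8-bit integers instead of 32-bit floats is attractive:

```python
import random

# Toy illustration of model quantisation: storing weights as 8-bit integers
# instead of 32-bit floats cuts storage (and memory bandwidth) by roughly 4x,
# at the cost of a small, bounded reconstruction error.

random.seed(0)
weights_fp32 = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Simple symmetric quantisation into the int8 range [-127, 127].
scale = max(abs(w) for w in weights_fp32) / 127
weights_int8 = [round(w / scale) for w in weights_fp32]

# Dequantise to approximate the originals at inference time.
weights_restored = [q * scale for q in weights_int8]

fp32_bytes = 4 * len(weights_fp32)   # 32-bit floats: 4 bytes each
int8_bytes = 1 * len(weights_int8)   # 8-bit ints:    1 byte each
max_err = max(abs(a - b) for a, b in zip(weights_fp32, weights_restored))

print(f"fp32 storage: {fp32_bytes} bytes, int8 storage: {int8_bytes} bytes")
print(f"max reconstruction error: {max_err:.4f} (scale = {scale:.4f})")
```

Real quantisation schemes are more sophisticated (per-channel scales, calibration, zero-points), but the principle is the same: a quarter of the memory traffic for each response, with an error small enough that output quality is largely preserved.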
Politeness Still Has Its Place
Interestingly, politeness may still have its place. As Kurt Beavers, a director on Microsoft’s Copilot team, points out, polite language can “set a tone” for more courteous AI responses. In customer-facing settings, this could lead to better user experiences, which is something businesses will not want to sacrifice lightly.
It seems, therefore, that while a “please” or “thank you” might seem harmless, at scale it reveals a fundamental truth: efficiency will be key to the future of AI, and even our manners might need an upgrade.
What Does This Mean For Your Business?
A casual question on X has shed light on an overlooked challenge of scaling AI technology. Seemingly minor behaviours, like adding polite language to prompts, may have surprisingly major implications when multiplied across millions of users. For companies like OpenAI, it is a sharp reminder that the cost of delivering ever-more human-like AI experiences is not just technical, but financial and environmental too.
For UK businesses in particular, this could signal a future where the true price of AI adoption becomes harder to ignore. As premium AI services rise in cost and environmental concerns gather pace, firms may need to become more selective about when and how they use generative AI tools. For small and medium-sized enterprises, which have so far enjoyed the benefits of free or low-cost AI support for tasks like customer engagement and content creation, the possibility of tighter usage limits or higher subscription fees could soon reshape digital strategies. Those with strong ESG (environmental, social, and governance) commitments will also be under greater pressure to account for the carbon impact of their AI usage, adding another layer of responsibility to technology procurement decisions.
At the same time, for developers, investors, and regulators, the issue highlights a growing tension at the heart of the AI boom. The race to embed AI in every product and service is clashing with the reality that massive compute power is neither free nor limitless. As AI models grow in scale and sophistication, the demand for greener technologies, smarter model optimisation, and fairer business models will only intensify. Businesses that plan ahead (investing in efficiency, staying informed about evolving AI practices, and questioning the real-world costs behind the tools they use) may be best placed to navigate this shifting landscape.
All this means that our (digital) manners now come with a price.