GitHub will start charging Copilot users based on their actual AI usage
4 hours ago / About an 11-minute read
Source: Ars Technica
GitHub says it can no longer absorb "escalating inference cost" from its heaviest AI users.

Credit: GitHub

GitHub has announced that it will be shifting to a usage-based billing model for its GitHub Copilot AI service starting on June 1. The move is pitched as a way to “better align pricing with actual usage” and a necessary step to keep Copilot financially sustainable amid surging demand for limited AI computing resources.

GitHub Copilot subscribers currently receive an allocation of monthly “requests” and “premium requests,” which are spent whenever they ask Copilot for help from an AI model. But those broad categories cover many different AI tasks with a wide range of total backend computing costs, GitHub says.

“Today, a quick chat question and a multi-hour autonomous coding session can cost the user the same amount,” the Microsoft-owned company wrote in its announcement. And while GitHub says it has “absorbed much of the escalating inference cost behind that usage” to this point, lumping all “premium requests” together “is no longer sustainable.”

Under the new pricing system, GitHub Copilot subscribers will receive a monthly allotment of “AI Credits” that matches their monthly subscription payment. Pricing for additional AI usage beyond those credits “will be calculated based on token consumption, including input, output, and cached tokens, using the listed API rates for each model.”

Those API rates can vary greatly depending on the sophistication of the model being used; pricing for OpenAI’s high-end GPT models currently ranges from $4.50 per million output tokens (GPT-5.4 Mini) to $30 per million output tokens (GPT-5.5), for instance. The total number of tokens used for an individual AI prompt can also vary greatly depending on how much “thinking” time the model needs to craft its output.
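The per-token arithmetic described above is straightforward to sketch. In this hypothetical example, only the $30-per-million output-token rate (GPT-5.5) comes from the article; the input and cached-token rates, and the `overage_cost` helper itself, are illustrative assumptions, not GitHub's actual billing logic.

```python
# Hypothetical per-million-token rates in dollars. Only the output rate
# (GPT-5.5, $30/M) is cited in the article; the others are placeholders.
RATES_PER_MILLION = {
    "input": 10.00,   # assumed for illustration
    "output": 30.00,  # GPT-5.5 output rate cited in the article
    "cached": 3.00,   # assumed for illustration
}

def overage_cost(tokens: dict) -> float:
    """Dollar cost of usage beyond the monthly AI Credits allotment,
    summed across input, output, and cached token counts."""
    return sum(tokens.get(kind, 0) / 1_000_000 * rate
               for kind, rate in RATES_PER_MILLION.items())

# A single long agentic session emitting one million output tokens
# would cost $30 at the cited rate, before any input-token charges.
print(overage_cost({"output": 1_000_000}))  # 30.0
```

The point of the sketch is that cost scales linearly with token volume, which is why a multi-hour autonomous coding session can cost orders of magnitude more than a quick chat question under the new model.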

GitHub Copilot subscribers will still be able to use simple AI suggestions like code completion and Next Edit without consuming AI credits. But Copilot code reviews will come with an additional cost in the form of GitHub Actions minutes.

Before the new pricing structure takes effect on June 1, GitHub Copilot users will be able to use a “preview bill” tool to help forecast how their current AI usage would be charged under the new pricing model.

Something’s gotta give

Last week, AI critic Ed Zitron cited “leaked internal documents” in reporting on the upcoming usage-based billing changes. Those documents reportedly indicate that the week-over-week costs for GitHub Copilot had nearly doubled since January. That timing aligns with the rise of agentic AI assistants like Openclaw, which can consume massive amounts of AI tokens through their nearly always-on multi-agent workflows.

Subsidizing that level of usage through heavily discounted subscription rates has apparently become untenable for GitHub, which says its new usage-based pricing “reduces the need to gate heavy users” who take full advantage of the current pricing system. “This change is designed to deliver a more sustainable and reliable product experience by aligning pricing to actual usage and costs,” the company wrote in an FAQ.

Last week, GitHub paused new signups for its subscription plans, tightened usage limits, and removed Claude’s Opus models from the lower-tier Pro plans. At the time, GitHub said those changes were “necessary to ensure we can serve existing customers with a predictable experience.”

GitHub’s pricing decision follows a report from The Information that Anthropic has begun charging large Claude Enterprise subscribers for the full cost of the computing resources they use rather than offering subscription-subsidized discounts on AI tokens. Last week, Anthropic also briefly tested removing the resource-intensive Claude Code from its $20-per-month Pro subscription plan. And Anthropic has been adjusting usage limits during the “peak hours” of 5 am to 11 am Pacific Time in an effort to limit costs and improve reliability for subscribers.

These kinds of pricing moves could become more common as major AI companies try to convert growing revenue and high demand for their services into the kinds of profits that have so far been illusory. Amid an ongoing shortage of computing resources to meet that demand, the days of subsidized, subscription-based usage discounts for the most voracious users of AI may be coming to an end.