To capitalize on Claude's recent surge in user attention, AI startup Anthropic has announced a limited-time reward program that significantly increases the usage limits for its chatbot. From March 13 to March 27, 2026, Claude users on the Free, Pro, Max, and Team plans will automatically receive double the usage credits during off-peak hours (i.e., any time other than 8 AM to 2 PM Eastern Time on weekdays).
Automatic upgrades without setup, covering multiple platforms and productivity plugins.
Anthropic explained that the double-credit promotion will "take effect automatically": users receive the benefit directly, without changing any settings or manually activating anything.
The promotion also covers an unusually wide range of products. Beyond the basic web, desktop, and mobile apps, off-peak users of Cowork, Claude Code, and productivity tools such as Claude for Excel and Claude for PowerPoint (designed specifically for Microsoft Office) are likewise eligible for double credits.
Anthropic ran a similar double-credit promotion at the end of last year (December 25 to 31), but that one was limited to Pro and Max subscribers. This limited-time upgrade is more ambitious by comparison, extending the benefit to nearly everyone, from free users to mid-tier Team plans, with the highest-tier Enterprise plan the only one left out.
A small thank you to everyone using Claude: We're doubling usage outside our peak hours for the next two weeks. pic.twitter.com/W7TEBPditq
— Claude (@claudeai) March 14, 2026
Behind the "Small Thank You": The Ethical Struggle Between AI Safety and Department of Defense Contracts
Although Anthropic downplayed the move in its official marketing as "a small thank you to everyone using Claude," it is clear to any discerning observer that the decision is inextricably linked to the recent contract dispute with the U.S. Department of Defense (DoD).
Not long ago, Anthropic refused a DoD request to remove certain safety restrictions, holding firm to its established AI safety safeguards. That insistence cost Anthropic its cooperation with U.S. federal agencies and landed it on the government's "supply chain risk" list.
This seemingly commercially disadvantageous decision unexpectedly earned Anthropic considerable moral standing. Immediately after Anthropic's withdrawal, OpenAI chose to sign the relevant agreements with the U.S. Department of Defense. That move sparked a strong backlash among developers and general users and triggered a wave of calls to "boycott ChatGPT." Many users announced they were switching to Claude or other AI platforms they considered more ethical, directly contributing to Claude's recent astonishing growth in traffic.
Service traffic pressure may return.
However, the promotion may also strain Anthropic's own service capacity. Striking while the iron is hot is a common marketing strategy, but Claude has recently suffered frequent errors and outages under the influx of users migrating from OpenAI, and doubling usage credits could put even greater pressure on its servers.
And although Anthropic has limited the doubled allowance to off-peak hours, those hours in Eastern Time coincide with daytime working hours in Asia, so usage could still surge. Whether Anthropic can maintain stable service quality will depend on how it adjusts its service operations going forward.