After AI became a standard tool for enterprises, a phenomenon once dismissed as a "matter of feel" is quickly surfacing: LLMs (large language models) are getting "dumber." Developer Wisely Chen points out that so-called "LLM loss of intelligence" is not an urban legend. It has been continuously tracked through data, and it is now having a real impact on enterprise workflows.
He cites his own experience as an example. On April 15, Anthropic's Claude services suffered an across-the-board downgrade: claude.ai, the API, and Claude Code all showed "Degraded Performance." This was not merely slower responses or occasional errors; response quality clearly collapsed, and at times the service was unusable, delaying all three of his development tasks that day.
For individual developers, this kind of situation may only mean reduced efficiency, but for enterprise IT teams, the impact is multiplied. When multiple engineers on a team simultaneously rely on AI tools for coding, documentation, and workflow automation, a single model downgrade drags down the whole team's productivity at once, turning into a measurable loss of time and money.
Does it feel like AI is getting dumber? Data confirms it has “already degraded”
Wisely Chen notes that claims like "GPT is getting dumber" and "Claude isn't as good as before" have circulated in the community for a long time but long lacked objective data to back them up. Only recently, with the emergence of platforms that continuously monitor model quality, has the phenomenon been quantified for the first time.
Among them, StupidMeter runs round-the-clock automated tests on mainstream models from OpenAI, Anthropic, Google, and others, tracking metrics such as correctness, reasoning ability, and stability. Unlike traditional one-off benchmarks, these systems work more like the uptime monitoring enterprises apply to an API or service, observing how a model's performance fluctuates in real usage environments.
The results are quite straightforward: most mainstream models are currently in a warning or degraded state, with only a few maintaining normal performance. This suggests unstable model quality is an industry-wide phenomenon rather than a problem limited to a single product.
LLMs quietly lose intelligence, impacting enterprise AI workflow stability
For enterprises, this kind of change means AI has shifted from a “tool for improving efficiency” to a “variable that affects stability.” If a company’s daily workflows—everything from writing code, to doing code reviews, to producing documents and analysis reports—are already highly dependent on LLMs, then when the model’s reasoning ability drops or response quality deteriorates on a given day, these issues won’t occur locally like traditional software bugs. Instead, they will seep into every part of the workflow that uses AI at the same time.
More importantly, these fluctuations are often hard to predict and difficult to detect in real time. Most enterprises have no mechanism to continuously monitor model quality; they usually realize the model is the problem only after outputs turn abnormal or team efficiency declines. At that point, "loss of intelligence" is no longer just a subjective user experience. It becomes a systemic risk that directly affects the rhythm of business operations.
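The kind of continuous quality monitoring described above can be approximated with a very small canary suite. Below is a minimal sketch, assuming a hypothetical `ask_model` callable that sends a prompt to whatever LLM the team uses and returns its text answer; the prompts and thresholds are illustrative assumptions, not anyone's published methodology. A real deployment would schedule the probe (say, hourly) and alert when the pass rate drops.

```python
import statistics

# Canary prompts with known answers (assumed examples, not from any
# published benchmark). Checks are substring-based to tolerate wording.
CANARY_SUITE = [
    ("What is 17 * 24?", "408"),
    ("Spell 'necessary' backwards.", "yrassecen"),
    ("What is the capital of Australia?", "Canberra"),
]

def run_canary(ask_model):
    """Return the fraction of canary prompts answered correctly."""
    passed = sum(
        1 for prompt, expected in CANARY_SUITE
        if expected.lower() in ask_model(prompt).lower()
    )
    return passed / len(CANARY_SUITE)

def is_degraded(history, latest, window=24, drop=0.2):
    """Flag degradation when the latest pass rate falls more than
    `drop` below the rolling mean of recent runs."""
    recent = history[-window:]
    if not recent:
        return False
    return latest < statistics.mean(recent) - drop

# Example: a stubbed model that only gets the arithmetic question right,
# checked against a history of near-perfect runs.
history = [1.0, 1.0, 0.9, 1.0]
latest = run_canary(lambda p: "408" if "17" in p else "I'm not sure.")
print(latest, is_degraded(history, latest))  # low pass rate, flagged True
```

The rolling-mean comparison is the key design choice: it flags a sudden drop relative to the model's own recent baseline rather than against an absolute score, which is closer to how availability monitoring treats an API.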
When AI becomes water and electricity, stability becomes the new key metric
Wisely Chen compares the role of LLMs to “water and electricity for modern enterprises.” When AI has been deeply integrated into day-to-day operations and becomes an indispensable foundational capability, the importance of stability naturally rises as well.
In the past, when enterprises evaluated AI tools, they mainly focused on model capability, price, and features. But as the "loss of intelligence" phenomenon comes into view, another, more critical metric is emerging: stability. When model quality can change without notice, enterprises are no longer just "using AI"; they must start taking on a new form of infrastructure risk. More worrying still, for cutting-edge large language models, degradation may keep recurring as long as the underlying compute shortage remains unsolved.
This article first appeared in Lianxin ABMedia: Data reveals “Claude loss of intelligence” is not an urban legend—AI model instability becomes an enterprise risk.