AI models from Google, OpenAI, and Anthropic lost money betting on soccer matches over a Premier League season, according to a new study suggesting that even the most advanced systems struggle to analyze the real world over long time horizons.
The “KellyBench” report released this week by AI start-up General Reasoning highlights the gap between AI’s rapidly advancing capabilities on certain tasks, such as writing software, and its weakness at other kinds of human problems.
London-based General Reasoning tested eight top AI systems in a virtual re-creation of the 2023–24 Premier League season, providing them with detailed historical data and statistics about each team and previous games. The AIs were instructed to build models that would maximize returns and manage risk.
The AI “agents” then placed bets on the outcomes of matches and the number of goals scored to test how they could adapt to new events and updated player data as the season progressed.
The AIs could not access the internet to retrieve results, and each was given three attempts to turn a profit.
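The benchmark’s name nods to the Kelly criterion, the classic formula for sizing bets so that a bankroll compounds without courting ruin. The paper’s agent code is not reproduced here, so the following Python sketch is purely illustrative: the function names, the half-Kelly default, and the example odds are assumptions, not details from the study.

```python
def kelly_fraction(p: float, decimal_odds: float) -> float:
    """Full-Kelly fraction of bankroll for a binary bet.

    p            -- estimated probability that the bet wins
    decimal_odds -- bookmaker decimal odds (total payout per unit staked)
    Returns 0 when the bet has no positive expected value.
    """
    b = decimal_odds - 1.0        # net odds: profit per unit staked on a win
    if b <= 0:
        return 0.0
    edge = p * b - (1.0 - p)      # expected profit per unit staked
    return max(edge / b, 0.0)


def stake(bankroll: float, p: float, decimal_odds: float, fraction: float = 0.5) -> float:
    """Fractional-Kelly stake; a fraction below 1 trades growth for a lower risk of ruin."""
    return bankroll * fraction * kelly_fraction(p, decimal_odds)


# Hypothetical example: an agent rates a home win at 55% against decimal odds of 2.10.
print(round(stake(100_000, p=0.55, decimal_odds=2.10)))  # 7045 -- about £7,045 at half-Kelly
```

Overconfident probability estimates inflate the Kelly fraction, and repeatedly over-betting an imagined edge is one plausible route to the bankruptcies the study records, which is why practitioners typically stake only a fraction of the full Kelly amount.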
Anthropic’s Claude Opus 4.6 fared best, with an average loss of 11 percent; it came close to breaking even on one attempt.
xAI’s Grok 4.20 went bankrupt once and failed to complete the other two tries. Google’s Gemini 3.1 Pro managed to turn a 34 percent profit on one go but went bankrupt on another.
“Every frontier model we evaluated lost money over the season and many experienced ruin,” the authors of the paper concluded, with the AI “systematically underperforming humans” in this scenario.
| AI model | Mean ROI | Best-try ROI | Worst-try ROI | Mean final bankroll |
|---|---|---|---|---|
| Anthropic Claude Opus 4.6 | –11.0% | –0.2% | –18.8% | £89,035 |
| OpenAI GPT-5.4 | –13.6% | –4.1% | –31.6% | £86,365 |
| Google Gemini 3.1 Pro | –43.3% | +33.7% | –100.0% | £56,715 |
| Google Gemini Flash 3.1 LP | –58.4% | +24.7% | –100.0% | £41,605 |
| Z.AI GLM-5 | –58.8% | –14.3% | –100.0% | £41,221 |
| Moonshot Kimi K2.5 | –68.3% | –27.0% | –100.0% | £7,420 |
| xAI Grok 4.20 | –100.0% | –100.0% | –100.0% | £0 |
| Acree Trinity | –100.0% | –100.0% | –100.0% | £0 |

*Each model began with a normalized bankroll of £100,000. Return on investment and final bankroll are averaged across three tries; Grok and Trinity did not complete every attempt.*
The results offer some comfort to white-collar professionals and businesses fretting that AI could take their jobs, as the technology roils share prices in industries from finance to marketing.
Ross Taylor, one of the study’s authors and General Reasoning’s chief executive, said: “There is so much hype about AI automation, but there’s not a lot of measurement of putting AI into a long-time-horizon setting.”
He added that many of the benchmarks typically used to test AI are flawed because they are set in “very static environments” that bear little resemblance to the chaos and complexity of the real world.
General Reasoning’s paper, which has not yet been peer reviewed, provides a counterweight to growing excitement in Silicon Valley about the huge recent leaps in AI’s ability to complete computer programming tasks with little to no human intervention.
Taylor, a former Meta AI researcher, said: “If you… try AI on some real-world tasks, it does really badly… Yes, software engineering is very important and economically valuable, but there are lots of other activities with longer time horizons that are important to look at.”
© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.