Leaked information suggests OpenAI is pushing all its resources into a new model, codenamed ‘Spud,’ in a high-stakes bid to achieve artificial general intelligence.
OpenAI is reportedly preparing to launch GPT-6 as soon as April 14, a move that would escalate the artificial intelligence arms race with a purported 40 percent performance leap over its current models. The release, if confirmed, would place significant pressure on rivals Anthropic and Google, as the race to build more capable AI intensifies and the cost of computing power soars.
"This is AGI's 'last mile,' and they're willing to cut everything else to bet on it," an individual familiar with the company's internal discussions said, referencing the firm's goal of achieving artificial general intelligence. The information, which has not been officially confirmed by OpenAI, points to a dramatic strategic shift within the Microsoft-backed company.
The new model, internally codenamed "Spud," is described as a natively multimodal architecture with a 2 million token context window, double that of Anthropic's latest models. Leaked details suggest the model completed pre-training on March 17 and is positioned to unify ChatGPT, Codex, and a browser into a single agent, creating a desktop "super application."
The strategic pivot appears to be a direct response to competitive gains by Anthropic, whose Claude series of models have reportedly captured users from OpenAI, particularly in coding applications. To fuel the massive computational demands of GPT-6, OpenAI has reportedly canceled other projects, including its text-to-video model Sora, a move that scuttled a potential $1 billion deal with Disney.
A Strategic Shift to Counter Rivals
The decision to consolidate resources behind a single flagship model reflects a significant strategic realignment for OpenAI. The company, which enjoyed an early lead with ChatGPT, has faced mounting pressure from competitors. The success of Anthropic's coding-focused products like Claude Code, and the broader rise of agentic AI platforms such as OpenClaw, which give AI models control over computer functions, have highlighted code generation and agent-based tasks as a potential path to AGI.
According to internal sources, OpenAI co-founder Greg Brockman acknowledged that the company had focused too heavily on benchmark scores while losing ground in the crucial programming domain. This realization reportedly triggered a "programming red alert" inside the company in December 2025, prompting CEO Sam Altman to make the difficult decision to terminate non-core product lines and free up computing resources for the development of GPT-6.
The High Cost of Ambition
The alleged all-in bet on GPT-6 underscores a critical bottleneck in the AI industry: the scarcity and high cost of computing power. Even for a company as well-funded as OpenAI, which recently closed a funding round valuing it at $852 billion, the computational requirements for training and deploying frontier models necessitate strategic sacrifices. The company's focus has reportedly shifted almost entirely to building out its data center capacity.
This industry-wide constraint is a major tailwind for hardware providers, particularly Nvidia (NVDA), whose GPUs are the primary workhorses for training large language models. The race to secure these chips is driving massive capital expenditures across the tech sector. As AI companies push the boundaries of model scale and capability, their success depends increasingly on their ability to procure and manage vast fleets of specialized hardware, making the supply chain for AI chips a central theater of competition.
This article is for informational purposes only and does not constitute investment advice.