OpenAI CEO Sam Altman has done something no Silicon Valley titan has dared before: published a detailed, public blueprint telling governments exactly how to tax, regulate, and redistribute the unprecedented wealth that AI is about to generate — including wealth from his own company.
This isn’t a PR stunt. Or maybe it is. Either way, it matters.
The Claim That Changes Everything
Altman’s central argument is stark: AI superintelligence isn’t decades away. It’s close. Close enough that America needs a new social contract — one he compares in scale to the Progressive Era reforms of the early 1900s and Franklin Roosevelt’s New Deal during the Great Depression.
In a half-hour interview, Altman was blunt about what happens if policymakers sleepwalk through this moment: widespread job loss, destabilized societies, cyberattacks of historic scale, and — most chillingly — machines that humanity can no longer control.
The Two Threats Keeping Altman Up at Night
Of all the risks on the horizon, Altman singled out two as most immediate.
Cyberattacks. Senior tech, business, and government officials are already privately warning that next-generation AI models could enable a civilization-shaking cyberattack within the year. Altman didn’t dismiss the fear. “I think that’s totally possible,” he said. “I suspect in the next year, we will see significant threats we have to mitigate from cyber.”
Bioweapons. AI will cure diseases. Altman believes that deeply. But the same models capable of accelerating drug discovery can help a terrorist group engineer a novel pathogen. “That’s no longer a theoretical thing,” he said, “or it’s not going to be for much longer.”
Six Ideas That Could Reshape the Economy
OpenAI’s 13-page policy paper — titled “Industrial Policy for the Intelligence Age: Ideas to Keep People First” — reads less like a corporate whitepaper and more like a political manifesto. Here are the most consequential proposals inside it:
1. A Public Wealth Fund
Every American citizen would receive a direct stake in AI-driven economic growth through a nationally managed investment fund, seeded partly by AI companies. This is the document’s most radical idea — essentially a national dividend tied to the AI boom.
2. Robot Taxes
As AI hollows out the payroll-driven tax base that funds Social Security, Medicaid, and food assistance, OpenAI proposes shifting taxation toward capital gains and corporate income — and taxing automated labor directly.
3. The Four-Day Workweek
Rather than letting AI efficiency gains flow entirely to shareholders, OpenAI proposes incentivizing companies and unions to pilot 32-hour workweeks at full pay — treating productivity gains as time returned to workers.
4. A “Right to AI”
Framing it alongside literacy, electricity, and internet access, OpenAI argues affordable AI access should be guaranteed for workers, small businesses, schools, libraries, and underserved communities.
5. Rogue AI Containment Playbooks
In its most unsettling passage, OpenAI openly acknowledges scenarios where dangerous AI systems “cannot be easily recalled” — because they are autonomous and capable of self-replication. The proposed solution involves coordinated government response frameworks built in advance.
6. Auto-Triggering Safety Nets
The blueprint proposes economic tripwires: when AI-driven displacement hits preset thresholds, benefits — unemployment insurance, wage assistance, cash support — automatically expand. When conditions stabilize, they automatically phase out.
The Uncomfortable Question No One Is Asking Loudly Enough
Let’s be honest about what’s also happening here.
Altman has every financial incentive to hype superintelligence — higher valuations, more investment capital, greater geopolitical leverage. Publishing a visionary policy document positions OpenAI as the responsible adult in a room full of reckless competitors, a lane Anthropic carved out first. And proposing regulation before regulators act independently is a classic play to shape the rules of your own industry.
When asked directly why the public should trust him, Altman offered this: “I think almost everybody involved in our industry feels the gravity of what we’re doing… We all take that responsibility very seriously. We also think it’s very important that no one person is making the decisions by themselves that are going to impact all of us.”
A careful answer. A politician’s answer, some would say.
Why It Still Matters — Regardless of Motive
Here is what cannot be spun away: The CEO of one of the most powerful and best-funded AI companies on Earth is publicly stating that the technology he is racing to deploy may break capitalism as we know it, and that governments are unprepared for what is coming.
Whether you read that as altruism, strategy, or both — the admission itself is historic.
The debate Altman wants to start is worth having. The urgency he’s projecting is real, even if the motives behind projecting it are mixed. And the ideas in that 13-page document — radical as some of them are — deserve serious scrutiny from policymakers, not dismissal.
Superintelligence may or may not arrive on Altman’s timeline. But the moment when AI reshapes labor, wealth, and power is no longer hypothetical. The question now is whether the policy conversation catches up before the disruption does.
The man betting everything on superintelligence is telling the world the bet will change everything. That’s worth paying attention to — whatever his reasons.