
A roadmap for AI, if anyone will listen

While Washington’s break with Anthropic exposed the utter lack of any coherent principles governing artificial intelligence, a bipartisan coalition of thinkers has assembled something that the administration has so far refused to produce: a framework for what AI development should actually look like.

The Pro-Human Declaration was finalized before last week’s Pentagon-Anthropic standoff, but the collision of the two events could hardly have been more pointed.

“Something very remarkable has happened in the United States in the last four months,” Max Tegmark, an MIT physicist and AI researcher who helped lead the effort, said in conversation with this editor. “Suddenly, polling [is showing] that 95 percent of all Americans oppose the unregulated development of superintelligence.”

The newly published document, signed by hundreds of experts, former officials, and public figures, opens with the sobering observation that humanity stands at a fork in the road. One path, which the declaration calls a “race to replace,” sidelines humans first as workers, then as decision-makers, as unaccountable institutions and their machines gain power. The other leads to AI that vastly expands human potential.

The latter scenario rests on five key pillars: holding humans responsible, avoiding concentrations of power, protecting the human experience, protecting individual freedom, and holding AI companies legally accountable. Its more muscular provisions would expressly prohibit the development of superintelligence absent both scientific consensus that it can be done safely and genuine democratic buy-in; mandate off switches on powerful systems; and ban architectures capable of self-replication, autonomous self-improvement, or shutdown resistance.

The declaration’s timing makes its urgency easy to appreciate. Last Friday, Defense Secretary Pete Hegseth designated Anthropic, whose AI already runs on classified military platforms, a “supply chain risk” after the company refused to grant the Pentagon unrestricted use of its technology; the label is usually reserved for firms with ties to China. Hours later, OpenAI terminated its contract with the Department of Defense, a move legal experts say will be difficult to enforce in any meaningful way. All of this has laid bare just how costly congressional inaction on AI has become.

As Dean Ball, a senior fellow at the Foundation for American Innovation, told the New York Times afterward, “This is not just a contract dispute. This is the first conversation we’ve had as a country about control over AI systems.”

When we spoke, Tegmark reached for an analogy most people can understand. “You never have to worry that some drug company is going to release another drug that does a lot of harm before people figure out how to make it safe,” he said, “because the FDA won’t let them release anything until it’s safe enough.”

Washington turf wars rarely generate the kind of public pressure that changes laws. Instead, Tegmark sees child safety as the pressure point most likely to break the current impasse. Indeed, the declaration calls for mandatory pre-deployment testing of AI products — especially chatbots and companion apps aimed at young users — covering risks including increased suicidal ideation, deterioration of mental health conditions, and emotional manipulation.

“If some weird old man is texting an 11-year-old boy pretending to be a teenage girl and trying to get the boy to commit suicide, that man could go to jail for it,” Tegmark said. “We already have laws. It’s illegal. So why is it any different if a machine does it?”

He believes that once the principle of pre-release testing for children’s products is established, its scope will almost inevitably expand. “People will come along and be like — let’s add some more requirements. Maybe we should also check that it doesn’t help terrorists build bioweapons. Maybe we should check to make sure that superintelligence doesn’t have the ability to overthrow the U.S. government.”

It’s no small feat that former Trump adviser Steve Bannon and President Obama’s national security adviser Susan Rice signed the same document — along with former Joint Chiefs Chairman Mike Mullen and progressive religious leaders.

“What they agree on is that they’re all human,” Tegmark says. “If it’s going to come down to whether we want a future of humans or a future of machines, of course they’re going to be on the same side.”