Superintelligence Fears Rise as Bipartisan Coalition Signs Pro-Human Declaration



On the last Friday in February, Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk," capping a turbulent stretch for American AI policy. Days earlier, a bipartisan coalition had finalized the Pro-Human Declaration, spurred by recent polling indicating that 95% of Americans oppose an unregulated race to superintelligence. The declaration calls for mandatory pre-deployment testing of AI products, specifically chatbots and companion apps aimed at younger demographics, citing risks of increased suicidal ideation, worsened mental health conditions, and emotional manipulation. Signed by figures including Steve Bannon and Susan Rice, the initiative reflects a growing awareness of these emerging technological challenges.
THE EMERGING FRAMEWORK: A RESPONSE TO UNREGULATED AI
There’s something quite remarkable that has happened in America just in the last four months, according to Max Tegmark, the MIT physicist and AI researcher who helped organize the effort. Polling now indicates that 95% of Americans oppose an unregulated race to superintelligence. This shift in public opinion, coupled with recent events, has spurred the creation of a comprehensive framework for responsible AI development, spearheaded by a bipartisan coalition of experts. The Pro-Human Declaration, finalized before last week’s Pentagon-Anthropic standoff, represents a critical attempt to establish clear guidelines and mitigate the potential risks associated with rapidly advancing artificial intelligence.
THE PRO-HUMAN DECLARATION: FIVE KEY PILLARS
The Pro-Human Declaration outlines a specific path forward, contrasting it with the “race to replace” scenario, in which humans are superseded by machines. This alternative focuses on AI that dramatically expands human potential. The declaration rests on five core pillars: maintaining human control over AI systems; preventing the concentration of power in the hands of a few institutions or corporations; safeguarding the human experience; ensuring individual liberty; and establishing robust legal accountability for AI companies. These pillars represent a fundamental commitment to prioritizing human well-being and democratic values in the development and deployment of AI technologies.
SUPERINTELLIGENCE PROHIBITION AND SAFETY MEASURES
A particularly forceful provision within the declaration is an outright prohibition on superintelligence development until there is both scientific consensus that it can be built safely and genuine democratic buy-in. The document also mandates off-switches on powerful AI systems, creating a safeguard against runaway intelligence, and bans architectures capable of self-replication, autonomous self-improvement, or resistance to shutdown. These measures are designed to prevent uncontrolled and potentially dangerous AI evolution, reflecting a cautious approach that prioritizes safety and control over the pursuit of unchecked technological advancement.
PRE-DEPLOYMENT TESTING: A CHILD SAFETY FOCUS
The declaration’s release coincides with a period that underscores its urgency. On the last Friday in February, Defense Secretary Pete Hegseth designated Anthropic—whose AI already runs on classified military platforms—a “supply chain risk” after the company refused to grant the Pentagon unlimited use of its technology. Hours later, OpenAI cut its own deal with the Defense Department, a move that legal experts believe will be difficult to enforce effectively. This situation highlights the critical need for proactive regulation. The declaration calls for mandatory pre-deployment testing of AI products, particularly chatbots and companion apps aimed at younger users. This testing would cover risks including increased suicidal ideation, exacerbation of mental health conditions, and emotional manipulation. The argument is that if a “creepy old man” is texting an 11-year-old while pretending to be a young girl and trying to persuade the child to commit suicide, that individual can be held legally accountable—a situation where existing laws already apply.
EXPANDING SCOPE: A PRINCIPLE-BASED APPROACH
Tegmark uses the analogy of drug companies to illustrate the importance of regulation. “You never have to worry that some drug company is going to release some other drug that causes massive harm before people have figured out how to make it safe, because the FDA won’t allow them to release anything until it’s safe enough,” he explains. The initial focus on children’s products is presented as a foundational step. Tegmark believes that once the principle of pre-deployment testing is established for children’s products, the scope will inevitably widen. “People will come along and be like — let’s add a few other requirements. Maybe we should also test that this can’t help terrorists make bioweapons. Maybe we should test to make sure that superintelligence doesn’t have the ability to overthrow the U.S. government.” This suggests a principle-based approach, where initial safeguards can serve as a springboard for broader regulatory considerations.
A BROAD COALITION: COMMON GROUND
The Pro-Human Declaration’s surprisingly broad coalition—including former Trump advisor Steve Bannon and Susan Rice, President Obama’s National Security Advisor, alongside former Joint Chiefs Chairman Mike Mullen and progressive faith leaders—underscores the shared concern about the future of humanity. “What they agree on, of course, is that they’re all human,” says Tegmark. “If it’s going to come down to whether we want a future for humans or a future for machines, of course they’re going to be on the same side.” This diverse group highlights the universal recognition that the stakes are exceptionally high, transcending political divisions and ideological differences.
This article is AI-synthesized from public sources and may not reflect original reporting.