Slowing Down the AI Train
It's now apparent that it's necessary
I am both a humanist (as a human, how could one not be?) and a pragmatist, and thus far I have believed that limiting the pace of AI development would be both impossible to enforce and not necessarily a good idea. I still think it would be difficult to enforce, but I’ve changed my mind about whether it is the right thing to do. The drumbeat about the current and potential perils of AI has grown louder and louder. Over the past few days, for example, these stories have appeared in the online and mainstream press:
When AI Bots Start Bullying Humans Even Silicon Valley Gets Rattled
Something Big Is Happening (over 85 million views!)
AI Is Already Making Online Crimes Easier. It Could Get Much Worse
And these articles, all negative about the impact or potential of AI, appeared in the past week alone. It no longer seems viable to stick your head in the sand about the negative impacts of AI on jobs, social life, political life, cybersecurity, non-AI software businesses, the environment, etc. Yes, there are some potential positive impacts as well, particularly in healthcare. I do think there is high potential to lower the cost and improve the quality of everyday healthcare, although there will no doubt be some negative impacts when AI freely dispenses health and treatment advice. And I would argue that current AI healthcare systems are already good enough to yield a major improvement in the quality and cost of care if they were broadly implemented.
I’ve also been influenced by the increasing number of AI scientists who are trying to warn the world that the risk of global damage from AI is too high. They include Nobelist Geoffrey Hinton, Turing Award winner Yoshua Bengio, Berkeley AI pioneer Stuart Russell, and most recently Google DeepMind CEO (and another Nobelist) Demis Hassabis. Hassabis warned in an interview a few days ago that agentic AI poses particular risks of taking actions its creators didn’t intend and could get out of control. He argued for international cooperation to “set minimum standards before existing institutions are overwhelmed.” Several other prominent researchers at major AI vendors have recently resigned over safety and human alignment concerns.
OK, I’m convinced. In short, I’m ready to put the brakes on AI. The fact that AI systems are now improving themselves and generating their own code has been a big factor in my increased concern. I’d like to see researchers and vendors stop making AI systems smarter. I’d like to see agentic AI restricted from coming up with its own goals, or even its own means of achieving those goals. I’d like to see regulations that prevent it from doing harm. Let’s focus on implementing the AI we have, which is already plenty intelligent to power improvements in many business and organizational processes.
How to Slow It Down?
The obvious answer to slowing AI down and making it safer is government regulation. I don’t see the current US federal government issuing any AI safety regulation; Congress is dysfunctional, and the Trump administration is hell-bent on eliminating all regulation and beating China on AI. In the US, that means that states and even cities will have to regulate AI. It’s also obvious that having a patchwork of state- and city-based regulation is nutty, but until there are major changes in the federal government that may be the only regulatory remedy. California has already passed some AI legislation, one component of which focuses on the safety of foundation LLMs. Since OpenAI, Google, and Anthropic are all headquartered in California, maybe that legislation will have some impact. It hasn’t thus far, however. New York City has also enacted legislation about the use of AI in hiring (Local Law 144), but thus far it appears not to have been enforced.
If state regulation of AI is the only alternative in the US for now, it’s not surprising that AI vendors and investors are lining up to fight it. Meta, for example, just announced that it was giving $65 million to two political action committees (one for Democrats, one for Republicans) to support anti-regulation candidates. Last summer, vendors and AI investors put even more millions into Leading the Future, another anti-AI-regulation PAC. To its credit, Anthropic seems to be the only major AI vendor still advocating for regulation, although I’m sure it would prefer that regulation be federal rather than state-based.
Max Tegmark, the MIT physics professor who leads the Future of Life Institute, said at a Davos panel (video here) that he believes that openness to AI regulation is increasing. I hope he’s correct, but I am not quite as optimistic.
On the same panel, historian Yuval Noah Harari argued for an international agreement not to recognize AI as legal persons. That would mean that AI systems can’t have bank accounts, can’t own property, and can’t form corporations on their own. He feels that if even one country allows personhood, the global structure preventing AI-wreaked havoc would break down. Preventing AI systems from becoming corporations seems to me a good idea, but I am not sure we still have a structure to create and enforce international agreements. It certainly wouldn’t hurt, however, for the UN to take up the issue.
There is also the possibility of a consumer revolt—particularly in the US, where most citizens don’t trust AI already. The industry can only thrive if people use its products, and once potential consumers realize that AI can cost them their jobs, hurt their children, increase crime, nurture political conflict and even war—and perhaps kill us all, though I am not yet ready to envision that possibility—they may stop using it. Lack of use would perhaps slow down the whole industry and the somewhat crazy valuations and investment levels in AI vendors and data centers.
On the corporate side of AI consumption, I’ve been doing some research recently on the economic value that companies receive from AI. I’ll publish more about it soon. It does suggest that companies are starting to receive substantial value from AI, but only about half of the respondents in a global survey say they are getting “a great deal of value.” One interesting finding is that they say they are getting more value from analytical AI (which is less scary) than generative AI by a substantial margin. But if companies don’t get sufficient value and start spending less on AI—and the survey suggests that only a few plan to spend a lot more—that could slow the pace of AI research and development considerably.
As I mentioned at the beginning, I’m a pragmatist, and I’m not sure how an AI slowdown can be accomplished. But I am increasingly sure that it is desirable or even necessary.



Tom, the AI Train is not just fast, it is decoupled from policy. You cannot slow it down with regulation because technology moves at the speed of the electron and obeys Moore's Law, while policy moves at the speed of election cycles and fiscal years. This isn't just a delay; it is a governance gap that creates a structural vacuum where high-entropy actors can exploit the system without friction.
How do you regulate something that moves faster than regulation can?
This is a very significant and timely topic. Artificial intelligence is often portrayed as a uniquely disruptive force: faster, broader, and riskier than any technology before it. History suggests a more familiar story. AI fits squarely within a long lineage of dual-use technologies that generated extraordinary value for society while also enabling misuse. The defining question has never been whether such technologies should exist, but how societies govern them once their power becomes global.
The printing press transformed education, commerce, and political participation by making information widely accessible. It also accelerated propaganda, misinformation, and social unrest. Radio and television unified societies through shared information and culture, yet they became powerful instruments of mass persuasion and manipulation. Chemical engineering fed billions through fertilizers while enabling chemical warfare. The internet revolutionized communication and markets while creating new vectors for fraud, surveillance, and disinformation.
Nuclear energy stands as the clearest parallel for AI today. Nuclear science delivered medical breakthroughs and reliable energy on an industrial scale but also introduced risks so severe that unmanaged proliferation threatened global stability. The response was not prohibition, but governance at the international level. Licensing, inspection regimes, export controls, and global norms enabled peaceful use while constraining catastrophic misuse. Crucially, these controls made large-scale investment possible by creating predictability and trust.
AI is reaching a similar moment. Like nuclear technology, advanced AI concentrates capability, scales rapidly, and produces effects that cross borders instantly. A system developed in one country can influence financial markets, public discourse, or security outcomes worldwide. This reality renders purely local or company level controls insufficient. As history shows, when technologies operate on a global scale, governance must also evolve beyond national boundaries.
What history also makes clear is that effective governance does not suppress innovation; it stabilizes it. The most successful responses to dual-use technologies focused on three principles.
First, use-based controls rather than blanket bans. Printing presses were not outlawed; libel and incitement were. Nuclear energy was encouraged for medicine and power generation while weapons were restricted. Second, graduated access to the most sensitive capabilities. Not everyone can operate a nuclear facility or manufacture controlled chemicals; similarly, not all AI capabilities require unrestricted deployment. Third, transparency and accountability through documentation, audits, and incident reporting. These mechanisms reduced fear, misinformation, and reactionary policy responses.
AI governance is now following this historical path. Around the world, policymakers are converging on risk-based frameworks, disclosure expectations, and shared norms for responsible development. While the details differ across jurisdictions, the direction is consistent: high-impact systems face greater scrutiny, developers bear responsibility for foreseeable misuse, and cross-border coordination is increasingly seen as essential. These trends mirror earlier efforts in nuclear safety, aviation, and pharmaceuticals, where international alignment ultimately supported growth rather than constrained it.
The most significant obstacle to effective global AI governance is neither technical capacity nor normative disagreement, but the combined political and temporal constraints under which governance must be constructed. History, particularly the evolution of nuclear governance, illustrates this challenge with clarity. Nuclear governance did not emerge because of mutual trust among states; rather, it emerged through the establishment of rules and institutions that reduced strategic uncertainty in an inherently unstable international environment.
The central question for AI governance is therefore whether institutional mechanisms can meaningfully narrow the gap created by unprecedented technological diffusion before that gap results in irreversible consequences. When the pace of technological deployment outstrips the development of formal legal frameworks, informal norms and practices inevitably assume a governing function. Organizations that adopt responsible practices at an early stage thus exert disproportionate influence over the formation of these norms, potentially reducing the likelihood of abrupt, reactive, and destabilizing regulatory interventions at a later stage.
For business executives, the lesson is strategic, not ideological. Technologies that lack credible governance eventually lose public trust, triggering backlash, fragmentation, and unpredictable regulation. Technologies that develop alongside shared rules gain legitimacy, investment stability, and long-term scalability. Nuclear energy did not endure because it avoided regulation; it endured because governance made it viable.
Artificial intelligence is not an anomaly in human history. It is the next chapter in a familiar story. Executives who recognize this pattern—and engage proactively with emerging global norms—will be better positioned to capture AI’s value while helping ensure it remains a force for broad economic and social benefit.