12 Comments
Peter A. Jansen

Tom, the AI Train is not just fast; it is decoupled from policy. You cannot slow it down with regulation because technology moves at the speed of the electron and obeys Moore's Law, while policy moves at the speed of election cycles and fiscal years. This isn't just a delay; it is a governance gap that creates a structural vacuum where high-entropy actors can exploit the system without friction.

How do you regulate something that moves faster than regulation can?

Tom

Well, you are certainly correct that current regulation is much, much slower than the speed of AI change. But that doesn't mean we can't try to regulate aspects of it, and to speed up that process. Think of social media: no meaningful regulation in more than 20 years!

Peter A. Jansen

To stabilize the "Recursive Flow," we cannot simply "speed up" humans.

I would propose changing the Geometry of Governance from Linear to Recursive Policy: move away from static "Acts" and "Bills" toward Dynamic Policy Gradients—rules that function like software updates, adjusting in real time based on feedback loops from the AI ecosystem.
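To make the metaphor concrete, here is a toy sketch, entirely my own construction, of what a "rule as software update" might look like: a single risk threshold that tightens or relaxes each review cycle based on an observed incident rate. The function name, the feedback signal, and the step size are all hypothetical; this is a control-loop illustration, not a proposal.

```python
# Toy sketch of a "Dynamic Policy Gradient": a regulatory threshold that
# updates itself from ecosystem feedback instead of waiting for a new Act.
# Everything here (names, signals, step size) is hypothetical.

def update_threshold(threshold: float, incident_rate: float,
                     target_rate: float = 0.01, step: float = 1.0) -> float:
    """Nudge a risk threshold based on how far incidents drift from target.

    More incidents than the target tolerates -> tighten (lower) the threshold;
    fewer incidents -> relax it slightly. The result stays in [0, 1].
    """
    error = incident_rate - target_rate
    return min(1.0, max(0.0, threshold - step * error))

# Each review cycle feeds observed data back into the rule.
threshold = 0.5
for rate in [0.05, 0.03, 0.008, 0.002]:  # hypothetical incident rates
    threshold = update_threshold(threshold, rate)
    print(f"incident rate {rate:.3f} -> risk threshold {threshold:.3f}")
```

The point is the shape of the loop: the rule re-tunes itself on a feedback cadence measured in days, not in legislative sessions.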

Variance Suppression: We must introduce "Regulatory Sandboxes" (Low-variance Clean Rooms) where high-innovation branches are tested before they are allowed to impact the global "Adjacent Novelty" space.

If we do not address The Gap, the "AI Train" won't just be fast; it will be Uncoupled. A system with a disconnected steering mechanism is not a vehicle; it is a projectile.

Rachel Kaberon

I concur with Tom’s reasoned pleas while also acknowledging the disparate reality of development that Peter describes.

I also recognize the systemic difference between humans in the loop and humans in command.

Has anyone seen a two-factor authentication app whose default holds humans accountable? A mark that a human approved the action taken by the AI? There should be a seal present. Getting consensus at any institutional level will be too slow.

But if people feel greater confidence or trust when they see the seal, we might be able to drive more people to assume the oversight they unwittingly surrender.

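To sketch what such a seal could be, here is a minimal illustration assuming an HMAC-based attestation; the `seal_action` and `verify_seal` names, the key handling, and the record format are all hypothetical, not an existing app:

```python
import hashlib
import hmac
import json
import time

# Hypothetical: each human approver holds a secret key outside the AI system.
# In practice this would live in a key store, not a constant.
APPROVER_KEY = b"per-reviewer-secret"

def seal_action(action: dict, approver_id: str) -> dict:
    """Return a record attesting that a named human approved this AI action."""
    record = {
        "action": action,
        "approver": approver_id,
        "approved_at": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["seal"] = hmac.new(APPROVER_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_seal(record: dict) -> bool:
    """Recompute the seal; a mismatch means no valid human approval exists."""
    unsealed = {k: v for k, v in record.items() if k != "seal"}
    payload = json.dumps(unsealed, sort_keys=True).encode()
    expected = hmac.new(APPROVER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("seal", ""), expected)

# Example: an agent's proposed action is only executed if the seal verifies.
approved = seal_action({"type": "send_payment", "amount": 100}, "r.kaberon")
assert verify_seal(approved)
```

The design choice follows the comment above: the default refuses to act unless a human-signed seal is present, rather than merely logging that a human was somewhere in the loop.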

Jerry Luftman

This is a very significant and timely topic. Artificial intelligence is often portrayed as a uniquely disruptive force—faster, broader, and riskier than any technology before it. History suggests a more familiar story. AI fits squarely within a long lineage of dual-use technologies that generated extraordinary value for society while also enabling misuse. The defining question has never been whether such technologies should exist, but how societies govern them once their power becomes global.

The printing press transformed education, commerce, and political participation by making information widely accessible. It also accelerated propaganda, misinformation, and social unrest. Radio and television unified societies through shared information and culture, yet they became powerful instruments of mass persuasion and manipulation. Chemical engineering fed billions through fertilizers while enabling chemical warfare. The internet revolutionized communication and markets while creating new vectors for fraud, surveillance, and disinformation.

Nuclear energy stands as the clearest parallel for AI today. Nuclear science delivered medical breakthroughs and reliable energy on an industrial scale but also introduced risks so severe that unmanaged proliferation threatened global stability. The response was not prohibition, but governance at the international level. Licensing, inspection regimes, export controls, and global norms enabled peaceful use while constraining catastrophic misuse. Crucially, these controls made large-scale investment possible by creating predictability and trust.

AI is reaching a similar moment. Like nuclear technology, advanced AI concentrates capability, scales rapidly, and produces effects that cross borders instantly. A system developed in one country can influence financial markets, public discourse, or security outcomes worldwide. This reality renders purely local or company-level controls insufficient. As history shows, when technologies operate on a global scale, governance must also evolve beyond national boundaries.

What history also makes clear is that effective governance does not suppress innovation; it stabilizes it. The most successful responses to dual-use technologies focused on three principles.

First, use-based controls rather than blanket bans. Printing presses were not outlawed; libel and incitement were. Nuclear energy was encouraged for medicine and power generation while weapons were restricted. Second, graduated access to the most sensitive capabilities. Not everyone can operate a nuclear facility or manufacture controlled chemicals; similarly, not all AI capabilities require unrestricted deployment. Third, transparency and accountability through documentation, audits, and incident reporting. These mechanisms reduced fear, misinformation, and reactionary policy responses.

AI governance is now following this historical path. Around the world, policymakers are converging on risk-based frameworks, disclosure expectations, and shared norms for responsible development. While the details differ across jurisdictions, the direction is consistent: high-impact systems face greater scrutiny, developers bear responsibility for foreseeable misuse, and cross-border coordination is increasingly seen as essential. These trends mirror earlier efforts in nuclear safety, aviation, and pharmaceuticals, where international alignment ultimately supported growth rather than constrained it.

The most significant obstacle to effective global AI governance is neither technical capacity nor normative disagreement, but the combined political and temporal constraints under which governance must be constructed. History, particularly the evolution of nuclear governance, illustrates this challenge with clarity. Nuclear governance did not arise from mutual trust among states; rather, it emerged through the establishment of rules and institutions that reduced strategic uncertainty in an inherently unstable international environment.

The central question for AI governance is therefore whether institutional mechanisms can meaningfully narrow the gap created by unprecedented technological diffusion before that gap results in irreversible consequences. When the pace of technological deployment outstrips the development of formal legal frameworks, informal norms and practices inevitably assume a governing function. Organizations that adopt responsible practices at an early stage thus exert disproportionate influence over the formation of these norms, potentially mitigating the likelihood of abrupt, reactive, and destabilizing regulatory interventions at a later stage.

For business executives, the lesson is strategic, not ideological. Technologies that lack credible governance eventually lose public trust, triggering backlash, fragmentation, and unpredictable regulation. Technologies that develop alongside shared rules gain legitimacy, investment stability, and long-term scalability. Nuclear energy did not endure because it avoided regulation; it endured because governance made it viable.

Artificial intelligence is not an anomaly in human history. It is the next chapter in a familiar story. Executives who recognize this pattern—and engage proactively with emerging global norms—will be better positioned to capture AI’s value while helping ensure it remains a force for broad economic and social benefit.

Tom Davenport

Many good points, Jerry. I certainly agree that AI should continue to exist; I just don’t think it should be made increasingly powerful and autonomous. To your point, we didn’t make nuclear bombs autonomous—though we have perhaps overly concentrated the decision-making about them. Agree that we could certainly use some “institutional mechanisms” around AI, as you put it.

Dave Feineman

Tom

Great article, and I think that for an academic with ties to business leaders who listen to your messages, getting up on the soapbox now is extremely important.

Just a few tidbits to throw in: in Colorado there was a push to regulate AI in last year's legislative term, which wound up being deferred until this year after a big pushback from AI companies against regulation and its unintended impacts on the state's economy. The key angle here was legislation that could help protect how individuals' personal information could be used, at a time when local governments were trying to join the rush toward potential AI applications. So privacy and protection of personal information is an important topic in the larger question of how to constrain AI intrusion, and another angle in the regulation challenge.

Another is the use of AI systems to deliver hate messages. You might be interested to look at the ADL AI Index evaluation, which shows that even the LLMs produced by companies whose staff are concerned about the security and delivery of false or dangerous messages (as opposed to the open-source models) are not at all good at catching and restricting extremist bias.

Tom Davenport

Thanks Dave, and good to hear from you. I knew there was regulatory discussion in Colorado but didn’t know how it turned out. AI is certainly overlapping with personal information issues, but I think it raises some unique challenges as well—e.g. autonomous decisions and job loss issues. Hadn’t seen the ADL Index of LLMs, but not surprised that they have hate speech problems.

Timothy Hughes

Great article, thanks for this.

Kane Tomlin

AI policy is data security, and vice versa. So rather than only regulating what AI can and can't do, how can state and local government use AI to architect digital government? That's the use case we need. https://thegoodideafairy.substack.com/p/how-to-implement-citizen-centric

Bonus points: it doesn't need many laws enacted, maybe just one.

Micheline

The irony of throwing out all regulation to beat China in AI is not lost on me. Meanwhile, the US is trying to introduce surveillance to match that of China.

Opinion AI

I think the slowdown is already here on the buyer side: most companies still aren't seeing huge genAI value, and budgets will get picky; they even say analytics is paying more right now. But the train won’t really stop; it’ll just reroute to whoever has the cash and the chips. So the move is speed limits and liability, audits, and spend transparency, not wishful pauses.