Discussion about this post

Peter A. Jansen

Tom, the AI train is not just fast, it is decoupled from policy. You cannot slow it down with regulation: technology moves at the speed of the electron and obeys Moore's Law, while policy moves at the speed of election cycles and fiscal years. This isn't just a delay; it is a governance gap that creates a structural vacuum where high-entropy actors can exploit the system without friction.

How do you regulate something that moves faster than regulation can?

Jerry Luftman

This is a very significant and timely topic. Artificial intelligence is often portrayed as a uniquely disruptive force—faster, broader, and riskier than any technology before it. History suggests a more familiar story. AI fits squarely within a long lineage of dual-use technologies that generated extraordinary value for society while also enabling misuse. The defining question has never been whether such technologies should exist, but how societies govern them once their power becomes global.

The printing press transformed education, commerce, and political participation by making information widely accessible. It also accelerated propaganda, misinformation, and social unrest. Radio and television unified societies through shared information and culture, yet they became powerful instruments of mass persuasion and manipulation. Chemical engineering fed billions through fertilizers while enabling chemical warfare. The internet revolutionized communication and markets while creating new vectors for fraud, surveillance, and disinformation.

Nuclear energy stands as the clearest parallel for AI today. Nuclear science delivered medical breakthroughs and reliable energy on an industrial scale but also introduced risks so severe that unmanaged proliferation threatened global stability. The response was not prohibition, but governance at the international level. Licensing, inspection regimes, export controls, and global norms enabled peaceful use while constraining catastrophic misuse. Crucially, these controls made large-scale investment possible by creating predictability and trust.

AI is reaching a similar moment. Like nuclear technology, advanced AI concentrates capability, scales rapidly, and produces effects that cross borders instantly. A system developed in one country can influence financial markets, public discourse, or security outcomes worldwide. This reality renders purely local or company level controls insufficient. As history shows, when technologies operate on a global scale, governance must also evolve beyond national boundaries.

What history also makes clear is that effective governance does not suppress innovation, it stabilizes it. The most successful responses to dual use technologies focused on three principles.

First, use-based controls rather than blanket bans. Printing presses were not outlawed; libel and incitement were. Nuclear energy was encouraged for medicine and power generation while weapons were restricted. Second, graduated access to the most sensitive capabilities. Not everyone can operate a nuclear facility or manufacture controlled chemicals; similarly, not all AI capabilities require unrestricted deployment. Third, transparency and accountability through documentation, audits, and incident reporting. These mechanisms reduced fear, misinformation, and reactionary policy responses.

AI governance is now following this historical path. Around the world, policymakers are converging on risk-based frameworks, disclosure expectations, and shared norms for responsible development. While the details differ across jurisdictions, the direction is consistent: high-impact systems face greater scrutiny, developers bear responsibility for foreseeable misuse, and cross-border coordination is increasingly seen as essential. These trends mirror earlier efforts in nuclear safety, aviation, and pharmaceuticals, where international alignment ultimately supported growth rather than constrained it.

The most significant obstacle to effective global AI governance is neither technical capacity nor normative disagreement, but the combined political and temporal constraints under which governance must be constructed. History, particularly the evolution of nuclear governance, illustrates this challenge with clarity. Nuclear governance did not arise from mutual trust among states; rather, it emerged through the establishment of rules and institutions that reduced strategic uncertainty in an inherently unstable international environment.

The central question for AI governance is therefore whether institutional mechanisms can meaningfully narrow the gap created by unprecedented technological diffusion before that gap produces irreversible consequences. When the pace of technological deployment outstrips the development of formal legal frameworks, informal norms and practices inevitably assume a governing function. Organizations that adopt responsible practices at an early stage thus exert disproportionate influence over the formation of these norms, reducing the likelihood of abrupt, reactive, and destabilizing regulatory interventions later on.

For business executives, the lesson is strategic, not ideological. Technologies that lack credible governance eventually lose public trust, triggering backlash, fragmentation, and unpredictable regulation. Technologies that develop alongside shared rules gain legitimacy, investment stability, and long-term scalability. Nuclear energy did not endure because it avoided regulation; it endured because governance made it viable.

Artificial intelligence is not an anomaly in human history. It is the next chapter in a familiar story. Executives who recognize this pattern—and engage proactively with emerging global norms—will be better positioned to capture AI’s value while helping ensure it remains a force for broad economic and social benefit.
