Artificial Intelligence, the Complexity Trap, and National Security
Carl O. Pabo, Ph.D.
This essay shows how a qualitative agent-based model can facilitate scenario analysis and policy development in an Age of AI.
The rise of artificial intelligence will radically transform the strategic landscape. It will change the nature of warfare, reshape economic competition, affect political stability, and alter the cognitive frameworks through which leaders must assess threats. The paradox of AI is striking: while empowering individual actors, it dramatically increases system-wide complexity. Military leaders urgently need new tools to "observe" and "orient" on this new landscape—to maintain effective OODA loops amidst this mind-boggling complexity.
To address this challenge, I offer a new agent-based model that can help military strategists see further into the future. My model shows how key societal actors—from tech companies to foreign adversaries—are likely to behave as AI advances, and how their actions and interactions will reshape our world. By providing added clarity in an increasingly complex environment, the model enhances the "observe" and "orient" phases of strategic decision-making.
Design of this model began with a simple recognition: our future will be shaped by the actions and interactions of billions of people and millions of AI systems. While we cannot predict all these interactions in detail, we can create a useful qualitative model by identifying key societal groups and anticipating how they will behave amidst the rapid rise of AI. By analyzing how these agents (these groups of people) will act and interact, we can—as detailed below—foresee critical changes likely to occur in the next several years as AI reshapes our world.
Modeling shows both tremendous opportunities and serious risks. There will be amazing technical advances, yet, at the same time, there will be many new types of competition and confusion. The net complexity of the social/political/economic/military system will increase far more rapidly than AI can expand the view of policy planners.
Thus, paradoxically, the world as a whole will become increasingly difficult to understand and govern even as AI helps each individual see and do more than they would without it. This system-level complexity demands fundamental changes in methods for strategic planning. Military leaders must develop new approaches to see as best they can through this fog of complexity, particularly since AI’s benefits may be distributed so unevenly as to threaten the social stability on which national security depends.
Agent-based modeling will help us navigate this complexity. And cooperative refinement of the approach outlined below will let us develop more effective strategies for addressing these unprecedented challenges emerging in the Age of AI.
Setting Up this Agent-Based Model
The Agents: The current version of my model identifies 34 key societal actors—from tech companies and everyday citizens to political leaders and AI-based agents themselves—and predicts how their actions and interactions will begin changing the world over the next several years. This framework facilitates analysis amidst complexity that would otherwise overwhelm human cognitive capacity.
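To make this framework concrete, here is a minimal sketch, in Python, of how such agents might be represented. The names and attributes are hypothetical illustrations, not the model's actual roster of 34 actors, which is defined qualitatively rather than in software.

```python
from dataclasses import dataclass

# A minimal sketch of how the model's agents might be represented.
# Names and attributes are illustrative placeholders, not the model's
# actual roster of 34 societal actors.

@dataclass
class Agent:
    name: str                       # e.g., "tech_companies"
    goals: list[str]                # what the agent seeks: power, profit, fame...
    capabilities: dict[str, float]  # qualitative strengths, scored 0.0 to 1.0
    ai_access: float                # how much AI leverage the agent has today

AGENTS = [
    Agent("tech_companies", ["profit", "market_share"], {"compute": 0.9}, 0.9),
    Agent("political_leaders", ["power", "reelection"], {"lawmaking": 0.8}, 0.5),
    Agent("foreign_adversaries", ["strategic_advantage"], {"cyber": 0.7}, 0.7),
    Agent("everyday_citizens", ["security", "prosperity"], {"votes": 0.6}, 0.4),
    Agent("ai_agents", ["assigned_objectives"], {"speed": 1.0}, 1.0),
    # ...the full model would enumerate all 34 actors.
]
```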
How Agents Will Behave: As I prepared to use this agent-based model, I needed to know: How will these diverse societal actors behave in an age of AI? How will each agent act? How will their actions and interactions change the world?
In addressing these questions, I made the simplifying assumption that human agents will tend to act in selfish, self-centered ways (seeking power, influence, financial reward, fame, etc.). I don’t mean this cynically—I just take this as a reliable principle about human behavior. Our neural machinery evolved to protect ourselves, our families, and our allies.
Given this view, my model assumes that people will use AI as they have used other tools. That is, computers extend and amplify the power of human thought much as physical tools—a hammer, a lawnmower, or a machine gun—amplify our physical power. People using AI will apply this new power in self-centered ways, seeking power, financial reward, or competitive advantage. The challenge of modeling involves foreseeing how the net effect of such selfish individual behaviors will play out for society as a whole.
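This behavioral assumption can be stated compactly. The sketch below, with hypothetical function and payoff names, encodes the core rule: each agent picks whichever available action maximizes its own private payoff, with AI access amplifying what the agent can gain.

```python
# Sketch of the model's behavioral assumption (names are hypothetical):
# each agent selects the action that maximizes its own private payoff,
# with no term for costs imposed on the rest of society.

def choose_action(agent, available_actions, payoff):
    """Return the action with the highest private payoff for this agent.

    payoff(agent, action) -> float gives the agent's expected personal gain
    (power, money, attention); AI access scales that gain, reflecting the
    tool-like amplification described above.
    """
    return max(available_actions,
               key=lambda action: payoff(agent, action) * (1 + agent.ai_access))
```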
Modeling as a Window on the Future
My initial work with this model—using assumptions above and trying to foresee how these 34 agents will act and interact over the next few years—offers a window on the human future. It reveals four key changes that will transform society:
1) As in the Industrial Revolution, there will be many cases in which the "selfish" behavior of inventors and entrepreneurs creates broad benefits for others. AI is likely to lead to remarkable advances in medicine, materials, and systems for generating renewable energy.
Yet my modeling reveals many cases in which selfish behavior will lack broader benefit and will directly impose severe new costs on society. Widespread use of AI will lead to 2) increasing competitive pressure, 3) unprecedented layers of complexity, and 4) governance challenges that threaten to overwhelm our policymaking capabilities.
These key changes—predicted by my model—are summarized in the following sections:
1) Thought Becomes a Commodity; “Things” Start to Think
Artificial intelligence will fundamentally transform our world by supplementing—and often replacing—human thought. It will accelerate breakthroughs in fusion energy, quantum computing, and brain/computer interfaces. It will aid in designing “optimized” human genomes. Intelligent machines will become ubiquitous: self-driving vehicles, human-level robots, and autonomous weapons will transform transportation, labor markets, and warfare. New algorithms will reshape our financial systems.
These tangible benefits mirror how the Industrial Revolution created physical tools that amplified human capabilities. Yet the fundamental new discovery—having machines that think and “make ideas”—will change society in other, far more radical ways.
2) Competitive Pressure Increases Relentlessly
The rise of AI will dramatically intensify the competitive pressure that’s already felt by individuals, companies, and countries.
In the everyday jostle for power and attention, AI will become necessary because everyone else is using it. Competitive zero-sum games will intensify—with individuals competing for jobs, political parties for attention, and investors for financial returns. Financial markets will grow ever more complex as AI-powered trading strategies compete in increasingly opaque ways, and ever-more-sophisticated cyberattacks will amplify systemic risk to national and international security.
This may, at first, feel like more of the same—as if it were a simple continuation of a pattern of steadily increasing risks that cybersecurity experts have dealt with for decades. Yet these problems will compound rapidly as a) specialized chips for AI continue to increase computational speeds, and as b) improvements in software make AI systems faster and smarter and let them interact directly with other AI systems. As AI systems help develop even more sophisticated AI, we'll move to much steeper parts of the exponential curve. Change will proceed at supra-exponential (double or triple exponential) rates, steadily creating and re-creating a more difficult challenge for cybersecurity experts than anything previously encountered.
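The difference between these growth regimes is easy to underestimate. The toy calculation below, using arbitrary illustrative constants rather than forecasts, contrasts ordinary exponential growth (a fixed doubling time) with a double-exponential regime in which the doubling time itself shrinks.

```python
# Toy comparison of growth regimes (constants are arbitrary illustrations,
# not forecasts). Under ordinary exponential growth, capability doubles on
# a fixed schedule; under double-exponential growth, the exponent itself grows.

for year in range(0, 11, 2):
    exponential = 2.0 ** year                 # fixed doubling time
    double_exp  = 2.0 ** (2.0 ** (year / 2))  # doubling time shrinks over time
    print(f"year {year:2d}:  exponential {exponential:10,.0f}   "
          f"double-exponential {double_exp:14,.0f}")
```

By year 10 in this toy setup, the exponential curve has grown a thousandfold while the double-exponential curve has grown by a factor of several billion.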
Nations will require ever-more-advanced AI systems to combat cybercrime, prevent “grey zone” attacks, and develop capabilities for cyber-warfare. This creates an escalating cycle where both attackers and defenders must continually upgrade their capabilities just to maintain their relative position—a clear example of effort that produces no net gain for society or the world as a whole.
Beyond cyber warfare, AI will enable ever-more sophisticated ransomware and targeted scams that extract value from social and economic systems without contributing anything in return. The deception/detection “arms race” will accelerate, forcing everyone to run faster simply to stay in place. And we need to understand: Even these zero-sum competitions impose real, net-negative economic burdens on society. We could, for example, soon reach a stage where the cost of protecting a bank account against cyberattacks effectively creates a “negative interest rate” on savings. Resources directed toward offense and defense in these escalating battles are likely to be a significant drag on productivity and meaningful real-world work.
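The arithmetic behind this "negative interest rate" is simple. In the illustrative calculation below (all figures hypothetical, chosen only to show the mechanism), the annual cost of defending an account outweighs the interest it earns, turning the saver's effective return negative.

```python
# Illustrative arithmetic for the "negative interest rate" effect.
# All figures are hypothetical, chosen only to show the mechanism.

balance       = 10_000.00  # savings balance, in dollars
interest_rate = 0.02       # 2% nominal annual interest
security_cost = 250.00     # annual cost of monitoring, insurance, and defense

effective_rate = (balance * interest_rate - security_cost) / balance
print(f"effective annual return: {effective_rate:+.2%}")  # prints -0.50%
```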
3) Society Will Be Afflicted with a “Fog of Complexity”
In addition to these “local” competitive pressures (with each of the agents in my model pitted against a few other agents), AI will have further system-level consequences that envelop society as a whole in a fog of complexity.
Sources of System-Level Complexity: The main source of this problem can be seen by considering the radical difference between local and global perspectives. From an individual viewpoint, having access to powerful AI systems increases personal knowledge and capability. Yet globally, as billions of people use these systems simultaneously, the world as a whole becomes more complex at an astonishing rate.
This system-level complexity is many orders of magnitude more severe than what’s involved in the “local” competitions discussed above. It’s not merely the sum of all these competitive struggles—it’s an emergent property arising from the way in which ideas, algorithms, people, and machines repeatedly act and interact in every part of our global social-political-economic-military systems.
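A rough counting argument shows why this emergent complexity outruns any single observer. Even ignoring the content of each interaction, the number of possible pairwise channels grows roughly with the square of the number of interacting parties; the sketch below makes the scaling explicit (the counts are illustrative, not measurements).

```python
from math import comb

# Back-of-the-envelope count of pairwise interaction channels.
# The point is the scaling, n * (n - 1) / 2, not any particular number.

for n in [34, 1_000, 1_000_000, 1_000_000_000]:
    print(f"{n:>13,} interacting parties -> {comb(n, 2):>20,} pairwise channels")
```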
This paradox (of increasing complexity and confusion in an age of AI) can also be understood by considering implications of the way in which AI allows for the outsourcing of thought. As artificial intelligence advances and gets used all over the world, an ever-greater portion of “thinking” occurs outside the human mind, proceeding with a speed and complexity beyond human cognitive capacity. The events enabling these new patterns of thought reside in our computers rather than in the neural networks embedded in our own brains. AI-powered bots will act and interact in ways so complex that no one will know what they are doing (and there is, of course, no omniscient view that even lets us see what code they are using). This fundamentally degrades our observational capabilities, making it impossible for individuals—including those responsible for public policy and national security—to comprehend the full landscape of human-machine interactions.
New Security Risks: In this environment, some critical security questions become as hard as those faced by Herman Kahn when he tried to “think the unthinkable” about nuclear strategy. We must ask: What are the risks of a cyberattack large enough to upset the global financial system? Of fake news or a hallucinating AI system that could trigger conflict escalation? Of an electromagnetic pulse large enough to cause global chaos by disrupting computational systems at a national or international scale? Of AI causing unemployment severe enough to lead to the overthrow of governments?
This fog of complexity represents an acceleration of broader trends that I've previously discussed in "Civilization and the Complexity Trap." AI doesn't merely continue this pattern—it accelerates us to warp speed as we hurtle deeper into the fog. AI systems actively generate novel patterns of thought and interaction, becoming a real wild card within the larger complexity trap that already threatens our national security and governance capabilities.
New ideas are needed for a new age, and we must ask: Does an ability to foresee this complexity trap suggest fresh ideas about ways in which new treaties—or new laws—might constrain the growth of complexity and thus benefit the nation and the planet as a whole?
4) Challenges of Policy Development Overwhelm the Human Mind
In addition to the fog that makes it hard to see what is happening, there is a fog that makes it hard—even when risks are identified—to figure out what we should do.
This further type of complexity bedevils the mind as we try to move forward with decision and action in complex social, political, and military realms. We see this with each of the “New Security Risks” mentioned above. Perhaps the most profound example of this decision-making fog involves AI’s potential to displace human workers. As robots and AI systems increasingly perform jobs currently done by people, pressures will mount: Few companies will retain employees for tasks that can be performed more quickly, carefully, or cheaply via some AI-based system.
Yet, as this happens, we’ll face a fundamental threat to the core social contract underpinning virtually all human societies throughout history: the principle that people must contribute through their labor to receive economic benefits. Governments will confront an unprecedented dilemma: What happens when large segments of the population cannot find economically valuable roles because AI has made their potential contributions obsolete? Mass unemployment and resulting poverty could lead to political instability—as seen with conditions that led to the French Revolution in 1789 and the downfall of many other governments throughout history.
We may need entirely new social structures for distributing society's output—mechanisms decoupled from traditional concepts of "earning a living." Yet society, at present, has no coherent way of thinking about how such a world might function, or how the world could make a transition from the current economic system to some future, radically different way of orchestrating the flow of goods, services, and ideas in this new Age of AI. For military leaders, the risk of such widespread social instability presents a security threat on par with conventional military challenges. Such unrest could undermine the very foundations upon which national security depends.
Extending and Using this Qualitative Model
This model is robust enough to be extended, refined, and used in a variety of ways that can benefit national security. (Specific military policy implications are discussed in the final section of this paper.)
Sharing the Model: The real power of this model will emerge as it's shared and used by others. Shared use will help double-check the validity of the analysis offered above. Other immediate benefits will come as it establishes a higher standard of discourse about the potential implications of AI. When AI enthusiasts or AI doomers make bold predictions, others can ask: On what timescale do you expect these events? Can you demonstrate your reasoning with this agent-based model?
When teams collaborate on scenario analysis and policy development, this model will help highlight areas of agreement and focus attention on disagreements. The framework is robust enough that even AI-based systems themselves could use it as a shared conceptual frame when working with human teams or when this agent-based model is connected with macro-economic models. More details of this current agent-based model can be seen in a white paper posted on carlpabo.com.
Policy Development: My initial work has focused on foresight and scenario analysis—predicting how agents will behave in the current, largely unregulated, environment where AI development in the U.S. faces minimal constraints. We need to start by developing a conceptual lens that helps us observe and orient in the current environment before addressing areas where regulation, or changes in national security policy, may be needed.
The model provides value in two distinct ways: First—as emphasized in everything above—it gives security experts a better view of the future context in which they must work. Second, it provides a frame in which they can consider how their actions (as agents in this system), or their warnings to other branches of government, can affect the system as a whole.
At this second level, the model will also provide a valuable “test bed” for exploring proposed regulations or changes in military policy. That is: We can begin to simulate how different policy interventions might alter the behavior of key agents. This analytic frame will help us address questions like: How would a system for universal basic income actually work? Who is legally responsible for potential rogue, or malicious, AI-based agents that may pose threats to society? How might different regulatory approaches affect the global balance of military AI capabilities? And how will society decide whether to grant “human rights” to (potentially) sentient robots?
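In code, such a test bed could be as simple as a transformation of agents' payoffs. The sketch below (hypothetical names throughout, reusing the choose_action idea from the earlier behavioral sketch) models a policy as a penalty on certain actions and reports which agents change behavior as a result.

```python
# Sketch of the policy "test bed" idea (all names hypothetical): a policy is
# modeled as a penalty applied to agents' payoffs, and we compare the behavior
# it induces against the unregulated baseline.

def choose_action(agent, actions, payoff):
    # As in the earlier behavioral sketch: pick the highest private payoff.
    return max(actions, key=lambda action: payoff(agent, action))

def compare_policy(agents, actions, payoff, penalty):
    """Return {agent_name: (baseline_action, regulated_action)}."""
    def regulated(agent, action):
        return payoff(agent, action) - penalty(agent, action)
    return {
        agent.name: (choose_action(agent, actions, payoff),
                     choose_action(agent, actions, regulated))
        for agent in agents
    }

# Agents whose chosen action changes show where the policy "bites";
# unchanged agents reveal where it lacks teeth.
```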
This model offers a framework to test potential strategies and consequences before implementing policies in the real world. For military strategists, this testing capability offers a crucial advantage: the ability to better anticipate cascading effects of policy decisions before they create unanticipated security vulnerabilities.
Navigating a Risky, Uncertain Future
This agent-based model provides a structured way to help navigate—as best we can—amidst this dangerous uncertainty. Obviously, it would be nice if there were some way to rely on experience or experiments (so we would know what happens next). The model—offered because we lack those other tools, and may need to act quickly—makes no claim of perfect predictive power. Rather, it serves as a disciplined framework for harnessing our best collective human judgment about expected patterns of behavior, offering us more clarity than we would have without it. It helps us recognize patterns that would not be apparent with a less systematic, less disciplined way of thinking about the future.
The stakes could not be higher. These new levels of complexity pose unprecedented challenges to democratic governance, economic stability, and social cohesion—in addition to all the direct new challenges facing the military in this Age of AI. We are already enveloped in a “fog of complexity,” and this fog will only grow denser as AI advances. Yet we must retain hope, think as clearly as possible, and develop OODA loops that function effectively within this increasingly challenging environment. Leadership requires precisely this balance of clear-eyed assessment, timely decision, and decisive action.
This complexity-induced cognitive burden has direct implications for military policy:
- It will, of course, require the military to adapt and use AI as best it can in every stage of its work.
- It suggests the need for streamlined bureaucratic structures that can make decisions and complete procurement at the pace required in an AI-accelerated world.
- It may necessitate a recalibration of our global military commitments in light of these new cognitive demands, so that the nation does not overcommit itself.
- It requires closer integration with State Department efforts to forge and maintain international alliances—creating networks that distribute the cognitive load of monitoring and responding to the challenges of a world made ever more complex by the rise of AI.
- It suggests that there would be immense advantages if global treaties (or even laws within the U.S.) could be set up to constrain the rate of change and give society more time to adapt.
Thus I argue: the fog of complexity is so thick that it changes everything, forcing us to think more carefully—forcing us to adopt a wider field of view and reconsider every aspect of our approach to national security.
We cannot afford to sleepwalk into the future. We cannot hope, as we might with some other existential risks such as nuclear war or a meteor strike, to find some way to avert the whole crisis.
The future will come, and we must start getting ready.
Carl O. Pabo, Ph.D., is an Andrew W. Marshall Scholar and strategic thinker focused on developing new analytical frameworks and new conceptual “scaffolds for thought” to help society address complex global challenges. Previously, he served as Professor of Biophysics at MIT and Investigator at the Howard Hughes Medical Institute, after appointments at the Johns Hopkins University School of Medicine. Dr. Pabo is a member of the American Academy of Arts and Sciences and the National Academy of Sciences.