Asymmetric AI warfare in the era of cheap intelligence

Asymmetric warfare, in which adversaries leverage unconventional or disproportionate means, has taken on new urgency with the pace of technological advancement over the last several years. Artificial intelligence sits at the forefront of that advancement: it is not just a competitive advantage but a necessity for detecting, measuring and mitigating emerging threats. The actions countries take today can unlock opportunities for advancement and significantly affect their economic competitiveness, social development and security. Inaction could have equally far-reaching negative impacts.

On October 24, 2024, the White House released the Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence, framing a collective imperative to confront the evolving nature of global threats and unknowns in AI.

The U.S. government has an incentive to shape the global development and regulation of AI technologies, both to ensure that the U.S. leads the sector and to pursue a paradigm that aligns with democratic norms. The goal of AI innovation must be to inspire a race to the top: a pursuit of excellence that balances safety, ethics and strategic foresight. The world is at an AI inflection point, and the U.S. can seize the opportunity with targeted investments, setting the standard for innovation that aligns with democratic values and global stability. This requires partnerships between public and private sector stakeholders, so that competitive and financial incentives align with ethical and strategic goals. The alternative is what Liv Boeree, a British science communicator, has equated to “Moloch’s trap” in game theory: a proverbial race to the bottom in which near-term incentives drive mutually destructive behaviors.

Globally, AI foundation models are being developed under widely varying norms, principles and governance structures. This divergence reflects differing values and priorities across borders and cultures. For democracies like the U.S., privacy, consent and ethical boundaries shape the data sets used to train these models, and how they are optimized. In contrast, authoritarian and autocratic governments operate with fewer constraints, enabling faster and potentially more exploitative development. Adversaries unconstrained by such considerations may create systems optimized for cyberattacks, election interference and other destabilizing efforts. The asymmetry lies not only in capability, but also in intent and approach, making it crucial to maintain vigilance over what has already been developed and what is being invested in globally.

Adding to the complexity of the AI market is the precarious financial state of many AI ventures. At present, most AI business models are unprofitable and depend on rapid infusions of very large venture capital or corporate investment. This dependency creates opportunities for financial influence that can shape the trajectory of the field over the next several years; ownership of foundation model IP may come down to a war of financial attrition, decided by the vast war chests a few key players have accumulated over the last several decades of technology consolidation.

AI development relies heavily on private enterprise. If private sector priorities diverge from national security interests, the U.S. risks losing control over critical AI capabilities both at home and internationally. Strategic investments by the U.S. and other democratic governments can guide AI development in directions aligned with public good and national security objectives, reducing reliance on less predictable funding sources.

The perceived tension between safety and innovation is often overstated; they are complementary goals. The structures that resolve this tension and encourage innovation, including clear regulatory frameworks, robust ethics guidelines and proactive risk management, are crucial. These frameworks must be navigable and flexible while maintaining a strong, unified foundation.

The U.S. can encourage responsible innovation across industries. SpaceX, for example, disrupted the aerospace industry as the first private company to develop a liquid-propellant rocket that reached orbit, an achievement made possible by a shared culture of relentless ambition. Encouraging responsible innovation means fostering an environment where taking calculated risks is encouraged and incentivized, and where the potential consequences are clearly understood and managed. Safety at scale requires two key strategies: aligning incentives for the private companies developing AI models and toolsets, and focusing on governance that enables innovation.

In the face of asymmetric threats, the U.S. must not only keep pace but lead with purpose. Through good governance, smart investments, the alignment of safety with innovation, and vigilance over global developments, the U.S. can ensure that it remains at the forefront of this transformative era, shaping the future rather than reacting to it.

Amy Jones is U.S. public sector AI lead at EY.

The views reflected in this article are the views of the author and do not necessarily reflect the views of Ernst & Young LLP or other members of the global EY organization.