Saturday, October 12, 2024

Why The Techbros Back Trump And Vance Is Their Man In The White House

thebulletin  |  Since the emergence of generative artificial intelligence, scholars have speculated about the technology’s implications for the character, if not nature, of war. The promise of AI on battlefields and in war rooms has beguiled scholars. They characterize AI as “game-changing,” “revolutionary,” and “perilous,” especially given the potential for great power war involving the United States and China or Russia. In the context of great power war, where adversaries have parity of military capabilities, scholars claim that AI is the sine qua non of victory, that is, absolutely required to win. This assessment is predicated on the presumed implications of AI for the “sensor-to-shooter” timeline, the interval between acquiring a target and prosecuting it. By adopting AI, or so the argument goes, militaries can compress the sensor-to-shooter timeline and maintain lethal overmatch against peer adversaries.
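To make that claim concrete, the sensor-to-shooter timeline can be modeled as the sum of the latencies of each stage in the kill chain. The sketch below is purely illustrative: the stage names and minute figures are hypothetical assumptions, not data from the article, and serve only to show the arithmetic behind the proponents’ argument.

```python
# Toy model of a sensor-to-shooter timeline: the interval between
# acquiring a target and prosecuting it, modeled as the sum of the
# latencies of each stage in the kill chain. Stage names and numbers
# are hypothetical, chosen only to illustrate the argument.
BASELINE_MINUTES = {
    "detect": 5.0,      # sensor acquires a candidate target
    "identify": 20.0,   # analysts confirm the target
    "decide": 15.0,     # commander authorizes engagement
    "engage": 5.0,      # weapon is employed
}

def sensor_to_shooter(stages: dict[str, float]) -> float:
    """Total timeline is simply the sum of per-stage latencies."""
    return sum(stages.values())

# The proponents' claim, in this model: AI-enabled decision support
# compresses the human-intensive stages, shortening the whole chain.
ai_assisted = {**BASELINE_MINUTES, "identify": 4.0, "decide": 6.0}

print(f"baseline:    {sensor_to_shooter(BASELINE_MINUTES):.0f} min")  # 45 min
print(f"AI-assisted: {sensor_to_shooter(ai_assisted):.0f} min")       # 20 min
```

The model also makes the authors’ counterpoint visible: compressing two stages of a workflow is an organizational efficiency, not in itself a revolution in the character of war.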

Although understandable, this line of reasoning may misdirect military modernization, readiness, and operations. While experts caution that militaries are confronting a “eureka” or “Oppenheimer” moment, harkening back to the development of the atomic bomb during World War II, this characterization distorts the merits and limits of AI for warfighting. It encourages policymakers and defense officials to follow what can be called a “primrose path of AI-enabled warfare,” which is codified in the US military’s “third offset” strategy. This vision of AI-enabled warfare is fueled by gross prognostications about, and over-determination of, emerging AI-enhanced capabilities, rather than by rigorous empirical analysis of their implications across all levels of war: tactical, operational, and strategic.

The current debate on military AI is largely driven by “tech bros” and other entrepreneurs who stand to profit immensely from militaries’ uptake of AI-enabled capabilities. Despite their influence on the conversation, these tech industry figures have little to no operational experience, meaning they cannot draw on first-hand experience of combat to justify their claims that AI is changing the character, if not nature, of war. Instead, they capitalize on their impressive business successes to promote a new model of capability development through opinion pieces in high-profile journals, public addresses at acclaimed security conferences, and presentations at top-tier universities.

To the extent that analysts do explore the implications of AI for warfighting, such as during the conflicts in Gaza, Libya, and Ukraine, they highlight limited (and debatable) examples of its use, embellish its impacts, conflate the technology itself with the organizational improvements it enables, and draw sweeping generalizations about future warfare. It is possible that AI-enabled technologies, such as lethal autonomous weapon systems or “killer robots,” will someday dramatically alter war. Yet the current debate over the implications of AI for warfighting discounts critical political, operational, and normative considerations which suggest that AI may not have the revolutionary impacts its proponents claim, at least not now. As suggested by Israel’s and the United States’ use of AI-enabled decision-support systems in Gaza and Ukraine, there is a more reasonable alternative. In addition to enabling cognitive warfare, AI will likely allow militaries to optimize workflows across warfighting functions, particularly intelligence and maneuver. This will enhance situational awareness; provide efficiencies, especially in terms of human resources; and shorten the course-of-action development timeline.

Militaries across the globe are at a strategic inflection point in preparing for future conflict, but not for the reasons scholars typically assume. Our research suggests that three related considerations have combined to shape the hype surrounding military AI, informing the primrose path of AI-enabled warfare. First, that path is paved by the emergence of a new military-industrial complex that is dependent on commercial service providers. Second, this new defense acquisition process is both cause and effect of a narrative of a global AI arms race, which has encouraged scholars to discount the normative implications of AI-enabled warfare. Finally, while analysts assume that soldiers will trust AI, a prerequisite for the human-machine teaming that AI-enabled warfare requires, such trust is not guaranteed.

What AI is and isn’t. Automation, autonomy, and AI are often used interchangeably, but erroneously so. Automation refers to the routinization of tasks performed by machines, such as the automatic reordering of depleted classes of military supply, under overall human oversight. Autonomy moderates the degree of human oversight of tasks performed by machines, such that humans are in, on, or off the loop. When humans are in the loop, they exercise ultimate control of machines, as is the case for the current class of “conventional” drones such as the MQ-9 Reaper. When humans are on the loop, they pre-delegate certain decisions to machines while retaining the ability to intervene, an arrangement scholars debate in terms of nuclear command and control. When humans are off the loop, they outsource control to machines entirely, yielding a new class of “killer robots” that can identify, track, and engage targets on their own. Thus, automation and autonomy are protocol-based functions that largely retain a degree of human oversight, which is often high given humans’ inherent skepticism of machines.
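The in/on/off-the-loop distinction is, at bottom, a classification of how much decision authority is delegated to the machine. The following minimal sketch encodes that taxonomy; the enum, its wording, and the helper function are illustrative assumptions for exposition, not anything drawn from doctrine.

```python
# A minimal sketch of the human-oversight taxonomy described above.
# The enum and the example systems are illustrative only.
from enum import Enum

class HumanOversight(Enum):
    IN_THE_LOOP = "human authorizes every engagement"         # e.g., MQ-9 Reaper
    ON_THE_LOOP = "human supervises pre-delegated decisions"  # can veto the machine
    OFF_THE_LOOP = "machine identifies, tracks, and engages"  # "killer robots"

def requires_human_approval(mode: HumanOversight) -> bool:
    """Only in-the-loop systems gate each lethal action on a human decision."""
    return mode is HumanOversight.IN_THE_LOOP

for mode in HumanOversight:
    print(f"{mode.name}: {mode.value} "
          f"(per-shot approval: {requires_human_approval(mode)})")
```

Framed this way, the article’s point is that automation and autonomy occupy the first two rungs of this ladder, while the revolutionary claims made for military AI implicitly assume the third.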

