Fifth Generation Warfare and Other Myths: Clarifying Muddled Thinking in our Current Defense Debates

By Dave “Sugar” Lyle

When you are talking about the command and control of forces that could destroy human society as we know it, you owe it to the rest of humanity to be precise with your terms and concepts – matters of such extreme importance demand the greatest degree of conceptual due diligence we can collectively muster. But the reality is that much of the debate over future force structure, command and control, and strategy writ large is littered with unexamined assumptions and muddled thinking, often cloaked in buzzwords that members of an organization become obligated to use once their leadership has adopted and promulgated them as guidance. We will always need “bumper stickers” to spread new ideas, but we must choose them carefully, and stay aware of their inherent limitations, before they are used to justify dangerous courses of action built on conceptual foundations of sand.

This article explores some of the most recent examples of imprecise language, unqualified assumptions, and outright myths one frequently encounters as the US Department of Defense attempts to build a ready force that can match lethality and military effectiveness with purpose in the emerging operational environment. We correctly sense that we need to update our organizations, technologies, and ways of thinking in a world where the Industrial Age models that brought us past success are becoming part of the problem in the Information Age. But as we pursue those updates, instances of muddled thinking must be identified, dissected, and reined in if we are to avoid unforced errors and disasters of our own making – the kind that come from failing to examine our own flawed assumptions early enough, and from ignoring a rich intellectual tradition that already provides adequate concepts for examining many of our modern challenges. The assertions below are representative of where many current conversations about modern warfare tend to converge, and this discussion is meant to illuminate the assumptions and logical claims of these positions in general, rather than to name or refute specific instances of their use.

Technology is changing the nature/character of war, therefore tech is the most important area we need to concentrate on

Current US military leaders have lately been describing the nature and character of war in ways that even academics would endorse, and they are fairly united in the belief that the human-based “nature of war” elements – the basic motivations of “fear, honor, and interest” – drive all conflicts and thus provide a basis for strategy, even as the character of war (the specific ways and means we use to fight) remains constantly in flux. These distinctions often become lost when we use imprecise descriptions and terminology in our discussions of war (more a matter of nature) and warfare (more a matter of character), leading us to confuse one for the other, or to discard viable legacy concepts – incompletely examined and understood – in favor of vague buzzwords that introduce new problems of their own.

Technology is a key component of any model of institutional adaptation, but it is only one of three elements a strategically capable organization must simultaneously manage. Creating competitive advantage against external foes requires a virtuous internal co-evolution between Ideas, Groups, and Tools, with each partially governing the development or degradation of the others. Past studies like the Office of Net Assessment’s 1992 study on Revolutions in Military Affairs have shown that historically “the greatest challenges during past periods of revolutionary change had been primarily intellectual, not technical.” Subsequent revisits of this study, even in light of the significant technological advances since then, continue to affirm that “Dealing with these challenges will require innovative thinking, new operational concepts and organizations, and new long-term strategies if the United States is to retain a dominant military position while avoiding imperial overstretch and economic exhaustion in the years ahead.” This means that innovation in “ideas about ideas,” not just “ideas about how to use tools,” is critical to the viability of any military organization: you have to understand how your organization learns collectively and how it determines what counts as reliable knowledge, and you have to question both periodically.

Effects can be described as grids, warfighting can be prosecuted algorithmically, and warfare is about winning force-on-force engagements at the tactical edge

The idea of approaching war rationally, scientifically, and often mathematically is an old one, and its philosophical tradition is alive and well today. It began in earnest with the birth of operational and strategic theories amid the French Enlightenment, when it was optimistically believed that all human activities would eventually be brought under the purview of science, calculation, and reason; the modern version of this school of thought can be described as a combination of positivism and technical rationalism. In the early 2000s, advocates of Network Centric Warfare believed the technology needed to validate the idea had finally arrived. In their view, war could be described as a series of “grids” composed of sensors, shooters, and targets.
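
To make concrete what the “grid” view actually computes, consider a deliberately minimal sketch in Python – every identifier, coordinate, range, and “value” below is hypothetical, invented only to illustrate the abstraction, not drawn from any real system:

```python
# Toy illustration of the NCW "effects grid" abstraction: warfare
# reduced to an assignment problem over sensors, shooters, and targets.
# All names, positions, ranges, and "values" are hypothetical.
# (Sensors are assumed to have already located the targets.)

targets = [
    {"id": "T1", "value": 9, "location": (10, 4)},
    {"id": "T2", "value": 5, "location": (2, 7)},
]
shooters = [
    {"id": "S1", "range": 12, "location": (0, 0)},
    {"id": "S2", "range": 6,  "location": (3, 8)},
]

def distance(a, b):
    # Euclidean distance between two (x, y) points
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Greedy pairing: service the highest-"value" target first with the
# first available shooter that can reach it. This is essentially the
# entire logic of the sensor-shooter-target grid.
available = list(shooters)
for tgt in sorted(targets, key=lambda t: t["value"], reverse=True):
    for shooter in available:
        if distance(shooter["location"], tgt["location"]) <= shooter["range"]:
            print(f"{shooter['id']} assigned to {tgt['id']}")
            available.remove(shooter)
            break
```

Note everything the sketch cannot represent: where the “value” numbers came from, whether servicing a target actually advances political aims, and how the adversary reinterprets the contest after each strike. Those are precisely the subjective elements the grid description leaves out.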

Resurrected recently in Third Offset discussions, the grids have grown, and key leaders in the Department of Defense have discussed them as if their validity were established. But this approach is a half-truth at best, betraying a tactical mindset that is mostly blind to strategic context. While it is true that identifying and servicing targets rapidly and accurately is a vital part of bringing lethality to warfighting, the mathematically focused grid description cannot adequately account for the fact that the real “effects grid” is ultimately subjective – more about cognitive and moral effects than physical or electromagnetic ones, although physical effects remain very important, and are usually a necessary prerequisite to pressing for social change through violence. Likewise, strategic success is more about achieving continuing advantage than about achieving “decisive” results, which merely present you with new sets of problems to solve or manage.

In a war with a near-peer competitor who has the ability to flip over the entire chessboard if pushed into desperate enough straits, good strategy is not about achieving checkmate within the defined grid and unchanging rules of chess. It is more about slowing or stopping the game clock while trying to find a less dangerous game to keep playing. And if we are competent strategists, we are steering toward games that we are better suited to play than our adversaries are.

The future of warfare is cognitive/fifth generation/multi-domain/fusion warfare

These are the greatest areas of muddled thinking, neglecting previously derived strategic concepts that would add clarity to our discussions. We grasp at new buzzwords because we are searching for ways to discuss something that seems new or remarkable to us, even if, in the broader scope of history, the phenomenon is neither. And occasionally, new buzzwords are trotted out to inoculate the purveyor against historically or theoretically informed criticism grounded in more durable concepts from the past. Those people are usually trying to sell something as well, and it is probably not going to be cheap.

“Cognitive warfare” is an especially murky area in terms of definition, as it has been mentioned in the same contexts as predecessors like “Information Operations,” which at one time included influence operations, electronic warfare, and cyber operations – all of which require different knowledge of the various mental and machine mechanisms in play. Warfare has always been cognitive: as discussed before, the real result we seek via tactical victory is not so much the physical result imposed on those who personally experience our lethality, but rather getting the people we have not killed to stop using physical violence to resist the unfolding of our desired future. We have always needed to collect, process, use, and protect information throughout human history, so this element is nothing new, and the term “cognitive” is hardly an advance on the traditional “physical-mental-moral” schema, which traces its conceptual parentage back to Plato’s division of ideas, mathematical entities, and the physical world, and which was described as “spheres” by J.F.C. Fuller and “levels of interaction” by John Boyd. What is different now is the multiple machine-based layers – sometimes called “stacks” – upon which our information is gathered, transferred, filtered, analyzed, and interpreted. Each level of the stack is a potential battleground in the new operational environment, bringing the competition into virtual environments that traditional legal authorities and notions of national sovereignty cannot adequately describe or address. But this does not mean that this combined human-machine “cognitive” environment will dominate the traditional physical or moral aspects of warfare – it means there are simply more ways to manipulate or attack across all three dimensions, given their mutual dependence on information systems.

Fifth Generation Warfare, described and dissected very capably for OTH by Peter Layton here, adds a fifth tier to a Fourth Generation Warfare construct that already obscures more than it explains once you peel back its layers and dig into its assumptions. The biggest problem with this conceptual approach is that, in trying to gain insights about the changing character of war, the theoretical construct often neglects or distorts insights about the enduring nature of war, and frequently confuses the two. If you take one component of a much richer, more dynamic web of interconnected ideas, groups, and tools, and hold it up above the others as the most important, you may learn something about that component, but you lose the bigger systemic picture that a more solidly grounded theory provides to explain the tensions between continuity and change across multiple dimensions.

Multi-domain warfare is the buzzword with the highest level of acceptance in Washington today, with various interpretations of what it means circulating throughout both DC and the blogosphere. Boiled down, the intent of most who advocate for it is to be domain agnostic in how we combine distinct capabilities across the traditional domains of air, land, maritime, space, and cyberspace in order to create synergistic effects. It also implies a higher level of integration, earlier in the planning and execution cycles, than we have practiced in the past – an extension, more than a repudiation, of past concepts of joint warfare and combined arms.

Fusion Warfare is conceptually sound in the sense that it provides a means of conducting agile battle management – consciously or not, its focus is primarily tactical, at the level of battle management rather than higher-headquarters command and control planning. It is a useful way to describe how we might quickly find and strike targets that have previously been identified as relevant and within commander’s intent – perfect for SCUD hunts, sinking enemy fleets, and the like. But as a concept it has much less to say about how one determines which targets are valid in the first place and why, how targeting them links to strategy and political outcomes, and how one deals with highly volatile combat against near-peer competitors with WMD, where previously issued commander’s intent may not remain valid after a significant event. While Fusion Warfare is a method for achieving checkmate in a game with defined and quantifiable rules, it offers little to the strategist or planner who must determine which games to play, which ones to put on the back burner, and which ones should not be played at all.

Artificial Intelligence is the key to future advantage

It is fairly risk-free to claim that AI will be central to the future, and no stretch to say that it will confer many advantages as machines increasingly take on tasks and perform them far faster and more accurately than people can. It is also accurate to say that those who fail to embrace AI may find themselves at a considerable disadvantage in some competitions. But unless we apply diligent thought to our decisions about what to delegate to adaptive algorithms operating at the speed of light, we may in retrospect find ourselves repeating the observation of Dr. Ian Malcolm in Jurassic Park: “Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.”

Artificial intelligence is not structurally or ethically neutral, even if it is agnostic to human values in its operation. Through repeated iteration it reinforces what it determines to be of value based on the implicit values we embed in its programming – and even in the physical design of the system itself, which can change the way the same program runs on different machines. AI can detect structure in data, but it cannot assess or compare values within rapidly changing social contexts: it can answer the questions we provide, but not ask good new ones. And it may be completely useless when competing and often diametrically opposed human values are in play – values that cannot be reduced to an optimization function and still meet the expectations of the human users the system should be serving. With increasing computational complexity, and the “normal accidents” that follow from the impossibility of predicting every outcome of ever more widespread AI, it will become increasingly difficult to control this interconnected human-machine system that we have been building without blueprints for decades. Cybernetics pioneer Norbert Wiener warned us in 1960 that “We had better be quite sure that the purpose put into the machine is the purpose we really desire,” but the reality is that each new task or component we add to this complex adaptive system will initiate a new social experiment.
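
A toy sketch makes the point about embedded values concrete. The option names and weights below are entirely hypothetical; the point is that the same flawless optimizer “recommends” different actions under different weightings, and nothing inside the algorithm can say which weighting is right:

```python
# Toy illustration: an optimizer inherits whatever values its designers
# encode as weights. All options and weights here are hypothetical.

options = {
    "strike_now":   {"speed": 0.9, "collateral_risk": 0.7},
    "wait_confirm": {"speed": 0.3, "collateral_risk": 0.1},
}

def score(option, weights):
    # Speed is rewarded and collateral risk penalized; the trade-off
    # lives entirely in the weights, not in the algorithm itself.
    return (weights["speed"] * option["speed"]
            - weights["collateral_risk"] * option["collateral_risk"])

for weights in ({"speed": 1.0, "collateral_risk": 0.5},
                {"speed": 0.2, "collateral_risk": 2.0}):
    best = max(options, key=lambda name: score(options[name], weights))
    print(weights, "->", best)   # first picks strike_now, second wait_confirm
```

The optimization is correct in both cases; only the embedded values differ, and the machine has no way to ask whether those values still fit the situation.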

Commander’s Intent can be held constant within a phase, enabling distributed control, and the best decision-making happens forward at the tactical edge

The Network Centric Warfare/technical rationalist warfighter’s dream requires holding the linkage of tactics to strategy constant, using the proxy of an unchanging commander’s intent to drive distributed and largely autonomous tactical operations “forward to the edge.” This works well for geographically localized combat actions against actors who do not pose an existential threat to the US (like our last two rounds of SCUD hunting against the Iraqi army, or our current strikes against ISIS and Al Qaeda), and it will be key to achieving tactical victory where autonomous systems are pitted against each other at high speed, as in fleet and base defense and in cyber combat. But this procedurally driven mindset could be dangerous, even disastrous, in a fight against a country with nuclear weapons (including electromagnetic pulse (EMP) capabilities), space capabilities, and advanced cyber capabilities – an adversary that can cripple major parts of our national and international infrastructure if it does not interpret our messages of threatened or actual violence the way we assumed it would.

Distribution of control is sometimes appropriate, and sometimes yields the opposite of the effect you want – it all depends on the specific political context and the degree of interdependence. In situations of high complexity, you need an agile and rapidly communicating network more than you need a clear, detailed plan. If your logistics are tightly coupled, your forces highly interdependent, or the political situation highly volatile, rapid distributed operations could be much worse than taking a knee, reestablishing connection, and finding out whether the previously received commander’s intent is still valid under the new circumstances. There are numerous examples in military history where autonomous distributed operations, divorced from the larger context, contributed to disaster – and some particularly sobering instances where nuclear war was averted only by a last-minute glove save from someone who applied caution and judgment over context-free procedure. These should be a warning to us as we consider delegating more and more decision authority to machines: our past record shows that we tend to underestimate the dangers of unexpected scenarios that our algorithmic procedures were not designed to handle, yet we keep building systems under the assumption that they can.

Faster OODA than your enemy = success in war

John Boyd’s OODA loop is a simple yet elegant explanatory framework for any kind of cognition-enabled action – Observe, Orient, Decide, Act – hence its popularity and widespread use in the DoD. But what Boyd was getting at with his actual OODA loop diagram – considerably more nuanced than the simple version cited in most instances – was that there is not just one cognitive process in play, and that the loop does not run in only one direction, since orientation also influences observation. Competitive advantage is gained by leveraging all of the mechanisms available to you across the physical, mental, and moral levels of interaction – but first you have to understand what they are, improving your own orientation while seeking to influence the cognitive processes of your adversary.

You can still use OODA to describe decision-making in the Information Age: we increasingly use machines to enhance our human OODA loops (human-machine teaming and intelligence augmentation), and we allow machines to complete sub-OODA loops autonomously for tasks that they can do faster and better than humans (AI and machine learning). The real trick is knowing when it is advantageous and wise to do so (like having algorithms help maintain the in-flight stability of the B-2), and when we are letting computerized bureaucracy run amok at the speed of light with insufficient connection to human wisdom – potentially dangerous, or even fatal, in some situations.
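
One way to picture that delegation judgment – offered strictly as a hypothetical sketch, with invented threat categories and responses – is a machine sub-OODA loop that acts autonomously only inside the envelope the issued commander’s intent anticipated, and escalates to a human the moment observations fall outside it:

```python
# Hypothetical sketch of a machine "sub-OODA loop" nested inside a
# human decision loop: the machine acts autonomously only while its
# observations stay inside the envelope the human intent covered.

def machine_sub_ooda(observation, intent):
    # Orient: is this a situation the issued intent anticipated?
    if observation["threat_type"] not in intent["anticipated_threats"]:
        return "escalate_to_human"   # intent may no longer be valid
    # Decide/Act: fast, bounded response the human pre-authorized.
    return intent["authorized_response"][observation["threat_type"]]

intent = {
    "anticipated_threats": {"inbound_missile"},
    "authorized_response": {"inbound_missile": "engage_with_interceptor"},
}

print(machine_sub_ooda({"threat_type": "inbound_missile"}, intent))
print(machine_sub_ooda({"threat_type": "unidentified_launch"}, intent))
```

The hard part, of course, is everything the sketch waves away: deciding which threats belong in the anticipated set in the first place, and recognizing when the entire envelope has been overtaken by events.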

The traps that many fall into with OODA are: 1. acting as if there is one monolithic adversary OODA loop that you can influence through tactical actions; 2. thinking that the point is cycling through the complete process faster than the enemy; and 3. believing that it is always advantageous to paralyze your adversary’s OODA cycles. On the first, there are many competing formal and informal decision loops in play that aggregate into something we can call an enemy’s OODA loop descriptively, but not prescriptively – the individual interactions still count, and cannot all be accessed at once. On the second, the point is not speed but advantage: in a knife fight or an aerial dogfight, speed is life, but in a standoff between nation states with weapons of mass destruction, you are probably trying to slow down rather than speed up the escalation as you seek off-ramps and negotiation. And on the third, in the very dangerous scenarios just discussed, paralyzing your enemy’s strategic decision-making cycles is probably the LAST thing you want to do, or even to appear to be doing – you want an enemy with WMD thinking as clearly and rationally as possible. This is especially true if there is a chance of miscalculation leading to nuclear or EMP attacks, massive destruction of the satellite constellations on which worldwide commerce, communication, and future access to space depend, or cyber-attacks that could cripple major sectors of national infrastructure.

More connection = more resilience = more situational awareness = warfighting advantage

Most advocates of this position since the days of Network Centric Warfare have cited Moore’s Law and Metcalfe’s Law as their justification – the latter stating that the value of a telecommunications network is proportional to the square of the number of connected users. And while the most responsible advocates for NCW have urged caution that “connecting all the things” carries as many challenges as potential advantages, and requires careful and diligent study, that caution is often skipped or neglected by those advocating this bumper sticker. In truth, these two interrelated “laws,” originally meant to help understand the growth of the digital economy, have often been oversimplified and overextended beyond their original explanatory power, yielding predictions that are overoptimistic at best and neglectful of the downsides of connection at worst – as with most topics worth digging into, the truth is far more nuanced. It is not just the number of connections that counts: the task to be done and the topology of the network also determine how robust or resilient that network is.
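
For reference, the scaling claims at issue, with the more conservative estimate proposed by Briscoe, Odlyzko, and Tilly in their 2006 critique “Metcalfe’s Law is Wrong” shown for contrast:

```latex
% Metcalfe's Law: network value scales with the number of possible
% pairwise connections among n users
V_{\text{Metcalfe}}(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2}

% Briscoe-Odlyzko-Tilly correction: most potential links are low-value
V_{\text{BOT}}(n) \propto n \log n
```

The gap widens quickly: at n = 1,000 users, the first expression counts 499,500 potential links, while n log n (natural log) yields roughly 6,900 – a back-of-the-envelope reminder that most possible connections add little value, and some add only attack surface.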

Creating large battle networks has given us new capabilities that were unthinkable only a few decades ago, and has let us prosecute war with a previously unthinkable economy of force, especially with our air components. But those networks have also created new vulnerabilities that we are now experiencing yet still barely understand. Every level of the stack – hardware, programming languages, operating systems, user interfaces, and the users themselves – presents new opportunities for exploitation and disruption to the savvy hacker who can control the gateways of this highly connected, mostly digital environment. The ability to connect multiple systems does not equal new capability in itself, and it may actually introduce new vulnerabilities, as we are finding now that computers run our financial markets and the Internet of Things has reached routine kitchen appliances that hackers can conscript into botnets to attack us. In the past, Air Force leaders remarked that they feared a capable hacker might someday convince one of our airplanes’ computers that it was a toaster; today those hackers can use our own toasters to do it, because we decided to connect all the things without foresight or security in mind.

The other thing “connecting all the things” will not give us is understanding and wisdom drawn from the data – even though greater data sharing does offer the potential for better decision-making and execution, IF we filter and use it wisely.

Conclusion

The cost of building systems before understanding is high – the results can include misguided policies, bogus requirements, wasted resources, and vicious rather than virtuous cycles. Education and robust discussion are the keys to teasing out the most valuable insights from all perspectives, and to placing them within holistic and systemically sound conceptual frameworks that preserve the wisdom of our legacy concepts while adding the clarification and context we need to make wise decisions – or at least to avoid obviously bad ones – in the Information Age. Clearer thinking built upon well-constructed theories and concepts will help us better discern the sources of continuity that should form the basis of our strategies, and detect the changing elements that will require agile responses over detailed plans, without confusing the two. If we fail, we risk becoming slaves to our own systems.

Dave “Sugar” Lyle is an Air Force strategist and currently serves as Deputy Director of Strategy and Concepts at the LeMay Center for Doctrine Development and Education.

The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force or the U.S. Government.

Original posting on Over the Horizon available here: https://othjournal.com/2017/12/04/fifth-generation-warfare-and-other-myths-clarifying-muddled-thinking-in-our-current-defense-debates/