This is the paper I wrote for the OSD Operational Challenges call for papers, also composed to support the C2 ECCT problem framing. - DJL

Avoiding a Classic Strategy Blunder in the Third Offset – the Two Types of C2 Challenges

Lt Col Dave “Sugar” Lyle, USAF
LeMay Center, Air University
david.lyle.1@us.af.mil
DSN 493-3789
Cell 913-306-4712

War has always been about both science and art – we use science to create military capabilities that can suppress or remove obstacles to the harmonious social, political, and economic intercourse that promotes our national security and interests. Judging by the questions OSD did not ask in the Operational Challenges Call for Papers, we may need a greater focus on the tougher part of our innovation challenge, especially when it comes to the future of our command and control. This paper illuminates the problem and suggests ways our operational science and art can better serve strategy in the short term, and better inform our planning and programming choices in the long term.

The Two C2 Challenges

At the Air Force Association Conference in September 2016, two very senior US defense leaders shared their perspectives on different but equally important aspects of the multi-domain command and control challenge. Deputy Secretary of Defense Robert Work discussed the Third Offset, and the combinations of concepts and technology that would enable rapid communication across the sensor-shooter complex, delivering combat power quickly enough to overmatch our adversaries. Chairman of the Joint Chiefs of Staff Joseph Dunford spoke about the challenge of command at our most senior levels, describing both our means of conceptualizing war and the mechanisms by which we plan and execute it as inadequate for the task. It’s important to understand the complementary but distinct nature of both challenges before addressing the future of “multi-domain” command and control.

How C2 Theory Describes the Challenges

In the classic RAND study Command Concepts: A Theory Derived from the Practice of Command and Control, Carl Builder, Steven Bankes, and Richard Nordin surveyed the history of command and control, noting that “…one of the most consistently evident topics is some vision of the conduct of the operation: what could and ought to be done in applying military force against an enemy…The source of such visions, of course, lies inside human minds – in complex sets of ideas that might be called ‘command concepts’…We define command concept as a vision of a prospective military operation that informs the making of command decisions during the operation.” This is the part of the C2 challenge that Gen Dunford is most worried about – to support the President and SECDEF, he needs the ability to bring senior national leaders and commanders together, focus them on informed and accurate understandings of the situation, define the problems, set short- and long-term goals and risk tolerances, and establish which military options are available to mitigate threats and take advantage of opportunities. One can think of this as the design problem, which is mostly about problem framing and developing command concepts that, along with other levers of power, will produce the social and political outcomes that we desire and seek to sustain.

Once the command concept is formed and disseminated, there is another equally daunting challenge: translating the strategy into execution within tactical military echelons. Deputy Secretary Work emphasizes this when he talks about partially automated battle networks consisting of four interconnected grids: a sensor grid; a command, control, communications, computer and intelligence (C4I) grid; an effects grid; and a logistics and support grid. Executed correctly, the grids fuse to support the achievement of command concepts. This is the engineering problem, the execution-focused problem solving that establishes the physical and detailed sequences of action that accomplish the aims of strategy, using the assumptions and limitations provided in the command concept. But if one is talking about “grids” and “effects” together, one needs to peel back the assumptions behind what the “grids” mean – are they metaphorical, a scientifically quantifiable construct, or a mixture of both? This is perhaps the most important question we can ask, and it evokes a centuries-old military debate.
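To make the interlocking nature of these four grids concrete, the minimal sketch below models them as a simple pipeline. This is a hypothetical illustration in Python, not a depiction of any fielded system; the function names, the sample contact data, and the one-line stand-in for commander’s intent are all assumptions, and the brittleness of that last element is exactly the issue raised later in this paper.

# A minimal, hypothetical sketch of the four-grid battle network,
# modeled as a simple pipeline. Real grids are vast distributed
# systems; this only illustrates how the pieces interlock.

def sensor_grid():
    """Detect and report contacts (hypothetical static data)."""
    return [{"id": "contact-1", "type": "radar", "location": (36.1, 44.2)}]

def c4i_grid(contacts, commanders_intent):
    """Fuse sensor reports and nominate actions consistent with intent."""
    return [c for c in contacts
            if c["type"] in commanders_intent["valid_target_types"]]

def logistics_grid(taskings):
    """Constrain taskings by available weapons, fuel, and sorties."""
    available_sorties = 1  # hypothetical constraint
    return taskings[:available_sorties]

def effects_grid(taskings):
    """Execute the approved taskings."""
    for t in taskings:
        print(f"Striking {t['id']} at {t['location']}")

# The "fusion" Work describes: each grid's output is the next grid's
# input, all bounded by a command concept the network cannot supply
# for itself.
intent = {"valid_target_types": {"radar"}}
effects_grid(logistics_grid(c4i_grid(sensor_grid(), intent)))

Note that the entire pipeline hinges on the static intent variable – the same dependence on a relatively fixed command concept that the rest of this paper questions.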

The Classic Strategic Blunder

Many years ago, a bright young company-grade officer and combat veteran wrote his first published article, a critical review of a book that represented an entire class of then-recent military theories that tended to “turn the whole into an artificial machine, in which psychology is subordinated to mechanical forces.” He determined that the proposed system made impossible demands on the state’s network of logistics and fortresses, and almost completely ignored the actions of the enemy. But by far his most serious objection was that “No separation of concepts, however logically correct, can be useful so long as its purpose is not stated.” In other words, there was no way to judge the worth of the theory, or of the concepts it described, without wargaming their ability to achieve desired political outcomes – concepts and tactics can be general, but strategies must be specific.

Two decades later this same critic – now a general officer with even more battlefield experience in the most significant wars of his day – described the same theoretical defect that had spurred him to become a theorist himself. He wrote that most military thinkers tended to “direct their inquiry exclusively towards physical quantities, whereas all military action is intertwined with psychological forces and effects,” and elsewhere that “One might say that the physical seem little more than the wooden hilt, while the moral factors are the precious metal, the real weapon, the finely honed blade.” Those moral forces “will not yield to academic wisdom. They cannot be classified or counted. They have to be seen or felt.” We now know this author as Carl von Clausewitz, and his treatise as On War. We have all read his words, but it seems we fail to grasp the fundamental reason he picked up his pen in the first place, as we risk repeating the scientific overreaches of von Bülow – the author of the reviewed work – ourselves.

There’s a modern term for the incomplete scientific philosophy that Clausewitz rejected as insufficient to serve the requirements of strategy. As described by MIT’s Donald Schön, Technical Rationality is “instrumental problem solving made rigorous by the application of scientific theory and technique,” a philosophy that is “implicit in the institutionalized relations of research and practice, and in the normative curricula of professional education.” Under this paradigm, “real knowledge lies in the theories and techniques of basic and applied science.” But this type of inquiry usually falls short when it comes to weighing moral factors, or the tradeoffs between competing value-laden options and irreconcilable ethical dilemmas. These are the intangible and often capricious social dynamics that mark the difference between knowledge and wisdom – a distinction that cannot be captured in formal or universal logical functions divorced from specific contexts, but must be fitted to specific situations.

Technical Rationality Applied to Warfare

This reality has not prevented advocates on either side of the debate from using the same science to make their claims, as Sean Lawson recently described in his study Nonlinear Science and Warfare. In the late 1990s and early 2000s, adherents of Network Centric Warfare (NCW) promised that greater connectivity and information sharing would lift the “fog of war,” allowing us to dramatically increase mission effectiveness by pushing “Power to the Edge” with self-synchronization of distributed forces, sensing and disrupting the enemy system before its strategies have a chance to emerge. But there is much that the NCW mindset either obscures or ignores, including fundamental scientific findings about the computability and intractability of certain aspects of social problems, the lack of empirical evidence for some of its claims, its high dependence on a relatively static command concept and commander’s intent to enable distributed, decentralized, and autonomous operations, and the analytic blindness that is introduced and propagated when complex problems are reduced to a decision algorithm in the value functions of “grids.” Any general procedure embeds value functions relating to ethical and moral issues – this is the key issue in debates between those advocating artificial intelligence and those favoring intelligence augmentation. Additionally, advocates for automation and distribution usually proceed from a hazy and unstated “theory of victory” that roughly equates military victory with the fast and efficient striking of targets, with little to say about what social conditions would make that tactical success meaningful, or about the situations in which maintaining operational tempo simply means proceeding quickly in the wrong direction. If the same technology that fuses more and more information can be spoofed or jammed in more ways, or if the entire system fails when one part is damaged or misaligned, then the increased complication and coupling may introduce new vulnerabilities at the same time we gain new capabilities – even without an external malign actor trying to thwart us.
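To see how that analytic blindness can be introduced, consider the minimal sketch below of the kind of value function an automated “effects grid” might use to rank targets. Everything here is a hypothetical stand-in – the field names, the weights, and the scoring formula – but it illustrates how ethically loaded tradeoffs get silently frozen into parameters that no commander ever explicitly adjudicated.

# A minimal, hypothetical sketch of an "effects grid" value function.
# Every weight below silently encodes a moral and strategic judgment.

from dataclasses import dataclass

@dataclass
class Target:
    name: str
    military_value: float    # 0-1: contribution to enemy combat power
    strike_confidence: float # 0-1: estimated probability of success
    collateral_risk: float   # 0-1: estimated risk to noncombatants
    escalation_risk: float   # 0-1: risk of unintended escalation

# Hypothetical weights: who decided collateral risk is "worth" 0.8,
# or that escalation risk counts half as much as military value?
W_VALUE, W_CONF, W_COLLATERAL, W_ESCALATION = 1.0, 0.5, 0.8, 0.5

def score(t: Target) -> float:
    """Reduce an ethically loaded tradeoff to a single number."""
    return (W_VALUE * t.military_value
            + W_CONF * t.strike_confidence
            - W_COLLATERAL * t.collateral_risk
            - W_ESCALATION * t.escalation_risk)

targets = [
    Target("radar site", 0.7, 0.9, 0.1, 0.2),
    Target("command bunker", 0.9, 0.6, 0.5, 0.7),
]

# The resulting sort looks objective, but it optimizes only the values
# its designers chose to encode - the analytic blindness in question.
for t in sorted(targets, key=score, reverse=True):
    print(f"{t.name}: {score(t):.2f}")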

While physical factors are still foundational to military capability, moral factors ultimately define the worth of a deterrent force, a target, an operational concept, or a theory of victory. Because of this, only commanders informed with robust knowledge of the dynamic political context can negotiate the conversion between tactics and strategy, generating and modifying the command concepts that are likely to produce the desired physical and psychological effects. In an increasingly connected world, however, this translation is becoming progressively more difficult to fathom, let alone accomplish. As Builder and his team described the challenge, “In an age of abundant, almost limitless, information and communications capabilities, decisionmakers are increasingly faced with the problem of too much information, rather than too little…The prevailing approach to this problem has been to apply still more technology in the form of computers and software, in order to sort through, filter, and display the information in ways that will assist the commander in focusing on the ‘right’ information. Of course, that approach assumes that the commander and his responsibilities, circumstances, and decisions are understood well enough for his information needs to be anticipated…”

And there’s the rub. In the ideal world of the Technical Rationalist, you can anticipate the commander’s C2 needs, completely capture the command concept in an intent statement, and design a decentralized command and control system around a relatively constant commander’s intent. In the real world, you usually cannot, because complexity, uncertainty, and volatility quickly change the strategic effect of your pre-planned tactical tasks. The same problem Clausewitz pointed out in the early 1800s still applies today – there are competing social values that cannot be mathematically codified and optimized in an “effects grid.” Commander involvement is required to adjudicate the tradeoffs and accept the risks that come with reconciling competing social interests, in a world in which one video can prompt the redirection of national policy, or one ill-considered, rules-based, pre-delegated response could trigger unintended escalation in a highly volatile fight with another nuclear or cyber power.

What We Can Do to Avoid the Classic Blunder

Builder’s study concluded that “…contemporary theories about command and control (C2) are, by and large, theories about organizations and communications…A comprehensive theory of C2 should explain not only how to organize, connect, and process information, it should also explain something about the quality of the ideas and their expression and about how the qualities of people contribute to or detract from C2, not just how they should be organized and wired together. What is needed is a deeper theory that encompasses high-level, creative aspects of command as well as the direct-order and control aspects.” This echoed Rear Admiral Henry Eccles in his 1965 classic Military Concepts and Philosophy: “Those who exercise political and military power must be able to deal with the tangible and the intangible. They must recognize the distinction between the ‘puzzle’ [engineering problem] and the ‘difficulty’ [design problem].” Thus, in the design of our C2 systems and our human capital plans, we need both Chessmasters (mostly focused on tactics), who can solve the problems that strategists frame in terms of operational and tactical plans, and Game Designers (mostly focused on strategy), who specialize in the conceptual planning that gives our tactics social value. The Game Designers determine which games we should play and which ones to avoid, and describe what winning means when the game never really ends against a near-peer competitor. We need to promote both kinds of specialists in the system, and focus our human capital plans more on building command teams than on individual commanders.

Strategy requires specifics, and taking a general, capabilities-based approach to building a future force could leave us with a highly expensive “designed for the mean” force that is not suited to the specific challenges and the actual adversaries that will confront us. We must take a cue from our new threat-based National Military Strategy and conduct wargames and experiments based on specific scenarios – against specific actors in specific regions – to better reveal our true requirements and constraints.

Having design discussions requires a shared understanding of how the larger system is connected, what our interests are, how we are framing the challenges and opportunities in various operational environments, and what assumptions those assessments are anchored in. This requires us to move beyond two-dimensional decision support products and video teleconferences, and to adopt more visual, dynamic, data-informed representations of the operational environment. By combining Geographic Information Systems with various cultural, economic, and logistical overlays, and introducing accurate, data-informed modeling and simulation to show dynamic connections, we can enhance the presentation of the engineering problem and promote better intuitive understandings of how the larger system is connected. We can create a common virtual meeting space where we use visual metaphor to make assumptions explicit, tying the design problem to the engineering challenge. Just as animated weather maps based on accurate weather data help us make travel decisions at a glance, this “shared consciousness” will be critical to fixing the current options “cacophony” that Gen Dunford describes. By promoting shared awareness of how the various command activities are connected at a global level, we will help our national-level decisionmakers make better informed decisions about the global apportionment of US assets and capabilities. These same visualization, simulation, and collaboration tools can also be used to extrapolate new challenges in the future, experimenting with various future force components and presentation options before we commit ourselves to highly expensive but empirically unproven concepts and systems.
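As one small illustration of the layered, shareable picture this implies, the sketch below composes several geographic overlays into a single interactive map that could be posted to a common virtual meeting space. It assumes the open-source geopandas and folium Python libraries, and the file names, layer choices, and map center are purely notional.

# A minimal sketch of a layered common operating picture, assuming
# open-source tools (geopandas, folium) and hypothetical data files.
import geopandas as gpd
import folium

# Hypothetical overlays: each file is a GeoJSON layer an analyst
# might maintain (population, infrastructure, logistics routes).
layers = {
    "Population density": "population.geojson",
    "Power grid": "power_grid.geojson",
    "Logistics routes": "logistics_routes.geojson",
}

# Base map centered on a notional operating area.
m = folium.Map(location=[36.0, 44.0], zoom_start=6)

# Add each overlay as a toggleable layer, so participants can turn
# the assumptions behind the picture on and off explicitly.
for name, path in layers.items():
    gdf = gpd.read_file(path)
    folium.GeoJson(gdf, name=name).add_to(m)

# The layer control makes the composition of the picture itself
# visible - the visual metaphor that makes assumptions explicit.
folium.LayerControl().add_to(m)

# Export a single shareable artifact for the collaboration space.
m.save("common_operating_picture.html")

The point of such a sketch is not the particular toolset but the design choice: overlays that can be toggled in front of the whole group make the assumptions behind a shared picture explicit rather than baked in.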

Above all, we need to acknowledge our own tendency to seek solutions in engineering that can only be found in design. The two are mutually dependent for ultimate success. It is still valid to pursue engineering solutions to identified capability gaps without a specific theory of victory in mind – bottom-up innovation remains crucial to our process. But if highly expensive and complicated capability gap solutions are developed in stovepipes, we will likely end up with a brittle force that is insufficient to deal with holistic reality. Where we can, we must seek simplicity, modularity, and adaptive variety to hedge against uncertain futures. Finally, we need constant consultation between the tactical Chessmasters and the strategic Game Designers, keeping both in the system and letting each do what it does best.