Feedback Request for "The Operational Environment and the Changing Character of Future Warfare"

TRADOC G-2 would appreciate your feedback on our paper, "The Operational Environment and the Changing Character of Future Warfare". Please share your unclassified comments in the discussion thread below.

  • Well written and smart paper. My primary disagreements lie in the paper's overly optimistic view of future threat budgets and technology compared to our own and our allies'. Look at Samsung and Korean carmakers contrasted with a totally dark nighttime North Korea. Examine a sanctioned Iran versus oil-rich Sunni nations and a fully capable Israel. Then look at Russian/Chinese inability even to fully reverse-engineer aircraft engines, or to innovate in ways that could come close to fielding large numbers of costly systems, given far smaller defense budgets than our own and our allies'.

    If ground technology is so easily replaced with the new, why the scores of older systems in all adversary militaries? Prototypes are fine, few in number, and often of questionable quality/technology compared to our own. New Russian tanks and jet aircraft? Why are so many old ones still around? Capable Russian and Chinese helicopters? How many compared to our fleet? We will have hundreds of fielded stealth fighters and bombers before threats have them in the tens, and the trend will continue out to our thousands versus their hundreds, if any. These superior joint systems and many current Army systems will still be in our inventories well into both timeframes of this paper.

    By the paper's own admission, both our and threat militaries will increasingly be forced to hide and disperse to survive. How then will swarms of UAS and loitering munitions find multiple targets under trees and inside cities? What about other new technologies such as lasers and microwave weapons? A Hellfire missile is hardly cheap, yet loitering swarms and munitions would need to be equally expensive and heavy, especially with costly sensors and AI.
  • In reply to Cole Milstead:

    We already talked about sanctions affecting acquisition of technology. Add that low oil prices and Russian/Chinese interventions in Europe and the South China Sea may create new sanctions. As the Chinese economy improves, low-cost Chinese labor will disappear, making it harder to export and to sustain large defense budgets. Interventions also will wake up neighbors, who will increase their own defense spending and technology and their allowance of U.S. bases.

    Then we get into the ROE of AI-controlled weapons and how to fight in large urban areas. For starters, look at Patriot friendly fire against two of our jets in OIF. Look at fratricide with human-controlled ground weapons in all wars. Now introduce intermingled forces, with those on both sides hugging or working around lots of civilians. Is AI that smart? True, foes and friends alike may not place the same emphasis on such matters. Look at rubbled Chechen cities and present-day Mosul, not to mention WWII. Look at Seoul's proximity to DPRK artillery. Now add that many future coastal cities may have sea walls due to global warming and will be filled with mixes of friendly and not-so-friendly civilians. Will we place our Army in such a city given the threat of sea walls being destroyed by adversaries and guerrillas/terrorists to affect our "will to fight and support"? Will the new WMD be terrorist and threat conventional munitions targeting our U.S. seawalls?
  • The wide array of potentially disruptive trends can be effectively countered through ongoing adaptive and agile innovation. The paper astutely calls out the power of innovation as a prevailing force. The US is outstanding at innovating and at meeting uncertainty with capability overmatch and opportunity. The challenge seems to be successfully making innovation readily available as an operational advantage. Changes in procurement mechanisms and overcoming cultural differences in government-industry relations may be the key to unlocking the power of US creativity and prevailing well into the future.
  • An excellent overview of probable future trends and their effects on future warfare, assuming that the current trends highlighted in the paper prove to persist as predicted and also continue to dominate other trends.
    Of course, whether the postulated set of trends will actually dominate the future environments which will present themselves to future warfighters is impossible to predict. The paper rightly points out that the true need is to continuously assess the actual ongoing trends and update the expected outcomes.
    I personally agree with the comments concerning the majority of the trends impacting the two future eras described between now and 2050. However, I believe the outcomes assumed for artificial intelligence technologies and blockchain technologies are unlikely to occur as described. It also seems to me that an emerging set of technologies which will have profound effects on future warfare are the various applications of cyber-physical systems (CPS) scientific and technical advances. The National Science Foundation has been investing in CPS technologies for the past ten years and estimates that “CPS technology will transform the way people interact with engineered systems -- just as the Internet has transformed the way people interact with information. New smart CPS will drive innovation and competition in sectors such as agriculture, energy, transportation, building design and automation, healthcare, and manufacturing.” For military applications the possibilities go far beyond the current tradeoffs being considered between cyber offensive alternatives and kinetic alternatives for achieving a given effect. As the NSF quote indicates, CPS outcomes will affect the way humans interact with machines. They will also affect how machines interact with machines.
    Concerning blockchain technologies, there are already very broad technology development efforts which will most likely enable the blockchain idea to support distributed processing and control activities in ways which no one can predict. For example, today the lowest-level tactical commanders in Iraq and Afghanistan cannot electronically share tactical data with coalition partners who are not “on the net,” even though they are free (and encouraged) to provide paper copies of operations orders to counterparts, just as platoon leaders, company commanders, and advisers were in Vietnam 50 years ago. However, the Hyperledger effort, which has just released version 1.0 of Hyperledger Fabric, www.hyperledger.org/.../fabric , can be applied to enable platoon leaders and squad leaders to share operations orders electronically with selected coalition partners, even those who are not cleared to be “on the net.”
    Concerning artificial intelligence, the IEEE Systems, Man and Cybernetics (SMC) society, http://www.ieeesmc.org/ , traces its origin back to the early ideas of Norbert Wiener and Alan Turing during the 1940s and 1950s concerning the eventual ability of computers to effectively mimic or exceed human cognition capabilities. Also, the TRADOC Knowledge Engineering Groups (KEGs) of the late 1980s and early 1990s developed over 30 knowledge-based applications for various TRADOC activities, at least one of which is still in use today. However, the KEGs became the various Battle Laboratories of today, and the KEG requirement that officers receive advanced civil schooling to participate in KEG efforts was not continued for the Battle Laboratories’ efforts to develop future warfighting capabilities.
    Since the 1940s various predictions have been made concerning the eventual dominance of machine cognition capabilities. However, those prediction horizons have consistently been pushed back even as useful AI capabilities have been achieved. It does not seem to me that there is any technical basis for assuming that the continuing improvement of machine computation, data storage, and network sharing capabilities will result in machines achieving “cognition” through some yet-to-be-discovered technical approach to artificial reasoning.
    In the general area of data, information, knowledge, semantics, and semiotics, it might be beneficial to predict those things which are not expected to change so much and to frame the discussion around how to mesh those attributes of warfare which will not be so different (perhaps warfighting doctrine) with those attributes of warfare which are expected to change dramatically (like situation assessment tools and engagement process tactics, techniques, and procedures (TTP) which are discussed in the paper).
    For example, one assertion attributed to Eisenhower is that “The plan is nothing, planning is everything.” That is, the dominant outcome of the planning process is that it enables each echelon of command to understand the commander's intent for a given operation, so that when the plan becomes useless as the situation changes, each echelon can exercise good military judgment to meet that intent. If that idea is expected to persist into the future, then the rule that 1/3 of the time available at each echelon is allocated for planning and 2/3 for execution can be used for assessing future decision support technology alternatives. If this assumption about effective planning holds, it provides a framework for analyzing how to exploit future technological advances to “get inside the decision cycle” of future opponents at each echelon of command. For example, if we assume that the commander's intent is the only system invariant for a given operation, then for friendly forces we can frame future weapon system and command and control capabilities to exploit the “1/3 - 2/3” rule of warfare so that it remains as effective at each echelon of command in the future as it has proven to be in the past. Then we use DOTMLPF tradeoffs to build, test, and train units to enable each echelon of friendly forces to get inside the decision cycle of opposing forces. The primary technical change in the future would then be in exploiting the machine version of the “sense-decide-act” situation assessment and decision cycle.
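    The "1/3 - 2/3" rule is mechanical enough to sketch. Below is a hypothetical illustration (my own, not from the paper) of how execution time compresses as each echelon takes its planning share:

```python
# Illustrative sketch of the "1/3 - 2/3" rule: each echelon keeps 1/3 of its
# available time for planning and passes the remaining 2/3 to subordinates.
# The echelon names and 48-hour starting figure are arbitrary examples.

def time_remaining(total_hours, echelons):
    """Return (echelon, hours left for subordinates) after each planning cycle."""
    remaining = total_hours
    out = []
    for echelon in echelons:
        planning = remaining / 3          # this echelon's planning share
        remaining = remaining - planning  # 2/3 handed down
        out.append((echelon, round(remaining, 2)))
    return out

print(time_remaining(48, ["division", "brigade", "battalion", "company"]))
# 48 hours at division leaves a company under 10 hours after four cycles
```

    The point of the sketch is simply that time available shrinks geometrically down the chain, which is why "getting inside the decision cycle" matters most at the lowest echelons.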
    In that regard, one trend that has continued for many years and will probably persist into the foreseeable future, and which is tangentially mentioned in the paper, is the ever-improving ability to synchronize clocks to enable precision location, navigation, and timing outcomes for unit operations. The IEEE 1588 Precision Time Protocol committee is currently revising the precision time protocol (PTP) to enable distributed clocks to be synchronized to within a nanosecond of each other. Light travels about a foot in a nanosecond. Thus, the expected widespread availability of distributed devices whose logical events can be coordinated to the nearest nanosecond means that it is feasible to build sets of distributed devices which can be logically coordinated to perform offensive and defensive activities orders of magnitude faster than humans can perceive the occurrence of the activity, much less analyze it and decide how to counter it.
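    As a rough check on the nanosecond claim (my arithmetic, not the paper's), the relationship between clock-sync error and spatial coordination error can be sketched as:

```python
# How clock-synchronization error maps to distance: light travels roughly
# 0.3 m (about a foot) per nanosecond, so a device set synchronized to 1 ns
# can be spatially coordinated to roughly foot-level precision.

C = 299_792_458.0  # speed of light in vacuum, m/s

def sync_error_to_distance(sync_error_ns):
    """Distance light travels during a given clock-sync error, in metres."""
    return C * sync_error_ns * 1e-9

print(f"{sync_error_to_distance(1):.3f} m")  # 1 ns  -> ~0.3 m, about a foot
# By contrast, millisecond-class sync smears the same geometry over ~300 km:
print(f"{sync_error_to_distance(1_000_000):.0f} m")
```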
  • In reply to John James:

    Appreciate your reply, Mr. James, and I think it good that TRADOC asked for feedback. Like many, I marvel at technology advances even if I am not always that great at using them. That marvel leads to suggestions like AI where humans are largely taken out of the loop as too slow, or too difficult to train, or both. Data link and navigation arguments may apply as well. But philosophical arguments regarding human control and speed/training aside, lots of technology takes giant leaps of faith, as Future Combat Systems did, assuming capabilities will arrive rapidly that defy physics, realistic timelines and budgets, or common sense, if not all of the above. Let me offer examples.

    I understand kill boxes and that you theoretically could send your swarms of loitering munitions/sensors to a location to find and kill anything found. That, however, implies a kill box well removed from friendly forces, implying some distance to travel. So how do those swarms get there? They need some combination of speed and range, not to mention the expensive sensors and computing power on board to navigate, find and stay in the kill box, and destroy any threats hiding there, or not there yet, without harming civilians. So now, because your threat does not want to be found, hugs civilians, or is not there yet, you need some endurance on station in addition to whatever fuel was required to get to the kill box in the first place. The result is not a small, lightweight, inexpensive munition created by 3D printing. And the future enemy may have lasers that can down these munitions as a relatively cheap countermeasure.

    Now you might say you can add countermeasures (expensive), or launch the little, not-so-cheap or small munitions from a C-130, or from a missile/rocket as UAS submunitions. But the C-130 has to get close enough to friendly lines, or beyond them, to survive long enough to launch the munition cluster, and a missile/rocket such as GMLRS or ATACMS could attack targets without such AI if it gets solid recon/intel that targets are already there. So could an F-35 or bomber, whether the target is small boats, large amphibious ships or landing craft, or columns of armor approaching an embattled border. Which is more likely to survive the enemy's laser countermeasures and multiple advanced air defenses?

    Close combat? How smart is the robotic vehicle or overhead AI swarm that it can distinguish between friends, foes, and civilians and understand the danger-close criteria protecting friendlies and, by implication, civilians? What happens when friendly allies inconveniently use threat equipment, or threats have old or new allied vehicles and aircraft? What if Russia is an ally of sorts, as in Syria, but Syrian and ISIS forces are all using equipment similar to the Russians'? What happens when ISIS has captured friendly Iraqi armor? I've read about inexpensive UAS that supposedly will take out armed dismounts, as if they will have the endurance to do so and still be small and cheap with AI and advanced sensors. How will they know the armed dismount is a threat and not just an armed civilian or an armed member of the Afghan police?
  • Cole,


    Thanks for the comments. Computer-controlled and networked physical systems (cyber-physical systems) have been incrementally improving, and it is reasonable to assume that they will continue to do so. I certainly agree with the NSF contention that CPS advances underway will change the manner in which humans interact with machines. It is also reasonable to “think out of the box” concerning asymmetric warfare possibilities. However, in my opinion, it is not reasonable to assume outcomes for which there is no current path to achievement. Planning on sentient machines when there is no scientific basis for that assumption makes for good books and movies but will very likely not result in more capable combat units.


    Probably the largest commercial manufacturing development activity underway today is the set of efforts by many (all?) of the car manufacturing companies to build autonomous driving vehicles. These efforts are underway partially due to the decades-long efforts at DARPA and other government agencies in many countries to create autonomous vehicle technology. TRADOC (Fort Benning combat developments) provided the requirements document back in 1988 which stated that the US Army needs an autonomous robotic capability for conducting intelligence, reconnaissance, and surveillance activities. Senator Warner of Virginia led efforts in the Senate to add target engagement capabilities to the set of existing requirements for autonomous robots. To date, autonomous robotic vehicles which provide ISR and target engagement capabilities to units remain largely a research effort. What has worked well for many years is a direct extension of what worked during WWII: using radio technology to enable humans to drive remote vehicles. Swarm technology has made dramatic advances recently in achieving patterns of air, ground, and sea movement, but it currently has no path for meeting ISR and target engagement requirements. However, success of the car manufacturers' autonomous highway vehicles will mean success of the ongoing efforts of the logistics community to have a group of autonomous resupply trucks driving down the road or autonomous pack mules following a squad through the woods. That will still not be sufficient to enable lethal robots to join combat units and follow variable rules of engagement (ROE).


    In fact, a persistent failure of robotic technologies has been the inability to bring machine understanding of human intent "close enough" to human understanding of human intent. The planning process for creation and dissemination of operations orders enables humans to understand the intent of the commander for a given operation. Humans adapt the plan "on the fly" to meet the intent of the commander. We have no technology or process (a TTP) for enabling machines to understand command intent for a given operation. To my knowledge, there is currently no approach, or program for developing such an approach, which will enable machines to achieve that outcome. Thus, whether the operation is completing the set of Tasks, Conditions, and Standards for a mechanic to change a tire, or for one rifle squad to provide covering fire for another rifle squad during a tactical movement, there is no reasonable expectation that robots will function well as part of a combat team. That does not mean that we should not continue to investigate the possibility of making a revolutionary breakthrough in building sentient machines. However, for me it does mean that we should not plan for such an outcome. What does make a great deal of sense to me is to create scenarios of future warfighting outcomes which exploit the ongoing revolution in information system technologies, cyber-physical system technologies, and the possible emergence of the "Age of Biology." Each of these possibilities is discussed in the paper.

    I do agree with your comments concerning the impacts on future outcomes of resource allocations by ourselves and by other nations. Certainly for most of our history we had a very small Army, Navy, and Marine Corps between the various wars. After WWI the Army retained its largest peacetime force to that point, but it was still fewer than 100,000 soldiers. Even after WWII we cut the force structure back to a small fraction of the millions who served during the war. Only since the Korean War and the emergence of weapons of mass destruction have we retained a large standing force structure and the associated capability for building weapon systems which incrementally improve that force's combat capabilities. Building a future force while retaining sufficient capability for meeting current national security commitments will most probably continue to be a high-risk endeavor.

    Cheers,
    John
  • In reply to John James:

    John,

    Sentience in a robot implies multiple sensors and computing power that are not likely to be cheap. I just read about a small UAS called Coyote in the realm of $15k apiece, which is probably before sensors, warheads, and the AI computing power needed for more than organizing and deconflicting the swarm (probably not including deconfliction with manned aircraft!). Looking at systems like LOCUST also leads me to estimate endurance of no more than half an hour, if that. Assuming 30-100 UAS per swarm as suggested in articles, at $15k each that is $450k to $1.5 million. It is easy to see all that money wasted if the swarm cannot find a target in the short time available, and the swarm becoming a hazard to ground personnel and civilians when it runs out of gas. Then there is the airspace hazard.
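    The cost arithmetic above can be sketched as follows (the $15k unit price and 30-100 vehicle swarm sizes are from the articles cited in the comment; the endurance figures are the commenter's estimates, not published specifications):

```python
# Rough swarm cost/endurance arithmetic behind the comment above.
UNIT_COST = 15_000        # dollars per air vehicle, before sensors/warhead/AI
ENDURANCE_MIN = 30        # estimated total endurance per vehicle, minutes

def swarm_cost(n_vehicles, unit_cost=UNIT_COST):
    """Fly-away cost of a swarm at the quoted unit price."""
    return n_vehicles * unit_cost

def time_on_station(transit_min, endurance_min=ENDURANCE_MIN):
    """Loiter time left after transiting to the kill box."""
    return max(endurance_min - transit_min, 0)

print(swarm_cost(30), swarm_cost(100))  # $450k to $1.5M per swarm
print(time_on_station(15))              # half the endurance gone in transit
```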

    If launched from three F/A-18s as in one published experiment, it implies a significant standoff range to keep the manned aircraft safe from air defenses. It also means that something like several JDAMs or multiple gliding Small Diameter Bombs could be more cost-effective and more lethal. Plus, half of the swarm's 30-minute endurance could be wasted just getting to the target-area kill box, where the enemy has moved or sought cover. Perhaps the enemy force has lasers, radio-frequency/cyber jammers, or microwave weapons, as even brief searches will show.

    Yes, self-driving cars have a commercial future sharing the road with driven vehicles. Likewise, unmanned systems must share the air, ground, and sea spaces with manned systems, and stay close enough to them for control data links. Believe me, I'm a true believer, as evidenced by a manned-unmanned teaming article I wrote for Aviation Digest. My current job involves unmanned systems, and it is quite humbling and illustrative that these systems work best NOT unmanned but with very capable manned direction from afar. I also worked FCS, so I understand the enormous potential, shortcomings, and costs of integrating manned and unmanned systems. Some of the crazy ideas they had for computerized mission command systems for OPORDs, etc., get to your points. But for lethality and information collection, I can see the former loader of a tank or cavalry vehicle controlling an unmanned ground vehicle and serving as a backup to the autoloader. Systems like Switchblade show the potential of shorter-range lethal UAS. The idea of launching swarms from survivability equipment tubes on manned aircraft could have great merit.

    However, there is a reason why we are building F-35s, have F-22s, and want a new bomber. There is a reason why manned helicopters must fly low and slow in some combat environments while higher-flying UAS maintain standoff and launch shorter-range weapons. We hope to achieve air superiority rapidly, so that manned-aircraft and air-defense shoot-down of UAS becomes a reduced threat. We want to penetrate enemy defenses early in many conflicts, delivering munitions initially from a stealthy manned aircraft. That would "shoot down" the idea of three F/A-18s carrying a large-signature pod when one or more gliding bombs launched from afar could do the job instead.
  • A very nice and thoughtful piece of work - Thank you.
  • Great summary of the near-term challenge. In the out years, a couple of elements need fleshing out. 1. The convergence of multiple suites and processes, and their integration within persons and institutions. 2. The dramatic effect of policy and economic decisions in the out years. There will be real constraints on maneuver if decisions on debt and fiscal discipline are ignored.
  • Well, after a long absence, I've apparently figured out how to get back in here. I did so because of an automated 'request for feedback' email I got after downloading a copy of the Changing Character of Future War document. I intend to give the thing a closer look -- after having heard the G2 praise it during a VTC yesterday. Still, is there an actual human out there who might be reading this with any interest? CONFESS.
  • Excellent paper. One item left out: controlling populations through addiction. While there was mention of "drug cartels" as a disruptive force, I got the impression that this was in connection with violent activity. Currently, large portions of the US population are deeply addicted to drugs that keep them from functioning as fully aware and involved citizens. An addicted population is one that cannot be mobilized either militarily or as a civil force for intelligent response to cyber social-warfare. It may easily be argued that addiction is a straightforward outcome of previous (failed) policies regarding drug control, or a simple result of the relationship between poor pain management techniques and ongoing pharmaceutical use. However, if not this time, perhaps another time, such a possibility should be viewed as a serious potential threat in the form of a chemical control weapon. Keeping an eye on the forces engaged in keeping populations under control through addictive substance availability might not hurt. It is necessary to ensure that populations are fully conscious before we can fully address adding AI to innate cognition.
  • In reply to Teresa DiMeola:

    This is a consequential comment you are making and I hope that folk take notice. I'm not sure how a population-wide addiction attack would work, but I definitely see the willful creation of addicts as a 'form of struggle' in irregular war. Just as gangsters use debt traps and various forms of blackmail, they will also target people for the creation of drug dependency.
  • Thank you. An interesting article, and I agree with much of what is proposed. I do think that perhaps you should look at the other side of the coin for C3D2: not only can it be a game-changing defence against our surveillance and targeting systems, but the absence of our own C3D2 systems puts nearly all our NATO forces at high risk of detection by adversary C4ISTAR systems and covert agents. For example, US armour is sadly lacking in modern multi-spectrum camouflage, and the ULCAS program is stalled, I believe. The improved ghillie suit program is also stuck in acquisition whilst troops go around easily detected by readily available sensors; you can buy a FLIR thermal camera via Amazon for a few hundred dollars today, so the uncontrolled proliferation of certain technologies should also be a game changer. Finally, there is also a morality thought on C3D2: how long will it be until a court decides that not having adequate camouflage is a breach of trust, as was experienced with the haphazard issue of body armour in the early 2000s? Tomorrow?
  • In reply to Michael Rowe:

    Michael,
    Interesting ideas regarding the importance of camouflage. However, it seems to me that the dominant trend affecting military forces and the operational environments they will encounter in the future is the ongoing integration of information technology (IT) systems and operational technology (OT) systems. Across the spectrum of national critical infrastructure sectors identified by the US Department of Homeland Security, and for similar structures identified by our coalition partners, IT/OT integration supports increased functionality and enhanced situational awareness by integrating business processes with operational processes. At the same time, it increases the vulnerability of the resulting integrated processes to deceptive and disruptive hostile action by our adversaries, or to inadvertent and unanticipated failure modes introduced through the integration itself. For example, active efforts are underway to improve wide-area (netted, distributed) control of the North American power grid, building resilience to cyber and physical anomalies through cooperative control of sets of microgrids that can join the larger grid or island from it as the operational environment changes.