Feedback Request for "The Operational Environment and the Changing Character of Future Warfare"

TRADOC G-2 would appreciate your feedback on our paper, "The Operational Environment and the Changing Character of Future Warfare". Please share your unclassified comments in the discussion thread below.

  • An excellent overview of probable future trends and their effects on future warfare, assuming the current trends highlighted in the paper persist as predicted and continue to dominate other trends.
    Of course, whether the postulated set of trends will actually dominate the environments that future warfighters face is impossible to predict. The paper rightly points out that the real need is to continuously assess the trends as they actually unfold and update the expected outcomes.
    I personally agree with the comments concerning the majority of the trends impacting the two future eras described between now and 2050. However, I believe the assumed outcomes of artificial intelligence and blockchain technologies are unlikely to occur as described. It also seems to me that an emerging set of technologies that will have profound effects on future warfare is the family of applications arising from cyber-physical systems (CPS) scientific and technical advances. The National Science Foundation has been investing in CPS technologies for the past ten years and estimates that “CPS technology will transform the way people interact with engineered systems -- just as the Internet has transformed the way people interact with information. New smart CPS will drive innovation and competition in sectors such as agriculture, energy, transportation, building design and automation, healthcare, and manufacturing.” For military applications the possibilities go far beyond the current tradeoffs being considered between cyber offensive alternatives and kinetic alternatives for achieving a given effect. As the NSF quote indicates, CPS outcomes will change the way humans interact with machines. They will also change how machines interact with machines.
    Concerning blockchain technologies, there are already very broad technology development efforts that will most likely enable the blockchain idea to support distributed processing and control in ways no one can predict. For example, today the lowest-level tactical commanders in Iraq and Afghanistan cannot electronically share tactical data with coalition partners who are not “on the net,” even though they are free (and encouraged) to provide paper copies of operations orders to counterparts, just as platoon leaders, company commanders, and advisers were in Vietnam 50 years ago. However, the Hyperledger effort, which has just released version 1.0 of Hyperledger Fabric ( www.hyperledger.org/.../fabric ), could be applied to let platoon leaders and squad leaders share operations orders electronically with selected coalition partners, even those who are not cleared to be “on the net.”
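    As a very rough sketch of what that could look like (my own illustration, not a fielded design), a minimal Fabric 1.0 chaincode in Go could record each releasable operations order on a channel whose membership defines which coalition partners may read it. The publish/read function names and the opordID key scheme below are invented for illustration:

        package main

        // Minimal Fabric 1.0 chaincode sketch: one ledger entry per
        // releasable operations order, written by the issuing unit and
        // readable by any coalition peer admitted to the channel.

        import (
            "fmt"

            "github.com/hyperledger/fabric/core/chaincode/shim"
            pb "github.com/hyperledger/fabric/protos/peer"
        )

        type OpordChaincode struct{}

        func (c *OpordChaincode) Init(stub shim.ChaincodeStubInterface) pb.Response {
            return shim.Success(nil)
        }

        func (c *OpordChaincode) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
            fn, args := stub.GetFunctionAndParameters()
            switch fn {
            case "publish": // publish <opordID> <releasableText>
                if len(args) != 2 {
                    return shim.Error("usage: publish <opordID> <releasableText>")
                }
                if err := stub.PutState(args[0], []byte(args[1])); err != nil {
                    return shim.Error(err.Error())
                }
                return shim.Success(nil)
            case "read": // read <opordID>
                if len(args) != 1 {
                    return shim.Error("usage: read <opordID>")
                }
                val, err := stub.GetState(args[0])
                if err != nil || val == nil {
                    return shim.Error("opord not found: " + args[0])
                }
                return shim.Success(val)
            }
            return shim.Error("unknown function: " + fn)
        }

        func main() {
            if err := shim.Start(new(OpordChaincode)); err != nil {
                fmt.Printf("error starting chaincode: %s\n", err)
            }
        }

    The access scoping would come from Fabric channel membership and endorsement policy rather than from the chaincode itself; the point is only that a shared, permissioned ledger offers a technical path for releasable orders to cross the “on the net” boundary.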
    Concerning artificial intelligence, the IEEE Systems, Man and Cybernetics (SMC) society, http://www.ieeesmc.org/ , traces its origin back to the early ideas of Norbert Wiener and Alan Turing during the 1940s and 1950s concerning the eventual ability of computers to effectively mimic or exceed human cognitive capabilities. Also, the TRADOC Knowledge Engineering Groups (KEGs) of the late 1980s and early 1990s developed over 30 knowledge-based applications for various TRADOC activities, at least one of which is still in use today. However, the KEGs became the various Battle Laboratories of today, and the KEG requirement that officers receive advanced civil schooling to participate was not carried over into the Battle Laboratories’ efforts to develop future warfighting capabilities.
    Since the 1940s, various predictions have been made concerning the eventual dominance of machine cognition. However, those prediction horizons have repeatedly been pushed back even as useful AI capabilities have been achieved. It does not seem to me that there is any technical basis for assuming that continuing improvements in machine computation, data storage, and network sharing will result in machines achieving “cognition” through some yet-to-be-discovered technical approach to artificial reasoning.
    In the general area of data, information, knowledge, semantics, and semiotics, it might be beneficial to identify those things which are not expected to change much and to frame the discussion around how to mesh the attributes of warfare that will not be so different (perhaps warfighting doctrine) with the attributes that are expected to change dramatically (like the situation assessment tools and engagement tactics, techniques, and procedures (TTP) discussed in the paper).
    For example, one assertion attributed to Eisenhower is that “plans are nothing; planning is everything.” That is, the dominant outcome of the planning process is that it enables each echelon of command to understand the commander’s intent for a given operation, so that when the plan becomes useless as the situation changes, each echelon can exercise “good military judgment” to meet that intent. If that idea is expected to persist into the future, then the rule that “1/3 of the time available at each echelon is allocated for planning and 2/3 is allocated for execution” can be used to assess future decision support technology alternatives. If this assumption about effective planning holds, it provides a framework for analyzing how to exploit future technological advances to “get inside the decision cycle” of future opponents at each echelon of command. For example, if we assume that the “intent of the commander” is the only system invariant for a given operation, then for friendly forces we can frame future weapon system and command and control capabilities so that the “1/3 – 2/3” rule remains as effective at each echelon in the future as it has proven to be in the past. We would then use DOTMLPF tradeoffs to build, test, and train units that enable each echelon of friendly forces to get inside the decision cycle of opposing forces. The primary technical change would then be in exploiting the machine version of the “sense-decide-act” situation assessment and decision cycle.
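    To make the compounding arithmetic of that rule concrete, here is a toy Go sketch in which each echelon keeps one third of its available time for planning and hands two thirds down; the echelon names and the 48-hour window are my own assumptions for illustration:

        package main

        // Toy illustration of the "1/3 - 2/3" rule: each echelon keeps a
        // third of the time available to it for its own planning and
        // hands the remaining two thirds to the next echelon down.

        import "fmt"

        func main() {
            echelons := []string{"division", "brigade", "battalion", "company", "platoon"}
            available := 48.0 // hours until execution, assumed

            for _, e := range echelons {
                planning := available / 3.0
                fmt.Printf("%-9s plans %5.1f h, hands down %5.1f h\n", e, planning, available-planning)
                available -= planning
            }
            fmt.Printf("time left below platoon: %.1f h\n", available)
        }

    Under these assumptions the platoon is left with only a few hours, which is why a decision support tool that shortens planning at any one echelon enlarges the window for every echelon below it.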
    In that regard, one trend that has continued for many years, will probably persist into the foreseeable future, and is tangentially mentioned in the paper is the ever-improving ability to synchronize clocks to enable precision location, navigation, and timing for unit operations. The IEEE 1588 Precision Time Protocol committee is currently revising the precision time protocol (PTP) to enable distributed clocks to be synchronized to within a nanosecond of each other. Light travels about a foot in a nanosecond. Thus, the expected widespread availability of distributed devices whose logical events can be coordinated to the nearest nanosecond means it is feasible to build sets of distributed devices that can perform coordinated offensive and defensive actions orders of magnitude faster than humans can perceive the activity, much less analyze it and decide how to counter it.
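    As a quick sanity check on that figure, the arithmetic can be run directly; the clock offsets in the loop below are arbitrary values of my own choosing:

        package main

        // Back-of-envelope check: how far does light travel in one
        // nanosecond, and what coordination error does a given clock
        // offset between two distributed devices imply?

        import "fmt"

        const (
            c          = 299792458.0 // speed of light, m/s
            metersToFt = 3.28084
        )

        func main() {
            perNs := c * 1e-9 // meters per nanosecond
            fmt.Printf("light travels %.3f m (%.2f ft) per ns\n", perNs, perNs*metersToFt)

            // coordination error implied by a clock offset between devices
            for _, offsetNs := range []float64{1, 10, 100} {
                fmt.Printf("%6.0f ns offset -> %6.2f m error\n", offsetNs, perNs*offsetNs)
            }
        }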
  • Appreciate your reply, Mr. James, and think it good that TRADOC asked for feedback. Like many, I marvel at technology advances even if I am not always that great at using them. That leads to suggestions like AI, where humans are largely taken out of the loop as too slow, or too difficult to train, or both. Data link and navigation arguments may apply as well. But philosophical arguments about human control and speed/training aside, lots of technology makes giant leaps of faith, as Future Combat Systems did, assuming capabilities will arrive rapidly that defy physics, realistic timelines and budgets, or common sense, if not all of the above. Let me offer examples.

    I understand kill boxes and that you theoretically could send your swarms of loitering munitions/sensors to a location to find and kill anything found. That, however, implies a kill box well removed from friendly forces, implying some distance to travel. So how do those swarms get there? They need some combination of speed and range, not to mention the expensive sensors and computing power on board to navigate, find and stay in the kill box, and destroy any threats hiding there or not yet arrived, all without harming civilians. So now, because your threat does not want to be found, hugs civilians, or is not there yet, you need endurance on station in addition to whatever fuel was required to reach the kill box in the first place. The result is not a small, lightweight, inexpensive munition created by 3D printing. And the future enemy may have lasers that can down these munitions as a relatively cheap countermeasure.
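    A crude transit-plus-loiter energy budget makes the point; every number in the sketch below is invented for illustration, but the multiplication is the argument:

        package main

        // Crude energy budget for a loitering munition: cruise out to the
        // kill box, then loiter on station. All parameters are assumed
        // values for illustration only.

        import "fmt"

        func main() {
            const (
                cruiseSpeed  = 40.0  // m/s, assumed
                cruisePowerW = 400.0 // power in cruise, assumed
                loiterPowerW = 250.0 // power on station, assumed
                packWhPerKg  = 180.0 // battery specific energy, assumed
            )

            rangeKm := 50.0 // one-way distance to the kill box
            loiterHr := 2.0 // required time on station

            transitHr := (rangeKm * 1000.0) / cruiseSpeed / 3600.0
            energyWh := transitHr*cruisePowerW + loiterHr*loiterPowerW
            batteryKg := energyWh / packWhPerKg

            fmt.Printf("transit %.2f h, energy %.0f Wh, battery alone %.1f kg\n", transitHr, energyWh, batteryKg)
        }

    Even with these charitable numbers the battery alone comes to a few kilograms before adding airframe, seeker, datalink, and warhead, which is why "small, cheap, long-endurance, and smart" rarely fit in one airframe.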

    Now you might say you can add countermeasures (expensive), launch the little, not-so-cheap-or-small munitions from a C-130, or deliver them as UAS submunitions on a missile/rocket. But the C-130 has to get close enough to, or beyond, friendly lines to survive long enough to launch the munition cluster, and a missile/rocket such as GMLRS or ATACMS could attack targets without such AI if it gets solid recon/intel that targets are already there. So could an F-35 or a bomber, whether the target is small boats, large amphibious ships or landing craft, or columns of armor approaching an embattled border. Which is more likely to survive the enemy's laser countermeasures and multiple advanced air defenses?

    Close combat? How smart is the robotic vehicle or overhead AI swarm that it can distinguish between friends, foes, and civilians and understand the danger close criteria protecting friendlies and, by implication, civilians? What happens when friendly allies inconveniently use threat equipment, or threats have old or new allied vehicles and aircraft? What if Russia is an ally of sorts, as in Syria, but Syrian and ISIS forces are all using equipment similar to the Russians'? What happens when ISIS has captured friendly Iraqi armor? I've read about inexpensive UAS that supposedly will take out armed dismounts, as if they will have the endurance to do so and still be small and cheap with AI and advanced sensors. How will they know the armed dismount is a threat and not just an armed civilian or an armed member of the Afghan police?