At the core of future military advantage will be the effective integration
of humans, artificial intelligence (AI) and robotics into warfighting systems –
human-machine teams – that exploit the capabilities of people and technologies
to outperform our opponents. The game of chess provides an excellent example
of human-computer collaboration and a cautionary tale about over-extrapolating
when computers outperform humans.
In 1997, IBM’s Deep Blue beat the chess grandmaster Garry Kasparov. Many observers regarded this, the subsequent triumph of DeepMind’s AI at the game of Go, and AI that consistently beats Top Gun instructors in air-to-air combat, as the beginning of the end for human cognitive dominance.55 However, evidence suggests
that the future is more complex than ‘machine beats human’. A useful example comes from chess: in 2005, a competition was held that allowed any combination of human and computer players to compete. The competition resulted in an unexpected victory that Kasparov later reflected on:

‘Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.’
4.2. United States (US) automated air defence post-incident lessons and Defence Science and Technology Laboratory work on variable autonomy show that optimised human integration into combat systems is critical to the effectiveness of remote and automated systems (RAS) in guarding against unanticipated catastrophic error.57 Catastrophic error is not a term used to exaggerate; as conventionally programmed automated systems become more complex, when they fail they do not degrade gracefully – they collapse.
Section 2 – Human and machine strengths and weaknesses
4.3. There is a tendency to assume that the difficulty of automating a task is
proportional to the amount of human mental effort associated with that task,
but that is a poor assumption. A useful rule of thumb when considering how well
machines can be applied to a task is to understand how readily the activity can
be codified. The clearer the rules, metrics and recognition features a task has, the
higher the likelihood that a machine can be optimised to undertake the task. This
is leading to surprising outcomes: roles traditionally considered challenging, and often highly paid, that involve data sorting or deterministic analysis – such as accounting, insurance estimation, legal document review and medical diagnostics – are proving to be automatable. By contrast, waiting on tables or care assistance for the elderly – often much lower-paid roles – are proving difficult to automate. The last jobs to be automated in society will not simply
be those of highly paid professionals. Actions that we as humans struggle to
comprehend will be very difficult to codify and ultimately automate.
4.4. Significant staff efficiencies can be made if we adopt automation in data-centric and readily codified roles. Defence must consider how to automate whilst retaining understanding of the processes being automated. Furthermore, approaches to human-machine teaming that adopt an ‘automate what you can, leave the humans to fill in the remainder’ view are likely to build systems that are cheap, but less resilient and less effective. No network, organisation or system can be completely resilient; all experience constant change, operate under varying degrees of uncertainty and face evolving threats. The key to resilience in force and system design is therefore tied to adaptability, and to understanding what humans are best at and what machines are best at in the era of narrow AI.
4.5. Broadly, computer algorithms are good at sorting and searching through
large amounts of structured data (for example, text and document processing,
people and enterprise information, and genetics), doing deterministic analysis
(for example, counting, classifying and game playing), and producing predictable
mechanical interactions (for example, manufacturing, flying and driving).
Computer algorithms are not as good at understanding complex unstructured
data (for example, images, acoustics and environment structure or context),
doing non-deterministic analysis (for example, road scene understanding or
predicting human behaviour), and undertaking dexterous actions (for example,
fine manipulation requiring touch and pressure feedback or handling deformable
objects). Despite these being more challenging fields for machines, it must be understood that machines are increasingly outperforming humans at some of these tasks, including image recognition. They do not suffer from concentration lapses or fatigue, assuming access to a constant power supply.
4.6. Essentially, computer algorithms are challenged by uncertainty and
ambiguity in both data and decision-making. As a result, humans outperform
machines at understanding context, and are likely to continue to do so for a long
time. Machines are poor at exercising nuanced judgement on the complex or
ambiguous contexts that then moderate decisions. Also, because machines are
programmed or trained using established datasets relevant to a task or problem,
encountering a new problem or something wildly divergent from established
datasets tends to cause failure.59 In contrast, the human ability to adapt to new situations is generally far superior; even imperfect human responses are likely to be more functional. This is in part because humans use mental substitutions
or approximations from familiar skills or tasks to approximate answers. AI
technologies are typically able to conduct mental substitutions appropriate
to new contexts only in specific narrow confines and can even suffer from
catastrophic forgetting, where previous algorithm optimisations or skills at tasks
are simply lost when trained on new tasks and data.60
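By way of illustration only, the minimal sketch below (in Python, using the open-source scikit-learn library; the two-task split of its bundled digits dataset is an illustrative assumption, not a Defence system) shows catastrophic forgetting in miniature: a small neural network trained first on one set of digit classes, then on another, loses almost all of its skill at the first task.

```python
# Illustrative sketch of catastrophic forgetting using scikit-learn.
# The two-task split of the digits dataset is an assumption for the demo.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X, y = digits.data / 16.0, digits.target          # scale pixels to [0, 1]

# Task A: digits 0-4. Task B: digits 5-9.
Xa, ya = X[y <= 4], y[y <= 4]
Xb, yb = X[y >= 5], y[y >= 5]
Xa_tr, Xa_te, ya_tr, ya_te = train_test_split(Xa, ya, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)

# Phase 1: train only on task A, then measure task A accuracy.
for _ in range(50):
    net.partial_fit(Xa_tr, ya_tr, classes=np.arange(10))
print("Task A accuracy after training on A:", net.score(Xa_te, ya_te))

# Phase 2: continue training the SAME network only on task B.
# Optimising the shared weights for task B overwrites what encoded task A.
for _ in range(50):
    net.partial_fit(Xb, yb)
print("Task A accuracy after training on B:", net.score(Xa_te, ya_te))
```

The first accuracy figure is high; after the second phase it collapses towards zero, because the network’s weights have been re-optimised to predict only the new classes.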
4.7. These factors mean that the last roles likely to be automated will be where
personnel conduct activities that demand contextual assessment and agile
versatility in complex, cluttered and congested operating areas. This will apply
across domains but, as an example to make the point, consider the dismounted
combatant conducting an assault in an urban environment. While RAS will offer many new forms of advantage in urban conflict in general, in an assault through close, complex terrain humans dominate in the ability to exercise continuous contextual judgement and readjustment – is it a child who has picked up a gun, or a combatant? Likewise, opening doors, using varied tools, ropes and ladders, or moving debris to manoeuvre indoors are simple to the point of instinct for a human, but exceptionally difficult or impossible for a robot.
4.8. Force design and concepts of operation must also consider legal and
societal factors of employment. This tends to revolve around the targeting
debate, and while considerations about targeting are highly relevant, it is an
oversimplification to assume this is the totality of the issue. The reality for
military operations – which are broader than just war – will be more diverse, more
complex and highly contextualised. For example, a US unmanned underwater vehicle was pulled from the ocean by the Chinese Navy, which held it before handing it back to the US five days later. The lack of certainty in international law on the status of such vessels is likely to have caused the Chinese to treat the vessel differently than they would have had it been a manned warship.61 Similarly,
unmanned systems are unlikely to be considered a comparable commitment
by populations, allies or adversaries to ‘boots on the ground’ in assessments of
military commitment, political risk and demonstrations of national will. Balancing
imperatives to deploy humans against the moral and legal imperatives to
minimise risk to life and the potential advantages of employing more disposable
RAS will be complex in some instances.
4.9. Future force design must find the optimal mix of manned and unmanned platforms, and balance the employment of human and machine cognition for various tasks. Because RAS will be a key means of generating mass, there will be a high ratio of AI-driven systems – both physical and virtual – to people. There
will be proportionally fewer points of human consciousness within the system.
Optimising how we use human mental and physical capacity within such a force
will become a key factor in out-manoeuvring and out-thinking opponents. It
follows that AI must be used to free up human mental capacity in a flexible and
adaptable way. At the heart of mission command is optimising the independence of subordinate action to allow initiative and generate tempo, balanced against measures to create unity of effort and manage risk. Risk is assessed within
context, and will remain a human responsibility. Dynamically managing levels of
automation in RAS to balance risk against advantages from machine capability –
mass, tempo, pattern recognition and precision – within changing contexts will
be how mission command is applied in an AI age.
4.10. The concept of an optimal span of command is driven by human cognitive loading and how many active elements an individual can control, even where interpersonal demands such as leadership are absent. If human operators are
task-saturated piloting basic unmanned systems or managing unanticipated
behaviours in technologically complex, but uncooperative systems, they might
not have the mental capacity required to undertake higher-level thinking.
Human multitasking has its limits, and those limits are often reached quickly.
4.11. The limits of human mental capacity mean that the ability to dynamically vary the level of active control that operators exercise over systems becomes a fundamental enabler of tempo and team effectiveness. An ability to rapidly increase the amount of automated functionality used in RAS then allows the team to park RAS on lower-risk tasks well suited to machine execution. As a
safeguard, there must be automated alerts and warnings in place to attract
human attention in sufficient time for orientation, action and decision, if
required. This frees up the humans to focus on tasks of importance or those
poorly suited to execution by machines alone, in particular, ambiguous or
contextually dependent tasks.
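A minimal sketch of this pattern, written in Python with invented names, levels and thresholds (it reflects no fielded system), shows a RAS ‘parked’ at a higher automation level on a low-risk task, with an automated alert pulling human attention back when required:

```python
# Hypothetical sketch of dynamically varied control levels; all names
# and thresholds are invented for illustration.
from dataclasses import dataclass, field
from enum import Enum

class ControlLevel(Enum):
    MANUAL = 1       # operator controls the platform directly
    SUPERVISED = 2   # platform executes, operator monitors
    AUTONOMOUS = 3   # platform 'parked' on a low-risk, codified task

@dataclass
class RasPlatform:
    name: str
    level: ControlLevel = ControlLevel.MANUAL
    alerts: list = field(default_factory=list)

    def set_level(self, level: ControlLevel, task_risk: float) -> None:
        # Higher automation is granted only for lower-risk tasks;
        # the 0.3 threshold is an illustrative placeholder.
        if level is ControlLevel.AUTONOMOUS and task_risk > 0.3:
            raise ValueError(f"{self.name}: task risk too high for full autonomy")
        self.level = level

    def on_sensor_event(self, severity: float) -> None:
        # Automated alert: attract human attention in sufficient time
        # for orientation, decision and action.
        if self.level is not ControlLevel.MANUAL and severity > 0.5:
            self.alerts.append(f"{self.name}: operator attention required")
            self.level = ControlLevel.SUPERVISED   # hand attention back

uav = RasPlatform("scout-1")
uav.set_level(ControlLevel.AUTONOMOUS, task_risk=0.1)  # park on low-risk loiter
uav.on_sensor_event(severity=0.8)                      # alert recalls the human
print(uav.level, uav.alerts)
```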
4.12. Large numbers of low-cost RAS are likely to offer opportunities to generate mass. However, if bespoke systems can
only be controlled by set operators through a non-transferable control link, the
RAS will only offer the team additional tools when that operator is positioned to
act on the target. Therefore, RAS in a human-machine team will be most effective
as a flexible pool of assets that a wide variety of individual operators can call
upon.65 Open architectures will be required to enable the dynamic adoption and
reorganisation of RAS without the need to re-engineer control systems or retrain
personnel for each change. Control interfaces must also be intuitive and impose
low cognitive loads.
4.13. The combat cloud must be able to provide decision support information
to those best prepared to decide and act.66 The team or individual that has the
greatest situational awareness must be able to assume control of the RAS best
suited to the task and at the same time release unneeded systems. This will
optimise the force’s adaptability. Simple controls and policies will enable this adaptability – for example, pre-set limits fixing how much of a system each individual can control. In this way an operations room watchkeeper should not be able to push a button to try to fly a complex airframe, but they could, with permission, briefly take control of its electro-optical camera to quickly aim it and orient the pilot to a target.
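A minimal sketch of such pre-set control limits, in Python with invented role and subsystem names: a hard limit keeps the watchkeeper off the flight controls entirely, while a soft limit allows the camera to be taken only with explicit permission.

```python
# Hypothetical pre-set control limits; roles and subsystems are invented.
PERMISSIONS = {
    "pilot":       {"flight_controls", "eo_camera"},
    "watchkeeper": {"eo_camera"},          # and only with explicit permission
}
REQUIRES_PERMISSION = {("watchkeeper", "eo_camera")}

def request_control(role: str, subsystem: str, permission_granted: bool = False) -> bool:
    """Return True if this role may assume control of the subsystem now."""
    if subsystem not in PERMISSIONS.get(role, set()):
        return False   # hard limit: the button is simply not available
    if (role, subsystem) in REQUIRES_PERMISSION and not permission_granted:
        return False   # soft limit: needs release from the current controller
    return True

assert not request_control("watchkeeper", "flight_controls")
assert not request_control("watchkeeper", "eo_camera")
assert request_control("watchkeeper", "eo_camera", permission_granted=True)
assert request_control("pilot", "flight_controls")
```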
4.14. No universal set of design principles for RAS is likely to be found.
Individual technological assessments of systems must be judged against
intended function within an anticipated operating environment in the same way
as manned ships, aircraft or armoured vehicles. However, to judge the value of
large numbers of lower cost systems requires us to change the idea of qualitative
superiority from an attribute of the platform to an attribute of the force. In
doing this, our assessments must also include a determination of how effectively
human cognitive and physical ability is applied within a force design, and this
measure is likely to correlate strongly with the force’s adaptability. If the team
can act rapidly and efficiently and, most importantly, if they can adapt effectively
to changing circumstances, then the structure, policies and technical systems in
the force are well designed.
To exploit developments in AI and robotics as they continue to emerge, we will need to adopt an aggressive strategy of iterative experimentation, prototyping, concept and technology development, and organisational refinement. High-quality live and synthetic collective training and experimentation with AI systems will be essential to optimise our ability to create effective human-machine teams. Training and experimentation with real users will be vital for operators to understand the strengths, weaknesses and critical limitations of such AI systems, while also providing vital data to improve AI responses, including data about the human behaviours in the team. We must train
and grow with our AI assistants such that the machine can tailor how it interfaces
with us as individuals and with the wider team. Such collective training will
need to be dynamic, varied, realistic, conducted against thinking opponents and
act as surrogate warfare in which to experiment, develop and build collective
trust and confidence. Such high-quality, human-machine team training will not
just be required to train and develop the teams, but also to establish a better
understanding of Defence’s future requirements which are likely to change and
evolve across all Defence’s lines of development.
Assuring non-deterministic systems designed to dynamically
adapt and optimise decisions is inherently difficult. Achieving this requires
understanding common AI errors, developing effective test strategies and
managing AI adaptation. We must also be careful to avoid information being
filtered by AI in such a way that only one rational decision is available to the
operator, leading to the illusion of a human made decision. The development
of appropriate standards and robust assurance and certification regimes will be
critical, along with effective mechanisms to demonstrate meaningful human
accountability.
4.20. Legal obligations and policies are unlikely to cede military advantage to an opponent in the near term. However, as future technologies emerge, particularly
for systems supporting targeting and fires, we must consider the ethical and legal
implications. Armed remote and automated systems must not only be trusted
and safe, but also perform in such a way that they are seen to be safe and reliable
by users and observers. Those developing such systems must ensure they are
able to comply with international humanitarian law. Equally, legislative moves
to encourage technology adoption within society must be scrutinised to ensure
that in an ever more connected world, lines of accountability and responsibility
are retained.
4.21. Remote and automated systems are not single entities, and AI encompasses an array of component-level technologies; furthermore, we must remember that in evolution there is no single end point, but trajectories and branches in multiple directions.69 Moves to create legal obligations in advance of capabilities becoming technically possible, or even understandable, must be carefully and actively examined to ensure they are neither unworkable nor open legal avenues for others to misinterpret and misuse. To illustrate the difficulties in trying to define autonomy for regulatory
purposes it is worth considering the problems faced by legislators in Nevada as
they made laws to permit driverless cars to be used on public roads.70 Initially
they defined autonomous vehicles as those which substituted AI for human
decision-making. Once the law was passed, it unintentionally placed heavy
restrictions on commercial vehicle sales, due to the frequency with which
modern cars functionally make substitutions for direct human control, such as
crash avoidance systems and anti-lock brakes. The law was swiftly repealed.
4.22. In considering the future we must also remember that automation
will increase across society, and where new technology is sufficiently safe
and reliable, norms of trust and public appetites can be expected to follow. It
may also turn out that in the future some highly automated weapons could
actually be more able to comply with the Law of Armed Conflict principles of
proportionality and distinction, rather than less able. If that does become the
case, it may become difficult for a state to justify not using them.
Deductions and insights

1. The following deductions and insights are those judged most critical to guide
strategy, policy and force development for Defence and front line commands. They
offer guidance on factors that will determine advantage in an era of robotics and
artificial intelligence (AI) during conflict.
2. The potential of artificial intelligence and protecting access. The capability growth
of remote and automated systems (RAS) is likely to be exponential rather than linear.
While development may appear low in earlier years, huge advantage will be available
to those able to exploit these foundational developments in later years. Gaining
access to cutting-edge AI, by fair means or foul, offers the opportunity to achieve
windows of technological advantage for states, companies and even individual actors.
Defending such civil, commercial and military AI assets may become an issue of
national security.
3. Robotic and artificial intelligence systems are likely to revolutionise the
battlespace. AI-enabled tactical learning, combined with better detection,
recognition and precision will increase lethality. It will offer opportunities to better
exploit information to improve understanding, decision-making and tempo and will
enable reduced headquarters size and more agile command and control. The larger
volume of real-time data that is generated will be impossible to process without
automated support. Deploying systems first enables an actor to establish a network
and place sensors without interference or observation. AI will engage in high-speed battles of pattern detection and deception, which will occur faster than human-operated defences alone can counter.
4. Creating mass effect. Novel combinations of human-machine teaming will
present opportunities to augment manned platforms and create massed effect.
Networked mass – large numbers of interconnected sensors and soldiers, vehicles,
ships and aircraft – will contribute to resilient intelligence, surveillance and
reconnaissance networks, understanding and enabling manoeuvre. Cheap, smart
systems will provide resilience by absorbing casualties on a scale that will not be
viable, or desirable, using a solely manned force and will also be used to overwhelm
an opponent’s defences. Such systems are likely to offer opportunities in mass.
5. Optimising human-machine teaming. Optimising human-machine teaming
requires an understanding of what humans are best at and what machines are
best at in the era of narrow AI. The last roles likely to be automated will be where
personnel conduct activities that demand contextual assessment and agile versatility
in complex, cluttered and congested operating areas. Optimising how we use human
mental and physical capacity within such a force will become a key factor in
out-manoeuvring and out-thinking opponents. High-quality live and synthetic
collective training and experimentation will be vital for humans to understand the
strengths, weaknesses and critical limitations of such AI systems while also providing
vital data to improve AI responses, including about human behaviours in the team.
6. Trust and assurance for artificial intelligence. The increasing array of capabilities
of robotic and AI systems will be limited by not only what can be done, but also
by what actors trust their machines to do. The more capable our AI systems are,
the greater their ability to conduct local processing and respond to more abstract,
higher level commands. The more we trust the AI, the lower the level of digital
connectivity we will demand to maintain system control. Developing appropriate
standards and robust assurance and certification regimes will be critical, along with
effective mechanisms to demonstrate meaningful human accountability. Although
legal obligations and policies are unlikely to cede military advantage to an opponent
in the near term, as future technologies emerge, particularly for systems supporting
targeting and fires, we must consider the ethical and legal implications.
7. Accessing skills and the race for technological advantage. The major strategic
issue for all actors – nations or technology giants – is a chronic skills shortage. There
is a significant shortage of skilled graduates, software engineers and computer
technology staff with the necessary skills to develop the full breadth of possible
AI-enabled technologies. Early investment in education to generate subject matter
expertise may represent the critical long term source of economic and military
advantage for a nation. For some technologies, such as lethal effects or stealth, only
the military will lead primary investment and must continue to do so for disruptive
advantage. However, investment and development in the commercial sector will
exceed Government research investment for other applications. The ability to exploit
commercial technology developments in Defence-industrial partnerships faster
than potential adversaries will be increasingly important to achieving technological
superiority.
8. The new economics of warfare. Technical capabilities like precision, automated
navigation, remote operation and image recognition will become cheap through
exploiting commercially available systems. The cost of what were previously
considered expensive precision warfare capabilities will fall and become more widely
attainable, giving minor actors the ability to punch above their weight. Employing
massed cheap systems will not be optimal in all cases; we will need to fight with the
few and capable and the cheap and many in the right mix. To judge the value of
large numbers of lower cost systems requires us to change the idea of qualitative
superiority from an attribute of the platform to an attribute of the force. Approaches
to human-machine teaming that adopt an ‘automate what you can, leave the humans to fill in the remainder’ view are likely to build systems that are cheap, but neither resilient nor effective.
