Monday 27 November 2017

Turning 'Meaningful Human Control' into practical reality

The Fake News

The ambiguity in 'Meaningful Human Control' (MHC) may have been good for generating discussion, but it is no good for system design or operation. Rules Of Engagement are bad enough without adding more ambiguity. Some folk seem surprised that 'ethics' needs converting into a technical matter - how else do they think 'ethics' will be implemented at design time or run time? The legal viewpoint is not the only one that matters, and expertise in design, support, operation, and training seems thin on the ground to date. This post attempts to make a start on describing the way ahead and the practical issues to be faced.


Doug Wise, former Deputy Director, Defense Intelligence Agency: “There are human beings that actually fly the MQ-9 drone – people are actually observing and make the decisions to either continue to observe or use whatever is the lethality that is inherent in the platform. There are human beings at every stage. Now let's assume that at some point the human beings release the platform to act on its own recognizance, which is based on the basic information on the payload that it carries and the information that it continues to be updated with. Then it is allowed to behave in a timescale to take data, process it, and make decisions and act on those decisions. As the platforms become more sophisticated, our ability to let it go will become earlier and earlier.” There will be people involved in all stages of the killer robot lifecycle. The discussion around killer robots, like the discussion around other autonomous platforms, has an unhelpful focus on the built artefact - the robot itself. As UNIDIR has pointed out, a 'system of systems' approach is needed.

The Good News

"What assurances are there that weapon systems developed can be operated and maintained by the people who must use them?" This question, from Guidelines for Assessing Whether Human Factors Were Considered in the Weapon Systems Acquisition Process FPCD-82-5, US GAO, 1981, might be a more useful framing. Assurance requires a combination of inspecting the design, evaluating performance, and auditing processes (for design, operation etc.). Many military systems need something resembling MHC - aircraft cockpits, command centres etc. In fact it is hard to think of a system that doesn't. Not surprisingly, therefore, there is a considerable body of expertise in Human System Integration (HSI) aimed at providing assurance of operability.

Quality In Use (QIU) is defined as: "The degree to which a product or system can be used by specific users to meet their needs to achieve specific goals with effectiveness, efficiency, freedom from risk and satisfaction in specific contexts of use" (ISO 25010, 2011). The term is part of a well-formed body of quality and system engineering standards (civil and military) aimed at providing assurance of QIU. In practical terms, this approach is the way ahead (because it exists). Pre-Contract Award Capability Evaluation is likely to be a useful tool in helping to build and operate systems with MHC.
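As a concrete illustration, the sketch below shows one way the QIU characteristics could be captured as an evaluation record per specified context of use. It is a minimal sketch only; the field names, the single pass/fail threshold, and the example values are illustrative assumptions, not taken from ISO 25010 or from any acquisition process.

from dataclasses import dataclass

@dataclass
class QualityInUseRecord:
    # One record per specified context of use and user group (illustrative).
    context_of_use: str        # e.g. "operator handover under degraded comms"
    user_group: str            # the specific users evaluated
    effectiveness: float       # proportion of goals achieved correctly (0..1)
    efficiency: float          # output relative to resources expended (0..1)
    satisfaction: float        # survey-based score (0..1)
    freedom_from_risk: float   # residual-risk rating (0..1, higher is safer)

    def meets_threshold(self, minimum: float = 0.8) -> bool:
        # Crude pass/fail check against a single illustrative threshold.
        return all(score >= minimum for score in
                   (self.effectiveness, self.efficiency,
                    self.satisfaction, self.freedom_from_risk))

record = QualityInUseRecord(
    context_of_use="operator handover under degraded comms",
    user_group="trained mission controllers",
    effectiveness=0.92, efficiency=0.85,
    satisfaction=0.78, freedom_from_risk=0.90,
)
print(record.meets_threshold())  # False - satisfaction is below the threshold

Assurance evidence is then the collection of such records across contexts, plus the audited processes that produced them.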

The Bad News

“The reason most people do not recognize an opportunity when they meet it is because it usually goes around wearing overalls and looking like Hard Work.” Henry Dodd
Reliance on coming up with a good definition of MHC won't work for the folk at the sharp end of killer robot operation. The test of whether good intentions have translated into good deeds will come after things have gone wrong. There is a need to improve military accident investigation (with some notable exceptions). Unless there is good Dekker-compatible practice for accident investigation of smart systems and weapons, more good folk who put their lives on the line for their country are going to be used as fall guys. Mock trials with realistic case material would be a good start - overdue, really. Sensible investigation of the 'system of systems' is bound to find shortfalls in numerous aspects of both human and technical design and operation. Looking for clear human/machine responsibilities at the sharp end is no more than scapegoating.

“It’s generally hopeless trying to clearly distinguish between automatic, automated and autonomous systems. We use those words to refer to different points along a spectrum of complexity and sophistication of systems. They mean slightly different things, but there aren’t clear dividing lines between them. One person’s “automated” system is another person’s “autonomous” system. I think it is more fruitful to think about which functions are automated/autonomous.” Paul Scharre. The critical parameter for automatic / autonomous is 'context coverage', which considers QIU both in specified contexts of use and in contexts beyond those initially explicitly identified. For autonomous vehicles, it is becoming recognised that the issue is not 'when' but 'where'. A similar situation will continue to apply to smart weapons. The safe and legal operation of smart weapons will remain context-dependent.
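A minimal sketch of what a 'context coverage' check might look like at run time, assuming a context can be described by a few discrete attributes; the attributes and the validated envelope below are hypothetical examples, not doctrine.

# The set of contexts in which Quality In Use has actually been evaluated.
VALIDATED_CONTEXTS = {
    ("desert", "day", "low_clutter"),
    ("desert", "night", "low_clutter"),
    ("littoral", "day", "low_clutter"),
}

def within_validated_envelope(terrain: str, light: str, clutter: str) -> bool:
    # True only if this exact combination was covered during evaluation;
    # anything else is outside the evidence base, however capable the
    # system may appear.
    return (terrain, light, clutter) in VALIDATED_CONTEXTS

print(within_validated_envelope("desert", "night", "low_clutter"))   # True
print(within_validated_envelope("urban", "night", "high_clutter"))   # False

The point of the sketch is 'where', not 'when': the same system can be inside its evidence base in one context and outside it in another.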

'Ordinary' automation is usually done badly, and has not learned the Human Factors lessons proffered since the mid-1960s. There are many unhelpful myths that continue to bring more bad automation into operation, e.g. 'allocation of function', 'human error', and 'cognitive bias'. Really, MHC of ordinary automation is far from common.

HSI is practised to a much more limited degree than it should be, so the pool of expertise is smaller than would be needed. The organisational capability to deliver or operate usable systems is highly variable in both industrial and military organisations. Any sizeable switch to 'Centaur' Human-Autonomous Teamwork will hit cultural, organisational, and personnel obstacles on a grand scale.
The current killer robot exceptionalism will be unhelpful if it proves to be a deterrent to applying HSI, or if it continues to be a distraction from the wider problems of remote warfare now that we have said Goodbye Uncanny Valley.

Back in the days of rule-based Knowledge Based Systems, the craft of the Knowledge Engineer involved spending 10% of the time devising an appropriate knowledge representation and 90% of the time trying to convince engineers that the human decision-making approach was not flawed but contained subtleties that allowed adaptation to context, and that the proposed machine reasoning was seriously flawed. With the current fashion for GPU-powered Machine Learning (ML), this may no longer be possible. Further, XAI (explainable AI) is a long way from a proven remedy for the opaque nature of ML. ML can be brittle and fail in unexpected ways; the claim that the X part of the system will be able to generate an explanation under those circumstances is an extraordinary claim without extraordinary evidence.
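To make the brittleness point concrete, the sketch below shows a toy logistic model that, having been fitted on a narrow range of inputs, still reports near-total confidence on inputs far outside anything it has seen. The data and model are illustrative assumptions, not a claim about any fielded system.

import numpy as np

rng = np.random.default_rng(0)

# Toy training data: a single feature in [0, 1], two classes split at 0.5.
x = rng.uniform(0.0, 1.0, 200)
y = (x > 0.5).astype(float)

# Fit a logistic model by plain gradient descent (no library required).
w, b = 0.0, 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 1.0 * np.mean((p - y) * x)
    b -= 1.0 * np.mean(p - y)

def confidence(value: float) -> float:
    # The model's reported probability of the positive class.
    return float(1.0 / (1.0 + np.exp(-(w * value + b))))

print(round(confidence(0.9), 3))   # inside the training range: high confidence, justified
print(round(confidence(50.0), 3))  # far outside the training range: ~1.0, unjustified

Any X component then has to account for behaviour in exactly the regions where the model itself has no evidence - which is why the explanation claim needs evidence of its own.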
