Thursday 11 June 2015

Clarifying Transparency


A dip of the toe into the topic of 'transparency', aimed at making the various meanings of the term a little more transparent.

Andy Clark has defined transparent (and opaque) technologies in his book 'Natural-Born Cyborgs': "A transparent technology is a technology that is so well fitted to, and integrated with, our own lives, biological capacities, and projects as to become (as Mark Weiser and Donald Norman have both stressed) almost invisible in use. An opaque technology, by contrast, is one that keeps tripping the user up, requires skills and capacities that do not come naturally to the biological organism, and thus remains the focus of attention even during routine problem-solving activity. Notice that “opaque,” in this technical sense, does not mean “hard to understand” as much as “highly visible in use.” I may not understand how my hippocampus works, but it is a great example of a transparent technology nonetheless. I may know exactly how my home PC works, but it is opaque (in this special sense) nonetheless, as it keeps crashing and getting in the way of what I want to do. In the case of such opaque technologies, we distinguish sharply and continuously between the user and the tool."
An example of the difference might be 3D interaction with and without head tracking: with tracking the interaction can become almost invisible in use, while without it the technology stays highly visible and keeps getting in the way.

Robert Hoffman and Dave Woods' Laws of Cognitive Work include Mr. Weasley’s Law: humans should be supported in rapidly achieving a veridical and useful understanding of the “intent” and “stance” of the machines. [This comes from Harry Potter: “Never trust anything that can think for itself if you can’t see where it keeps its brain.”] Gary Klein has discussed The Man behind the Curtain (from the Wizard of Oz): information technology usually doesn’t let people see how it reasons; it’s not understandable.
Mihaela Vorvoreanu has picked up on The Discovery of Heaven, a novel of ideas by Dutch author Harry Mulisch: "He claims that power exists because of the Golden Wall that separates the masses (the public) from decision makers. Government, in his example, is a mystery hidden behind this Golden Wall, regarded by the masses (the subject of power) in awe. Once the Golden Wall falls (or becomes transparent), people see that behind it lies the same mess as outside it. There are people in there, too. Messy people, engaged in messy, imperfect decision making processes. The awe disappears. With it, the power. What happens actually, with the fall of the Golden Wall, is higher accountability and a more equitable distribution of power. Oh, and the risk of anarchy. But the Golden Wall must fall."

Nick Bostrom and Eliezer Yudkowsky have argued for decision trees over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability. On this view, machine learning should be transparent to inspection, e.g. for explanation, accountability or legal 'stare decisis'.
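
By way of a hedged illustration (mine, not Bostrom and Yudkowsky's; it assumes scikit-learn and its bundled iris dataset), a decision tree's inspectability can be made concrete by printing the fitted model as explicit rules:

```python
# Illustrative sketch only (assumes scikit-learn and its bundled iris dataset).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# A fitted tree can be dumped as nested if/then rules that a reviewer, auditor
# or court can read and cite; a trained neural network's weights offer no
# comparable account of why a particular case was decided the way it was.
print(export_text(tree, feature_names=list(iris.feature_names)))
```
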
Alex Howard has argued for 'algorithmic transparency' in the use of big data for public policy. "Our world, awash in data, will require new techniques to ensure algorithmic accountability, leading the next generation of computational journalists to file Freedom of Information requests for code, not just data, enabling them to reverse engineer how decisions and policies are being made by programs in the public and private sectors. To do otherwise would allow data-driven decision making to live inside of a black box, ruled by secret codes, hidden from the public eye or traditional methods of accountability. Given that such a condition could prove toxic to democratic governance and perhaps democracy itself, we can only hope that they succeed."
Algorithmic transparency seems linked to 'technological due process' proposed by Danielle Keats Citron. "A new concept of technological due process is essential to vindicate the norms underlying last century's procedural protections. This Article shows how a carefully structured inquisitorial model of quality control can partially replace aspects of adversarial justice that automation renders ineffectual. It also provides a framework of mechanisms capable of enhancing the transparency, accountability, and accuracy of rules embedded in automated decision-making systems."
Zach Blas has proposed the term 'informatic opacity': "Today, if control and policing dominantly operate through making bodies informatically visible, then informatic opacity becomes a prized means of resistance against the state and its identity politics. Such opaque actions approach capture technologies as one instantiation of the vast uses of representation and visibility to control and oppress, and therefore, refuse the false promises of equality, rights, and inclusion offered by state representation and, alternately, create radical exits that open pathways to self-determination and autonomy. In fact, a pervasive desire to flee visibility is casting a shadow across political, intellectual, and artistic spheres; acts of escape and opacity are everywhere today!"

At the level of user interaction, Woods and Sarter use the term 'observability': "The key to supporting human-machine communication and system awareness is a high level of system observability. Observability is the technical term that refers to the cognitive work needed to extract meaning from available data (Rasmussen, 1985). This term captures the fundamental relationship among data, observer and context of observation that is fundamental to effective feedback. Observability is distinct from data availability, which refers to the mere presence of data in some form in some location. Observability refers to processes involved in extracting useful information. It results from the interplay between a human user knowing when to look for what information at what point in time and a system that structures data to support attentional guidance.... A completely unobservable system is characterized by users in almost all cases asking a version of all three of the following questions: (1) What is the system doing? (2) Why is it doing that? (3) What is it going to do next? When designing joint cognitive systems, (1) is often addressed, as it is relatively easy to show the current state of a system. (2) is sometimes addressed, depending on how intent/targets are defined in the system, and (3) is rarely pursued as it is obviously quite difficult to predict what a complex joint system is going to do next, even if the automation is deterministic."

Gudela Grote's (2005) concept of 'Zone of No Control' is important: "Instead of lamenting the lack of human control over technology and of demanding over and over again that control be reinstated, the approach presented here assumes very explicitly that current and future technology contains more or less substantial zones of no control. Any system design should build on this assumption and develop concepts for handling the lack of control in a way that does not delegate the responsibility to the human operator, but holds system developers, the organizations operating the systems, and societal actors accountable. This could happen much more effectively if uncertainties were made transparent and the human operator were relieved of his or her stop-gap and backup function."
