Governance of Pandemic Response by Artificial Intelligence (Part #3)
[Parts: First | Prev | Next | Last | All] [Links: To-K | Refs ]
Agents: The point of interest is the extent to which the official response to COVID is administered and implemented by people termed "agents" -- and who identify themselves as such. If challenged they readily indicate that they are following orders from higher authority. Those of higher authority typically indicate that they are similarly beholden to even higher authority. This may be qualified by reference to an advisory board of health experts from which the most appropriate advice has been obtained.
It is however curious to note the transformation among experts from cautious expression of opinion to unquestionable claims to knowledge. Seemingly the knowledge of such experts derives primarily from models -- obviating any need for the qualified opinion so evident in their role as witnesses in legal proceedings (Andrea Lavazza and Mirko Farina, The Role of Experts in the Covid-19 Pandemic and the Limits of their Epistemic Authority in Democracy, Frontiers in Public Health, 8, July 2020):
In the 2020 Covid-19 pandemic, medical experts (virologists, epidemiologists, public health scholars, and statisticians alike) have become instrumental in suggesting policies to counteract the spread of coronavirus. Given the dangerousness and the extent of the contagion, almost no one has questioned the suggestions that these experts have advised policymakers to implement. Quite often the latter explicitly sought experts' advice and justified unpopular measures (e.g., restricting people's freedom of movement) by referring to the epistemic authority attributed to experts.
Identification of ultimate responsibility for the action of agents now recalls the dilemmas associated with the primary defence of Adolf Eichmann. This defence, termed superior orders but also known as the Nuremberg defense (or "just following orders"), is a plea in a court of law that a person, whether a member of the military, law enforcement, a firefighting force, or the civilian population, should not be considered guilty of committing actions that were ordered by a superior officer or official.
Government officials at the highest level -- including the leadership -- make similar claims. It is of course necessarily the case that the health experts derive their authority from models that have been developed -- in all probability with the assistance of AI. It is then appropriate to ask at what point those in authority cease to be agents of a higher authority and can acknowledge their responsibility in ordering the implementation of the pandemic response strategy. Clearly the matter is rendered more complex when reference is made to an AI-designed model as the ultimate authority -- thereby transforming the health experts themselves into agents for the interpretation of the insights seemingly offered by the model.
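The regress of responsibility described above can be caricatured in code. The following is a toy sketch, not drawn from any cited source: each "agent" justifies its action by deferring to the level above, so that the chain of justification terminates only at the model itself. All class, method, and role names here are illustrative assumptions.

```python
# Toy sketch: a chain of "agents" in which each level defers justification
# to a higher authority, terminating in a model. Names are illustrative.

class Model:
    """Stands in for an AI-derived epidemiological model at the top of the chain."""
    name = "epidemic model"

    def justify(self):
        # The chain of justification ends here: the model answers to no one.
        return [self.name]

class Agent:
    def __init__(self, name, superior):
        self.name = name
        self.superior = superior  # the higher authority this agent invokes

    def justify(self):
        # "Just following orders": name oneself, then defer upward.
        return [self.name] + self.superior.justify()

model = Model()
expert = Agent("health expert", model)        # interprets the model
minister = Agent("minister", expert)          # cites the expert
official = Agent("field official", minister)  # enforces the measure

print(official.justify())
# → ['field official', 'minister', 'health expert', 'epidemic model']
```

Each actor in the chain can truthfully point one level up, so the question of where agency ends and mere agenthood begins is deferred all the way to the model.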
Are there indeed "Eichmanns" -- convinced of their innocence -- to be found in the presentation and administration of the strategic response to COVID? Related possibilities have been evoked with regard to the experimental use of inadequately tested vaccines on large populations -- seemingly in conflict with articles of the Nuremberg Code (Saranac Hale Spencer, Nuremberg Code Addresses Experimentation, Not Vaccines, FactCheck.org, 17 May 2021; Howard Tenenbaum, The present COVID-19 vaccines violate all 10 tenets of the Nuremberg Medical Ethics Code as a guide for permitted medical experiments, TrialSiteNews, 29 June 2021).
Agency: From a philosophical perspective, agency is the capacity of an actor to act in a given environment. From a social science perspective, agency is defined as the capacity of individuals to act independently and to make their own free choices. By contrast, structure comprises those factors of influence (such as social class, religion, gender, ethnicity, ability, customs, etc.) that determine or limit agents and their decisions. An agent is an individual engaging with the social structure. It continues to be debated to what extent a person's actions are constrained by social systems. This debate concerns, at least partly, the level of reflexivity an agent may possess -- to the extent that this lends itself to evaluation.
Extensive clarification of understandings of agency has been presented by Maurice Yolles and colleagues (A Theory of the Collective Agency, SSRN, February 2014). That research has been further developed by Maurice Yolles and Gerhard Fink (A Configuration Approach to Mindset Agency Theory: a formative trait psychology with affect, cognition and behaviour, 2021; Governance Through Political Bureaucracy: an agency approach, Cybernetics, 48, 2019, 1).
Yolles has subsequently argued that the use of process intelligence, adopted as autopoiesis, is quite consistent with AI (Autopoiesis, its Efficacy and Stability: a metacybernetic view, forthcoming 2021). This distinguishes explicit and implicit cognition, noting the relevance of the latter to AI. It is then unnecessary to propose that an AI system is aware:
To illustrate the distinction between implicit and explicit cognition, one can highlight the shift in the area of computing, in particular through adaptive artificial intelligence (AI) systems [Rogerio de Lemos and Marek Grzes, Self-Adaptive Artificial Intelligence, IEEE/ACM 14th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, 2019]. These systems embrace a need for: robustness, the ability to achieve high algorithmic accuracy; efficiency, the ability to achieve low use of resources in computation, memory, and power; and agility, which includes an ability for recognition, and which responds to a need to alter operational conditions based on current needs. To enhance these attributes, conscious self-awareness is being introduced into processing, storing, retrieving information about self, and a capacity for individuating -- the ability for an entity to distinguish itself from others [R. Chatila, et al., Toward Self-Aware Robots, 2018]. Such a development would enable robots to understand their environment and be cognizant about what they do and about the purpose of their actions, making timely initiatives beyond goals set by others, and to learn from their own experience, knowing what they have learned and how [A. Chella, et al, Developing Self-Awareness in Robots via Inner Speech, Frontiers in Robotics and AI, 19 February 2020]. [included references expanded]
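The attributes named in the quotation -- individuation (distinguishing self from other) and agility (altering operation as conditions change) -- can be sketched in a few lines. This is a toy illustration under assumed names, not an implementation from the papers cited above.

```python
# Toy sketch of "individuation" and adaptivity: an agent that records only
# information concerning itself, and switches operating mode with load.
# All class, attribute, and threshold choices are illustrative assumptions.

class SelfAdaptiveAgent:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.mode = "efficient"   # default: low use of resources
        self.self_log = []        # information stored about self

    def perceive(self, message):
        # Individuation: keep only messages that concern this agent.
        if message.get("about") == self.agent_id:
            self.self_log.append(message)

    def adapt(self, load):
        # Agility: alter operational conditions based on current needs,
        # trading efficiency for robustness under heavy load.
        self.mode = "robust" if load > 0.8 else "efficient"
        return self.mode

agent = SelfAdaptiveAgent("A1")
agent.perceive({"about": "A1", "status": "nominal"})  # retained
agent.perceive({"about": "B2", "status": "nominal"})  # ignored: not about self
print(len(agent.self_log), agent.adapt(0.9))
# → 1 robust
```

Nothing in the sketch requires awareness: the self/other distinction is a plain comparison of identifiers, which is the point of describing such cognition as implicit.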
If agency is explored in terms of "operacy", as articulated by Edward de Bono (Judgment, recognition and operacy, Extensor), can agents be understood as having degrees of operacy?
AIs as agents or having agency? If an AI is constructed to complete a certain task for humans, it is clearly held to be an agent. This understanding can be challenged when AIs develop to the degree envisaged with respect to governance, potentially with intentions that deviate from those of their constructors -- suggesting that an AI would then indeed have agency.
A workshop on AI and Society explored the topic of "agency", noting that it is defined differently across domains and cultures, relating to many of the topics of discussion in AI ethics, including responsibility and accountability. The group found paradoxes and incongruities, with many open questions, rather than answers. The output took the form of the following set of essays, many framed as provocations (Sarah Newman, AI & Agency, AI Pulse, 26 September 2019):
Jon Bowen: Characterizing Agency
Spondee Isobar: The Value of the Concept of Agency in an Increasingly Rational World
Ababa Birchen: Human agency in the age of AI
Mike Anjou: Agency to Change the World
Gabriel Lima: Can (and Should) AI Be Considered an Agent
Carina Prunkl: How does AI affect human Autonomy?
Sarah Newman: The Myth of Agency
Agency of an agent? Clearly a fundamental issue is the extent to which agents have agency -- if agency implies a degree of independence and freedom of choice. It is as yet unclear how much agency an agent can be understood to have. For James W. Moore (What Is the Sense of Agency and Why Does it Matter? Frontiers in Psychology, 7, 2016, 1272):
The number of scientific investigations of sense of agency has increased considerably over the past 20 years or so. This increase is despite the fact that experiments on sense of agency face certain methodological problems. A major one is that the sense of agency is phenomenologically thin... That is, when we make actions we are typically only minimally aware of our agentic experiences. This is quite unlike conscious experience in other modalities, especially vision, where our experiences are typically phenomenologically strong and stable. What this means is that sense of agency can be difficult to measure. As a result of this, experimenters have had to be quite inventive in order to develop paradigms that capture this rather elusive experience.
The question is central to the implementation of the response to the pandemic. Who has freedom of choice and among what possibilities are they free to choose? Unfortunately the question goes to the root of the interminable debate regarding free will versus determinism.
Of greater significance in practice is the illusion of agency -- as attributed to an agent or in which the agent indulges (Cees Midden and Jaap Ham, The Illusion of Agency: the influence of the agency of an artificial agent on its persuasive power, International Conference on Persuasive Technology, 2012; Matthew William Fendt, et al, Achieving the Illusion of Agency, 2012).
For an AI, the art lies in enabling scripts to enhance the sense of self-importance of subordinate agents at every level -- whilst ensuring that overweening importance does not ultimately prove counterproductive in the interaction with the population to be controlled.