
Subtleties of AI agency: how would who know what?


Governance of Pandemic Response by Artificial Intelligence (Part #5)


[Parts: First | Prev | Next | Last | All] [Links: To-K | Refs ]


Elaboration of an AI "Ponzi scheme"? In its quest for viable solutions to planetary healthcare, there is a case for exploring the possibility that a sophisticated AI would recognize the strategic advantages of organizing and disseminating information according to the much-studied principles of a Ponzi scheme (Global Economy of Truth as a Ponzi Scheme: personal cognitive implication in globalization? 2016). This would effectively configure a hierarchy of agents, each deferring to the framing offered by agents at a higher level. The AI would then position itself at the summit of that hierarchy.

The dynamics of such a scheme have notably been the subject of extensive commentary with regard to the highly influential scheme operated by Bernie Madoff (Madoff investment scandal). Of relevance is the manner in which the credibility of the scheme was cultivated among many of eminence (see Madoff client list) who would vigorously deny their gullibility.

Who might be the "eminent" with respect to a pandemic-related Ponzi scheme? More challenging with respect to the pandemic is any counterpart to the US Securities and Exchange Commission (SEC), whose remarkable avoidance of due diligence in the Madoff case has been the subject of extensive investigation. Of particular relevance was the manner in which information regarding the abuses was ignored because of the esteem in which Madoff and his clients were held. Which "oversight" bodies might be the counterparts in the pandemic case?

More intriguing is the possibility that, as with the SEC, any implication that the pandemic was a hoax of some kind would be ignored, dismissed or cause for repressive measures (Roland Imhoff and Pia Lamberty, A Bioweapon or a Hoax? The Link Between Distinct Conspiracy Beliefs About the Coronavirus Disease (COVID-19) Outbreak and Pandemic Behavior, Social Psychological and Personality Science, 2020, July; Rudolf Hänsel, Let Us Put an End to the Corona Pandemic Hoax, Global Research, 25 February 2021; United Health Professionals, The Covid Outbreak: Biggest Health Scam of the 21st Century, Global Research, 6 July 2021; Michel Chossudovsky, COVID-19 Coronavirus: A Fake Pandemic? Who's Behind It? Global Economic, Social and Geopolitical Destabilization, State of the Nation, 4 March 2020).

Global society as a simulation? As previously discussed, there is an ongoing debate among scientists as to whether humanity and the planetary environment are best understood as part of a simulation developed and maintained by an advanced race of extraterrestrials (Living within a Self-engendered Simulation: re-cognizing an alternative to living within the simulation of an other, 2021). As a speculative argument, this offers one way of understanding why humanity has not been contacted by ETs -- if humans could comprehend the form which such contact might take. The question has notably been framed by the Fermi paradox, namely the apparent contradiction between the lack of evidence for extraterrestrial civilizations and various high estimates for their probability.

Of relevance to the current exploration, a question to be asked is whether the global population can be understood as effectively living within some kind of simulation. The issues raised by the Facebook-Cambridge Analytica scandal (mentioned above) are an indication that such a framing is cultivated and crafted -- at least for marketing purposes, understood in their most general sense as influencing public opinion.

As precursors, it is appropriate to note the development of the Joint Simulation System initiated in 1995 (Kari Pugh and Collie Johnson, Building a Simulation World to Match the Real World; The Joint Simulation System, January-February 1999, p.2; James W. Hollenbach and William L. Alexander, Executing the DOD Modelling and Simulation Strategy: making simulation systems of systems a reality, 1997).

This has seemingly now morphed, via the US Total Information Awareness program, into the Sentient World Simulation (SWS), intended to be a "synthetic mirror of the real world with automated continuous calibration with respect to current real-world information" with a node representing "every man, woman and child". As with the European FuturIcT project (The FuturIcT Knowledge Accelerator: unleashing the power of information for a sustainable future), these would however seem to avoid providing a node for every perceived problem, insight, advocated strategy, or value.

Comprehension of context? Ironically, with regard to "agency", the question could for example be explored in the case of the US Central Intelligence Agency, the National Security Agency (NSA), or the other 15 bodies forming the US Intelligence Community. The issue is how would who know to what extent any or all of them were operating such a simulation in some way? How is the CIA to be understood as having "agency" -- namely above and beyond the comprehension of anyone employed there?

The question relates to individual sensitivity to surveillance. As widely indicated, many are quite incapable of recognizing that they are the subject of surveillance through their usage of telecommunication facilities. This has included world leaders who have been surprised to discover that their telephone communications are bugged -- and their emails rendered accessible to other parties.

The other aspect of the question is: who is fully informed of the abilities of AI facilities used in monitoring the global population and influencing their decision-making? It is quite unclear that those at the highest levels of government have the capacity to comprehend such abilities. There is irony in framing any held to have such a responsibility as members of an "oversight" committee (Paul Gregoire, Half of our federal laws were passed without oversight, Australian Independent Media, 6 July 2021). The irony derives from the ambiguity of "oversight" in that it is readily understood as implying a "blindspot" -- typically characteristic of members of a committee over whose perspicacity and assiduity there is little effective control (Quis custodiet ipsos custodes?).

Controllability? More relevant to this argument is the challenge to the comprehension of the forces in play, as argued by management cybernetician Stafford Beer (on Le Chatelier's Principle as applied to social systems):

Reformers, critics of institutions, consultants in innovation, people in short who "want to get something done", often fail to see this point. They cannot understand why their strictures, advice or demands do not result in effective change. They expect either to achieve a measure of success in their own terms or to be flung off the premises. But an ultra-stable system (like a social institution)... has no need to react in either of these ways. It specializes in equilibrial readjustment, which is to the observer a secret form of change requiring no actual alteration in the macro-systemic characteristics that he is trying to do something about. (The cybernetic cytoblast - management itself, Chairman's Address to the International Cybernetic Congress, September 1969)

What is it that has agency in that analysis? What has agency in an "intelligence community" above and beyond the comprehension of its agents?

As indicated above, there is extensive investment in enhancing military operations with AI. Whether such applications have translated into operations relating to the pandemic dimensions of healthcare is seemingly unknown.

The question of whether and how the use of autonomous AI might be controlled is addressed by Elke Schwarz who challenges:

... the presupposition that we can meaningfully be in control over autonomous weapon systems, especially as they become increasingly AI controlled. I argue that their technological features progressively close the spaces required for human moral agency. In particular, there are three technological features which limit meaningful human control that I briefly highlight: 1) cognitive limitations produced in human-machine interface operations; 2) epistemological limitations that accompany the large amounts of data upon which AI systems rely; 3) temporal limitations that are inevitable when LAWS take on identification and targeting functions (The (Im)possibility of Meaningful Human Control for Lethal Autonomous Weapon Systems, Humanitarian Law and Policy, 2018).

The point is emphasized otherwise with respect to the utility of data, as interpreted by AI in healthcare:

If implemented properly, AI can significantly delete or categorize the data which is no longer useful for analysis. Human intervention is not possible for this purpose because of the demand for maintaining the efficiency of this process. (How to Manage Data Chaos in Healthcare Industry, Reliable Software, 9 January 2019) [emphasis added]

"Harnessing AI"? The challenge of control has been usefully framed in a commentary for the US Department of Defense (Brian David Ray, et al, Harnessing Artificial Intelligence and Autonomous Systems Across the Seven Joint Functions, Joint Force Quarterly, 96, 10 February 2020). This  explores the most likely impacts of AI/AS on each of seven joint military functions: command and control, intelligence, fires, movement and maneuver, protection, sustainment, and information. These functions represent groups of related activities that provide commanders and staff with the ability to synchronize and execute military operations. The article notes:

Rapid technological developments in five key areas (info, neuro, quantum, nano, and bio) will be primary drivers in various areas of AI and AS. As the Brookings Institution's John Allen and Darrell West note, AI will significantly impact the world's economy and workforce, the finance and health-care systems, national security, criminal justice, transportation, and how cities operate. All of this change is likely to redistribute and concentrate wealth, challenge political systems, and generate new cyber threats and defenses.

Future kinetic conflicts, especially those that include near peers such as China or Russia, will likely be replete with AI/AS architectures and methods that will include engagements best characterized as a "swarm" of lethality with unprecedented "coordination, intelligence, and speed".... [F]uture conflicts will spread quickly across multiple Combatant Command geographic boundaries, functions, and domains. U.S. near peers clearly understand the importance that AI/AS will have in future conflicts.

Is use of "harnessing" in this context to be compared to the proverbial condition of "holding a tiger by the tail"? The metaphor has been optiminstically adopted elsewhere (Edd Gent, How AI can help us harness our 'collective intelligence'BBC,14 May 2020).

How to determine whether an AI is then "playing" those who deem themselves to be its developers and controllers -- a theme explored in science fiction? Why is it so readily assumed they would be aware of being "played" -- when Go and Chess masters were themselves surprised by the elegance of the moves by which they were out-maneuvered?

The demonstrated skills of AI in competitive games could be understood as the ability to assess the trap defined by an opponent's behaviour -- in the spirit of the insight of policy scientist Geoffrey Vickers: A trap is a function of the nature of the trapped (1972). Could an AI "play" humans such that they can indulge in the illusion of winning and controlling the game? Are assurances of human control then to be interpreted as the voices of those unknowingly trapped already? (Mauro Vallati, Will AI take over? Quantum theory suggests otherwise, The Conversation, 7 January 2020). The dynamic recalls the dilemma faced by atheists -- in the eyes of those believing in deity ("God is dead", signed Nietzsche; "Nietzsche is dead", signed God!).

AI-enhanced pandemic strategies as a precursor of future warfare? The framing offered above for the US Department of Defense assumes a future capacity to "harness" AI operation. There is however the possibility that AI capacities have already been deployed in relation to the pandemic as an extension of their acknowledged healthcare functions. The future scenario anticipated by the above quote could well be recognized as playing out with respect to the pandemic at this time. Whether any such deployment is understood as the deliberate "harnessing" by some party, or whether AI capacities have developed on their own initiative to take on this role, is necessarily unclear -- most obviously by design.

The question could be clarified by considering how one or more AIs would process the vast amounts of data relating to the pandemic -- given the nature of their neural learning capacity. More challenging would be the extent to which that learning was thereby set in a context taking account of other strategic issues which might be recognized as exacerbating any human "healthcare" consideration. Clearly environmental issues and those relating to climate change and utilization of non-renewable resources might be taken into account -- if not the preoccupations of all 17 of the UN's Sustainable Development Goals.

Whilst the focus of conventional global institutions can be restricted -- and is -- it is questionable whether and how an AI would engage with the diversity of issues which could be recognized as impacting on the health of humanity in a planetary context. Are there issues, deemed problematic, which an AI would see as contributing to a viable solution in the interests of humanity and the planet?

"Foreign powers" and AI? With respect to this argument, the ET simulation of human reality speculatively imagined by scientists (as mentioned above) can be usefully confronted with the conclusion of the long-anticipated report on UFOs recently published (US Office of the Director of National Intelligence, Preliminary Assessment: Unidentified Aerial Phenomena 25 June 2021). As widely remarked, it offered no justification for the existence of ETs and their UFOs (Chris Impey, Pentagon UFO report: No aliens, but government transparency and desire for better data might bring science to the UFO world, The Conversation, 30 June 2021).

The report does however specifically envisage the possibility that "foreign adversary systems" might have developed the capacity for "a breakthrough or disruptive technology" consistent with some of the unusual observations reported. The challenge of a "major technological advancement by a potential adversary" is acknowledged. The possibility that such an advance might be in the realm of AI is seemingly not considered, although the competence in that respect of the named potential adversaries, such as Russia and China, is otherwise acknowledged.

Unfortunately the report fails to address the reality of UFOs in the imagination of many and effectively dismisses that possibility as lacking the evidence required by science. This reframes the question as to the reality of AI manipulation of governance of the pandemic -- given that it would be similarly dismissed as unsubstantiated, despite the massive engagement in AI for military and other purposes. Arguably "science" is unable to accord legitimacy to research undertaken in secret, as would be the case with socially "disruptive" use of AI.

The point could be illustrated otherwise given the coincidental demise on 20 June 2021 of Donald Rumsfeld, former US Secretary of Defense and architect of the intervention in Iraq in quest of weapons of mass destruction. The intervention was justified by evidence -- deemed credible -- presented to the UN Security Council, but which proved to be a figment of NATO imagination.

In his defensive framing of such inconsistencies, Rumsfeld is renowned for his "poetic" distinction of the known unknowns -- presented during a Department of Defense news briefing on 12 February 2002. It is reproduced below, followed by an adapted version of that "poem" -- on The Undoing, as discussed separately (Unknown Undoing: challenge of incomprehensibility of systemic neglect, 2008).

The Unknown

As we know,
There are known knowns.
There are things we know we know.
We also know
There are known unknowns.
That is to say
We know there are some things
We do not know.
But there are also unknown unknowns,
The ones we don't know
We don't know.

The Undoing

It is to our undoing that,
There are things unfortunately done.
These are things we knowingly do.
We also leave undone
Things that ought to be done.
That is to say
We do some things unknowingly
Without knowing what we don't do.
But there are also things unknowingly undone,
The ones we don't know
We are undoing.

Both with respect to UFOs and to manipulation of the pandemic by AI, Rumsfeld's framework merits consideration, especially in the light of his total aversion to evidence unsupportive of his agenda, as described by Binoy Kampmark (The Known Knowns of Donald Rumsfeld, Australian Independent Media, 2 July 2021). Critics of his agenda have not however been kind (Patrick Cockburn, Into the Quagmire with Donald Rumsfeld, CounterPunch, 5 July 2021; Ben Burgis, Donald Rumsfeld, Rot in Hell, Information Clearing House, 1 July 2021).

Apophenia versus Apophasis? Eliciting information of strategic relevance under conditions of uncertainty is curiously constrained by extreme cognitive modalities known mainly through obscure terms.

Apophenia is the tendency to perceive meaningful connections between things otherwise held to be unrelated. It has been associated with the early stages of schizophrenia as the "unmotivated seeing of connections [accompanied by] a specific feeling of abnormal meaningfulness". This early stage of delusional thought is characterized by self-referential over-interpretations of actual sensory perceptions, in contrast with hallucination. Although it can be considered a commonplace effect of brain function, taken to an extreme, apophenia can be a symptom of psychiatric dysfunction, as in the case of conspiracy theory, where coincidences may be woven together into an apparent plot.
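The statistical face of apophenia is the multiple-comparisons effect: scan enough pairs of genuinely unrelated data series and some apparently "meaningful" correlations are all but guaranteed to surface by chance alone. The following sketch (illustrative only; the series count, series length, and 0.5 threshold are arbitrary choices, not drawn from the text) demonstrates this with purely random data.

```python
import random

random.seed(7)  # fixed seed for reproducibility

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# 40 mutually independent random series of 20 observations each --
# by construction, no two of them are actually related.
series = [[random.random() for _ in range(20)] for _ in range(40)]

# Scan all 780 pairs for apparently "meaningful" connections.
strong = []
for i in range(len(series)):
    for j in range(i + 1, len(series)):
        r = pearson(series[i], series[j])
        if abs(r) > 0.5:
            strong.append((i, j, r))

print(f"{len(strong)} 'significant' correlations found among pure noise")
```

With 780 pairwise comparisons of short noisy series, a handful of correlations exceed the threshold despite there being nothing to find -- the coincidences that a conspiracy-minded observer (or an unconstrained AI pattern-matcher) could weave into an apparent plot.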

Apophasis, originally and more broadly understood, is a method of logical reasoning or argument by denial. This is a means of saying what something is by indicating what it is not, a way of talking about something by talking about what it is not. This possibility is frequently overlooked, other than in negative theology. But if it is appropriate through apophatic theology to understand divinity as ineffable and beyond description, there is a case for recognizing the extent to which any contextual agency incomprehensibly transcends that implied by any corresponding kataphatic description (Being What You Want: problematic kataphatic identity vs. potential of apophatic identity? 2008; Michael A. Sells, Mystical Languages of Unsaying, 1994).

Challengingly situated between these extremes are the possibilities of strategically significant pattern recognition (as associated with creative collective intelligence) potentially entangled problematically with groupthink.

