Governance of Pandemic Response by Artificial Intelligence (Part #6)
[Parts: First | Prev | Next | Last | All] [Links: To-K | Refs ]
As noted above, however, such health experts defer to the models through which the pandemic is framed. It is from such a model that their authority is derived -- as indicated above in the case of the two models which have been so influential in framing the lockdown and related strategies. The extent to which "artificial intelligence" is used in the elaboration and operation of those models is far from clear -- despite the claims by WHO (noted above) with regard to the importance accorded to the healthcare role of AI.
Also less evident is the degree to which "artificial intelligence" then has a primary role in articulating the narrative -- as has been exemplified and documented in the case of the Facebook-Cambridge Analytica data scandal. There is also the question of whether those claiming to control any such use of AI are fully aware of the extent to which they are themselves effectively controlled by that facility.
As described by Warren Chik of Singapore Management University:
The ability of an AI system to conduct personal profiling could fundamentally change a user's digital personality, said Professor Chik, highlighting a cause of worry for many. "While an AI holds specific information such as your name and address, it also forms its own knowledge of your identity, and who you are as a person," Professor Chik said, citing algorithms used by social media feeds to collect data on one's identity, interests and surfing habits. From that data, the system then creates a profile of who they think you are. "These algorithms - which may be right or wrong - feed you information, articles and links, and as a result brings about an effect on your thinking. In other words, AI can mold human behaviour, and this is a risk that makes a lot of people uncomfortable," Professor Chik said. The threat is very real, he emphasised, noting that regulators have clearly identified a need to regulate the use of data in AI. (Engendering trust in an AI world, EurekAlert!/AAAS, 2 March 2020)
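The profiling loop Chik describes -- collect behavioural data, infer a profile, feed back content matching that profile -- can be caricatured in a few lines of Python. This is purely an illustrative sketch: the function names, topics and catalogue are invented, and real feed algorithms are vastly more elaborate.

```python
from collections import Counter

def update_profile(profile, clicked_topics):
    """Record each clicked topic in the inferred-interest profile."""
    profile.update(clicked_topics)
    return profile

def recommend(profile, catalogue, k=2):
    """Rank catalogue items by how often their topic appears in the profile."""
    return sorted(catalogue, key=lambda item: -profile[item["topic"]])[:k]

# A user's clicks gradually shape what the system "thinks" they are.
profile = Counter()
update_profile(profile, ["politics", "health", "politics"])

catalogue = [
    {"title": "Vaccine update", "topic": "health"},
    {"title": "Election analysis", "topic": "politics"},
    {"title": "Gardening tips", "topic": "leisure"},
]

feed = recommend(profile, catalogue)
# The feed now over-represents the dominant inferred interest -- which in
# turn invites more clicks on that topic, closing the feedback loop by
# which the profile "molds" what the user is subsequently shown.
```

Even this toy version exhibits the property at issue: the profile, right or wrong, determines what is seen next, and what is seen next reinforces the profile.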
Of particular interest is how the articulation of the narrative at the most authoritative level acquires ever greater clarity and definition as it percolates down through agents at lower levels of authority -- each refining the certainty of the language they deploy with ever greater confidence. As in any military structure, there is little scope for the content of a communication then to be challenged. Any efforts to do so are immediately subject to reprimand -- reinforced by peer group pressure.
For an agent, there is no room whatsoever for doubt -- as has become especially obvious in the case of the pro-vaccination narrative. Agents do not have questions regarding their role -- they provide answers to others in accordance with the script with which they have been provided -- or suffer the consequences of their failure to do so (Question Avoidance, Evasion, Aversion and Phobia: why we are unable to escape from traps, 2006).
If such is the case with respect to agents, it is perhaps curious that an AI of superior agency is seemingly assumed to be valued for its focus on answers alone. More intriguing is whether it might engender questions of a higher order, as discussed separately (Superquestions for Supercomputers: avoiding terra flops from misguided dependence on teraflops? 2010; Framing Cognitive Space for Higher Order Coherence, 2019).
Conflation of significance of "script": It is especially intriguing to note current use of the term "script". Clearly agents are expected to communicate according to a script and may well be carefully trained to do so in an appropriately designed program. In any press conference or other declaration, those at the highest levels of government necessarily rely on a carefully prepared text -- resulting in a communication which can be described as "scripted". This is in accord with the pattern of any media performance by actors -- dependent on script-writers. Earlier variants are evident in the reliance of priesthoods on liturgical scripts inspired by scriptures.
It is however the case that "script" is also widely used to describe the lines of computer code typical of any algorithm by which an AI would operate. It has effectively replaced use of the term "program". In the case of AI, there is the further issue of the extent to which an AI can develop and elaborate the scripts by which it operates -- as a consequence of neural learning.
Curiously however, use of "prescription" by physicians -- to specify to pharmacists the medication to be received by an individual -- is now abbreviated to "script" in some contexts. This is especially curious given the attested importance of AI to healthcare. It suggests a strange conflation of meaning and usage which merits careful attention. In French, for example, prescription is translated as ordonnance -- with a computer translated as ordinateur -- offering associations to notions of order.
Are agents to be understood as purveyors of scripts in a generic sense -- scripts which have ultimately been crafted by the intervention of AI to a degree which is necessarily unknown to most? The process is strangely reminiscent of the role of priesthoods in relation to a deity of which only the high priests can claim any real understanding. Priests of a lower order rely on use of scripts variously transmitted to them.