AI 001

Authorizing Intelligence is a new response column similar to UUM (Understanding Understanding Media), in which I post and respond to writing on the emergence of artificial intelligence tools.

This column responds to an article posted in The Economist: Yuval Noah Harari argues that AI has hacked the operating system of human civilisation

Fears of artificial intelligence (ai) have haunted humanity since the very beginning of the computer age. Hitherto these fears focused on machines using physical means to kill, enslave or replace people. But over the past couple of years new ai tools have emerged that threaten the survival of human civilisation from an unexpected direction. ai has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. ai has thereby hacked the operating system of our civilisation.

Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our dna. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures.

The main thesis of Harari’s argument is that human culture is embedded in language, and that since language is a system, it is vulnerable to “hacking.” Thus an artificial intelligence superior to human intelligence could hack our language to gain control over us.

This is essentially the same argument Harari made in an article for Wired back in 2018. I read that article at the time, and a single sentence from it continues to strike me each time I think about AI:

“In the end, it’s a simple empirical matter: if the algorithms indeed understand what’s happening within you better than you understand it, authority will shift to them.”

It’s a statement that is very hard to refute. But while Harari uses the narrative framing of “what will life be like in 2050” to anchor his earlier article, we don’t have to worry about some future point when this will happen, because it’s a process we’re already constantly inflicting upon each other in many ways, not the least of which is advertising. That is, advertising tells us a story in which purchasing a product is the story’s resolution. And, perhaps miraculously, it so often works: advertising “hacks” us.

This understanding moves us away from thinking of AI in the increasingly ridiculous sense of “the singularity” (some future point when humankind stumbles upon a big AI and things quickly change), and places us and the present instead on a spectrum or timeline. That is, the future in which humans are subjugated to machines has in fact been the case for much of history. And this has really ramped up since the creation of the internet.

But this intelligence is certainly built in our own image. That is, AI tools like ChatGPT or Midjourney are born of our own language and images. As such, if they can be said to have any intention or desire, those intentions and desires are simply a product of what is being fed into them and what is expected of them. And this is what I find perhaps the most troubling, and where advertising becomes a larger player: if we are teaching our AI tools on what we have created, and on what we are expecting of them, then we are specifically creating Capitalistic AI.

I should draw a distinction here between two problems: Capitalist AI and Capitalistic AI. The former is the problem of developing these tools within the milieu of capitalism, such that the atrocities of inequality are exacerbated by their control and use: the rich and powerful will use them to become more rich and powerful at the expense of an underclass that they will subjugate. This is the problem of capitalism simply amplified by AI.

Capitalistic AI is a pedagogical problem: if we imbue AI with the drives and desires of capitalism by teaching it on the foundations of capitalistic information, and by using it with capitalistic expectations, then we will create an intelligence optimized around the moral failings of capitalism, and we should not be surprised when it destroys and subjugates humans with little regard for anything except the accumulation of resources.

I’ve only touched on a little of Harari’s two articles, both of which I intend to discuss in greater detail as I continue this column.

I would also like to bring McLuhan into this conversation with a discussion of AI through the lens of McLuhan’s “extension” rhetoric. That is, the extent to which AI is an extension of and prosthesis for the human brain.