Are We Letting Big Tech Outsource Our Humanity?


The biggest problem with Artificial Intelligence will be the way we use it, writes Dr Richard Hil.

We’ve long been in a “mirror world” of hyper-reality, in which those old stalwarts of truth and reason have been mired in an algorithmic quagmire.

This began well before the onset of generative AI. The internet, once quaintly viewed as an ‘information highway’, became the hothouse repository of data-harvesting, mass surveillance and targeted consumerism. It has also encouraged violent, anonymised nativism and racialised tribalism through which junk theories and counterfactualism have proliferated.

The lines between public and private have been erased. Routinely, as we gaze at our screens, Big Tech stares right back, all the time absorbing, assimilating, targeting. There’s really no escape, other than total abstinence on a remote island.

Over time, we’ve become entangled in a spider’s web. Everyone has been granted a digital voice, with the shrillest gaining traction among the lonely and isolated, the angry and disaffected. Appeals to reason and rationality are swept aside, dumped in the dustbin of privileged, white-masculinist discourse. Truth has been relativised to the point of oblivion. Fake news, alternative facts, bogus ideas and straw men have taken care of the rest.

With AI, the waters are even cloudier. We’re at the point where its generative capabilities can create ghost-like replicas and digitised avatars whose bearing and speech resemble the real thing. We can hardly tell the difference. The medium might be the message, but AI’s endless reproductive troves should worry us all. It is parasitic, feeding off what it ingests while offering us a coke-line to hyperactivity and infinite, profit-generating possibilities.

The latter have been celebrated, but the full consequences of AI are yet to reveal themselves.

Like the internet, AI promises much but delivers wild, unaccountable spaces, mixing personas and messaging to suit particular political and commercial agendas. Sure, it has its many positive uses – across a range of fields – but, again, it comes with many dark sides.

The mirror world of which Naomi Klein speaks in Doppelganger dedicates itself, among other things, to reworking seemingly progressive ideas and repackaging them for political advantage. This appropriation began in earnest with the Tea Party back in 2009, but has its origins further back, in the darkest reaches of the totalitarian state. Thus it was possible (as it is now) to speak of the ‘will of the people’ while repressing them, or to laud peace while prosecuting war.

Climate change activist Naomi Klein. (IMAGE: Sandra Gonzalez, Flickr)

The Tea Party spoke of a ‘people’s movement’, ‘freedom’ and the excesses of ‘big government’, while feeding far-right libertarian interests, just as Trump and the Republican insurgents now speak of ‘freedom’ and ‘democracy’ while seeking to quell oppositional voices.

In classic Orwellian doublespeak, words like ‘the people’, ‘freedom’ and ‘democracy’ are deployed to signify unifying intentions but are, in fact, as Naomi Klein notes, “the uncanny twin of what we once knew”.

The doppelganger appropriation of progressive discourse works because it resonates with a deep desire for justice. It sounds good. Democratic. Aspirational. Inclusive – to a point. Yet fused with a sense of victimhood and the identification of enemies – elites, the deep state, the mainstream media, illegal migrants, opportunistic refugees – it insists that these dark forces can only be conquered through one ideology and one anointed leader.

Causative complexity in this schema is displaced by simple Hollywood-standard binaries of good and evil, destructive enemies and people just like us. Bristling with evangelical zeal, such imagined polarities morph easily, as Steve Bannon’s historical fictions attest, into grand civilisational struggles over which there can, and must, be only one righteous outcome.

For Klein, the algorithmic world is about replacing the authentic with the synthetic. It is a “forgery of life” which ends up “destabilising our shared worlds”. We should seek to understand these forces “to get to firmer ground”, Klein argues. This invites us to understand, as best we can, how modern technologies work, whose interests they serve, and the role they play in shaping the hegemonic order.

It should also compel us, as Noam Chomsky urges in his online critical thinking masterclass, to question how and why we engage with these technologies.

At times, I’ve been shocked at how unthinkingly many of my seemingly progressive friends use generative AI, most often to fashion text. I’ve seen it used to dream up titles for newspaper columns and conferences. One friend told me how he’d used AI to ‘write’ an article for a local newspaper. When I opined that, “well, you didn’t write it”, he appeared more bemused than outraged.

“Why wouldn’t I use it?” he inquired.

What ensued was a lengthy discussion about the ethics of using AI. It’s important we have these sorts of discussions.

While the AI genie is well and truly out of the bottle, regulation has yet to catch up. It may never. The more pressing concern is how each of us engages with this tantalising technology, and when and where we draw the lines.

There’s nothing benign about AI. Nothing. Social media has taught us the many problems of unleashing technologies over which we have little control. For all its claimed advances, social media has contributed to greater loneliness and isolation (despite promises of hyper-connectivity), diminished social skills (including empathy), and rising anxiety and depression, mainly among young people.

The full gamut of social consequences of AI is yet to reveal itself, but I am intrigued, for example, by how and why it’s being used to counsel young people, just as I am concerned about the net social effects of bots ‘caring’ for the aged or becoming programmed, supine ‘partners’.

What are the real motivations behind such things? What do they say about our society more generally? Outsourcing caring functions to machines and relegating intellectual capital to generative AI may appear quick, easy and ‘cost effective’ (it’s why AI is the leading investment ‘theme’ among Big Tech companies), but the real cost may be the loss of key aspects of our humanity.

Surely, in a society riven with alienation and loneliness, that’s too high a price to pay? And what will happen to critical, independent thinking, creativity and the wild, ‘wonderful world of the human imaginary’?

Big Tech will tell you that this is Luddite, doomster chatter, but with an eye on spectacular profits they would say that, wouldn’t they? What perhaps should worry us most is how AI is being used not simply for commercial purposes – the profits of Apple, Microsoft, Alphabet, Nvidia and other companies have soared of late – but how it serves to consolidate power in the hands of certain elites.

The latter do not want us to think too much about such things. That’s why simple acquiescence to this technology is so dangerous. It enables the powerful to remain so.

Dr Richard Hil is Adjunct Associate Professor in the School of Human Services and Social Work at Griffith University, Gold Coast, and Honorary Associate at the Centre for Peace and Conflict Studies, University of Sydney. Richard’s more recent books include Whackademia: An Insider’s Account of the Troubled University, published in 2013 by New South, and Selling Students Short: Why you won’t get the university education you deserve, published by Allen and Unwin in 2015.
