What
is the political agenda of artificial intelligence?
Could AI single-handedly decide the course of our
history? Or will it end up as yet another technological invention that benefits
a certain subset of humans?
Santiago Zabala
ICREA Research Professor of Philosophy at the Pompeu
Fabra University
Claudio Gallo
Former La Stampa foreign desk editor and London
correspondent
Published On 17 May 2023
[Photo: hands of machine and human touching. Caption: While accepting that AI – like all era-defining technology – comes with considerable downsides and dangers, we do not believe that it could determine the course of history on its own, without any input or guidance from humanity, Zabala and Gallo write. Getty Images]
“The hand mill gives you society with the feudal lord;
the steam mill society with the industrial capitalist,” Karl Marx once said.
And he was right. We have seen over and over again throughout history how
technological inventions determine the dominant mode of production and with it
the type of political authority present in a society.
So what will artificial intelligence give us? Who will
capitalise on this new technology, which is not only becoming a dominant
productive force in our societies (just like the hand mill and the steam mill
once were) but, as we keep reading in the news, also appears to be “fast escaping
our control”?
Could AI take on a life of its own, like so many seem
to believe it will, and single-handedly decide the course of our history? Or
will it end up as yet another technological invention that serves a particular
agenda and benefits a certain subset of humans?
Recently, examples of hyperrealistic, AI-generated
content, such as an “interview” with former Formula One world champion Michael
Schumacher, who has not been able to talk to the press since a devastating ski
accident in 2013; “photographs” showing former President Donald Trump being
arrested in New York; and seemingly authentic student essays “written” by
OpenAI’s famous chatbot ChatGPT have raised serious concerns among
intellectuals, politicians and academics about the dangers this new technology
may pose to our societies.
In March, such concerns led Apple co-founder Steve
Wozniak, AI heavyweight Yoshua Bengio and Tesla/Twitter CEO Elon Musk, among
many others, to sign an open letter accusing AI labs of being “locked in an
out-of-control race to develop and deploy ever more powerful digital minds that
no one – not even their creators – can understand, predict, or reliably
control” and calling on AI developers to pause their work. More recently,
Geoffrey Hinton – known as one of the three “godfathers of AI” – quit Google “to
speak freely about the dangers of AI” and said he, at least in part, regrets
his contributions to the field.
We accept that AI – like all era-defining technology –
comes with considerable downsides and dangers, but contrary to Wozniak, Bengio,
Hinton and others, we do not believe that it could determine the course of
history on its own, without any input or guidance from humanity. We do not
share such concerns because we know that, just as is the case with all our
other technological devices and systems, our political, social and cultural
agendas are also built into AI technologies. As philosopher Donna Haraway
explained, “Technology is not neutral. We’re inside of what we make, and it’s
inside of us.”
Before we further explain why we are not scared of a
so-called AI takeover, we must define and explain what AI – as we encounter it
today – actually is. This is a challenging task, not only because of
the complexity of the product at hand but also because of the media’s
mythologisation of AI.
What is being insistently communicated to the public
today is that the conscious machine is (almost) here, that our everyday world
will soon resemble the ones depicted in movies like 2001: A Space Odyssey,
Blade Runner and The Matrix.
This is a false narrative. While we are undoubtedly
building ever more capable computers and calculators, there is no indication
that we have created – or are anywhere close to creating – a digital mind that
can actually “think”.
Noam Chomsky recently argued (alongside Ian Roberts
and Jeffrey Watumull) in a New York Times article that “we know from the
science of linguistics and the philosophy of knowledge that [machine learning
programmes like ChatGPT] differ profoundly from how humans reason and use
language”. Despite its amazingly convincing answers to a variety of questions
from humans, ChatGPT is “a lumbering statistical engine for pattern matching,
gorging on hundreds of terabytes of data and extrapolating the most likely
conversational response or most probable answer to a scientific question”.
Mimicking German philosopher Martin Heidegger (and risking reigniting the
age-old battle between continental and analytical philosophers), we might say,
“AI doesn’t think. It simply calculates.”
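The “lumbering statistical engine” Chomsky describes can be illustrated with a deliberately crude sketch: a bigram model that predicts the next word purely by counting which word most often followed it in its training text. The corpus and function names below are invented for illustration; real large language models use neural networks trained on vastly more data, but the underlying principle – extrapolating the statistically likely continuation rather than understanding anything – is the same.

```python
from collections import Counter, defaultdict

# A toy training corpus (invented for this example).
corpus = "the mill gives you society the mill gives you power".split()

# Count how often each word is followed by each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most common next word, or None if unseen.

    There is no reasoning here: just a lookup in a frequency table.
    """
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("mill"))  # prints "gives" - pure counting, no comprehension
```

The model “answers” fluently for words it has seen and fails silently on anything else; in Heidegger’s terms, it calculates rather than thinks.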
Federico Faggin, the inventor of the first commercial
microprocessor, the legendary Intel 4004, explained this clearly in his 2022
book Irriducibile (Irreducible): “There is a clear distinction between symbolic
machine ‘knowledge’ … and human semantic knowledge. The former is objective
information that can be copied and shared; the latter is a subjective and
private experience that occurs in the intimacy of the conscious being.”
Interpreting the latest theories of quantum physics,
Faggin appears to have produced a philosophical conclusion that fits curiously
well within ancient Neoplatonism – a feat that may ensure that he is forever
considered a heretic in scientific circles despite his incredible achievements
as an inventor.
But what does all this mean for our future? If our
super-intelligent Centaur Chiron cannot actually “think” (and therefore emerge
as an independent force that can determine the course of human history),
exactly who will it benefit and give political authority to? In other words,
what values will its decisions rely on?
Chomsky and his colleagues asked a similar question to
ChatGPT.
“As an AI, I do not have moral beliefs or the ability
to make moral judgments, so I cannot be considered immoral or moral,” the
chatbot told them. “My lack of moral beliefs is simply a result of my nature as
a machine learning model.”
Where have we heard this position before? Is it not
eerily similar to the ethically neutral vision of hardcore liberalism?
Liberalism aspires to confine to the private sphere of the
individual all religious, civil and political values that proved so
dangerous and destructive in the 16th and 17th centuries. It wants all aspects
of society to be regulated by a particular – and in a way mysterious – form of
rationality: the market.
AI appears to be promoting the very same brand of
mysterious rationality. The truth is, it is emerging as the next global “big
business” innovation that will steal jobs from humans – making labourers,
doctors, barristers, journalists and many others redundant. The new bots’ moral
values are identical to the market’s. It is difficult to imagine all the
possible developments now, but a scary scenario is emerging.
David Krueger, assistant professor in machine learning
at the University of Cambridge, commented recently in New Scientist:
“Essentially every AI researcher (myself included) has received funding from
big tech. At some point, society may stop believing reassurances from people
with such strong conflicts of interest and conclude, as I have, that their
dismissal [of warnings about AI] betrays wishful thinking rather than good
counterarguments.”
If society stands up to AI and its promoters, it could
prove Marx wrong and prevent the leading technological development of the
current era from determining who holds political authority.
But for now, AI appears to be here to stay. And its
political agenda is fully synchronised with that of free market capitalism, the
principal (undeclared) goal and purpose of which is to tear apart any form of
social solidarity and community.
The danger of AI is not that it is an
impossible-to-control digital intelligence that could destroy our sense of self
and truth through the “fake” images, essays, news and histories it generates.
The danger is that this undeniably monumental invention appears to be basing
all its decisions and actions on the same destructive and dangerous values that
drive predatory capitalism.
The views expressed in this article are the authors’
own and do not necessarily reflect Al Jazeera’s editorial stance.