A Historical Context of the Risks of AI

Professor Adib-Moghaddam explores the historical context of bias in AI and highlights the challenges of creating solutions to its risks.

Today, artificial intelligence seems all-pervasive. Every computer, smart device and app is teeming with options to add AI functionality, even when it is unwanted or unrequested. There is potential for AI to bring positive changes, but there are also great risks.

SOAS World spoke with Professor Arshin Adib-Moghaddam, Professor in Global Thought and Comparative Philosophies and Co-Director of the SOAS Centre for AI Futures, to discuss the historical context and potential future of AI.

Professor Adib-Moghaddam’s new book The myth of good AI: A manifesto for critical Artificial Intelligence has just been published. 

What is AI?

Artificial intelligence is really just systems that mimic human intelligence. It’s the first technology in history that can not only match human intelligence but surpass it at some tasks, and potentially at every task we perform.

We’ve had enhancing technologies before. Industrialisation, for example, brought systems of production that increased productivity, and even the internet was a tool for maximising productivity, connectivity and outreach. However, these never entered the realm of intelligence, of being creative and productive on a scale that only humans had claimed before.

With artificial intelligence, whether generative AI or artificial general intelligence (and generative AI is the first step towards artificial general intelligence), there is an immense capability for agency and subjectivity.

AI systems, even the relatively primitive generative ones we currently have, possess subjectivity: they create, and they create on a vast scale every day. This is an entirely new development in technology, which partly explains the suspicion it attracts and its potential dangers.

Why are AI models biased?

We have to consider the data that feeds into AI models, because as humans we are also socialised through data: how we are raised in a family, our formal education, our society. It’s a dialectic between the subject and external influences, and that constitutes our principles and our ideas. It’s not very different with AI systems. They are data-dependent, and one way of finding out where bias comes from is to look at the data itself.

Critical social scientists have long established that the data feeding these AI systems is biased and Eurocentric. It’s not all-encompassing, it’s not inclusive, and therefore we should not be surprised that AI systems are also biased.

We have to go back to the data and find out what we can do better in that regard in order to build better AI systems. That’s what philosophy has always done: go back to the root causes of thinking in order to understand how we can be better at it. It’s the same process with deciphering AI.

The training data for large language models draws heavily on Anglophone sources, but AI in China or other countries will draw on other languages and sources. This can be equally subjective and biased. Centricity can come from anywhere.

In many ways you have to go back to the Enlightenment, when the disciplines that we are engaged in were themselves created. One of the bases of the modern disciplines was that racism was considered a science. It was a particularly European phenomenon in the way it was enacted and taught.

It was a system of governance at the same time. There had previously been discrimination, racial bias and forms of ethnic cleansing, but they were not institutionalised or imagined as a science in the way they were in the European Enlightenment.

Wonderful things were being discovered, and we still benefit from the medical sciences of the period, but at the same time people were measuring the skulls of children to determine whether they belonged to a higher or lower-ranked race, and this was believed to be a scientific approach. The disciplines were created by heterosexual men with a particular worldview and a particular standing within society, and they spoke to a hierarchisation of society that served them.

There was a political purpose: to create a system of power. When you claim there is a natural law and ‘science’ behind it, you can go out into the world and tell people that they’re racially inferior, that this is why they need to be governed, why they need to be civilised. It’s an incredibly effective strategy of governance, and it helps explain the success of the imperial systems.

This thinking tainted all the disciplines, in particular the social sciences and humanities. They became an ideology serving that system.

Fast forward to what is happening today and it’s only natural to see residues of this still lingering. Artificial intelligence, created primarily in the Anglo-American world and the European-American centres, is therefore tainted by that.

If you study the social sciences in China or Iran or elsewhere, you see that it doesn’t help to substitute one type of centricity for another. There is a Chinese alternative to ChatGPT, and it has a clear Chinese bias, referring to Sun Tzu, Confucius and other Chinese philosophers; this is just another language of power.

Ensuring the inclusivity of AI systems requires stripping things back to the human and translating that into manageable concepts that accentuate our commonality. This will create better systems, because they will speak to all of humanity, and therefore to the European, the Chinese or anyone else.

That’s the ambition; it is not here yet. When I speak with policymakers, it’s not about an indictment of imperial histories; it’s about a scientific process to create better systems, and that’s the way to do it.

How will that be possible when these inherent biases serve the people who own the AI companies?

This is the dilemma, but it’s also one that academics deal with on a daily basis. Sometimes academics can see a solution, but at the same time one has to be humble, knowing that no one cares.

That's the reality in mundane, political terms. It doesn't make a difference because outside of the ivory tower, there are dynamics that are not interested in the truth, but are interested in profit or power. It’s an ongoing struggle that we lose because we are seldom invited into the corridors of power in a decision-making capacity.

The only thing one can do, and it’s what philosophers have always done, is to hold up the truth and say: “This is my definition, my perception of the truth. Read it, as we believe it will yield better outcomes.”

There are examples where this worked and created some of the civil society institutions that we have today. With AI it’s more difficult, because it’s such an oligopolistic, monopolised realm, and it’s such a fast-moving technology: it has agency, it’s self-creative on an immense scale and with immense velocity. The changes are so vast that politicians can’t cope, so it happens independently of everything else. That’s the nature of this technology, and also its inherent danger.

SOAS and AI

Digital media has a long pedigree at SOAS, but the postgraduate course AI and Human Security was the first institutional manifestation of research-led teaching of artificial intelligence. It was initially controversial because it was not a typical SOAS topic, but it approaches the subject conceptually and theoretically from the perspective of non-Eurocentric thinkers: a comparative global outlook reflected in its method, case studies and content.

The SOAS Centre for AI Futures was launched in 2023 with colleagues including Dr Somnath Batabyal and Dr Fabio Gygi to look at how artificial intelligence writ large manifests in the context of the Global South.

My own [Professor Arshin Adib-Moghaddam’s] approach is to the Global South not as a geography but as an appreciation of commonalities: some people in deprived areas of Birmingham or East London may have more in common with deprived areas of the Global South than with Kensington, Chelsea or other affluent areas of the UK.

My book Is Artificial Intelligence Racist? was galvanised by the institutional infrastructure that is emerging. It is being followed by a new series on artificial intelligence and the Global South called AI Futures. The first in the series, The myth of good AI: A manifesto for critical Artificial Intelligence, has just been published.

We now have a network of AI research that speaks to SOAS strengths. It’s a critical approach, which does not dismiss that AI can work, but we look at the areas of society in which it creates havoc. We explore AI and the future of torture, and how this is already evolving: AI assistance is being incorporated into forms of interrogation.

We need to look at these other areas, not just at how AI already works in the medical field and the wonders it could do there, although there’s much material on this because it’s also a marketing strategy led by the tech giants.

From an objective scientific perspective, we need to know what could be wrong in order to have a strategy that is all-encompassing and that understands and appreciates the blind spots as well.

What should alumni know about AI and what can they do?

This is a technology that potentially threatens everything that we know at the moment. This is not hyperbole: it is an inherently dangerous technology because it is, in a sense, self-creating, and it not only mimics human intelligence but already outdoes it in many ways.

This is dangerous and has to be managed. We can't always trust politicians to manage it because they’re surrounded by vested interests. We need the independent experts, the independent intellectuals, independent philosophers and thinkers to be present in the conversation and that will yield better results.

Alumni need to be educated and informed. It’s the individual’s responsibility to educate oneself, and out of education forms of constructive engagement can emerge. This also explains the importance of universities, and of their independence, as creative places where people can think and offer solutions.
