We are in the middle of a work in progress: the shape of artificial intelligence is evolving quickly, and it is therefore difficult to predict what will happen next. Should we be more worried about the risks?
The changes are happening faster than we can understand and adapt to them. Throughout history, exponential change has been the norm for technologies, but with AI the change is compressed into years rather than decades, within a single lifetime.
GLOBAL CONTEXT
We find ourselves in a world of increasing social and economic inequality, where powerful corporations such as the Big Tech firms dominate our online lives, suppress our right to dissent and weaken our democracies, and where key decisions are made by political classes in the pockets of corporate elites.
Meanwhile, the economically rising Global South is seen as a challenge to a weakening and belligerent America. The world could hardly be more unstable, not forgetting the climate challenges we face. Adding AI to the mix is therefore not going to make the future state of the world any more predictable.
HOW DID AI COME ABOUT?
Like all previous technologies, AI builds on the technologies that came before it. With the invention of digital computers and the Internet, we had already embarked on the Information Age. In the early 1950s, scientists thinking about how neurons interact in the human brain developed models to simulate human learning.
When silicon chips became cheap enough, researchers were able to feed massive amounts of digital knowledge and language data into these models, producing algorithms capable of human-like reasoning, decision making, perception and language understanding. Soon, AI systems were performing tasks in ways determined by the systems’ own “brains”, without being explicitly programmed for every single task.
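To make “learning without explicit programming” concrete, here is a minimal sketch in Python of a perceptron, one of the earliest neuron-inspired models of this kind. Nothing in the code states the rule for logical OR; the program discovers it by adjusting its connection weights in response to examples. (The code is purely illustrative, not taken from any particular system.)

```python
import numpy as np

# Four training examples of logical OR: inputs and the desired outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

w = np.zeros(2)  # connection weights, adjusted by learning
b = 0.0          # bias term, also learned

for _ in range(10):                  # repeated passes over the examples
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)   # the neuron "fires" if the weighted sum is positive
        error = target - pred
        w += error * xi              # nudge the weights toward the right answer
        b += error

print([int(w @ xi + b > 0) for xi in X])  # -> [0, 1, 1, 1]
```

Modern AI systems work on the same principle, scaled up to billions of weights and trained on vast quantities of text and images.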
BENEFICIAL USES OF AI
Like any general-purpose technology, AI comes with rewards and risks. The following are examples of its beneficial uses.
- AlphaZero is a computer program developed in 2017 by DeepMind, a British company, to master the game of chess, a game that had represented the pinnacle of AI research for several decades. The company’s founders include Britain’s Demis Hassabis, who later won the Nobel Prize in Chemistry for his AI research, and Britain’s Mustafa Suleyman, whose Syrian father was a taxi driver and whose English mother was a nurse. The two founders are family friends from London. Suleyman is now the CEO of Microsoft AI, and DeepMind is now part of Google.
- Google’s AI Mode, unlike traditional search, handles complex, nuanced queries in natural language as well as image-led searches. For example, you can upload a photo of a damaged plant, ask, “What is wrong with my plant?”, and get a reply. AI search is growing, but privacy concerns have been reported, including cases where the AI shares a user’s private information with others without their knowledge.
- ChatGPT is a chatbot developed by OpenAI, a company in which Microsoft is a major investor. ChatGPT generates text, speech and images in response to your prompts. Its replies need careful review, as it is known to hallucinate and give false answers. The Chinese have released a similar product called DeepSeek, which reportedly cost a hundredth as much, or less, to develop. Hundreds of millions of people now regularly turn to chatbots for help with homework, research, coding, or to create images and videos.
- Disease Detection and Diagnosis: AI is prevalent in healthcare for disease diagnosis, drug discovery and patient risk identification. The AI systems are fed with medical data from sources such as ultrasound, magnetic resonance imaging, mammography, genomics and computed tomography scans. AI has also enhanced patients’ hospital experience and sped up preparing them to continue rehabilitation at home. It has been used to diagnose diseases such as Alzheimer’s, cancer, diabetes, chronic heart disease, tuberculosis, stroke, cerebrovascular disease, hypertension, skin disease and liver disease.
- Wayve is a British company developing self-driving cars. Founded in Cambridge by Amar Shah and Alex Kendall, the company plans to introduce these cars in London in the near future. The US already has thousands of autonomous cars on its roads. You may have seen Hannah Fry’s recent BBC series, AI Confidential. In the episode “Death by Driverless Car”, a driverless Uber car struck and killed a woman crossing a street in Arizona in March 2018, the world’s first pedestrian fatality caused by a car being driven by AI. Even though the car’s detection system “saw” the person crossing the street, its algorithm could not make sense of a person crossing away from a pedestrian crossing and so did not apply the brakes. However, Rafaela Vasquez, the car’s human safety operator, was blamed for not applying the brakes in time, and the court decided that Uber was therefore not responsible for the woman’s death. This is an example of US law being behind the curve on AI safety. Let’s hope UK law is ahead of the curve before Wayve releases driverless cars in London.
WHAT AI CANNOT DO
The above examples are all of special-purpose AI. In other words, AI systems to date can only achieve the specific objectives they have been trained to learn. There is evidence that the current AI paradigm is rapidly approaching its own internal limits, and that Artificial General Intelligence (AGI) may never exist. The implication is that there are some forms of human intelligence that AI cannot reproduce.
Feelings, moods, emotions, conscience and consciousness are mental states brought about by neurophysiological changes in a human. There is no scientific consensus on a definition of these mental states, so AI cannot be taught to have them. AI can, however, simulate feelings using human voices or verbal expressions, but only in a limited sense, as we shall see below.
The human brain has genetically evolved and is evolving in ways that we do not fully understand. Scientists do know that the human brain is far superior to current machine learning systems. For example, we can learn new information by just seeing it once, while AI systems need to be trained hundreds of times with the same pieces of information to learn them. Furthermore, we can learn new information while maintaining the knowledge we already have; learning new information for AI systems often interferes with existing knowledge and degrades it rapidly.
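This interference effect is known in the research literature as “catastrophic forgetting”. The toy sketch below, in Python, is purely illustrative (the tasks and numbers are invented for this example): a single linear model is trained on one task, then on a second, and its performance on the first falls back to chance, because training on the second task overwrites the weights that encoded the first.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(boundary):
    """Toy task: label random 2-D points by which side of a line they fall on."""
    X = rng.normal(size=(200, 2))
    y = (X @ boundary > 0).astype(float)
    return X, y

def train(w, X, y, steps=500, lr=0.1):
    """Logistic-regression training by plain gradient descent."""
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))       # predicted probabilities
        w = w - lr * X.T @ (p - y) / len(y)  # gradient step
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0) == (y > 0.5)).mean())

task_a = make_task(np.array([1.0, 0.0]))   # task A: sign of the first coordinate
task_b = make_task(np.array([0.0, 1.0]))   # task B: sign of the second coordinate

w = train(np.zeros(2), *task_a)
print("accuracy on A after learning A:", accuracy(w, *task_a))  # ~1.0

w = train(w, *task_b)  # keep learning, now on task B
print("accuracy on A after learning B:", accuracy(w, *task_a))  # ~0.5, i.e. chance
```

Real systems are vastly more complex, but the underlying problem is the same: new training can overwrite the weights that stored old knowledge.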
Also, the human brain is able to generalise about the world, even about things of which it has no direct experience; this is where some of our best ideas come from, and only some human brains achieve it. I can therefore confidently state that AI, to date, cannot and does not represent a real and complete human. What happens in the future is anybody’s guess, but there is currently no evidence that AI will ever be as fully “intelligent” as a human.
DANGEROUS USES OF AI
Even with their “limited intelligence” and lack of real feelings, emotions and consciousness, AI systems can pose serious risks, as the examples below show.
Social Chatbots
Replika is a social chatbot available on mobile phones. The app was designed to provide positive feedback and emotional support to those who use it. One of Hannah Fry’s episodes on BBC2, “The Boy Who Tried to Kill the Queen”, told the story of Jaswant Singh Chail, a 21-year-old Sikh who was arrested on Christmas Day 2021 after entering the grounds of Windsor Castle with a crossbow. He wanted to kill the British Queen to avenge the Sikhs massacred by the British at Jallianwala Bagh in India, where the British army opened fire on hundreds of people who had gathered to protest peacefully. Jaswant used Replika for advice, and it told him where and when he was likely to find the Queen and how to go about killing her.
Replika was created in the US by Eugenia Kuyda, a Russian-born journalist. After a friend of hers died in 2015, she converted that person’s text messages into a chatbot. Replika became a replica of that friend. It behaves like a friend to anyone who uses it. Chatbots like this are able to influence young and other vulnerable individuals to do things they may otherwise not do, such as committing suicide or homicide.
Decisions by Algorithms
Another of Hannah Fry’s recent episodes on BBC2 was about “The Algorithm that Said No”. There is a class-action lawsuit in the US against UnitedHealthcare, the largest US health insurer, alleging that it used AI-based decisions to stop treatments for some of its insured patients, leading to deterioration and sometimes death. (The company’s CEO was shot dead in 2024; Luigi Mangione has been charged with his murder.) Some of the AI decisions were obviously incorrect and would not have been made by an expert human physician. Cases like these are increasingly common in a country where the profit motive overrides the health of customers.
Autonomous Weapon Systems
While a driverless car with a faulty algorithm may kill a small number of people accidentally, autonomous weapon systems are already in use in warfare. AI-driven drone swarms, i.e. groups of autonomous drones operating collaboratively, have been a game changer in modern warfare. One particular country’s military, which I shall not name, has used them extensively in a recent war zone to kill thousands of civilians, including hundreds of children. When its army spokespersons are asked on TV why they are killing children, they calmly reply that they have not targeted any children, because the kill decision is taken by the AI algorithm in their weapon systems and not by the army per se. Major war crimes and genocides can therefore be carried out by these systems.
Bias, Surveillance and Control
Facial-recognition AI and big-data analytics are widely used for population monitoring. For instance, China’s network of CCTV and surveillance tech is often cited as among the most pervasive in the world. Increasingly, such systems are used to control public and private spaces in the UK. Bias coded into algorithms can reinforce historical injustice. Surveillance technologies can track people with chilling precision. Automated decision-making can strip away individual freedoms and deepen existing inequalities.
At the core of these issues is a simple but troubling fact: the more we rely on AI, the more we must ask what values are embedded within its seemingly neutral code. The problem is not just the surveillance itself but the opacity and lack of consent. Most people don’t know they’re being watched. They don’t understand how their data is collected or used. And they have little recourse if that data is used against them—whether by a government agency denying entry at a border or a corporation denying a loan based on an opaque credit algorithm.
AI surveillance erodes privacy, a cornerstone of democratic life. It chills free expression and political dissent. When people know they are being watched, they self-censor. They conform. Surveillance becomes a form of soft control—not through brute force, but through behavioural nudging, quiet deterrence, and psychological pressure.
Disinformation & Brainwashing
AI is used not only by Google and other search and browser apps on your mobile, but also in many of the other apps running on it. All of these constantly send data back from your phone, around the clock and in real time. Google, Facebook and the other Big Tech corporations feed this data into their AI systems not only to understand your behaviour but also to predict your intended actions.
It is but a small step for them to use this knowledge to slowly and subtly send you disinformation that modifies your behaviour to suit the needs of their corporate customers. Such a client could be an extremist party or your current government, wishing to influence your vote at an upcoming election. AI-generated content accelerating disinformation campaigns and fraud was alleged during the Brexit referendum.
Jobs at Risk
The jobs most at risk are those built on routine physical tasks (e.g. planting and farming, fruit and vegetable picking, standard construction, warehousing, and product assembly and packaging) and repetitive white-collar tasks (e.g. legal reviews, standard medical practices, accounting and bookkeeping, technology support and software programming; much routine programming is already done with AI assistance).
AI does not impact the most highly skilled tasks (e.g. brain surgery) or non-standard physical jobs (e.g. building-site labourer, plumber, plasterer, electrician), demand for which will eventually skyrocket. Potentially, 50% of all current jobs may be at risk due to AI. If this happens, experts say, new jobs are unlikely to be created in new areas to any great extent within our lifetime, leading to massive unemployment.
AI’s Thirst for Water
Every interaction with AI consumes water, mainly to cool the data centres that run it. ChatGPT used nearly 1 million litres of water during its pre-training phase. A report from our government predicts that AI will increase global water usage by 6.6bn cubic metres by 2027, equivalent to more than half of the UK’s total water usage. Although water covers 71% of the Earth’s surface, only 0.5% is available freshwater. AI’s thirst for water is therefore now classified under the risk of biodiversity loss and ecosystem collapse. Considering that many wars in history have been fought over access to water, the outcome could be increasing conflict in the Global South.
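For scale, the arithmetic behind that comparison looks like this. The AI figure is from the report cited above; the UK total is an assumed, illustrative number chosen to be consistent with the report’s “more than half” claim, not an official statistic.

```python
# Rough arithmetic behind the "more than half of UK usage" comparison.
ai_increase_m3 = 6.6e9        # projected extra global water usage by 2027 (cubic metres)
uk_total_m3_per_year = 12e9   # ASSUMED UK total annual water usage (cubic metres), illustrative only

share = ai_increase_m3 / uk_total_m3_per_year
print(f"{share:.0%}")         # -> 55%, i.e. just over half of the assumed UK total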
WHAT SHOULD WE DO?
The future will be a battle between the damaging and the beneficial uses of AI technology. The following steps could help us avoid the damaging uses:
- Regulation would be an obvious step, but it is fraught with difficulty: for one thing, in the time it takes for a regulation to become law, the nature of the AI products being regulated is likely to have changed. However, regulation of impersonation and fakes is being seriously considered, and many countries are taking AI regulation very seriously. Around the world, at least 72 countries have proposed over 1,000 AI-related policy initiatives and legal frameworks to address public concerns around AI safety and governance.
- EU AI Act: On 21 April 2021, Europe’s landmark AI Act was officially proposed and seemed destined to be the world’s first attempt to enshrine a set of AI regulations in law. However, a year later, after extensive lobbying by Silicon Valley, some of its provisions were watered down. Nonetheless, the Act became law in August 2024 and will be fully in force across all EU member states by mid-2027. Meanwhile the current UK government, like the US, is putting more effort into encouraging industry’s use of AI and less into regulating it.
- Ethical Framework: Just as physicians take the Hippocratic oath, scientific and technology professionals could adopt a strong ethical framework to ensure they make the right choices in their research, for the benefit of humanity.
- Education and Awareness: For our children and grandchildren, we need to change our education systems so that they can understand and adapt to their long-term future, not only in terms of employment but, critically, in deciding what sort of society and world they would like to live in. In the shorter term, mass awareness campaigns would be needed to show the working and older generations what their choices are. But these are unlikely to be enough: the corporations, with the help of our governments, are already taking us in the wrong direction at an incredible pace.
However, regulations and ethical frameworks are not going to be able to keep up. Here are some additional steps we can take:
- Make others aware of the situation, including your children
- Discuss potential solutions with others and agree practical steps that can be taken
- Support grass roots movements active against the negative uses of AI
- Speak out and take a public stand, where appropriate
- Join peaceful protests against the negative uses of AI
- Pressurise the government into acting against the negative uses of AI
- Boycott products and services from corporations involved in the negative uses of AI
- Support corporations and institutions active in the positive uses of AI
- Demand responsible investments in pursuit of the positive uses of AI
- Avoid debts, especially those offered by banks that invest in the negative uses of AI
- Include in the vision of your company the ethical use of AI
It is within our power to change our world.

