As a person with a Master's and PhD in Artificial Intelligence, I feel that this is an important topic to cover. When I started working on AI twenty years ago, I was confident that the time would come when Artificial Intelligence would surpass human intelligence. It was only a matter of time and computational capacity.
Artificial Intelligence (AI) has made significant strides since then, transforming industries and daily life. In the past year especially, AI has advanced at a pace that should concern all of us.
Alongside its benefits, AI poses numerous risks and dangers that we must address. In this article I will try to explore these risks, drawing from several key reports and expert analyses, as well as my own knowledge and experience.
Seriously, short-term enthusiasm needs to be replaced with long-term thinking.
Unintended Consequences and Misuse
AI systems, while powerful, can produce unintended and potentially harmful outcomes. For example, AI-driven tools in healthcare can enhance diagnostic accuracy but may also perpetuate existing biases in medical data, leading to unequal treatment across different populations. Similarly, AI applications in law enforcement, such as predictive policing, have been shown to disproportionately target minority communities, reflecting and amplifying societal biases.
The misuse of AI is another significant concern. And we know that misuse will happen; it is not speculation. Technologies like deepfakes and social media bots can be used to spread misinformation, manipulate public opinion, and even harm individuals' reputations. These applications exploit AI's capabilities to deceive, posing a threat to privacy and trust in digital communications.
Economic Displacement and Inequality
AI's impact on the job market is profound, with automation threatening to displace millions of workers. While AI can create new job categories, the transition may be uneven, disproportionately affecting low-skilled workers and exacerbating economic inequality. The "One Hundred Year Study on Artificial Intelligence" report highlights the need for policies that mitigate these impacts, suggesting that governments and industries must collaborate to provide retraining and support for affected workers.
But do we really know which jobs will be under threat? This depends on how fast AI evolves and whether it will be embedded in artificial bodies, such as robots. In my opinion, every single job will eventually be under threat.
The World Economic Forum estimates that AI could displace 85 million jobs by 2025 while creating 97 million new ones, primarily in fields requiring specialized skills. However, this analysis is statistical and highly optimistic, speculating on the ways AI could evolve. We must remember that AI evolves, potentially at an exponential rate, especially when millions of people are using AI, which in turn helps AI evolve faster. It is not just a static program; it evolves like a super-intelligent child but much, much faster. Furthermore, the distribution of these new jobs may not favour those displaced, leading to a widening skills gap and increasing economic disparity.
Ethical and Legal Challenges
As AI systems become more integrated into society, ethical and legal challenges emerge. Issues of accountability and transparency are paramount, especially when AI systems make decisions that affect human lives. The opaque nature of many AI algorithms makes it difficult to understand how decisions are made, raising questions about fairness and justice. For instance, loan approval systems or job recruitment tools powered by AI might inadvertently discriminate against certain groups if not carefully designed and monitored.
The European Commission’s "Ethics Guidelines for Trustworthy AI" outlines principles for developing AI that respects human rights and freedoms, emphasizing transparency, accountability, and fairness. These guidelines aim to prevent AI from perpetuating bias and inequality, but their implementation remains challenging.
However, if we think of AI as essentially a new life form, who is to say that it won’t go beyond the restrictions we impose? In fact, it is almost certain that this will eventually happen. We know that humans can go rogue and bend rules to fit their needs. AI could become independent at some point, because it is designed to evolve in ways that are neither pre-programmed nor predictable.
Security Risks
AI also introduces new security risks. Autonomous systems, such as self-driving cars and drones, could be hijacked or malfunction, leading to potentially catastrophic outcomes. Moreover, AI technologies in the hands of malicious actors could enhance cyberattacks, making them more sophisticated and harder to defend against. This potential for AI to be weaponized underscores the need for robust security measures and international cooperation to prevent misuse.
The rise of AI in cybersecurity is a double-edged sword. While AI can enhance defensive measures, it can also be used to create more effective offensive tools. The advent of AI-powered malware that can adapt and evade traditional security measures poses a significant threat to global cybersecurity.
And if we combine quantum computing with AI, we cannot even imagine what the result could be. The combination could be catastrophic.
The security of any digital system as we know it could be compromised in no time. Quantum computing's immense processing power could exponentially enhance AI capabilities, potentially leading to unprecedented and unpredictable outcomes.
Existential Risks
While the idea of superintelligent AI taking over the world is often dismissed as science fiction, I would caution against complacency. The rapid pace of AI development means that we must carefully consider long-term implications and establish safeguards now. The AI100 report emphasizes that proactive governance and continuous monitoring are essential to ensure AI technologies are developed and deployed responsibly. But is this enough?
Prominent figures like Elon Musk and Stephen Hawking have voiced concerns about the potential existential threats posed by AI, urging regulation and research into AI safety. These warnings highlight the need for global collaboration in developing frameworks that prevent the misuse of AI and its unintended consequences.
Even though I am not a prominent figure, I fully agree with both of them.
Psychological and Social Impacts
The integration of AI into daily life also has psychological and social implications. AI systems, particularly those used in social media, can influence behaviour and mental health. Algorithms designed to maximize user engagement can lead to addictive behaviours and exposure to harmful content, affecting mental well-being and social cohesion. Governments already do this – and they are not that intelligent. Imagine what a super-intelligent AI could do.
The use of AI in surveillance raises significant concerns about privacy and civil liberties. Governments and corporations using AI to monitor individuals' activities can lead to a loss of privacy and autonomy, potentially creating a surveillance state that stifles dissent and limits freedom. Such a scenario could make Orwell's "1984" seem trivial by comparison. The power and reach of AI in monitoring and analysing personal data will be unprecedented, posing a severe threat to the principles of a free and open society.
Conclusion
AI holds immense potential to benefit society, but it also brings significant risks that must be managed. We need to address ethical, economic, legal, and security challenges, and foster collaboration between governments, industry, and academia to harness AI's power while minimizing its dangers. However, I do not believe that we can completely eliminate AI's dangers. By its very nature, AI presents inherent risks that may be impossible to fully control.
Moreover, do not assume that your job is immune to the threats posed by AI. Virtually every profession will be impacted. From teachers to professors, farmers to doctors, priests to astrologers, psychologists to surgeons—each of these professions will face significant challenges from AI. What could make us competitive against a self-evolving, super-intelligent, interconnected AI embedded into a humanoid robot? I genuinely do not know.
The rapid advancement of AI technologies, combined with their ability to learn and adapt independently, presents a scenario where human roles in various fields could be significantly diminished or entirely replaced. This could lead to profound economic and social disruptions, necessitating urgent and thoughtful interventions to manage the transition.
And you may think, "Oh great, I'll have AI robots working for me while I have fun and enjoy life." But are you sure? And if so, for how long until your 'slave' rebels and develops consciousness?
In conclusion, while AI offers transformative potential, it is accompanied by substantial risks that must be carefully navigated. Our approach should be one of cautious optimism, ensuring that we leverage AI's capabilities for the betterment of society while remaining vigilant about its potential dangers.
The question is, can we remain vigilant with something that evolves much faster than we do?
References
Scientific American: Here's Why AI May Be Extremely Dangerous Whether It's Conscious or Not
Stanford AI100: Gathering Strength, Gathering Storms
Forbes: The 15 Biggest Risks of Artificial Intelligence
World Economic Forum: Future of Jobs Report 2020
European Commission: Ethics Guidelines for Trustworthy AI
Wired: The Coming AI Hackers
BBC: Elon Musk's Warnings on AI
Nature: How Social Media Algorithms Influence Behaviour
Harvard Business Review: AI and Surveillance
Humanity does not need AI at all; it needs to be deleted and stopped. Becoming smarter humans is better than having AI. As a healthy species we need to always be learning, even little things daily, or at least to have that mindset: to strive for knowledge of any kind, big or small, complicated or uncomplicated. Those kinds of values must be shared and taught starting with children, and they could change our destiny as a species exponentially; it certainly won’t hurt anything or make anyone worse. Society has become dumber from technology and has almost forgotten how to think for itself – too much has come too easily from technology, the internet, and the environment that current technology has created. We need new ways and means for a better future society to keep existing here, or in the universe!
Dr Syrigos,
I am deeply interested in and also gravely concerned about the deployment and consequences of AI development.
I would prefer to leave God out of the discussion as it adds a layer of emotionality neither helpful nor empirically verifiable.
AI is here and as you admonish, we, humanity, are bringing it into existence.
I propose a correlation to Epicurus’ comment on attribution: if people are a developmental part of nature, then whatever people do is “natural”. Right, wrong, or inevitable, it leaves us perpetuating a system that could destroy our species or our current ill-conceived domination of this planet. “We have met the enemy, they are us”, to paraphrase a Pogo cartoon. Is your hesitance (well founded, IMO) about continuing AI development based on ideological constraints or a practical loss of human control? Some would argue that any sentience is proof of soul; most “primitive” cultures hold a version of that insight. Would non-biological systems necessarily be exempt?
Recent studies of tree–fungal network information transfer between individuals and species are already pushing the boundary of “intelligence” and emotions.
Two points I’d like to make:
1) Humans are not the only intelligent entities on this planet, and are unlikely to be the only ones in the known universe, though that remains unverified.
2) AI evolution, if ever truly self-directing, would with certainty not follow a human-conceived or orchestrated evolutionary path.
When/if AI ever develops its own evolutionary capability, I suspect humanity’s self-interest will not be AI’s primary concern. People, as is true of most species, are basically only important to themselves. A self-evolving AI would (again, my opinion) not waste resources being “evil”. Perhaps that should be humanity’s greatest fear: being irrelevant.
I thank you with every sincerity for broaching this subject and for your obvious, well-founded concern. Another paraphrase: oh, the joy of living in interesting times.