6 Comments
Jun 24 · Liked by Dr Ioannis Syrigos

Humanity does not need AI at all; it needs to be deleted and stopped. Becoming smarter humans is better than having AI. As a healthy species we need to always be learning, even little things daily, or at least have that mindset: to strive for knowledge of any kind, big or small, complicated or uncomplicated. Those kinds of values must be shared and taught starting with children, and they could change our destiny as a species exponentially; it certainly won’t hurt anything or make anyone worse. Society has become dumber from technology and has almost forgotten how to think for itself; too much has come too easily from technology, the internet, and the environment that current technology has created. We need new ways and means for a better future society, to keep existing here or in the universe!

Jun 2 · Liked by Dr Ioannis Syrigos

Dr Syrigos,

I am deeply interested in and also gravely concerned about the deployment and consequences of AI development.

I would prefer to leave God out of the discussion, as it adds a layer of emotionality that is neither helpful nor empirically verifiable.

AI is here, and as you admonish, we, humanity, are bringing it into existence.

I propose a corollary to Epicurious’ comment on attribution: if people are a developmental part of nature, then whatever people do is “natural.” Right, wrong, or inevitable, it leaves us perpetuating a system that could destroy our species, or at least our current ill-conceived domination of this planet. “We have met the enemy, they are us,” to paraphrase a Pogo cartoon. Is your hesitance (well founded, IMO) about continuing AI development based on ideological constraints or on a practical loss of human control? Some would argue that any sentience is proof of a soul; most “primitive” cultures hold a version of that insight. Would non-biological systems necessarily be exempt?

Recent studies of tree-fungus (mycorrhizal) information transfer between individuals and species are already pushing the boundary of “intelligence” and emotions.

Two points I’d like to make:

1) Humans are not the only intelligent entities on this planet, and we are unlikely to be the only ones in the known universe, though that remains unverified.

2) AI evolution, if it ever becomes truly self-directing, would with certainty not follow a human-conceived or human-orchestrated evolutionary path.

When/if AI ever develops its own evolutionary capability, I suspect humanity’s self-interest will not be AI’s primary concern. People, as is true of most species, are basically only important to themselves. A self-evolving AI would (again, my opinion) not waste resources being “evil.” Perhaps that should be humanity’s greatest fear: being irrelevant.

I thank you with every sincerity for broaching this subject and for your obvious, well-founded concern. Another paraphrase: oh, the joy of living in interesting times.

May 31 · Liked by Dr Ioannis Syrigos

AI is obviously a potential threat to humanity, not only because it may become independent of human control but also, and far sooner, because it is becoming essential to any manufacturing or control system. AI doesn’t need to be better, just necessary.

Presently AI is not intelligent; it creates statistically relevant deductions that are often simply wrong, or hallucinations, yet still statistically relevant to the question addressed.

If the old warning was “be careful what you wish for,” then AI pushes us into the realm of being careful what you ask, as eventually even an implication can induce statistically relevant hallucinations.

Who will have control of the “off” switch?

The danger isn’t only in falling behind an algorithm that deduces outcomes faster and perhaps better than humans; it is in our dependence on such systems to navigate daily life, relationships, and our sense of personal value and responsibility.

author

I agree with your points, Chris.

The average person may not realize that AI is not just a single algorithm. To put it simply, it is an algorithm capable of creating its own algorithms. It functions like an artificial brain, an artificial neural network. Nobody knows its limits (if there are any).
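A minimal sketch of that idea, assuming only a toy perceptron and plain NumPy (nothing specific to any particular AI system): the decision rule below is not written by hand, it is adjusted from examples, which is the sense in which one algorithm ends up producing another.

import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn the logical AND function from four labelled examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

w = rng.normal(size=2)   # the "rule" starts as random numbers
b = 0.0
lr = 0.5                 # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    pred = sigmoid(X @ w + b)          # forward pass: apply the current rule
    grad = pred - y                    # error signal (cross-entropy gradient)
    w -= lr * (X.T @ grad) / len(X)    # the network adjusts its own rule
    b -= lr * grad.mean()

print(np.round(sigmoid(X @ w + b)))    # expected: [0. 0. 0. 1.]

Scale that loop up by many orders of magnitude, and you have, roughly, the artificial neural networks described above.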


Dr Syrigos, I did not intend to minimize the complexity or reach of current AI systems; astonishing is insufficient praise.

It’s my non-researched opinion that self-directed neural networks are still nascent examples of the network functioning of the human brain. The obvious comparison, that both humans and AI can reach erroneous conclusions and that both systems hallucinate, is not comforting.

Embedding a neural, self-generative network into technology potentially as powerful as quantum computing represents a literal quantum leap into the unknown. Generally, I’m game.

Thirteen billion years of “natural evolution” produced us, an organism capable of creating AI; any limitation is ours, not nature’s.

If natural evolution continues, and there have been many historical dead ends, then AI, as a natural consequence, is inevitable for humanity and also perfectly natural.

As a math/computer investigation exercise, AI is, so far, both a stunning computational success and a validation of human ingenuity.

I hope we survive each other.

author
Jun 1 · edited Jun 2

Absolutely, Chris.

AI is neither a natural consequence nor a product of natural evolution, as many mistakenly believe and desire. Although it is an amazing tool we developed, one that will change history, I view it more as an intelligent thing devoid of a soul. As you mentioned, the acceleration of computing power, especially with the advent of quantum computing, will lead to unimaginable and unpredictable consequences for AI development. Ultimately, I believe it will need to be halted. Allowing something to evolve much faster than us will eventually result in it taking control of us. I cannot see how we can control something more intelligent than we are. We must proceed with caution and foresight.
