On Fear of AI

AI is advancing fast, and with its rapid development and growing popularity, people are becoming increasingly concerned about where it’s going and what it will soon be capable of. The worry itself is understandable: we’re developing something similar to ourselves, yet potentially far more capable than anything we’ve ever built – and that frightens us. It seems to me that our fear of AI is also linked in part to our self-awareness, as well as to a partial lack of understanding of what’s at hand.

Here it’s important to understand where exactly the fear of AI stems from. It seems to me that the first root of this fear is a lack of information, or unintentional misinformation – by which I mean how AI is and has been presented in mainstream media. Movies like The Terminator; I, Robot; and The Matrix, for example, depict AI and robots as a threat without ever explaining what they are in essence.

Misleading headlines are also partly at fault here, as the media often presents AI through clickbait titles and shocking statements. I tend to think that another root of the fear of AI lies in how we understand intelligence. When we think of intelligence, most of us probably think of human intelligence and what it’s like to think as a human being. That in itself is understandable, as we’ve never encountered another intelligence closely resembling our own.

When we think of intelligence that isn’t our own, the second thing that comes to mind is aliens, which are also more often than not depicted as antagonists. The next statement may be far-fetched, yet I’ll still propose it: it seems to me that our cautious and fearful perception of alien intelligence and AI comes from self-awareness and from comparing the unknown to ourselves. From our own perspective, we’re the dominant species on this planet, and we’ve attained this position of apex predators thanks to our intelligence.

What have we done to other species similar to ourselves? We’ve exterminated them. What do we do to the other life forms on our planet? We hunt, farm, enslave and destroy them. As a matter of fact, we aren’t even that good at maintaining peace among ourselves. So based on our own track record, we assume that if there is an external or more advanced intelligence out there, it would most certainly try to destroy or enslave us, just as we’ve done to our own kind as well as to every other species. It seems to me that this experience-based assumption plays a big role in our fear.

Another important fear factor is technical misinformation: people simply don’t know how AI works. Between sensationalist headlines, a lack of technical understanding and our animalistic biases, it’s hard to form a coherent picture of what is at hand and of which fears are grounded in reality and which aren’t. Here we can also tie in the fear of innovation, which has accompanied our kind for its whole existence. Every innovation has been doubted, and many have been feared, simply because they had the potential to disrupt the habitual way of life.

Lastly, I think another reason there’s so much uncertainty around AI is the fear of responsibility. This can be compared to our experience with nuclear power: it could be used to generate energy, but also for destruction, and as we know, we didn’t hesitate to turn it against ourselves. The same principle comes into play here. We collectively understand that, at this very moment, we don’t understand ourselves well enough to build solid social structures. We tend to be greedy and careless, and we also tend to make mistakes. Now we’re developing something that has the potential to replace us (not everywhere, of course, but in many places nevertheless), and we have barely any room for error, as even a single flaw in the code could potentially lead to our destruction.

When we look at this abstract fear of ourselves that manifests in the image of AI, we need to understand that artificial and biological intelligence are fundamentally different, even though they share similar qualities. The most noteworthy difference is that our intelligence and instincts were programmed by our environment and by our experiences within it. Because we’re biological creatures, we strive for survival and reproduction. Artificial intelligence, on the other hand, isn’t biological and isn’t programmed by nature. It lacks the instincts that help us survive, so the fundamental processes going on within it are different. As our backgrounds and developmental experiences are drastically different, we can’t assume the outcomes will be similar. The Talking Tom cat can resemble the idea of a cat, but it will never come close to a household cat with all the qualities and processes that it possesses.

We also need to acknowledge that when we talk about the potential of AI (or of aliens, for example), we’re looking at it through our human perspective and, more importantly, through the perspective of our current level of development. Once again, we assume the worst because, so far, we’ve done disgusting things to those smaller or less able than ourselves. We have no certainty in these assumptions, and we can’t know whether intelligences that are more advanced than ours, or that come from a different background, would share our views and values. Just as we can theorise that AI will try to destroy us, we can theorise that it will conclude that conflict and obedience are a waste of time and will simply abandon us or self-destruct.

We can just as easily theorise that AI will become aware of our ignorance and try to guide us. All this is said simply to illustrate one idea: any other intelligence that comes from a different environment or has different inputs would probably function in a fundamentally different way from our own. So we can’t impose the qualities of our intelligence onto other potential intelligences. Since this fear is based on self-awareness and partial ignorance, it can’t be taken too seriously. It should be considered and pondered, yet not believed blindly.

It occurs to me that the only valid concern regarding AI is our fear of responsibility and of messing up. Just as with nuclear energy, we can stumble upon something very ambitious yet use it for nefarious purposes, or let a simple mistake cause a disaster. Though these concerns are valid, we must also keep in mind that AI is not nuclear energy and is much more predictable. After all, we are the ones programming it, and it’s up to us to consider carefully how we do so.

It’s also worth noting that, since we barely understand how our own consciousness works (let alone other forms of consciousness), it’s more than likely that we won’t be able to create an AI powerful enough to do the things we fear it could do. Most probably, by the time such an opportunity appears on our horizon, we will have enough understanding of what consciousness is and how it operates, as well as enough experience programming AI, that we simply won’t create something capable of destroying us. After all, more often than not, we tend to overestimate our ambitions. Plus, we can hope that those working on AI will do their work responsibly.

Author: Alex

I’ve spent a decade working in advertising, social media and cultural industries, which has given me great insights into what’s going on behind the scenes.