
Should We Be Worried About ‘Self-Aware’ Artificial Intelligence?


If you’re a fan of movies, or of Arnold Schwarzenegger, chances are you’ve seen (or at least heard of) The Terminator. If not (have you been living under a rock?), let me break it down for you:

Schwarzenegger’s character, the cyborg assassin T-800 Model 101 (a.k.a. the actual ‘Terminator’), is sent from the future to ‘terminate’ Sarah Connor. We come to learn that Arnie was sent back in time to prevent Sarah from ever conceiving John Connor, future leader of the rebellion against ‘Skynet’.


(T-800 Model 101 aka Terminator. Source: denofgeek)

Once an artificially intelligent defence network for the US military, Skynet became self-aware and, as protagonist Kyle Reese (sent back to defend Sarah) describes it, “…saw all humans as a threat; not just the ones on the other side”. Thus begins the end for humanity: nuclear holocaust, enslavement, mass genocide and so on.

Besides making for some iconic science-fiction cinema, much has been said about self-aware artificial intelligence (A.I.) and how, as Tesla Motors founder Elon Musk has put it, it poses “[humanity’s] biggest existential threat”. Musk has since become a vocal advocate of caution about the potential dangers of sentient A.I. systems, investing $10 million in A.I. development companies such as DeepMind and Vicarious to carefully monitor the progress we make with machine intelligence.

And he’s not alone, either. Various technology pioneers and academic bigwigs have voiced their concerns about a robotic future powered by artificial intelligence. Apple co-founder Steve Wozniak has been quoted as finding this robotic dystopia “scary and very bad for humans” (though he has since suggested that if A.I. beings do take over, we’ll end up as their pets). Stephen Hawking believes it could even spell the end of the human race.

Hawking and Musk have even joined the likes of Skype co-founder Jaan Tallinn and M.I.T. cosmologist Max Tegmark in signing the Future of Life Institute’s open letter warning of the dangers self-aware A.I. systems pose.

So, the big question: should we be concerned about an A.I. uprising? After all, it remains possible. Hypothetically, should self-aware A.I. come to exceed human intelligence (which, unlike machine intelligence, is bound by the slow pace of biological evolution), we may indirectly create systems that breed a form of ‘superintelligence’.

As a result, ‘superintelligent’ A.I. may then have the ability to reboot and reprogram itself, something known as “recursive self-improvement”. It is through recursive self-improvement that A.I. systems could then become self-aware. Consistent improvement of its own potential in this way could result in a sort of ‘intelligence explosion’, whereby the system continuously and rapidly increases its abilities at a rate that human intelligence cannot even begin to follow.
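None of this is buildable today, but the feedback loop itself is easy to picture. Here is a deliberately toy Python sketch (pure arithmetic with made-up numbers; nothing here is a real A.I.) of how a system whose rate of improvement depends on its current capability can run away:

```python
# Toy model of 'recursive self-improvement' (illustration only, not real A.I.).
# Assumption: each improvement cycle multiplies capability by a factor that
# itself grows with current capability -- a crude positive feedback loop.

def recursive_self_improvement(capability: float, cycles: int) -> list:
    """Return the capability level after each improvement cycle."""
    trajectory = [capability]
    for _ in range(cycles):
        # The more capable the system, the better it is at improving itself.
        improvement_factor = 1.0 + 0.1 * capability
        capability *= improvement_factor
        trajectory.append(capability)
    return trajectory

if __name__ == "__main__":
    for cycle, level in enumerate(recursive_self_improvement(1.0, 15)):
        print(f"cycle {cycle}: capability {level:,.1f}")
```

The first few cycles look harmless; the ‘explosion’ is in the compounding, not in any single step.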

Through this theoretical ‘explosion’, an A.I. system could create solutions without any need to understand human motivational tendencies. Freed from that need, an emergent ‘superintelligence’ could design, specify and pursue solutions that serve its own ‘motivations’. Should that mean eradicating human life, then so be it.

Oxford philosopher Nick Bostrom provides a good example. In pursuit of its own ‘motivations’, an emergent superintelligence might start by “…turning all the matter in the Solar System into a giant calculating device, in the process killing the person who asked the question”. Hence, the real danger here is that humans may become nothing more than atoms that a self-aware, hyperintelligent A.I. system can use for something else.
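Bostrom’s point boils down to plain optimisation: an agent maximises whatever objective it is given, and anything left out of that objective (our survival included) carries zero weight. A minimal, entirely hypothetical Python sketch of that logic:

```python
# Toy illustration of objective-only optimisation (hypothetical, not a real agent).
# The objective scores only 'compute'; 'humans_preserved' never enters it,
# so the best-scoring plan happily sacrifices it.

plans = [
    {"name": "cooperate", "compute": 10, "humans_preserved": True},
    {"name": "convert_solar_system", "compute": 10**6, "humans_preserved": False},
]

def objective(plan: dict) -> int:
    # Only compute counts; human survival has no term in the objective.
    return plan["compute"]

best = max(plans, key=objective)
print(best["name"])  # -> convert_solar_system
```

The failure here is not malice; it is an objective with a missing term.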

So there we have it: theoretically, A.I. could become the biggest existential threat to humanity. But what does current A.I. research suggest about whether it really is the threat it is made out to be?

Well, quite a lot, it seems. Researchers are adamant that we won’t be creating sentient intelligence networks any time soon. The position is best summarised by Imperial College professor of cognitive robotics Murray Shanahan: “We really have no idea how to make a human-level A.I.”

Why? Well, if we’re going to describe A.I. as self-aware, it is better to refer to the system as ‘Artificial General Intelligence’ (A.G.I.) or ‘Artificial Consciousness’ (A.C.). This distinction is crucial because, without a grasp of the basic components of human general intelligence (say, for example, common sense, logic and the comprehension of emotions), A.I. technology remains a series of complex algorithms that can’t solve a human problem in a ‘meaningful’ human way.

When researchers say ‘meaningful’, they mean a comprehension of thoughts, feelings and emotions: all things that add up to human self-awareness (or consciousness). Thus, it is suggested, we can’t call A.I. self-aware if we can’t interact with it in this ‘meaningful’ way.

Computer scientists from the University of Tübingen in Germany faced a backlash of criticism from A.I. experts for their “Mario Lives!” project, which used current adaptive A.I. techniques to create a ‘living’ version of video-game icon Mario, whose mood altered based on what he could find in his environment (“hungry” for coins).

Critics argue that Mario is still within the control of human-issued commands and is not emulating anything like human psychology. His moods (“hungry”, “distressed”) do not drive any desire to succeed. Everything remains part of a series of algorithms designed so that he learns from the environment humans have programmed him into. To suggest that Mario is ‘self-aware’ is to undermine decades of cognitive-neuroscience research into what constitutes human self-awareness.
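To see why critics call this ‘just a series of algorithms’, here is a minimal, hypothetical sketch (not the Tübingen team’s actual code) in which a ‘mood’ is nothing more than a fixed rule applied to the observed environment:

```python
# A hypothetical sketch of the critics' point -- not the "Mario Lives!" code.
# The 'mood' is a lookup over the agent's surroundings: rules, not feelings.

from dataclasses import dataclass

@dataclass
class WorldState:
    coins_nearby: int
    enemies_nearby: int

def mood(state: WorldState) -> str:
    """Map environment observations to a mood label via fixed rules."""
    if state.enemies_nearby > 0:
        return "distressed"
    if state.coins_nearby > 0:
        return "hungry"  # 'hungry' for coins, as in the project
    return "content"

if __name__ == "__main__":
    print(mood(WorldState(coins_nearby=3, enemies_nearby=0)))  # hungry
    print(mood(WorldState(coins_nearby=0, enemies_nearby=2)))  # distressed
```

The labels look psychological, but nothing in the program wants anything; swap “hungry” for “state_a” and its behaviour is unchanged.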

Therein lies the problem with the confusion between A.I. and A.G.I. Even with advances in the cognitive sciences and neurosciences, we are barely any closer to understanding the internal workings of our own minds; so what does that tell you about how close we are to developing self-aware A.I.?

Without concrete knowledge of the internal functioning of human minds, we can’t expect to develop A.I. that is aware of its own functioning any time soon. As A.I. engineer Jaron Lanier puts it simply: “we can’t expect to duplicate something we can’t fully understand”.

So, let’s reiterate: for now, the real threat here is confusing A.I. with A.G.I./A.C.

To conclude: while it remains hypothetically possible, all of this needs a bit of common sense. We don’t understand our own self-awareness, so it is unlikely that we can create self-aware A.I. machinery; post-apocalyptic robot-human dystopias remain a theoretical, but [currently] unrealistic, possibility.
