
Will conscious machines screw up as badly as we do?
A few weeks ago I completed Elements of AI, a brilliant and incredibly popular online course by the University of Helsinki and Reaktor. My boyfriend had read some positive comments about it from his friends at Reaktor and brought it up while we were spending some pseudo-intellectual time on our comfy couch at home. We enjoy doing nerd stuff and learning together, and well, what could be more romantic than studying the basics of Artificial Intelligence and Machine Learning hand in hand, with cute illustrations of robots and cars? I could probably think of a few things, but being a bit geeky (a wannabe geek myself), it did feel like a great idea. And it was!

One of the tasks in the course was to do some research on AI related to a field of interest and find out how realistic and accurate the articles and other content we find online are. We were then to pick one specific article from our findings and analyse it with those questions in mind. I probably chose a somewhat complicated field, but I found it very interesting that even in highly professional and well-known publications the topic can be covered rather inaccurately, and important concepts are sometimes exaggerated or misleading, especially for readers who are not experts in the area.
I hope that sharing the short analysis I wrote for that task can raise good questions and provoke discussion. So here it is for you curious minds!
I picked the article “How to Build a Self-Conscious Machine” from Wired. I recommend you read the article before you continue, or at least skim it quickly. If it is too long a scroll for your precious time (yes, it’s kind of a long one), then here’s a vague summary along with my observations after learning the basics of AI and ML:
The author describes in a playful way how there is no question that “machines” can very soon, or even right now, be designed to become self-conscious. Confidently, he argues that “one thing (people) are certain to replicate (as a machine) is the gradual way that our consciousness turns on”. He doesn’t break down how we would be able to do this or what it means technically; apparently that isn’t the interesting piece of the discussion. Instead, he focuses on arguing that this (replicating the human aspect of our brains) would be a terrible idea.
Hugh uses the Theory of Mind to describe what consciousness, and therefore being human, is. He uses it in a reductionist way, as the behaviour of one person trying to guess what another is thinking. This explanation works to humorously show how the particular way our brains operate makes us spend too much time worrying about others and about what they think of us, instead of spending time on more relevant thoughts or using our brains for better decision making. Hugh argues that because of this “humanness” of our “intelligence”, we actually make many wrong decisions and mistakes, or waste time that could be spent on something a lot more useful. So if we are to create artificial “intelligence”, he argues, we should avoid replicating “our” kind of intelligence and instead create a better kind, one that doesn’t make the mistakes we typically make as a species.
I am not sure whether the full confidence the author expresses about how easy it would be to build AI that imitates human intelligence is simply irony born of his humorous style, or whether I’ve read the article with boring, critical eyes. I believe, though, that describing artificial intelligence this way, and stating that there is an easy path to making robots human-like conscious with the current state of science and technology, not only lacks realism but can easily be misinterpreted and feed the fear, already common in the media, of an approaching “singularity”.
Maybe that oversimplification of “Theory of Mind” is part of what bothers me about the possible consequences of inaccurate terminology. The author leaves out the fact that Theory of Mind is about “mental states”, which include those of one’s own self. This omission, in my opinion, leaves us with a very simplistic interpretation of consciousness (a major topic of the article). Before we can create artificial consciousness, we would need a clear and universal definition of human consciousness. Consciousness, and even the mechanisms of our thoughts and decision making (including perceptions, desires, feelings, and how we interpret them), are still not fully understood scientifically. And what is not understood is very difficult, if not impossible, to replicate with accuracy.
Hugh expresses some interesting views in the article, and the writing style makes it a fun read, so I would recommend going through the whole piece if you have not read it yet; apologies if I am spoiling the fun with my comments. My worry is that some important aspects of the two main topics in the title, “Artificial Intelligence” and “Consciousness”, are left ambiguous, making the conclusion a possible fallacy.
Even though the question of how we would create AI that imitates human consciousness at that level remains unanswered, the story concludes with a provocative thought worth reflecting on.
“While it is certainly possible to do so (create human-like conscious machines), we may never build an artificial intelligence that is as human as we are. And yet we may build better humans anyway.”
Are we humans good enough to believe that replications of our nature will be safe and positive for everyone? Should we perhaps focus on understanding our own faults first, so that we can then use technology to improve what really matters on a large scale?
This article was first published on LinkedIn.
Disclaimer: I am no expert in ML, AI, or the Philosophy of Mind, so I appreciate any comments on this post that could enlighten me and others with similarly amateur knowledge. Thanks for reading!