Philosophical Musing: The Divide of Man vs Smart Machine

Upon being asked the question, "What makes man different from a sentient machine?", I at first argued that there was truly no difference. I went on to state that, being nothing but robots run by chemical interactions on a physical and neurological level, humans cannot truly possess free will or souls. Why did I choose this standpoint on which to base my conclusions? Well, simply put, because the question was asked and the easiest argument I could think of was to speak against metaphysical rationalizing from a strictly material standpoint.

Now, that wasn’t necessarily what I believed; it was an argument made for the sake of argument, as well as for the sake of remaining consistent in my use of the material point of view. In this essay, I will continue to argue from the material viewpoint in order to tackle the question of whether or not humans are just complicated machines.

A rather blunt method of dealing with questions like this is to approach them through both the easy and hard problems of consciousness. The easy problems concern the functional side of thought: humans often think along patterns that are rather similar, allowing one individual to predict the thoughts and actions of another up to a certain point. It is from this that the phrase “nothing new under the sun” gets its force, as people tend to think their ideas, thoughts, behavior or actions are somehow radically original, ignoring the fact that millions of people before them have done the same and millions of people after them will do the same in time.

However, writing off human thought patterns as machine-like, as if following a set pattern like that of a computer coded to behave that way, is far too simple. Such an explanation is inadequate and doesn't do justice to the more complex, and frankly overcomplicated, methods of thought and problem-solving that the human mind is capable of. These cannot be explained in terms of a simple computer system. That is where the hard problem of consciousness arises: why is any of this processing accompanied by experience at all? The materialist answer is that consciousness is an emergent property. In layman’s terms, this means that if we build a complex enough computer, then it will eventually achieve its own consciousness.

That is to say, it will exhibit a level of consciousness similar to that found in humans. On this view, we must accept that the human experience and the human mind are nothing that could not be simulated on silicon chips storing data. Such an advanced computer system, however, presents us with a different matter entirely. The mind of a computer approaching or matching our idea of A.I. could easily display a human level of consciousness, reasoning and problem-solving.

At that point, the only thing separating such a being from being considered fully human would be a truly functioning body. At least, that’s what we could say in theory. However, it’s well understood that we humans are emotional and sentimental creatures, and so the question arises: could such a creation truly be considered conscious? Many would doubt it, simply because the creation is not human, and because it is not human, it would not be granted the premise of possessing a soul. The question really being asked here, stripped of all the frills and filler, is this: does the human being have a soul, and what prevents a machine from being considered to have one?

The idea of the soul is itself tied to the idea of a continued existence even past the death of the physical body that tethers the immaterial “soul” to this world. Some may argue that the soul is simply consciousness, or simply a reference to the body itself, or in some cases to the functions of the brain. However, if we are to address the immaterial existence said to separate human from machine, then the idea of the immortal soul is the one best addressed. Many would say that an Artificial Intelligence would remain dependent on a computer system or a server to retain its consciousness, and that without it, it would return to being nothing more than a hunk of metal with no decision-making ability.

The destruction of the computer running the “consciousness” program would put an end to the intelligence playing at possessing a soul. Yet, again, the same could be said of humans. If our body is destroyed, where is the proof that our soul survives? If our brain becomes damaged, why does our soul not allow us to continue to think, to perceive and to analyze much as we would have before? If the soul is so untouchable, so immaterial as to be unaffected by our material state, why can something as simple as a strike to the skull reduce us to the abilities of a two-year-old?

Consciousness, self-awareness, perception and the ability to meta-analyze and introspect arise not from some immaterial property resting inside the human psyche. They are the product of feedback loops built from sensory information, memory and learning. The human being has no soul, and an Artificial Intelligence, nothing but a computer made of silicon and electricity, would be as human as any one of us.
