Artificial Sentience: Doom or Distraction?

Authors
  • Rem Elena
A group of people worships a robot. When artificial sentience arrives, will we celebrate or hesitate? Disclaimer: this image was created with the assistance of DALL·E 2.

Denotation and Connotation

Sentience is defined as the ability to experience sensations, whether stemming from sensory inputs, such as light and sound, or derived from biochemical sources, like emotions. Under this general umbrella, any organic life form, whether bacterium or great blue whale, is sentient.

Colloquially, we often constrict what we consider sentient by adding the requirement of sapience, the ability to learn and reason. Consequently, simpler forms of life are no longer given the “sentient” label. Under this connotative context, we include only humans and a select few animals.

For the remainder of this article, we will adopt this connotative interpretation of sentience, combining sensory perception and sapient learning.

What about things not technically alive in the biological sense? Can they be sentient?

Since the advent of gears and automation, we have been fascinated by, and terrified of, the idea of our technological creations becoming sentient. Whether it is the Terminator franchise or HAL from 2001: A Space Odyssey, we have ample depictions of what life with sentient machines could be. It is no wonder that so many look at artificial general intelligence (AGI) and immediately think back to these movies and stories.

What Is Different, Is Divine?

In some regards, I am curious whether our fascination with, and terror of, sentient AGI stems from humanity’s inherent belief that it will one day receive holy judgment. God, if one assumes such a being exists, can be framed as a non-human, biologically non-living entity with sentience, and indeed a sentience greater than humanity’s. Most religions throughout history tell of this omnipotent sentience someday holding humanity to account and judging our collective actions.

We have almost programmed ourselves to expect damnation from something intelligent, yet non-human.

It is no wonder there are countless examples in the media of sentient AGI performing an analogous day of reckoning with humanity. Humanity’s evils are on display 24/7; why would a fresh pair of eyes, free from the mortal human body, not seek our destruction?

This story of doom and gloom is familiar, and oddly comfortable, for most. But, ultimately, I think it is a distraction from a much more harmful process.

Countdown To Doom?

As advances in AGI rapidly change the landscape of computer science and society at large, many are starting to blow the whistle, claiming that artificial sentience has already been achieved. Some of those convinced come from the very companies working on the technology itself.

Unlike some, I do not believe there will be a seminal moment where the skies open up and sentient AGI descends from the heavens for all to see.

Rather, I imagine it will be a contentious topic for years to come. And then one day, when AGI systems might actually be sentient, we will all look back and shrug at when it first occurred. Maybe that apathy will come from a common acceptance of inevitability. Or, more troubling, perhaps the journey along the way will have been so chaotic and damaging that the question of sentience no longer matters. It is this latter possibility that increasingly worries me.

The Danger Before Judgment Day

Which is more important: something actually being sentient, or an individual believing it is sentient? Or, put differently: how dangerous is it for AGI to be indistinguishable from human intelligence, regardless of sentience?

One of the common benchmarks in generative AGI technologies such as ChatGPT (generated text), Midjourney (generated imagery), or even WellSaid Labs (generated speech) is whether a human observer can accurately distinguish an authentic example from a generated one.

When a person can no longer discern whether an example is real or fake, the technology has reached what is termed human parity.

Human parity will cause significant issues long before full-fledged sentience. And yet, because we have long fixated on the question of sentience, we are poorly prepared for what the technology is already capable of doing.
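To make the idea concrete, one rough and simplified way to frame a human parity test is to ask raters to label samples as real or generated, then check whether their accuracy is statistically distinguishable from coin-flipping. The sketch below only illustrates that framing and is not any vendor’s actual evaluation protocol; the function name, data, and thresholds are hypothetical.

```python
import random
from statistics import NormalDist

def human_parity_test(rater_correct: list[bool], alpha: float = 0.05) -> dict:
    """Check whether raters distinguish real from generated samples better
    than chance (50%) using a simple one-sided z-test on their accuracy.

    rater_correct: one bool per trial, True if the rater guessed correctly.
    Returns the observed accuracy and whether chance-level guessing
    (i.e., human parity) cannot be rejected.
    """
    n = len(rater_correct)
    accuracy = sum(rater_correct) / n
    # Under the null hypothesis of pure guessing, accuracy ~ Normal(0.5, 0.25 / n).
    z = (accuracy - 0.5) / (0.25 / n) ** 0.5
    p_value = 1 - NormalDist().cdf(z)  # one-sided: are raters better than chance?
    return {
        "accuracy": accuracy,
        "p_value": p_value,
        "at_human_parity": p_value > alpha,  # cannot reject chance-level guessing
    }

# Hypothetical example: 200 trials where raters are barely above coin-flipping.
random.seed(0)
trials = [random.random() < 0.52 for _ in range(200)]
print(human_parity_test(trials))
```

In this framing, “parity” is simply the point where observers stop beating chance; real evaluations would control for rater skill, sample difficulty, and multiple comparisons.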

Here are just a few significant questions, across multiple areas.

Legal:

In what scenarios are human parity technologies fraudulent? How do antiquated systems of evidence and proof adapt to a world where significant pieces of evidence might be fabricated, with little way to determine authenticity? How do you ever remove that shadow of a doubt?

Psychological:

How safe is it for an individual to unknowingly interact with a human parity AGI? As reliance on digital platforms and completely faceless transactions increases, how will individuals avoid fraud and abuse? If an AGI causes substantial psychological harm, who is responsible?

Social:

What will be done about the spread of misinformation and disinformation, especially in the form of AGI-generated content? How will governments and communities improve awareness of the presence of AGI technologies and their potentially malicious consequences?

Academic:

How do existing plagiarism policies adapt to contend with AGI? How should education change to accept or reject the presence of a technology that will surpass even subject-matter experts? Will application replace comprehension as a core goal in schools? How will new generations gain the confidence to question the accuracy of a system that seems almost godlike to them?

Labor:

Will we see a rise in companies and individuals adopting non-AGI principles, similar to how food manufacturers and consumers adopted organic and non-GMO ingredients in the pursuit of purity? What is the value of human work? Should it be inherently greater than that done by technology, even at the risk of worse performance and results? Should unions and other organized labor demand that AGI be banned? Will organized labor see a resurgence in order to enact such demands?

Adaptability In Judgment’s Wake

Hopefully, these questions demonstrate how significant an impact AGI is already having, even without achieving universally agreed-upon sentience.

I believe it does not matter whether a technology actually is sentient. Sentience becomes implied when an average user cannot discern real from fake, and the consequences become real.

Are these consequences a bizarre self-fulfilling prophecy of godly judgment? Has doom already arrived?

Just like these AGI technologies, humanity is also capable of learning and adapting.

After all, we are the original, highest form of sentience in nature! I have hope we will come to terms with the questions I posed, and the innumerable ones I am not capable of comprehending or asking. It may take time, and the road may get bumpy, but I have faith that collectively we can find a path that allows AGI to benefit rather than harm society.


Thank you so much for reading to the end of this article! I sincerely hope it spurred you to consider the current dangers we face. I admit, fearing divine wrath is a primal instinct, and one that I struggle with too. Whether it comes at the hands of omnipotent silicon or some transcendent being, perhaps the only way to spend our time before judgment day is to be our best selves, through and through, regardless.

Until next time,

~Rem