I, like much of the population, have been bingeing on Black Mirror. If you have seen the series, you know that some episodes revolve around the creation and use of smarter, improved artificial intelligence, or AI. If you have not seen it, (spoiler alert) the AIs depicted in two of the episodes are essentially copies of their human counterparts, coded and thrown into a virtual world. This allows each coded AI to think, feel and behave as its human counterpart would in this fabricated realm.

However, in these episodes, this treatment of AIs is portrayed as inhumane. When the creators first generate these AIs, the AIs are essentially pulled away from the world they know and implanted into a virtual space. There, they become servants to their developer: punished when they do not do as asked, left in isolation and frightened into the behavior their developer demands.

Watching these conditions imposed on AIs, I found myself shocked by the treatment. How could a human treat another human this way? And then I realized, and had to continuously remind myself, that the characters I was watching being tortured were just code. They are, in fact, not human. That got me thinking: are we obligated to treat these human-like beings, these AIs, with the same respect we would show another human? If you program something to think and feel like a human, does that mean you should treat the resulting program humanely?

In my research and my struggle to find moral common ground, I searched for the definition of “human,” and the definition I found was “of, pertaining to, characteristic of, or having the nature of people.” That really threw me for a loop. These AIs are absolutely not people; they do not have the same biological characteristics as humans.

However, this definition complicates the argument that AIs are not human. The whole premise of these Black Mirror episodes is that these AIs can learn, think and feel just as we can, meaning they share our characteristics and behaviors. This is evident even in today’s actual society, not just in the Black Mirror dystopia. Our Siris, our Alexas, our Cortanas are programmed to learn about our lives and respond in ways humans would. If you ask Alexa to tell a joke, she will tell a joke. Siri learns and can determine where your “home” is based on where you spend the most time (provided you have location services enabled). I myself have felt obligated to say “please” and “thank you” to the AIs I use frequently. Even writing this, I find that I use “they” instead of “it” to refer to these non-human technologies. It is a strange dynamic, and it makes me ask: have we interchanged the definitions of Homo sapiens and human too flippantly? Have we, in our modern society, redefined what it means to be human? Should we then treat these AIs as if they were human?

In short, I believe my answer is no. These AIs we are creating are just what they are named: artificial intelligence. Their emotions are not real. Their presence is virtual. They are code. AIs can have human qualities, but that does not make them human. However, this does pose a moral dilemma, asking us as scientists, engineers and artists to ponder whether lifelike beings should be programmed this way at all. If we continue on this path, what comes next? We already have Sophia, the walking, talking, learning robot. Will we soon be interacting with tangible artificial intelligence? And if so, does that change this analysis? Maybe it would not hurt to say “thank you” to an AI every once in a while. Because if and when the robots actually do take over, maybe they will be more lenient with the nicer humans and obliterate only those who treated them poorly. That’s just natural, or should I say artificial, selection at work.