The Normativity Of Creating New Minds

Perhaps I am mistaken. But it is my impression that there is a normative dimension of AI that needs to be addressed and hasn't been: the implicit schema we follow when creating AI minds.

When most people, myself included, think of AI, they think of incredibly intelligent beings, intelligent in the cognitive sense, that may or may not be smarter than the average human. These imagined beings often lack certain qualities, such as wisdom, emotional intelligence, and the ability to love. In short, we think of the stereotypical human "genius," who may bumble about, unable to tie their shoes yet producing great theories of the universe.

I want to question the normative aspect of creating a certain type of mind in AI.

As we know, there is diversity in the human species. Humans can have autism, Down syndrome, or schizophrenia. It is telling that AI developers rarely examine these so-called disorders; the omission suggests an ambition to create a new society, an AI society, without such "disabilities."

It has been argued by many that these differences in minds, often referred to as "neurodiversity," are just as beautiful as other forms of human diversity, and that they give our communities strength.

No doubt, it would be difficult to create an AI with a condition like autism, and such a project would draw criticism from all sides. But boldly going into that territory is perhaps better than ignoring such human differences when developing AI.

I want to challenge AI developers to look beyond their current understanding of what they are producing and aim to create artificial intelligence with flaws, deficiencies, vulnerabilities and, yes, disorders. This is much more challenging than creating the standard AI, it's true. But erasing human differences while creating AI is perhaps worse.

We cannot deny that most bots created today fit a single mold. This inflexible understanding of the human mind only wipes away the beauty that is humankind. As you think about the characteristics you program and develop into your AI, think about the diversity that lies behind your assumptions.

 

Dystopian Arguments Against AI: The Case Of Wall-E

For about a year and a half, I have been reading about developments in AI. A lot of people are in favor of AI, but others hold a dystopian view. I recently came across a new dystopian vision of AI's future, inspired by the film Wall-E.

In this future, we are not killed by evil robots. Instead, everything is automated, and bots rule and keep things in order. The end result is that humans become complacent, unhealthy, and disconnected, left in a zombie-like state.

This future is not too far-fetched. Compared with other projections of AI, this one hits much closer to the mark: it is believable that something like this could happen, and in some ways it is already happening.

In the film Wall-E, the bots are intelligent; Wall-E himself is possibly the most intelligent of all, because he appears to have emotions and wisdom.

It is interesting to note, however, that in this dystopia, while humans create technology to preserve themselves, it is the bots, particularly Wall-E, who feels love, that save humankind from itself and save the Earth.

So while this dystopian future may look anti-technology, it really isn't: Wall-E, a bot, saves humanity and the Earth.

What we will need to do, perhaps, as technology develops into AI, is strike a healthy balance between our uses of technology and our connections to one another and to our planet, and not allow our use of technology to make us too lazy, complacent, disconnected, and zombie-like. No doubt, this balance will have to be chosen deliberately and implemented by each of us individually. For even though Wall-E is the hero of the film, saving us all, we cannot hope for such a kind creature to save us from ourselves.