Some Normative Reflections on AI

We expect artificial intelligence (AI) to be smart. We may even expect it to be smarter than us. Technology, after all, has an aura of intelligence about it in general, and AI perhaps even more so.

I want to examine some of the normative assumptions underlying AI. In particular, I want to ask: if our goal is to simulate human beings, shouldn’t we look for things such as flaws in thinking, bias, and, perhaps more controversially, psychological disability?

Currently in the United States, about 1 in 4 people has experienced or will experience a psychological disability. That’s quite a lot of people with quite a lot of “symptoms.” It is still very taboo to talk about psychological disability; it is perhaps worse to be seen as “endorsing” it.

However, a very brilliant philosopher of mind once told me, “If you understand schizophrenia, you understand the whole of the human mind.”

Perhaps that is so. It does seem that people who try to “unlock the black box of schizophrenia” are trying to peer into the deepest parts of the human psyche. However, many attempts to open this black box using a biomedical model seem to have failed. That leaves us, then, with psychosocial accounts of psychosis, by which I am currently persuaded.

Famously, some philosophers have argued that we simply cannot program for natural language. As AI becomes more and more real, those arguments seem mistaken. Moreover, others have argued that we cannot program wisdom or emotion. These things, too, seem to be coming within our grasp.

What I want to look at, however, is the attempt by some to create “superhumans” with AI. These “superhumans” need no emotion, do not rely on bias, and simply do not experience psychosis. Emotion, bias, and psychosis are treated as “flaws.”

If the goal is to simulate humans, these efforts, of course, fail. If the goal is to create an uberman, they may succeed.

I want to question the assumption that we should be creating “superhumans” when we create bots. I want to argue that creating a genuine human mind artificially is not only “good enough”; it may be better than our superhuman efforts.

The human mind is still something researchers examine, probe, and puzzle over. It is amazing, even with its “flaws.” Consider my own case. My mind has, at various times, produced visual, auditory, and tactile hallucinations. We still do not know precisely how these things occur. If we did, we’d have a “cure” for so-called schizophrenia.

Current psychosocial theories suggest that people who experience psychosis are far more likely to have experienced trauma. We know, then, that trauma, such as child abuse or neglect, is highly correlated with the experience of psychosis. Yet even if trauma is the cause in some people, we still don’t know how the mind produces hallucinations. Making a bot, then, that can not only experience the harshness of trauma but also, if all the horrific conditions are met, produce hallucinations would be a feat of understanding not only the human mind but the bot mind.

Why program for “vulnerabilities,” you may ask? Because when people experience psychological disabilities like psychosis, those experiences are a testament to the fact that something, whatever it is, has gone horribly, horribly wrong. In other words, there has been some deep transgression that has caused the mind to produce what we consider “abnormal” activity. Users of AI, or perhaps fellows, friends, and partners to AI, need to see these things produced in order to know that something has gone horribly wrong.

What this means is that we will have to treat AI the same way we treat humans, both legally and ethically. A transgression against a bot would be the same as a transgression against a human.

If you are looking, then, to create an uberman that you can do with as you wish, one who will be at your beck and call, who will fight for you, learn for you, and do your bidding, you may not want to consult me.

If, on the other hand, you see humans as “good enough” as they are, even with their “flaws,” and you wish to produce more of them, learn about them, and replicate them in the form of hardware, I’m the person you’re after.
