What’s So Scary About Schizophrenia?

I’ve recently added new people to my life. I’ve branched out–taking on some endeavors that have put me into contact with people outside of my normal zone of relationships. This is, I think, a wonderful thing. I look forward to these new adventures.

One thing has come up, though. I am diagnosed with schizophrenia. I’ve been fortunate to have come across people who completely understand and totally get that there are a lot of misconceptions about schizophrenia.

Schizophrenia is, perhaps, one of the most stigmatized psychological disabilities. I can’t expect everyone to know everything. So, for those who happen across my blog, I’ve put together this little post to assure you that there’s nothing scary about schizophrenia.

For those who are unfamiliar with this psychological disability, I wanted to share the video below with you. It’s by Dr. Eleanor Longden. Last I knew, she was working at the Psychosis Research Unit in Manchester, UK. If you have further questions about psychosis, I do encourage you to look up her and her colleagues’ work. It is excellent stuff.

The theory behind Longden’s work is that psychosis is a result of trauma. According to her research and her own experience, therapies–like Cognitive Behavioral Therapy–can help a person live with psychosis.

There have been theories like this before; Freud, for example, posited a sort of loop between the psychological and the social that brings about certain psychological disabilities.

To folks who are new in my life: I have been in therapy for almost two years. I have taken medication for much longer than that. I am a so-called “high functioning” person diagnosed with schizophrenia.

Sure, I’ve had my struggles. But my struggles are not scary to you. At least, they shouldn’t be. I think of managing schizophrenia just like managing any other (less stigmatized) illness. It requires care, attention and focus. I have to attend to certain things about myself.

There are a few things that are, if I may say so, beneficial to being diagnosed with schizophrenia. I can understand and empathize with other people with psychological disabilities. I can appreciate all kinds of diversity in the world. I can understand new, innovative and unique thought processes.

The experiences of hearing voices, having delusions and being paranoid–all of which may be present in a person diagnosed with schizophrenia–are actually not at all uncommon. Tons of people have had experiences of slight paranoia, for example. (Think about an instance where everyone gets quiet when you walk into a room. Were they previously talking about you? If you wonder that, you have experienced an ounce of paranoia.) The difference between me and a person not diagnosed with schizophrenia is that my experiences have been a touch more extreme and have, in the past, interfered with my daily life.

You and I, then, are not so different. We are on the very same spectrum. I just happen to be at the 1% mark on the bell curve. But we are basically the same. Remember that next time you worry about a person being diagnosed with schizophrenia.

The Normativity Of Creating New Minds

Perhaps I am mistaken. But it is my impression that there’s a normative aspect of AI that needs to be addressed and hasn’t been. This aspect is a certain normative schema of creating AI minds.

When most people–myself included–think of AI, they think of incredibly intelligent beings, intelligent in the cognitive sense; beings that may or may not be smarter than the average human. These beings often lack certain qualities, such as wisdom, emotional intelligence and the ability to love. In short, we think of the stereotypical human “genius,” who may bumble about, not tying their shoes but producing great theories of the universe.

I want to question the normative aspect of creating a certain type of mind in AI.

As we know, there’s diversity in the human species. Humans can have autism, Down syndrome, schizophrenia. The fact that AI developers do not examine these so-called disorders more closely is telling: it suggests an effort to create a new society–an AI society–without such “disabilities.”

It has been argued by many that the differences in minds–often referred to as “neurodiversity”–are just as beautiful as other forms of human diversity. And these differences give us strength within our communities.

No doubt, it would be difficult to create an AI with a condition like autism and there would be criticism all around such a project—from all sides. But boldly going into that territory nevertheless is perhaps better than ignoring such human differences when it comes to developing AI.

I want to challenge AI developers to look beyond their own current understanding of what they are producing and aim to create artificial intelligence with flaws, deficiencies, vulnerabilities and, yes, disorders. This is perhaps much more challenging than creating the standard AI, it’s true. But erasing the differences in humans while creating AI is perhaps worse.

We cannot deny that most bots created fit into a certain category. This inflexible understanding of the human mind only wipes away the beauty that is humankind. As you think about the characteristics you program and develop into your AI, think about the diversity that lies behind your assumptions.


Dystopian Arguments Against AI: The Case Of Wall-E

For about a year and a half, I have done reading on developments in AI. A lot of people are for AI, but others have a dystopian view. I recently came across a new dystopian vision of AI’s future, inspired by the film Wall-E.

In this future, we are not killed by evil robots. Instead, everything is automated and bots rule and keep things in order. The end result is that humans are complacent in a zombie-like state, unhealthy and disconnected.

This future is not too far-fetched. Compared to other projections of AI, this one hits much closer to the mark. That is, it’s believable that something like this could happen and, in some ways, is already happening.

In the film Wall-E, we have intelligent bots: Wall-E himself is possibly the most intelligent because he appears to have emotions and wisdom.

It’s interesting to note in this dystopia, however, that while humans create technology to preserve themselves, it’s the bots–particularly Wall-E, who feels love–who save all of humankind from itself and save the Earth.

So while this dystopian future may look like it’s anti-technology, it’s really not. Wall-E, a bot, saves humanity and the Earth.

What we will need to do, perhaps, as technology develops into AI, is strike a healthy balance between our uses of technology and our connections to one another and to our planet, and not allow our use of technology to make us too lazy, complacent, disconnected and zombie-like. No doubt, this balance will have to be chosen deliberately and implemented by each of us individually. For even though Wall-E is the hero of the film, saving us all, we cannot hope for such a kind creature to save us from ourselves.