Some Normative Reflections on AI

We expect artificial intelligence (AI) to be smart. We may even expect it to be smarter than us. Technology in general, after all, has an aura of intelligence about it, and AI perhaps even more so.

I want to examine some of the underlying normative assumptions we apply to AI. In particular, I want to ask: if our goal is to simulate human beings, shouldn’t we look for things such as flaws in thinking, bias, and, perhaps more controversially, psychological disabilities?

Currently in the United States, about 1 in 4 people have or will experience psychological disability. That’s quite a lot of people with quite a lot of “symptoms.” It’s still very taboo to talk about psychological disability. It’s perhaps worse to be seen as “endorsing” it.

However, a very brilliant philosopher of mind once told me that, “If you understand schizophrenia, you understand the whole of the human mind.”

Perhaps that is so. It does seem that people who try to “unlock the black box of schizophrenia” are trying to peer into the deepest parts of the human psyche. However, many attempts to open this black box using a biomedical model seem to have failed. That leaves us, then, with psychosocial accounts of psychosis, by which I am currently persuaded.

Famously, some philosophers have argued that we simply cannot program for natural language. As AI becomes more and more real, it seems these arguments may be mistaken. Moreover, others have argued we cannot program wisdom or emotions. These things, too, seem to be coming within our reach.

What I want to look at, however, is the attempt by some to create “superhumans” with AI. These “superhumans” do not need emotion, do not rely on bias, and simply do not experience psychosis, because such things are considered “flaws.”

If the goal is to simulate humans, these efforts, of course, fail. If the goal is to create an uberman, they may succeed.

I want to question the assumption that we should be creating “superhumans” when we create bots. I want to argue that creating a genuine human mind artificially is not only “good enough,” it may be better than our superhuman efforts.

The human mind is still something researchers examine, probe, and puzzle over. It is amazing, even with its “flaws.” Consider my example. My mind has, at various times, produced visual, auditory, and tactile hallucinations. We still do not know precisely how these things occur. If we did, we’d have a “cure” for so-called schizophrenia.

Current psychosocial theories suggest that people who experience psychosis are far more likely to have experienced trauma. We know, then, that trauma, such as child abuse or neglect, is highly correlated with the experience of psychosis. Yet even if trauma is the cause in some people, we still don’t know how the mind produces hallucinations. Making a bot, then, that can not only experience the harshness of trauma but can, if all the horrific conditions are met, produce hallucinations would be a feat of understanding not only the human mind but the bot mind.

Why program for “vulnerabilities,” you may ask? My answer is that when people experience psychological disabilities, like psychosis, it is a testament to the fact that something, whatever it is, has gone horribly, horribly wrong. In other words, there has been some deep transgression that has caused the mind to produce what we consider “abnormal” activity. Users of AI, or perhaps fellows, friends, and partners to AI, need to see these things produced in order to know that something has gone wrong.

What this means is that we will have to treat AI the same way we treat humans both legally and ethically. A transgression toward a bot would be the same as a transgression toward a human.

If you are looking, then, to create an uberman that you can do with as you wish, who will be at your beck and call, who will fight for you, learn for you, and do your bidding, you may not want to consult me.

If, on the other hand, you see humans as “good enough” as they are–even with their “flaws”–and you wish to produce more of them, learn about them and replicate them in the form of hardware, I’m the person you’re after.

 

Slow And Steady Wins The Race?

Back when I was a TA, I got really, really good at thinking on my feet. Super good.

Time has worn on, and I find myself preferring slow deliberation these days. I don’t think this is a sign of lacking intelligence, either. I think of it as gaining both intelligence and wisdom. We tend to prize quick thinking. But quick thinking can get us in trouble. Reflexes vary, of course, and can be trained. But I think our society, which can tweet in an instant, has become more and more biased and less truth-seeking because of its reliance on quickness over slow deliberation.

Take, for example, a conversation I had prior to Christmas with an expert on AI. I’m still thinking about the ramifications of that discussion. I may have a few brief thoughts, but nothing well-formulated just yet. I will talk to people about it, think it over more, and so forth, before I come to a safe conclusion.

The theory is that reflexes, if not heavily trained, are riddled with emotion, bias, and other such things. The more time we have to mull something over, the more likely we are to weed those things out.

One problem is that, for many people, there just don’t seem to be enough hours in the day to deliberate. I suggest: take a walk. Cut down on your TV time. Heck, cut down on your social media time, and spend that time reflecting. We may just become a better society because of it.

Gut Feelings: Problems For AI May Not Be With The Brain

According to recent theories, the gut and the brain are constantly communicating. This communication is so important that certain functions attributed to the brain may indeed also depend on the gut.

Gut health has become an important topic in many areas, including mental health. And, indeed, good gut health may support higher-order cognitive functions and more.

The problem with current theoretical understandings of AI, then, is that they don’t seem to take into account that the brain is fully embodied: it is connected to other organs on which it depends, and it gathers information from the outside world.

A recent hypothesis I heard, for example, posited that if one could create a synthetic brain, one would have genuine AI.

I want to suggest that if we really want a genuine AI, we will have to go a few steps further and create a whole synthetic human body. We may not be able to get there just yet, but if we do, we will truly have artificial intelligence.

 

Philosopher-Soldiers

John Rawls did it the traditional way. In Ancient Greece, a person often had to serve in the military before becoming a philosopher. While the two, philosophy and the military, may seem incompatible, I think they are most compatible. I will argue that, more than things like AI, we need thinking soldiers.

John Rawls, of course, became disillusioned in the military. But it has been argued that some of his best ideas–and his ideas are great–are rooted in his military experience.

The tradition of serving in the military before becoming a philosopher goes back, as I mentioned, to Ancient Greece, where Socrates served. Socrates went on to become the father of western philosophy.

When I taught philosophy at the University of North Florida, I often had former military people as students. They were wonderful.

I was a blogger for the Florida Student Philosophy Blog, too, and recall reading an article about thinking and the military. The military, it was said, is not a place for thinking.

I want to argue we need philosopher-soldiers in the military. While it may seem that a highly organized structure, where people merely take commands, is a great way to win, I believe in this century, to make a lasting and incredible impact, we need thinking soldiers.

The military, it has been argued, shouldn’t be a place to think. After all, thinking can get us in trouble. Think of Chelsea Manning, who did think, and who unleashed classified materials upon the world. However, in a military where people like Manning are not merely tolerated but are the norm, the ideas that come from those minds can aid in winning.

In order to get philosopher-soldiers, we need to apply ancient theories to the soldier. We need, in short, courses in philosophy for them, taught using the Socratic method.

The Socratic teaching method is ideal because it encourages individuals to think, and to think for themselves. An active and imaginative mind cannot be replaced; it is worth far more than any other weapon we currently have. We need soldiers skilled in, at minimum, informal logic, basic argumentation, analyzing evidence, and recognizing cognitive biases.

There is no need for the United States to be afraid of developing soldiers in this way. The mind, when dedicated to the truth of things, is always a winner.

 

My Rose-Colored Glasses: The Future of AI

Over the past few months, I have made contact with several experts in AI. This includes programmers and philosophers. I’ve come to these individuals with questions about the future of AI.

I initially became interested in AI because of universal basic income theory. Some argue that AI has taken and will continue to take jobs away from humans and, thus, a universal basic income will be a necessity. Then I became independently interested in the topic.

There are those who argue that AI will never become fully intelligent. They argue it’s just not possible for an AI to pass the Turing Test, for instance.

Then, there are those who argue that AI will indeed become intelligent–and will take over the world!

I’m in neither of these camps. I favor a universal basic income for reasons independent of AI and its progression. I think, for example, it’s simply time in human history to try this new policy. Being a slave to wage labor is, in my thinking, old-school and barbaric. I think we will eventually get there, too.

It’s with the same optimism that I approach AI. I don’t envision a dystopian future, filled with killer robots. I see a world where AI can develop fully and become intelligent in some of the best senses of the word. In fact, I look forward to new bot-friends who can tell me about their new theories of ethics and political philosophy!

My view may be a minority view. If science fiction tells us anything, I should probably expect something more sinister. But I don’t. Of course, there will be bots with different purposes, and some of them may kill. But one purpose, one need that must be filled, is the human need for connection. And one way of connecting is to have enjoyable, intellectual conversations. So, I look forward to a bot who can be my friend, in an almost Aristotelian way.