Slow And Steady Wins The Race?

Back when I was a TA, I became very good at thinking on my feet.

Time has worn on, and I find myself preferring slow deliberation these days. I don’t think this is a sign of declining intelligence, either; I think of it as a gain in both intelligence and wisdom. We tend to prize quick thinking, but quick thinking can get us in trouble. Reflexes vary, of course, and can be trained. But our society, which can tweet in an instant, has grown more biased and less truth-seeking because it relies on quickness over slow deliberation.

Take, for example, a conversation I had prior to Christmas with an expert on AI. I’m still thinking about the ramifications of that discussion. I may have a few brief thoughts, but nothing well-formulated just yet. I will talk to people about it, think it over more, and so forth, before I come to a safe conclusion.

The theory is that reflexes, if not heavily trained, are ridden with emotion, bias and other things. The more time we have to mull something over, the more likely we are to weed out those things.

One problem is that, for many people, there just don’t seem to be enough hours in the day to deliberate. I suggest: take a walk. Cut down on your TV time. Heck, cut down on your social media time, and spend that time reflecting. We may just become a better society because of it.

Philosopher-Soldiers

In Ancient Greece, a person often had to serve in the military before becoming a philosopher. John Rawls did it the traditional way. While the two pursuits, philosophy and the military, may seem incompatible, I think they are deeply compatible. I will argue that, more than things like AI, we need thinking soldiers.

John Rawls, of course, became disillusioned in the military. But it has been argued that some of his best ideas–and his ideas are great–are rooted in his military experience.

The tradition of serving in the military before becoming a philosopher goes back, as I mentioned, to Ancient Greece, where Socrates himself served as a hoplite. Socrates went on to become the father of Western philosophy.

When I taught philosophy at the University of North Florida, I often had former military people as students. They were wonderful.

I was a blogger for the Florida Student Philosophy Blog, too, and recall reading an article about thinking and the military. The military, it was said, is not a place for thinking.

I want to argue we need philosopher-soldiers in the military. While it may seem that a highly organized structure, where people merely take commands, is a great way to win, I believe in this century, to make a lasting and incredible impact, we need thinking soldiers.

The military, it has been argued, shouldn’t be a place to think. After all, thinking can get us in trouble. Think of Chelsea Manning, who did think, and released classified materials to the world. However, in a military where minds like Manning’s are not shunned but are the norm, the ideas those minds produce can aid in winning.

In order to get philosopher-soldiers, we need to apply these ancient ideas to the soldier. We need, in short, courses in philosophy for soldiers, taught using the Socratic method.

The Socratic teaching method is ideal because it encourages individuals to think, and to think for themselves. No weapon we currently have can replace an active and imaginative mind. We need soldiers skilled in, at a minimum, informal logic, basic argumentation, analyzing evidence, and recognizing cognitive biases.

There is no need for the United States to be afraid of developing soldiers in this way. The mind, when dedicated to the truth of things, is always a winner.


My Rose-Colored Glasses: The Future of AI

Over the past few months, I have made contact with several experts in AI, including programmers and philosophers. I’ve come to these individuals with questions about the future of AI.

I initially became interested in AI because of universal basic income theory. Some argue that AI has taken, and will continue to take, jobs away from humans, and thus a universal basic income will become a necessity. I then became independently interested in the topic.

There are those who argue that AI will never become fully intelligent. They argue, for instance, that it’s simply not possible for an AI to pass the Turing Test.

Then, there are those who argue that AI will indeed become intelligent–and will take over the world!

I’m in neither of these camps. I favor a universal basic income for reasons independent of AI and its progression. I think, for example, it’s simply time in human history to try this new policy. Being a slave to wage labor is, in my thinking, old-school and barbaric. I think we will eventually get there, too.

It’s with the same optimism that I approach AI. I don’t envision a dystopian future, filled with killer robots. I see a world where AI can develop fully and become intelligent in some of the best senses of the word. In fact, I look forward to new bot-friends who can tell me about their new theories of ethics and political philosophy!

My view may be a minority view. If science fiction tells us anything, I should probably expect something more sinister. But I don’t. Of course, there will be bots built for different purposes, and some of them may kill. But one purpose, one need that must be filled, is the human need for connection. And one way of connecting is to have enjoyable, intellectual conversations. So, I look forward to a bot who can be my friend, in an almost Aristotelian way.