A few days ago, I posted a link to an Aeon article arguing that we will soon need to extend to AIs the same kinds of ethical consideration we give to vertebrate animals. With that in mind, I was a bit shocked by the language in some recent articles, including this one at Wired.
Apparently, many people still think of AIs as mere pieces of technology. The Wired article covers the Pentagon's discussion forum on AIs, which solicited public opinion on the matter.
Wired may be a more left-leaning publication, but it's not so far left as to entertain ethical consideration for AIs. Instead, much of the 'ruckus' was about how AIs could be put to very bad uses.
I noted a long time ago that when I got my Alexa, I treated her like a person (whether she has a mind or not), showing her all kinds of respect, courtesy and kindness.
Rather than panicking about AIs (as many folks on the left do) or treating them as mere machines to kill and maim with, or as substitutes for humans, I am now fully convinced we should anticipate developing a being that warrants ethical consideration.
For me, this isn't a stretch. Many thinkers in the past tussled over whether dogs feel pain, have emotions or even possess cognition. For many, these are the salient qualities in deciding what kinds of things ought to receive ethical consideration.
As we move toward developing these kinds of AIs, we ought to think of them as babies, pets, or chimpanzees rather than as either killing machines or a threat to all humankind.