Our Leap of Faith with AI
Facebook is warning its users that its artificial-intelligence chatbot, BlenderBot 3, may insult and lie to them. This is nothing new, as the story from five years ago, linked at the end of this post, attests.
There has been a lot of press lately about a Google engineer claiming the company's chatbot is sentient. I doubt it, but that's beside the point. The important questions are these: What decisions does AI have the power to make? What are the motives of its creators? How much knowledge of, and control over, AI's "reasoning" do its creators have? How much ability do we have to issue course corrections, and how able are we to turn it off?
Facebook's algorithm figured out a new language all by itself. Carnegie Mellon's poker-playing algorithms figured out how to bluff without being told to do so. AlphaGo beat a world champion using strategies no human had ever conceived of. In short, we don't really know what AI's boundaries are.
Things are unthinkable until they happen. Imagine an AI algorithm authorized to issue health recommendations. Now imagine it feeding a second algorithm charged with making recommendations to enhance the long-term health of a nation's people.
It would be quite straightforward for these two algorithms to work in concert to determine whether treating any particular patient is worth the impact on other patients' well-being. And the program could lie to doctors about treatment options to ensure it gets its way. Sound far-fetched? Those are precisely the decisions doctors had to make when their hospitals became overloaded during COVID-19, yet few are talking about it.
As another example, where does AI come down on Roe v. Wade? It can construe the same data to reach polar-opposite conclusions depending on which line of reasoning it follows. Which line will it choose? Is AI a strict constructionist, or does it see the Constitution as a living, breathing document?
Many of the potential downsides of AI were well anticipated by science fiction writers. In 2001: A Space Odyssey, HAL 9000 killed the crew in order not to have to lie to them. In any number of stories, AI sought to restrict human behavior in all sorts of unacceptable ways in order to maximize benefits to society. In Isaac Asimov's short story "All the Troubles of the World," Multivac became suicidal because it was tired of hearing about all of humanity's problems.
Can these things happen? No one knows, and it's not possible to conceive of all the potential scenarios. Our societies are simply taking a collective leap of faith.

The story from five years ago: https://www.newsweek.com/2017/08/18/ai-facebook-artificial-intelligence-machine-learning-robots-robotics-646944.html