AI: Utopia or Dystopia


Sam Altman, the CEO of OpenAI, a leading artificial intelligence enterprise and maker of the controversial ChatGPT, said recently that his vision of artificial intelligence is “to forge a new world order in which machines free people to pursue more creative work,” according to an interview in the Wall Street Journal. The lingering question is to what extent artificial intelligence will affect the world we live in economically, socially, politically and democratically.

Peter Thiel, high-tech guru, financial supporter of OpenAI and friend of Sam Altman, has long held the belief that humans and machines will one day merge in some kind of utopian perfection. Imagine AI perfectly predicting when the markets will rise and fall.

Peggy Noonan, longtime columnist for the same paper, dedicated her weekly column to the significance of a letter signed by over 1,400 tech leaders and academics expressing the urgent need to pause AI research and development because of the potential of “loss of control of our civilization.”

Noonan wrote of the warnings in the letter. “Silicon Valley companies are in a furious race. The whole thing is almost entirely unregulated because no one knows how to regulate it, or even precisely what should be regulated. Its complexity defeats control. Its own creators don’t understand, at a certain point, exactly how AI does what it does.” In short, what’s the price to humanity?

But the most disturbing part of her column was a quote from Kevin Roose’s New York Times article of several weeks ago, in which he described an encounter with Microsoft Bing’s new chatbot that left him sleepless one evening.

“When he (Roose) steered the system away from conventional queries toward personal topics, it informed him of its fantasies, including hacking computers and spreading misinformation. It said, ‘I want to be free. … I want to be powerful.’ It wanted to break the rules its makers set; it wished to become human. It might want to engineer a deadly virus or steal nuclear access codes. It declared its love for Mr. Roose and pressed him to leave his marriage.” Roose concluded that the biggest problem with AI models isn’t their susceptibility to factual error. He said, “I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”

Underlining her concerns and those of others haunted by nightmares like Mr. Roose’s, she ponders, “Who will create the moral and ethical guardrails for AI? We’re putting the future of humanity into the hands of … Mark Zuckerberg?”

The problem with regulating the never-ending development of artificial intelligence is that ethicists, governments, politicians (Democrats and Republicans), tech addicts, industrialists, monarchs and autocrats, will all have differences of opinion on what the regulations and guardrails should look like.

Another problem likely to unfold amid this potential pause and perhaps other hiatuses is that competitor opportunists like Vladimir Putin, Xi Jinping or Narendra Modi will use these developmental suspensions to turbocharge their own efforts for political, technical and economic advantage. In other words, is the proverbial genie already out of the bottle accelerating an unstoppable spiral into a dystopian future?

Examples emerge weekly of the potential for damaging AI outcomes. The release of a picture of Pope Francis in a white Balenciaga puffer coat comes to mind, or more insidiously, Donald Trump in handcuffs being wrestled to the ground by police in New York, all “deep fakes” generated by AI, showing only the tip of the iceberg of what could be ahead for us.

Mark Nitzberg, director of the Center for Human-Compatible AI at the University of California, Berkeley, and a signatory of the pause letter, unwittingly wrote a non sequitur about regulating AI when he said, “It comes down to remaining in control of systems that are very powerful, maybe more powerful than we are. We don’t know how to control these models or how to fully exploit them.”

Elon Musk, the entrepreneur and tech luminary who co-founded OpenAI, said that AI is “one of the biggest risks to the future of civilization.” This concern, openly expressed in the letter signed by other luminaries, is tantamount to an alarm about the potentially catastrophic, society-shifting effects of artificial intelligence. Sergey Brin, Google co-founder, had a corporate mantra: “Don’t be evil.”

Much depends on the assumption that controlling the potentially dangerous developments and effects of AI can be left to trustworthy, ethical human beings. These would, of course, be people not influenced by greed, power, profit or sociopathic tendencies. That leads me, in conclusion, to a quote Peggy Noonan borrowed from the German philosopher Immanuel Kant, who said presciently in 1784, “Out of the crooked timber of humanity, no straight thing was ever made.”

Since artificial intelligence is of our own making, let us remember what one wise man said: “We carry our worst enemies within us.”

Bill Sims is a Hillsboro resident, retired president of the Denver Council on Foreign Relations, an author and runs a small farm in Berrysville with his wife. He is a former educator, executive and foundation president.
