Is AI a curse or a blessing?


I’ve written about this question before, but this fast-moving technology is significantly affecting the future of mankind, so I think it deserves repeated attention.

The firing and rehiring of OpenAI CEO Sam Altman has saturated the news headlines recently. First, he’s out and Emmett Shear is in; then Shear’s out and Altman’s back in. But who’s on first and who’s on third at this point doesn’t matter as much as the reasons behind this shake-up and how it resonates in the tech world and downstream in the domestic and global political economy, and even more so, perhaps, in national security.

Let me read between the news lines for a moment. Despite what corporate spokespersons and soothsayers will say to rationalize and quiet roiling waters, this upheaval in the nation’s leading artificial intelligence operation reveals a growing schism among AI developers.

Put simply, the schism goes something like this. Do AI creators accelerate their work to maximize AI technological leadership, maximize opportunities to monetize AI in the marketplace and lead and stay ahead of the world in the creation race? Or, as the more cautious developers have argued recently, and quite resolutely, do we proceed with determination, but cautiously, because untethered AI development can have potentially dangerous consequences with serious risks to the future of mankind?

This topic is enormously complex, so there are also risks in trying to simplify it. That being said, let me give a couple of illustrative examples, local as well as international.

Cincinnati’s WLWT (channel 5) did a recent in-depth story about juvenile violence in the Cincinnati area titled “Kids Who Kill.” Of course, much of what was said in this story sounds all too familiar: kids who feel unsafe in their own neighborhoods, kids who join gangs for identity and protection, kids who are both predators and victims. But the administrative judge of juvenile court, Kari Blume, made a striking comment at the end of the story. She said, “Social media was 100% fueling these shootings and making our youth violence worse.”

Let’s be blunt. Social media has infected the socio-economic culture of America, and the rest of the world. The cover of this week’s New Yorker Magazine says it all: a painting of an American family sitting around a lovely and lavish Thanksgiving table, everyone looking at and busy on their cell phones. How sad is that?

Concern over the causal effects of social media may seem like a weak case against high-tech artificial intelligence, but even low-grade technology, when not properly regulated, can have serious consequences, as this story illustrates.

Let’s go global. Much is being written today about the use of drones in warfare. The tactical developments unfolding in Ukraine are evidence of how drones have become one of the battlefield’s most useful weapons. It’s one thing to have high-tech drones that fly under the supervision of pilots who may be near or far from the battlefield or actual place of attack, but what about higher-tech drones under development that could have the ability to self-determine whether to fire and kill, deciding who, what, where and when to be lethal? High tech making moral and ethical determinations? War crimes for robots?

When it comes to strategic matters like the above example, democratic morals and ethics may be challenged by autocratic regimes that have little or no scruples when it comes to such things. Which means that we could self-regulate ourselves into second place. Russia, China, Iran or any of many terrorist groups may have no qualms about machines making kill decisions, guided by their own moral practices.

Again, there’s a risk in oversimplification but these examples illustrate why a schism is developing among tech developers. OpenAI’s ChatGPT is a somewhat benign example of a more sophisticated type of generative AI, but the point is that artificial intelligence can evolve to the point where it may supplant the human workforce, where it may take over decision making in critical economic or national security matters, where it could evolve to be dangerous without proper precautions.

A recent article in the respected publication The Economist put it this way: “Rapid progress in AI is arousing fear as well as excitement. How worried should we be? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart and replace us? Should we risk loss of control of our civilization?”

The Center for AI Safety, a non-profit, released an open letter not too long ago signed by more than 350 executives, researchers and engineers working on AI in which the following sentence appeared: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” Sam Altman, the recently fired and rehired CEO of OpenAI, was one of the signatories of that letter, which brings me back to this schism within the AI development community. Such a division among scientists, developers and CEOs demands the attention of everyday Americans, because it’s of paramount importance to our future. Barrel ahead, or build in the necessary controls?

Kevin Roose of the New York Times wrote of the letter signed by the above signatories: “A group of industry leaders warned on Tuesday that the artificial intelligence technology they were building might one day pose an existential threat to humanity and should be considered a societal risk on a par with pandemics and nuclear wars.”

Whether artificial intelligence is a curse or a blessing may not be a simple binary choice. It’s more a question, as with nuclear power, of how to use it responsibly. It is the responsibility of all of us Americans, not just technologists, to decide what the moral and ethical boundaries should be. It’s a call to be well educated about artificial intelligence and all its complexities.

For us humanoids it boils down to this homework assignment.

Learn to live.

Bill Sims is a Hillsboro resident, retired president of the Denver Council on Foreign Relations, an author and runs a small farm in Berrysville with his wife. He is a former educator, executive and foundation president.
