In Greek mythology, the “Sirens” were three enchanting sisters who lured men to their deaths with their songs. An interesting preface to this story is that the god Zeus created these friendly Sirens to be playmates for his daughter. But as the myth advances and Hades kidnaps Zeus’s daughter, the Sirens eventually turn evil: a progression from promise to fear.
This parable is a universal warning that reappears in many cultural renditions. With respect to artificial intelligence (AI), let me put it this way: “That which is alluringly attractive can also be dangerous.”
Then there is the Arabian parable of the genie, and what it means when a gifting genie is let out of the bottle; to wit, something has occurred that can’t be stopped or undone, and that will inevitably bring ongoing consequences.
These myth metaphors come to mind as I think of the progression of technology. It’s not a momentary thing for me. I’ve worried about the dark side of technology for years, with more than a little trepidation. It’s so wonderful, until it’s not.
The recent scientific news that we may be approaching an artificial intelligence event technologists call “The Singularity” deepens my fears. It’s the point where AI outsmarts human intelligence, with all its proximate ramifications. It almost makes me want to visit an Amish cathedral. So what is the “Singularity”? It’s a term borrowed from astrophysicists, for whom it describes a black hole reducing matter to infinite density. But AI technologists use the term to define the moment in time, the tipping point, “when artificial intelligence exceeds beyond human control and rapidly transforms society.”
Stanley Kubrick may have gotten it right way back in 1968 with “2001: A Space Odyssey”. A tense showdown between man and machine occurs when a spacecraft’s computer overrides the astronaut captain and takes over the ship.
An article last year in Forbes Magazine was titled, “Don’t Worry About AI Singularity: The Tipping Point is Already Here.”
A paper published by the California Institute of Technology’s Caltech Science Exchange put it this way: “As AI systems grow more sophisticated, they may become better at translating capabilities to different situations the way humans can. This would mean the creation of ‘artificial general intelligence’ or ‘true artificial intelligence,’ a primary goal among some researchers. Theoretically, this could result in artificial intelligence that transcends human intelligence. The term ‘singularity’ is sometimes used to describe a situation in which an AI system develops agency and grows beyond human ability to control it.” In other words, the genie’s out.
MIT’s Sloan Management Review posits the issue this way: “Is the convergence between artificial and human intelligence, which once seemed like just a gleam in the eyes of computer scientists and science fiction authors, almost upon us? And if robots become as clever as we are, how will the role of managers change?”
Columbia University’s business professor Bernd Schmitt answers that in the workplace of the future, “Humans may supplement the skills of machines — and not the other way around.” More about this in a moment.
This past week, a company known as Translated announced its prediction that technological singularity will occur before the end of this decade. Its projection is based on language: the efficacy of AI-corrected translations versus professional human translations. Its basis is similar to the famous “Turing Test” of machine intelligence, in which a machine passes if a human being is unable to distinguish its replies from another human being’s when identical questions are put to both.
You may have heard of IBM’s AI tool known as “Watson,” a question-answering, problem-solving tool for businesses. Think of companies in the future with their own Watsons, or Samsons, or Zeuses, now more thoughtful than humans. These corporate algorithmic machines would know more about management issues like pricing, dividends, marketing, hiring, mergers and acquisitions, economic trends, and innovation than “old-fashioned” human beings. For example, ExxonMobil’s machine would know best where and when to drill, when to make the shift to renewables, and when to fire unneeded human executives who were no longer adding value. In other words, Dr. Schmitt’s prophecy.
Technology moves so fast that it’s upon us before we realize how it’s affecting us. Oops, chatbots (like ChatGPT) are upon us. How should teachers deal with them? Machines are taking our jobs. What does it mean for white-collar and blue-collar workers? Maybe human endeavors will shift to the trades.
Time magazine wrote recently (Jan. 30 to Feb. 6) focusing on CEO Demis Hassabis of DeepMind, a subsidiary of Google’s parent company, Alphabet Inc.: “He and his colleagues have been working toward the grand ambition of creating artificial general intelligence by building machines that can think, learn and be set to solve humanity’s toughest problems.”
Reporting further, Time noted that these wizards of AI also “grapple with near-term questions like what to do when an AI has the potential to be commandeered by rogue states… and Hassabis warns that AI is now on the cusp of being able to make tools that could be deeply damaging to human civilization, urging his competitors to proceed with more caution. Worse still, he points out that we are the guinea pigs.”
Hassabis’ warning may be likened to the proverbial case of the exasperated captain who worried about icebergs yet went down with his gashed ship; or to facing the consequences once the genie is out of the bottle; or to the fate of all of us who, naively lured by the fascination of technological toys, now face dangers uncontrollable and irreversible.
Bill Sims is a Hillsboro resident, retired president of the Denver Council on Foreign Relations, and an author who runs a small farm in Berrysville with his wife. He is a former educator, executive and foundation president.