Synthetic media and deep fakes


Russian media post a picture of a man in a Ukrainian T-shirt attacking Vladimir Putin, who counterpunches his attacker. The heroic image of the Russian president goes viral.

Donald Trump gets wrestled to the ground by New York City police in a video that also goes viral.

An audio recording circulates of Ukrainian president Zelensky, off camera, telling French president Macron that his country is running out of ammunition and that he is ready to give up Crimea and the Donbas for peace. The audio goes viral.

Xi Jinping is caught on video during his visit to Russia telling Putin to stay the course. The global pressure on Putin to back out of Ukraine will be off by this fall, Xi says, when China invades Taiwan. Diplomatic channels burn red hot.

In a hidden audio recording from a Washington reception, President Biden is heard asking his son Hunter how much money he was really able to squirrel away during his dealings with the Ukrainian gas company Burisma. Tucker Carlson goes vertical with the "news."

A mother gets a frantic but pitch-perfect call from her teenage daughter, crying that she has been kidnapped, along with a texted photo showing the girl terrified. A voice says she will be drugged and, unless a ransom is paid without police involvement, taken to Mexico and trafficked. The mother would know the tone and timbre of her daughter's voice anywhere.

All are examples of "deep fakes." Two actually happened. To be clear, these deep fakes ("synthetic media" in tech parlance) are examples of extreme, nefarious uses of artificial intelligence that pose serious threats to the political integrity and national security of nations.

This is not intended to be another rant about the darkly lurking potential of artificial intelligence (AI). There is much potential upside to AI; medicine comes to mind. Deep fakes are just one facet of this snowballing algorithmic colossus, and they can be innocent, playful and humorous. A mother might be texted a picture of her son with a distorted, crazed look and spiraling eyes, captioned, "Ha! This is what I looked like after my first-semester college final exams."

But the perverse use of this technology can be distressing, deceptively believable, and almost impossible to stop, given that the tools to fabricate deep fakes are now openly available to anyone, including bad actors.

There is an argument that the threats from deep fakes emanate not from the technology but from people's natural inclination to believe what they see and hear. That is a hard argument to accept when the images and audio are so convincing that only highly trained experts can spot the marks of forgery. Whether the argument speaks to the general gullibility of mankind or to our appetite for sensational innuendo is almost beside the point. It reminds me a bit of the argument that guns don't kill people, people kill people. Even the biggest proponents of artificial intelligence stress the importance of rules and guardrails as the technology evolves.

The European Union (EU) is one of the first jurisdictions outside of China to propose a set of wide-ranging rules governing the use of artificial intelligence, especially its untoward uses. As reported recently in TIME Magazine, the rules and restrictions in the proposed EU Artificial Intelligence Act would ban efforts to rank or categorize citizens by their behavior or to identify them through facial recognition. If passed, these rules might well become a standard for much of the world. It's like California, with its huge population, setting electric vehicle requirements that force car companies to acquiesce given the size of the market. That market power is why companies like Apple and Microsoft are lobbying so hard against the pending EU rules.

Deep fakes can be funny, but they can also cause riots, start wars, or lead people to believe they can inject Clorox to fight viruses. How prone are we to deep fakes? Dial your time machine back to the eve of Halloween, 1938.

Orson Welles, a budding and talented actor, performed a radio adaptation of H.G. Wells' "The War of the Worlds." Over his Mercury Theatre on the Air, dramatic news bulletins went out describing a Martian invasion under way in New Jersey. The fictional bulletins led millions of listeners to believe the invasion was real, causing nationwide hysteria and panic. According to Smithsonian Magazine, the next morning Welles heard reports of mass stampedes, of suicides, and of angry listeners who had come to their "senses" and were threatening to shoot him on sight. The incident not only threatened Welles and his career; it passed into history as a real-life reminder of how gullible and susceptible we can be.

In an article that appeared through the Council on Foreign Relations' Digital and Cyberspace Policy Program, Robert Chesney of the University of Texas Law School and Danielle K. Citron of the University of Maryland Law School write: "Disinformation and distrust online are set to take a turn for the worse. Rapid advances in deep-learning algorithms to synthesize video and audio content have made possible the production of 'deep fakes' — highly realistic and difficult-to-detect depictions of real people doing or saying things they never said or did… The array of potential harms that deep fakes could entail is stunning. A well-timed and thoughtfully scripted deep fake or series of deep fakes could tip an election, spark violence in a city primed for civil unrest, bolster insurgent narratives about an enemy's supposed atrocities, or exacerbate political divisions in a society… The arrival of deep fakes has frightening implications for foreign affairs and national security."

In my view, developing antidotes to this plague of deep fakes must become a high policy priority for this country. It won't be easy. The technology is a fast-evolving target, and the tools for creating deep fakes keep getting better and easier to use. Chesney and Citron put it this way: "Deep fakes are a profoundly serious problem for democratic governments and the world order. The prospect of a comprehensive technical solution is limited for the time being, as are the options for legal or regulatory responses to deep fakes. The challenges of mitigating the threat of deep fakes are real, but that does not mean the situation is hopeless. A combination of technical, legislative and personal solutions can help stem the problem."

Bill Sims is a Hillsboro resident, retired president of the Denver Council on Foreign Relations, an author and runs a small farm in Berrysville with his wife. He is a former educator, executive and foundation president.
