
by Larry Magid

Before I get to the potentially deadly serious part of today’s column, I’d like to start on the lighter side. Lighter, that is, unless you happen to be attorney Steven A. Schwartz.

In representing a man named Roberto Mata, who said he was injured aboard an Avianca flight, Schwartz reportedly filed a 10-page legal document, citing previous cases, including Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines. Just to be sure, the lawyer asked ChatGPT to verify that the cases were real. It said that they were.

Not surprisingly, Avianca’s lawyers, along with the judge, did their own research but couldn’t find references to the cases cited by Schwartz. As it turned out, Schwartz, a veteran attorney, had used ChatGPT for his legal research, which resulted in citations to cases that never existed. Schwartz later told the court that it was the first time he had used ChatGPT and that he was “therefore unaware of the possibility that its content could be false.”

Fortunately, the opposing counsel and the judge found the errors before anything irreversible occurred. I don’t know the ultimate outcome of Mata v. Avianca, but I trust the verdict will be based on fact rather than fiction.

AI chat makes mistakes

Schwartz learned what I and millions of other users of generative AI already know. These chatbots can be very useful, but they can also make up information that seems to be true but isn’t. I occasionally use ChatGPT to find information, but I always verify it before quoting it or relying on it. In my experience, almost everything it creates appears to be true because it reaches logical conclusions based on the information it has access to. But just because something appears to be logical doesn’t mean it’s true. Because I have written for several of America’s leading newspapers, it is “logical” to conclude that I have written for the Wall Street Journal and USA Today, as ChatGPT sometimes says. But I haven’t.

I don’t know if OpenAI, the company behind ChatGPT, has issued an advisory for lawyers, but it has published Educator Considerations for ChatGPT, which in part says that “it may fabricate source names, direct quotations, citations, and other details.”

Existential risk

And now for the more serious news story about generative AI. You might have heard about the statement organized by the Center for AI Safety and signed by a large cohort of AI scientists and other leading figures in the field, including OpenAI CEO Sam Altman, Ilya Sutskever, OpenAI’s chief scientist, and Lila Ibrahim, COO of Google DeepMind.

These experts, many with a vested interest in developing and promulgating generative AI, agree that the risk is real and that governments need to consider ways to regulate and rein in the very industry they are part of. The statement is only 22 words, but it is still quite chilling: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The Center for AI Safety pulls no punches. In its risk statement, it acknowledges that “AI has many beneficial applications,” yet “it can also be used to perpetuate bias, power autonomous weapons, promote misinformation, and conduct cyberattacks. Even as AI systems are used with human involvement, AI agents are increasingly able to act autonomously to cause harm.” Looking to the future, these experts warn that “when AI becomes more advanced, it could eventually pose catastrophic or existential risks.”

We live with other existential risks

Bert the Turtle taught children to “duck and cover.”

As a society, we’ve become used to hearing about existential risks. I was in elementary school during the “duck and cover” drills of the 1950s and 1960s, where we practiced ducking under school desks as if that would actually protect us from a nuclear strike. If you need evidence, search for “Bert the Turtle” to view cartoons the government was using to convince children to “duck and cover.”

COVID panic is behind us, but it was an example of a very real threat contributing to the deaths of nearly 7 million people, according to the World Health Organization. Even if COVID remains under control through vaccinations, masking and drugs like Paxlovid, pandemics remain a serious risk. Although we are no longer ducking under our desks, we are hearing renewed warnings about the use of nuclear weapons.

And the folks from the Center for AI Safety didn’t even mention climate change, which is on the minds of many young people who worry whether Earth will still be habitable for people and other living things by the time they reach old age.

I worry about all of these things, and I hate being told to add generative AI to the list of things that might destroy us. But I also have confidence that these problems are fixable, or at least controllable in ways that can avoid catastrophic outcomes.

A note of optimism

We can’t eliminate risks completely, but if we come together on a global basis, we can minimize them or learn to live with them. That requires a combination of efforts, including regulation, industry cooperation, technological solutions and buy-in from the general public. It also requires distinguishing between facts and conspiracy theories and focusing on real solutions.

Almost everyone in the AI community agrees with OpenAI CEO Sam Altman that governments have an important role to play in regulation. Speaking before a U.S. Senate committee hearing last month, Altman said, “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. … We want to work with the government to prevent that from happening.”

Like the early days of the automobile

In some ways, today’s AI is like the early days of the industrial revolution, which changed the nature of work and had an impact on our safety. An article in the Detroit News summarized the state of affairs during the period when automobiles were first introduced to American streets: “In the first decade of the 20th century, there were no stop signs, warning signs, traffic lights, traffic cops, driver’s education, lane lines, street lighting, brake lights, driver’s licenses or posted speed limits.”

When it comes to generative AI, we need warning signs, traffic lights, traffic cops, driver’s education and many other safeguards.

I’m glad to see leaders of the AI industry and many in government taking the risks seriously. Properly managed, AI can make the world a better and safer place. It can power incredible medical breakthroughs, help vastly reduce traffic deaths and empower creative people to be even more creative. But like other technologies, including fire, cars, kitchen knives and pharmaceuticals, it can also do harm if it is misused.

I’m both an optimist and a realist. The realist in me tells me that AI is here to stay and that there will be downsides to it. The optimist in me draws on decades of dealing with risks and the confidence that things will be OK as long as we make the right decisions.

Larry Magid is CEO of ConnectSafely.org, a nonprofit internet safety organization that receives financial support from companies that employ or are experimenting with generative AI.

