by Larry Magid

This post is adapted from one that first appeared in the Mercury News

We’re hearing a lot about the dangers of generative AI like ChatGPT, the new Microsoft Bing and Google Bard. For example, Geoffrey Hinton, who is often referred to as the “godfather of AI,” recently quit his job at Google so he could speak freely about the risks of AI being developed by Google and other companies.

He told PBS NewsHour that he worries about “the risk of super intelligent AI taking over control from people.” Numerous tech experts, including Elon Musk, have called for a pause in the development of powerful generative AI models while we explore their risks and ways to make them safer.

There’s reason for caution. Left unchecked and unregulated, AI could do all sorts of harm, including evolving in ways that hurt humans and, more likely, empowering bad actors to perpetrate scams and other crimes, generate and disseminate disinformation, defame people, spew hate, plan insurrections and so much more. It can also be biased and can put lots of people out of work. But along with the possible risks come some safety and security benefits.

A little help from ChatGPT

In the spirit of full disclosure, I had a little help with this column by asking ChatGPT two questions. One was “how can generative AI make you safer?” and the other was “how can generative AI make social media safer?” I asked the second question based on conversations I’ve had recently with social media safety experts who are bullish on how the technology can be used to help moderate their services. Except for what’s in quotes, the words in this column are my own, as are most of the suggestions. ChatGPT was helpful, and I make no apologies for taking advantage of what, for me, has become a research tool and writing aid, much like online search engines and spelling and grammar checkers.

How AI can make us safer

When it comes to security, ChatGPT correctly pointed out that “Generative AI can identify vulnerabilities, strengthen passwords, and develop countermeasures against cyber threats, thereby improving the overall security of networks and systems.”

It can also “analyze video surveillance data to identify potential threats,” although there are obvious privacy implications there.

AI is already being used to analyze patterns and behaviors to spot “anomalies, such as unusual transactions, to identify potential fraud.” It can help predict natural disasters and speed recovery from them, and in health care it can monitor patients, make personalized health recommendations, and analyze images and other test results to better diagnose conditions and recommend treatments.

A leading security expert, who requested not to be quoted by name, told me, “AI will turbocharge pre-existing trends. Bad actors will come up with new vulnerabilities, but the tools benefit the defenders just as much and maybe more.”

It’s also already being used to make vehicles safer. Despite its limitations and misleading name, Tesla’s so-called “Full Self-Driving” uses AI to predict and avoid accidents, and Elon Musk, who has expressed concerns about the dangers of AI, recently said that Tesla plans to use “end-to-end AI” to improve vehicle autonomy and safety.

There is enormous potential in health care, ranging from diagnostic tools for professionals to consumer education. I’ve used ChatGPT, Bard and Bing to learn about medical conditions and got some good information, though I wouldn’t make medical decisions based on it without first consulting a doctor.

And, as ChatGPT reminded me, “Generative AI can be used to create smart home security systems that learn occupants’ routines and detect unusual activities, alerting homeowners to potential threats.” Of course, it could also be used by criminals to predict when homeowners will be away and better plan their crimes.

Protecting children and adults from malicious actors

Social media companies are using AI to detect and remove spam, identify misinformation, and help prevent users from being targeted by malicious actors. Industry insiders I’ve spoken with are optimistic that advances in AI will greatly improve their ability to police their services.

AI can also be used to protect users from exposure to harmful and inappropriate content. It can flag hate speech and cyberbullying, and it can be trained to understand a user’s particular triggers and other vulnerabilities so it avoids exposing them to things that could harm or upset them. And because it can know who is using it, it could prevent children from seeing sexual or violent content while allowing access for adults.

It can also be used to identify and help remove illegal content, such as child sexual abuse images or content that encourages self-harm.

It might also be used to predict harmful behavior, such as analyzing social media posts to predict who might inflict harm on others or themselves, although this, too, raises some privacy concerns as well as the risk of profiling.

Having said that, generative AI can also be used to create dangerous or inappropriate content, including virtual child abuse images, spam, hate speech, doctored photos, videos and audio, and other forms of misinformation. These risks have become a major topic at gatherings of child safety experts.

Double-edged sword

AI is a double-edged sword, but so are almost all technologies, including the wheel, fire, kitchen knives, automobiles and even area rugs that create trip hazards. On balance, I’m excited about the way AI can improve our lives and make us safer and healthier. But that doesn’t keep me from worrying about how it can be misused.

As we venture into the next paradigm shift in computing and the acquisition of knowledge, we need to be cautious, but we also need to avoid moral panics.

Disclosure: Larry Magid is CEO of ConnectSafely.org, a nonprofit internet safety organization that receives financial support from tech companies that employ generative AI.
