by Larry Magid
This post first appeared in the Mercury News
In my capacity as CEO of ConnectSafely, I’m working on a parents’ guide to generative AI, and, naturally, I turned to ChatGPT for some help. It gave me some good advice, which I’ll get to later, but first, some general background on “generative artificial intelligence” (GAI).
AI has been around for a long time, but generative AI, which can create new content, including text, images, music and even computer code, is relatively new. Just in the past few months, we’ve seen the emergence of some impressive early models, including ChatGPT from OpenAI, Google Bard and the new Microsoft Bing. Each of these relies on what is called a “large language model” that accesses and analyzes vast amounts of data that it finds online and uses it to generate new content. Microsoft is a major investor in OpenAI, whose technology is used in Microsoft’s Bing AI product.
Answers questions, writes poems, plans vacations
GAI systems both speak and understand natural language. You can ask them a question or have them perform a task the same way you would speak to a person. For example, you could ask, “What is the capital of France?” but you can also say, “Write me a poem about a boy with curly hair.” You can get quite specific, such as “write me a story about a Jewish girl from China and her friend from Mexico,” and you can even have it write songs about specific people. You can also use services like ChatGPT to plan a vacation. I just asked it to plan a road trip between Las Vegas and Rimrock, Arizona, and it gave me a very detailed itinerary. I then asked it to compare a couple of different attractions and was impressed with the level of detail.
These services can also write essays, which raises issues for educators who worry about students using them rather than doing their own writing.
‘Garbage in, garbage out’
My main concern with these services is that they don’t always cite their sources, and they do make mistakes. When I first asked it about myself, I found several mistakes, which others might not notice. It correctly listed several publications I’ve written for but added in a couple that were not correct. At one point, it had me winning an Emmy, which, sadly, never happened. The newest version of ChatGPT no longer makes that mistake, but it still erroneously thinks I used to write for the Wall Street Journal.
What I’ve found are mostly innocent and minor errors, but there is the danger of misinformation and even deliberate disinformation. These models don’t currently verify the accuracy of information but merely state “facts” based on the information they find. As they say in computer science, “garbage in, garbage out.” There is the risk of these systems regurgitating false information, which could lead to dangerous outcomes.
Using ChatGPT to help write a safety guide
I’m almost embarrassed to admit that it did an excellent job providing useful information for parents. The response was so good that I was tempted to simply copy and paste it. Somehow that seems wrong, though ChatGPT itself said it’s OK when I asked, “Can I use content from ChatGPT as if I wrote it?” It told me that “it’s generally acceptable to treat it as your own output, as long as you have properly interacted with the AI and provided input to guide the generation.” It may be OK with the owners of OpenAI, but it’s not OK with me. Even though it’s a machine, it still feels like plagiarism or, at the very least, dishonesty. As a journalist, I frequently quote sources, but I also cite them. I have a feeling that many educators would object to their students passing off ChatGPT content as their own, and I suspect my editors would feel the same way since they’re paying me to provide original content.
So, instead of plagiarizing ChatGPT, I’m going to quote it here, as if I were reporting on an interview with an expert. It’s very common for me and other journalists to rely on experts, but it’s unethical not to cite our sources.
Advice for parents
The service said that parents should start by understanding the basics of generative AI and then discuss the pros and cons with their children, adding “it’s essential to have open discussions with your children about the benefits and potential risks associated with generative AI. This includes understanding how it can be used creatively, as well as the ethical concerns, such as deepfakes or misinformation.”
It also says to “teach critical thinking and media literacy,” which includes encouraging your children “to question the authenticity of the content they encounter online. Teach them to look for reliable sources, verify information, and be aware of the potential for AI-generated content, like deepfakes, that may be misleading or deceptive.” As someone who has been advising parents about online safety for decades, I couldn’t agree more.
Just as we say at ConnectSafely, ChatGPT advises parents to “monitor your child’s online activities” and “stay informed about the platforms, apps, and websites your children are using. Many of them may incorporate generative AI technologies. Keep an open line of communication and discuss any concerns or questions they may have.”
I also agree with ChatGPT that parents should “educate your children about the importance of protecting their personal information and maintaining strong privacy settings on the platforms they use.” It pointed out that “Some generative AI technologies can be exploited to gather personal information or create targeted content based on their preferences.”
Like a good teacher, ChatGPT encourages “creativity and exploration,” advising parents to “encourage your children to explore AI-powered tools and resources that help them develop their skills and express their creativity in a safe and responsible manner.”
Finally, it tells parents “it’s crucial to stay updated on the latest developments in generative AI and related technologies. Regularly research and engage with reputable sources to better understand the evolving digital landscape and make informed decisions for your family.”
As you can tell, I’m very impressed with how well ChatGPT and other GAI systems work. They are very powerful and will only get more powerful. But, as “Uncle Ben” from Spider-Man said, “with great power comes great responsibility,” and that goes for those who develop and use these powerful technologies. We are in the very early stages of a technology that is likely to impact knowledge and creativity as much as automobiles impacted transportation. And, like autos, there is the risk of bad things happening. I don’t know if Henry Ford put much thought into the unintended consequences of mass-producing cars, but I sure hope that the AI community, along with the public and regulators, do all they can to minimize risks.
Larry Magid is a tech journalist and internet safety activist.