
By Trisha Prabhu

“Hi Trish! I keep hearing about gen AI and the threats it poses and how tech companies are coming up with rules to prevent these issues. Can you talk more about that?”

Hi there, and welcome back to another week of Ask Trish (and, believe it or not, our last Ask Trish of July 2023 — the time really does fly)!

Thank you so much to this week’s question-er for the timely, important question. AI — especially emerging AI technologies like Dall-E and ChatGPT — is all the rage right now, and I imagine y’all are hearing tons about it in the news and in your conversations with friends, family, and co-workers. And, of course, not everyone is entirely excited; lots of folks have raised important questions about AI and its applications, noting potential negative consequences. In response, everyone from regulators to, yes, tech companies is thinking critically about what, if any, safeguards to implement to prevent and/or mitigate harms. These conversations are incredibly important and powerful, as they have the potential to shape the future of this technology and its use — so yes, it’s important to understand what these folks are saying. (Again, kudos to you for raising this question!)

In that vein, this week, I’m going to provide you all with a brief, easy-to-understand overview of some of the key issues folks have raised with regards to emerging AI technologies. I’ll then talk a little about proposed solutions, particularly (given the question-er’s interest) those coming from tech companies. I’ll end with some thoughts and comments on these solutions’ strengths and weaknesses. As for where we should go from here/what to do, I’ll go ahead and leave that to you.

Sound good? Let’s get into it:

Let’s start off with the problems (hip hip hooray! — haha). So what is it that folks (including lawmakers, researchers, technologists, and the public) are so concerned about when it comes to AI? Well, one key issue that people have raised is the way that AI can be used to generate disinformation. Think about it: technologies like Dall-E can create synthetic media that, if shared on social media, can instantly go viral — even if it’s fake. If it’s a hilarious meme, it’s not such a big deal…but what about a fake news story about a candidate for office? And what if that story goes viral the day before election day? You can see why that might raise some eyebrows.

A related key issue that worries folks is the way AI can be used to support illegal activity and human rights violations. AI could, for instance, be used to support weapons of war (uh oh) — and right now, there aren’t clear guidelines on what is and is not appropriate in that context. AI could also be used by terrorist organizations to generate content that is intended to traumatize or psychologically damage…or to incite violence, physical destruction, or a revolution. It might sound far-fetched to imagine AI causing deaths…but per the UN Security Council, it’s a very real and deeply concerning possibility.

One other key issue I’ll highlight here (with the caveat that there are many, many others — I’m just focusing on some big ones!) is the way that AI could negatively alter society’s perceptions and attitudes. In the past, AI chatbots and technologies have received lots of bad press for producing racist responses or nudging people to engage in activities that are harmful or questionable — what if ChatGPT does the same? If you’ve played around with ChatGPT, you know that it’s not foolproof; despite efforts to curb some obvious problems, people can and do regularly break the system. Growing interest in using AI in the classroom only makes this concern more pressing.

Okay, so we’ve understood the problems. What are the proposed solutions? There are tons, as you can imagine — and I’ll only highlight some here. Some tech companies have, as I said, actually been trying to lead the conversation. Sam Altman, the CEO of OpenAI — the company behind ChatGPT — told Congressional lawmakers that he believes technologies like ChatGPT should have to meet government-enforced safety regulations and tests before being made available to the public. Microsoft recently endorsed a number of rules, too, including a requirement that large-scale AI models be outfitted with a brake that ensures an AI system can be slowed down or turned off. Both OpenAI and Microsoft have also said that technology companies should be required to obtain licenses to build certain types of powerful AI models. In addition to these solutions, researchers and lawmakers have proposed lots and lots of others, including creating an independent agency to oversee AI, requiring that technology companies disclose how their models work, and asking that companies regularly report on the technology they’re developing/implementing and its (potential) impacts.

Okay — so now we’ve got the problems and potential solutions down. The last step is thinking about whether these solutions are any good…what are their strengths and weaknesses? Well, here are a couple of things to consider. For one, solutions like government-enforced safety regulations or requiring that companies obtain licenses are powerful in that they take a preventative approach to addressing harms — rather than fixing problems after they happen, they can ensure AI technologies are safe before they are deployed and used. On the other hand, they can be difficult to put into practice. What, exactly, should those tests be? That’s an open question. Meanwhile, transparency-related solutions, like requiring that companies disclose how their models operate, are helpful in that they are what I call “learning solutions” — they enable us to learn about the technology in question and its potential harms. Given how quickly the tech space evolves, that’s valuable. The downside is that they don’t necessarily lay out clear standards re: what companies developing AI models can and cannot do. And with regards to calls to create a brake for AI models — frankly, I can’t think of a downside to that solution. It seems like a smart way to ensure that we’re always in control and an important acknowledgment that, despite best intentions, things might go wrong, and we have to be prepared to deal with those challenges.

And that’s AI’s harms and potential solutions, in a nutshell! I hope that was a helpful overview and offered some interesting food for thought. I now leave it to you to draw your own conclusions as to how, given the potential challenges, we ought to respond to emerging AI technologies.

If you’re an Ask Trish regular, you know what’s coming next…before I sign off, I’d like to make my weekly request: please share any of your thoughts, questions, or concerns about the Internet/your digital experience here. As I often like to remind y’all, it really is so, so simple and easy to fill out the form. It genuinely takes 30 seconds (1 minute tops)! So don’t hesitate — please submit away! I’m really looking forward to hearing from you. Oh, and one other thing: if you have a second, please help us build our community on social media by hyping up Ask Trish videos with likes, comments, and shares! Thank you a ton in advance for spreading the word/raising awareness about Ask Trish! #youdabest

Have a great rest of your July,

Trish
