by Larry Magid

This post first appeared in the Mercury News.

I’m on my way to Washington, D.C., for the Family Online Safety Institute’s (FOSI) annual conference. I have attended almost all of these events since they started in 2007, usually as a speaker, in my role as CEO of ConnectSafely.

Simply looking at the event’s agenda gives us an idea of what concerns internet safety advocates, technology executives and government policymakers.

The first session, “Online Safety in an AI World,” highlights the concern around artificial intelligence. AI can be incredibly beneficial. Along with speech recognition, it powers home digital assistants like Amazon Echo, Google Home and Apple Siri. It enables companies like Santa Cruz-based Full Power Technologies to analyze your sleep patterns, using a net-connected device under your mattress, and make recommendations. AI is the driving force behind self-driving cars and a crucial component of face and voice recognition, which can help make us safer and more secure. Indeed, a paper to be presented at the conference points out that AI can be used to “combat the spread of child sexual abuse material through technologies such as Microsoft’s PhotoDNA.” But, it adds, AI is “not without its risk,” pointing to “misinformation such as deep fake content” as an example.

AI can also be used in warfare, which could make it safer for American military personnel, but it could also have unintended consequences, including the risk of mass casualties if it fell into the wrong hands, was hacked, or suffered a software error.

There is also the risk of an AI algorithm failing to take necessary precautions. I still remember when my old GPS device told me to get from point A to point B by crossing a lake, even though there was no bridge. Had this been a fully autonomous system, my car might have driven into the lake.

Indeed, the two Boeing 737 Max crashes are tragic examples of software gone awry. The anti-stall software put those planes into dives as a “safety measure,” killing 346 people.

I, too, am concerned about AI, but like nearly all new technologies, it’s also the subject of myths and moral panics. It is highly unlikely that a breed of evil machines will become our overlords. It’s important for safety advocates – myself included – to put their concerns into context, and to realize that, in the early stages of a technology, we can’t always predict what will and won’t be dangerous. When MySpace and other early social media services first became popular around 2004, there was widespread concern about children being sexually assaulted by online predators. That risk, though real, turned out to be far less statistically likely than risks that weren’t being talked about at the time – like cyberbullying, reputation damage and obsessive use – which today affect millions of social media users.

Cyberbullying is always discussed at the FOSI conference because it’s one of the biggest issues facing young people online. In past years, not-so-young adults – myself included – have talked about both the problem and the solution. This year, Lucy Thomas and Rosie Thomas, from Australia’s Project Rockit, will talk about how “giving young people agency to reclaim technology for good” likely makes a bigger difference than intervention from adults. I don’t know the precise age of these two Australian internet safety experts, but – having served with them on Facebook’s Safety Advisory Board – I do know they’re many years younger than me and most other internet safety activists I know. That’s a good thing, and it’s even better that they work mostly with young people, with the mission of “Empowering young people to lead change.”

As someone whose teenage years are decades behind me, I realize my own limitations in understanding the way teens use technology, which is why my non-profit, ConnectSafely, is now partnering with MyDigitalTat2, a Bay Area non-profit whose Teen Advisory Board is integral to its operations. I’ve spent a lot of time with these teen advisers and with the teen hosts of an upcoming podcast series we’re working on, and I have learned firsthand why it’s essential to have young people involved in all aspects of digital safety, including combating cyberbullying.

Microsoft also recognizes the value of teen advisers, with its recently concluded Council for Digital Good, an “initiative involving 15 teens from 12 U.S. states, selected to help advance our work in digital civility: promoting safer and healthier online interactions among all people.”

The FOSI conference’s keynote speaker is Federal Trade Commissioner Christine Wilson. I don’t know what she’ll speak about, but the FTC is the federal agency that has fined Facebook, Google and other tech companies for violating rules and consent decrees over issues including privacy and child protection. Having a federal regulator address this conference is appropriate, given the mood in Washington these days. It’s clear that Congress will eventually enact a national privacy law and will consider other legislation to regulate social media and other parts of the internet.

FOSI has been running these annual conferences since 2007, and in the early years regulation was a dirty word among many in attendance, especially executives of internet companies. Today, it’s pretty much a foregone conclusion, though you can expect plenty of lobbying by companies that hope to shape it in ways that don’t require them to dramatically alter their business models.
Finally, there is a fireside chat with filmmaker and activist Tiffany Shlain about her new book “24/6: The Power of Unplugging One Day a Week.” Shlain is an advocate for a “digital sabbath,” one of many proposals to get people to put down their phones and tablets and step away from their computers to take a break, go for a walk or maybe just hang out with friends and family. That’s an idea whose time has come.
