Benefits of Apple’s Image Abuse Scanning Outweigh Risks

Aug 12, 2021

by Larry Magid

This post first appeared in the Mercury News

There has been quite a bit of controversy around Apple’s recent announcement that it will scan images stored in iCloud from iPhone, iPad and Mac users for Child Sexual Abuse Material (CSAM), which is sometimes referred to as “child porn.” Some privacy and human rights advocates worry that it is a slippery slope that could create a backdoor for authoritarian governments to use against activists and others. While I think it’s always appropriate to raise concerns about the inappropriate use of any technology, I give Apple the benefit of the doubt. With proper safeguards in place, the technology can be used to protect children and not for general surveillance.

As regular readers of this column know, I am co-founder and CEO of ConnectSafely, a non-profit dedicated to protecting children and others from online harms (Apple is not one of our funders). For 20 years, until last year, I served on the board of the National Center for Missing and Exploited Children (NCMEC), the agency that will receive notification of any child sex abuse material found by Apple’s scanning technology. I currently serve on an NCMEC advisory board.

Through my work with NCMEC, I learned a great deal about the dangers of CSAM, which former NCMEC CEO Ernie Allen referred to in Congressional testimony as “nothing more than crime scene photographs and videos of the rape, abuse, and sexual debasement of children.” Courts have repeatedly found that this type of material is not protected speech. I was present in 1997 when then Vice President Al Gore announced that NCMEC would operate the CyberTipline to receive reports of sexual abuse images and other online exploitation of children, and since then, internet service providers and social media companies have been reporting this material to NCMEC, which reviews it and, when appropriate, refers cases to law enforcement. NCMEC maintains a database of hashes of these images, which it provides to companies to check against images posted or stored on their servers. Importantly, these hashes do not include the images themselves.

Apple’s plan, according to the company, is to “perform on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations.” The company will then transform the database into an “unreadable set of hashes that is securely stored on users’ devices.” Then, said Apple, an on-device matching process is performed for each image against the known CSAM hashes, “powered by a cryptographic technology which determines if there is a match without revealing the result.” In other words, Apple is using its considerable expertise to detect illegal and dangerous content without jeopardizing user privacy. The company said the system is designed to “provide an extremely high level of accuracy” and to ensure “less than a one in one trillion chance per year of incorrectly flagging a given account.”
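In plainer terms, hash matching compares digital fingerprints of files, never the files themselves. The sketch below is a deliberately simplified, hypothetical illustration of that idea in Python: it uses an ordinary SHA-256 digest and a plain set lookup, whereas Apple’s actual system uses a perceptual hash and cryptographic matching so that individual results are not revealed on the device. The hash values and function names here are invented for illustration.

```python
import hashlib

# Hypothetical, highly simplified sketch of hash-based matching.
# Apple's real system uses a perceptual hash and private cryptographic
# matching; this toy version only shows the core idea: hashes of
# known images are compared, the images themselves are never shared.

# Stand-in database of hashes of known images (illustrative only).
known_image_hashes = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def matches_known_hash(image_bytes: bytes) -> bool:
    """Return True if the image's hash appears in the known-hash set."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in known_image_hashes

# A photo is flagged only when its hash matches a known entry; a new,
# unknown image produces a different digest and is left alone.
print(matches_known_hash(b"known-bad-image-bytes"))    # True
print(matches_known_hash(b"someones-vacation-photo"))  # False
```

Note the asymmetry this design buys: the matching side holds only opaque hash values, so possessing the database reveals nothing about the images it was built from.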

I admit that no technology is perfect, and there is always a chance that something will go wrong. But that’s true of everything in life, and the chances of something going wrong here are a great deal lower than the rate of false positives with traditional law enforcement methods.

Apple is also taking steps to prevent children from receiving or sending sexually explicit photos. While it is legal for adults to access sexually explicit content involving consenting adults, it is not legal for adults to send such material to children, and regardless of legality, parents have an interest in knowing whether their children are receiving or sending it.

Apple is updating its Messages app so that if a child receives a sexually explicit image, the photo will be blurred and the child will be warned, “presented with helpful resources, and reassured it is okay if they do not want to view this photo.” The child may also be told that their parents will get a message if they do view it. The app will intervene similarly if a child attempts to send such an image. In a FAQ, Apple said this is an opt-in feature that applies only to children under 12. I’m pleased that it is opt-in and not aimed at teens. While I think sexting is generally a bad idea, especially for minors, I also realize it’s not uncommon among teens and doesn’t necessarily lead to exploitation or other bad outcomes between consenting individuals, including teens. You’ll find more on teen sexting at

To reiterate, I think it’s appropriate for privacy and human rights groups to question Apple about this move. It is not without risks, but, as with other appropriate remedies, the benefits outweigh the risks as long as Apple is careful to avoid slip-ups and does not expand the use of this technology beyond its narrow scope.
