
Shouting Into the Void

NOTE: The following has been edited for length and clarity.

Larry: I’m Larry Magid, and this is Are We Doing Tech Right?, a new podcast from ConnectSafely, where we speak with experts from tech, education, healthcare, government, and academia about tech policies, platforms, and habits that affect our daily lives.

Kat and Viktorya, welcome. Thank you both for joining us on Are We Doing Tech Right? This is a really important subject, one that ConnectSafely thinks a lot about, partly because we get a lot of email from people who say they’ve tried to get support or moderation. Sometimes they’ve gotten very good results, but often they’re frustrated, and that’s why they contact us.

So when I read through your report, it resonated with me. As much as we love our supporters, many of which are the companies I think you are concerned about, we also understand that there are serious issues. We want to make sure that they understand that, and that consumers understand the limitations of the reporting systems. One of the questions I’m going to ask you later is: what can they do at this stage of the game, until things get better?

Viktorya, why don’t you start by telling us about the study, Shouting Into the Void: Why Reporting Abuse to Social Media Platforms is So Hard and How to Fix It.

Viktorya Vilk: Gladly, and thank you so much for having us on the show. So Kat and I both work for nonprofits that protect and support people who are experiencing abuse online, particularly if they are writers, journalists, artists, content creators of various kinds.

And we’ve, at this point now, worked with probably thousands of people who experience online hate and harassment. If you’re under attack online, telling a platform that you’re under attack by reporting abuse to the platform is your first line of defense. You’re basically saying, “Hey, I need help. Someone is doing something that’s violating your terms of service or your rules, and you need to do something about it.”

The problem is, it became very clear to us that reporting mechanisms don’t work very well. So we spoke to over two dozen writers, journalists, and creators of all kinds about the experiences that they’ve had when they’ve tried to tell platforms that they’re in trouble and tried to tell platforms that they need help.

And what we heard again and again was deep frustration, exasperation, and the harm caused by the actual mechanisms for reporting abuse, even before you get to whether there’s any response or to all the other ways you might be experiencing challenges on social media platforms. So that was the premise for the report.

So we set out to understand why reporting works so badly and what can be done to fix it, and then issued a whole set of recommendations for social media platforms, which we are hoping they will put into place.

Larry: I kind of understand the issue. As a writer myself, I’ve had my share of negative contact with people.

I’ve fortunately never been a major target of a concerted attack, but I have had online pile-ons. But most importantly, I’ve heard from other people, especially women. We’re hearing, for example, from people who, for whatever reason, either put themselves out there or have spotlights shined on them, and they’re still people. They still have feelings, they have families, they have fears, they have concerns, and they need protection.

Viktorya Vilk: Yeah, I mean, look, when we talk about online abuse, I think people conjure up something like trolls and trolling, and they associate it with fantasy and, you know, something funny and subversive.

And that is not what we’re talking about here when we are talking about online abuse. We are talking about things like, and I’m going to give you a couple of examples that are pretty disturbing, so big trigger warning here: death threats, right, against people, but also against their children, their spouses, their parents, their siblings.

For women, there are often threats of rape and other forms of sexual violence, and sexual harassment that comments on their bodies or photoshops them into inappropriate imagery. We’re talking about hateful slurs and hateful memes and hateful images of all kinds. I’m also talking about things like doxing, right?

When you publish someone’s private, personal information on the internet, like their home address or their cell phone number, without their consent. Or even something like swatting, where you actually send a fully armed SWAT team to someone’s home, and that has gotten people killed. So we’re talking about things that are pretty destructive, very, very serious, and that can have really, really serious offline consequences.

Larry: Now, you so far haven’t addressed folks who are not necessarily in the public eye, just teenagers, just folks who might say something that angers somebody else. Maybe they have a political opinion, or maybe they are part of a marginalized group and people are homophobic or racist or misogynist or whatever.

Do the recommendations apply to those folks as well, or are they primarily people who, to one degree or another, are in the public eye? 

Kat Lo: Yeah, this kind of harassment can happen to anyone, and in fact, that’s why these reporting mechanisms are so important to improve. Because in many cases, newsrooms or high-profile content creators can sometimes get additional support.

For the average person, however, the only thing they can do is press that report button and block and mute. But in many cases, those things don’t really work. So, to deal with a lot of these issues and a lot of this abuse online, they need the best tools possible, especially to get that support from other people.

Larry: And you outlined, I think it was seven recommendations for industry in terms of the kinds of things that they could do. I wonder if you could talk about, you know, what industry is doing wrong. And if you have any positive comments about what they’re doing right, I’m sure we’d like to hear that as well. But what needs to be done to improve the situation?

Viktorya Vilk: So I’ll talk a little bit about some of the problems that we found when we were doing this research. And I should point out that because of the core constituencies of the two nonprofits we work for, we focused on adults, and we focused specifically on people who are writers, journalists, artists, and creators of all kinds, right?

But, everything we wrote about applies to anyone who is using the internet, you know, using social media platforms to make their voice heard, regardless of how public a profile they do or don’t have. 

Larry: And the reality is, even if you’re not a “professional journalist,” if you’re posting on social media, even if it’s just a snapshot of your vacation, you are a creator.

If you’re making a comment, you are writing something in public. I mean, in a sense, this really is citizen journalism.

Viktorya Vilk: It’s absolutely true, and I don’t want to go off on a tangent, but Darnella Frazier, who was a teenager when she filmed the murder of George Floyd, I cannot put into words how horrific the things she was subjected to on the Internet were.

This is someone who was a minor, who was subjected to that abuse after she posted that video, and that video was an enormous public service. So, just to underscore how important the point you just made is, Larry, this really can impact anyone who’s trying to express themselves, make their voice heard, or do their civic duty, right, as a citizen, whether they’re a professional reporter or not.

But you had asked, actually, what are some of the things that platforms aren’t doing right when it comes to enabling users, right, users of their platforms, to report abuse. So I’ll talk a little bit about that, and Kat can jump in and talk about some of the ways that we think they can improve.

So when we spoke to, like I said, about two dozen people, here are some of the things we heard. First of all, they told us that the actual process of trying to tell a platform that you’re in trouble is extraordinarily time-consuming and cumbersome. It takes many, many clicks, right, to go through the process and tell the platform that you’re reporting something abusive or problematic.

And yet, nowhere in that process is there room for you to add context, and here’s why that matters. Some things are obviously a violation of platform policy and obviously abusive, right? If someone threatens to kill you, that seems clear enough. But sometimes abusive trolls are very good at using dog whistles, and the content moderator on the other side, who’s trying to decide whether this is a violation of policy or not, might not know the specific context. But you, the user, might know. And so if there’s nowhere for you to put in some information that says, “Hey, this is actually a dog whistle and here’s what it means,” the system breaks down, right?

So that’s one thing we heard. Another thing we heard from people is that most reporting systems don’t give you any way to show that you might be experiencing coordinated or mob harassment. Larry, you mentioned that you’d experienced pile-ons, and I’m very sorry to hear that. But if you’ve experienced a pile-on, you’ll know that there is no way to tell the platform that what’s happening to you is a pile-on. You have to go through piecemeal, piece by piece of content, and imagine you’re doing eight, nine, ten clicks each time. It’s completely impossible if you’re somebody who’s gotten, you know, three thousand abusive messages in one hour.

How are you ever going to report all of that to a platform? And finally, users just don’t understand how reporting actually works, right? They don’t understand which buttons they’re supposed to press; from a basic user experience standpoint, they find it very confusing. And then they have no idea whether the platform has reached any kind of decision about what they reported, let alone why it reached that decision, which can actually be re-traumatizing, right?

If you’ve experienced something that is traumatic, you go out of your way to tell the platform that you need help and absolutely nothing happens. Or the platform tells you, oh, actually, this isn’t a thing at all. This death threat that you experienced is really nothing. So there’s nothing we can do. That actually can further re-traumatize people.

So these are some of the things that we found that were not working about systems that help individual users report abuse to platforms. But we do actually have very concrete recommendations for how this can be improved that are frankly, not rocket science. 

Kat Lo: One of the things that was important to us in writing these recommendations was that they be actionable, that they can actually be implemented by the companies.

And we do have a number of aspirational recommendations where, yes, this would take a lot more time and effort, but so many of these things are so simple. 

Larry: I love your title, Shouting into the Void, because that’s what it feels like sometimes. And your first recommendation, create a dashboard for tracking reports, outcomes, and history, is something that could actually close that void.

Because I have heard from people, and I’ve had that experience myself, where I’ve made a report and that’s it. I hear nothing back. I have no idea whether they’ve ignored it, haven’t gotten to it yet, or decided I was full of crap and aren’t going to follow up. I just don’t know. And I hear that a lot from folks writing to ConnectSafely.

Kat Lo: So having something like an email inbox, right, is incredibly helpful: knowing what’s been looked at, knowing what status the report is in. Like, you know, is the report being reviewed? Has it not yet been reviewed? I just learned of a new kind of response that a platform will give you, which is that they didn’t have time to review it. So that’s it, and there are a lot of different responses.

Viktorya Vilk: Yeah. 

Kat Lo: So there are a lot of responses the companies can give, and then, if it’s just in your notifications, it’s gone. If you’ve gotten a hundred notifications and that’s all your app tracks, I guess you’ll never see that report again.

So, yeah, a reporting dashboard is really important, and we’ve actually found that companies have features similar to the dashboards we’re advocating for, but for different functions. We saw that YouTube’s copyright dashboard has all of these details, and we know it’s because of a lot of regulation, basically a lot of legal requirements for companies to handle copyright in a very integrated, aggressive way.

So you’ll see things on this copyright dashboard like the title, what the channel is, when it was submitted, what the status of the report is, and so on.

Larry: If you’ve ever ordered anything on Amazon, they will tell you precisely the status of your order, sometimes down to how many houses away the delivery truck is.

And so it seems to me that if they can do that with packages traveling all over the world, maybe companies could do that with abuse reports. 

Viktorya Vilk:  That is spot on, Larry. 

Kat Lo: Yeah, exactly. 

Larry: I’ve seen reports where a company will come back and basically say to the reporter that what you are reporting doesn’t violate our terms of service or our community standards.

Yet, objectively, when I look at the complaint, whether it violates the community standards or not, I can see why it’s upsetting. Is there any way to resolve that? I mean, other than going to something like Meta’s Oversight Board, you know, the board that supposedly rules on these things. But there seems to be a gap there sometimes.

Viktorya Vilk: So there’s definitely a gap. I mean, there are a lot of different issues we identified, right? One is when you report and nothing happens and you hear nothing, you don’t know what’s going on. Another is when you report and the platform tells you that the thing that you are absolutely certain violates platform policy doesn’t.

And to be fair, sometimes things really don’t violate platform policy. You feel that they do, the content moderators feel that they don’t, and that’s that. However, there’s a lot of gray area.

Larry: Even I think about that before I post these days. You know, what are the responses going to be? In a way, it’s almost a form of self-censorship.

I try to avoid it, but it can’t possibly not go through my mind before I make a post.

Viktorya Vilk: Larry, I think it’s actually brave of you to say that out loud, and I’ll tell you why. I mean, I work with tons of reporters and writers, and reporters and writers are very reluctant to speak openly about the fact that some of this hate and harassment and these death threats have made them self-censor, right?

Because, of course, none of us wants to be censoring ourselves. We want to be perceived as being able to, you know, do our jobs come hell or high water, but that is a very high cost. If you’re worried someone’s going to hurt your child, right, or hurt your spouse or hurt your mother, of course it’s reasonable that you might rethink what you are and aren’t going to say. But we can’t pretend then that there is no self-censorship, no censorship impact, no free expression impact of this kind of abuse.

So I just think that people often create this false dichotomy between free speech and online abuse, and I take real issue with it, right? It’s very, very difficult to express yourself freely if you are being bombarded with slurs every day, if people are talking about your body every time you show up anywhere and threatening to kill you and rape you.

Larry: That is a way to suppress someone’s speech.

Kat Lo: In fact, there’s a second dichotomy that kind of amplifies this issue, which is the dichotomy of online and offline, right? Like, we think of online harms as being something distinct from offline harms, and that if you just, in quotes, “log off,” then you’ll be fine, right?

But so much of social life occurs online, and so much of what happens online affects you: your livelihood, your family, how you communicate with people who are close to you. So I think that dichotomy is false, but it’s still often weaponized to discount the idea that we actually do need more thoughtful governance online.

Larry: Yeah. One of the questions we were thinking about is, you know, the whole issue of the moderators and what they go through, and also allies. So, someone says something, they get abuse, and then somebody comes in and, you know, supports that person, or the moderator takes it down, and somehow that moderator might be doxed. I don’t know how often that can happen, but it’s certainly an issue.

What is the pressure like for those of us who may not be personally getting attacked but nevertheless are trying to do what we can, perhaps you yourselves, to protect folks who are being abused?

Kat Lo: Yeah, and you actually have a lot of people on the front lines in a sense. You have content moderators who are hired by companies. You actually have fact checkers and civil society groups who are plugging into those content moderation systems to talk about the human rights violations or very harmful misinformation.

And you have these community moderators online who are basically like one of us, helping keep their own communities safe. And they all face these massive mental health challenges from constantly having to see difficult content. At Meedan, a number of the fact checkers we work with are tremendously affected when they fact-check elections that impact their own lives, right?

So people seeing, you know, domestic violence or sexual assault in these posts, if they have themselves experienced that, they have another layer of trauma on top of it. There are so many vectors of trauma that moderators and the people around them experience that, well, something should be done about it.

Larry: Yeah, and also child abuse images. Sometimes they have to look at those as well in order to remove them from the platform, and that can be triggering. And often these are folks who are marginally employed, sometimes working in underdeveloped countries. It’s very challenging.

Viktorya Vilk: It’s not just the people who are on the receiving end of abuse who are impacted.

It’s also all of the folks around them, including their allies and content moderators. And what I meant about those being two different groups is that allies are people like Kat and me, who work for civil society organizations, whom you can reach out to, and we’ll try to help you if you are experiencing hate and harassment online.

Allies might be your colleague, your friend, your family member who cares about you. They may speak out publicly to defend you. They may reach out to you privately. They may be the person who offers to go through all of the abusive content on your feeds and in your DMs and deal with it for you, which means that they then see all of that stuff, right?

And sometimes, if you speak out publicly on behalf of someone else, you yourself become the target of the mob, which has happened to me and has happened to other people I work with. That is one group of people. The other group are the ones Kat was talking about, and these are content moderators, just as you said, Larry, who are often working in global majority countries and who are paid absolute peanuts, right?

Very, very little money sometimes. They don’t have robust access to professional mental health care, and they are watching some of the most horrific content the internet has to offer. And so, we all want to benefit from these platforms. We love being as connected as we are around the world.

We think that these platforms are free because we don’t have to pay for them unless you want to pay for a blue check or something, right? What we don’t understand is we’re paying in all of these other ways. We’re paying with our attention. We’re paying with our private data. We’re paying with the psychological wellbeing and physical wellbeing of untold numbers of content moderators.

And many, many of us users can, in one way or another, face negative consequences as a result of wanting to, you know, speak to one another across the globe on these platforms.

Larry: So, we need to wrap up shortly, and, you know, you’ve made a number of recommendations for industry. I’ll just go through them again.

You mentioned creating the dashboard; we talked about that earlier. Giving users greater clarity and control as they report abuse. Aligning reporting mechanisms with platform policies. Offering users two reporting options, expedited and comprehensive. And I completely agree with that. Sometimes you want to go into detail.

Sometimes you want to get through it quickly. Either way, you should be able to do that. Adapting reporting to address coordinated and repeated harassment, in other words, the difference between the occasional incident, I assume that’s what you mean, versus those people who are being constantly harassed over and over again.

Viktorya Vilk: Or a mob, right? Getting hundreds and hundreds of messages in 20 minutes. 

Larry: Right, which is different from having one person who might once in a while say something nasty about you. I mean, not that that’s not potentially horrific as well. Then, interestingly, integrating documentation into reporting, and making it easier to access support when reporting.

And most of those, I think, are fairly clear. I don’t know if you feel a need to comment specifically on any of the ones you’re recommending, but they all seem like obvious things that the industry ought to be thinking about.

Kat Lo: Yeah, I think in some cases the devil is in the details. So, you know, in terms of making reporting mechanisms more user-friendly, what does that mean?

A quick example that came up during the interviews: there were so many times that people walked us through the reporting process. In one case, we did a walkthrough of someone reporting something, and we were going through the reporting flow screen by screen and hit the next button, hoping to offer more information, and it said, “Thank you, your report has been submitted.” And we thought, “Oh no, we wanted to give more information.” We wanted a text box for context. Sometimes I wanted to test whether, if I report something for hate speech versus harassment, it looks different. And it does sometimes, right? So these mechanisms aren’t always consistent, and when people report things, they’re often surprised, which takes the feeling of agency even further out of their hands.

Larry: Yeah, and often it isn’t necessarily clear to the reporter, to the person doing the reporting, what the companies might mean by the particular baskets of abuse that they create. That’s probably, I assume, one of the reasons you’re calling for documentation, so you can at least understand, you know, what these buckets of categories mean.

Viktorya Vilk: We call for documentation for that reason, but also because if you are someone who is trying to get help from law enforcement, if you’re trying to talk to a lawyer about the abuse you’re experiencing, if you’re trying to tell your employer what’s happening to you, you need evidence. If you report something to a platform and they actually take the content down, you lose any evidence that it ever happened.

And then no one will help you because, you know, how do you prove that you didn’t make it up? So you need to have some kind of official, formal-looking evidence that something actually happened: that you were threatened, you reported it, it was addressed. That’s why we talked about documentation.

Larry: And that actually brings me to my next question. At ConnectSafely, we advise people to take screenshots of any abusive material and hold onto those just in case. Okay, we’ve talked about what companies can do, but while we’re waiting, what can folks out there who are listening do if they’re experiencing abuse?

Viktorya Vilk: So actually, I think there are tons of things everybody can do. There are things that individuals on the receiving end of abuse can do, and I’m happy to say a few things about that. There are things that employers of people who need to use social media to do their jobs can do, right? And there are things that we can do for one another as allies when we see someone under attack online.

So I will say that if you are an individual who’s experiencing abuse, it’s really important to do exactly what you said, Larry. It’s really important to document, right, to take screenshots and save evidence. Try to use whatever platform features are at your disposal, whether it’s reporting, blocking, muting, hiding, whatever it is that the platform offers, use it.

Tell someone that you are under attack. It can be a trusted friend, a family member. It is profoundly isolating to be attacked online, and it can be really scary, and people have a tendency sometimes to sort of shut down or to isolate. It’s really, really important to tell someone what’s happening to you and to ask for help.

And don’t beat yourself up if it has a strong impact on your mental well-being; that is normal. There’s nothing wrong with you if getting death threats disturbs you and upsets you; that’s a completely normal reaction to death threats. So we often have to tell people who call us: this is not your fault, there is nothing wrong with you, and you didn’t do anything to deserve this.

So those are some of the things, and I don’t know if Kat wants to say anything about that.

Larry: Some of us in the public eye think that somehow we should be, you know, thick-skinned, stronger than that. Well, we’re human.

Viktorya Vilk: Everybody’s human and in fact, some of these things just shouldn’t be happening.

Right, right. These are not normal things that should be happening. Kat, do you wanna say anything about what people can do as allies or employers or whatever else you think is sort of appropriate? 

Kat Lo: Yeah. As employers, companies need to be able to anticipate what’s happening to their staff, right? So if you’re going to release a controversial product or something, who’s going to be the face of it?

Are they prepared? Companies should help with this, but I think any individual, and I hate saying this, should dig into their digital security before they get harassed. Get a sense of fluency with your online space so that when things do happen to you, you can feel more in control.

Right. And I think one reason we bring up documentation is related to what you were saying, Viktorya, which is that you feel like you’re going crazy when it happens to you, especially if your family members don’t quite get it. And so either you talk about it and people say, “Is this all you talk about?” or you don’t talk about it and they say, “Well, it must be fine now.” So being able to actually build up support and understanding with the people around you is really important. And from that perspective, the onus really should be on the ally to check in, to ask, “Do you want me to look at your tweets? You shouldn’t look at your tweets. I can look at them for you. I can report them, et cetera.” So I think those are really important measures, from employer to friend to the individual.

Viktorya Vilk: Oh, sorry, I was just going to say: we actually work at PEN America, Larry, with employers, lots of different kinds of employers, particularly media organizations and publishing houses, to help them put policies, protocols, and systems in place to protect and support their people in the face of online abuse, because the burden of dealing with online abuse almost always falls squarely on the shoulders of the person who’s on the receiving end, which isn’t really fair, right?

If you are someone who has to be online to do your job, then your employer has a responsibility to help protect and support you, whether that’s helping you with your digital security, helping you deal with law enforcement or whatever else it is that your employer can do. And so at PEN America, we work with institutions to help them build policies, build protocols, build systems, and take some of that burden off of individuals.

Larry: Well, this has been a really useful conversation, and I really appreciate the time you’ve taken. Kat Lo, the Program Manager for the Digital Health Lab at Meedan, and Viktorya Vilk, the Director of Digital Safety and Free Expression at PEN America. Thank you so much. Before I let you go, how do people get ahold of your organizations and more information?

Viktorya Vilk: So please look us up on pen.org 

Kat Lo: You can find us at meedan.com.

Larry: Kat Lo and Viktorya Vilk. Thank you so much.

Viktorya Vilk: Thank you so much for having us. It’s a pleasure. 

Larry: Are We Doing Tech Right? is produced by Christopher Le. Maureen Cochan is the Executive Producer. Theme music by Will Magid. I’m Larry Magid.

