There is an old saying that figures don’t lie but liars figure. That’s a good thing to keep in mind when examining how some companies market the results of surveys.
OK, perhaps lying is too strong a term, but I’ve seen too many press releases that promote the results of a survey but don’t tell the entire story, and surveys whose methodology — including the questions asked and how the sample was derived — simply doesn’t pass muster. Unfortunately, all too often journalists and bloggers pick up on these press releases without critically examining the methodology or digging further to make sure that the actual data confirms the headlines.
In many cases, when a company or research organization announces the results of a survey, the press release provides a brief summary of the survey but not much detail beyond that. Before I write about a survey, I ask to see the underlying report. It should include a summary of the methodology, including how the sample was derived, the actual questions asked, and how they were answered. Sometimes, after examining that report, it’s clear to me that the methodology was flawed or the summary isn’t fully supported by the underlying data.

Common mistakes

My examples deal with studies about online safety and privacy – issues I pay close attention to as a tech journalist and as co-director of a nonprofit Internet safety organization.
Take a recent study conducted by Harris Poll, which issued a press release shouting that “6 in 10 Americans Say They or Someone They Know Have been Bullied.”
The words “someone they know” caused me to question this headline. Asking people if they’ve been bullied is legitimate, but adding “someone you know” muddies things. Frankly, I was surprised the answer wasn’t closer to 10 in 10. If someone asked me “have you or someone you know been killed in a plane crash?” I’d have to say yes because one person I knew did die in that manner. But that doesn’t mean it’s happened to me, and it certainly doesn’t make it common.
The press release also reported that “this is an issue affecting a great many Americans, and there’s a very real perception that it’s getting worse.” But when I looked at the underlying data, I found 44 percent of adults said they had been bullied when they were in school and 10 percent had bullied others. When asked about their own kids, they reported that 9 percent had been bullied and only 2 percent said their kids had bullied others.
That’s not worse — it’s actually much better. Based on their own data, Harris’ headline should have read “Parents Report Far Fewer Kids Are Being Bullied Now Than When They Were in School.”
It’s important to differentiate between perception of a problem and the problem itself. If a survey, for example, finds that people are concerned about an increase in crime, that’s interesting data about perception. But it doesn’t necessarily mean crime is on the rise.
A 2010 study conducted by Harris for Internet security firm McAfee came with a press pitch promising “shocking findings of teens’ online behavior.” But when I read the actual report, the data was far from shocking. As I wrote in my post about the study, it was “actually a reassuring portrait of how most young people are exercising reasonable caution in their use of technology.” So, instead of regurgitating their “shocking” headline, my headline was “Study has good news about kids’ online behavior.”
In 2013, Microsoft released a survey report prepared by comScore about kids’ access to devices and online services. The methodology section claimed that “with a pure probability sample of 1025 one could say with a ninety-five percent probability that the overall results would have a sampling error of +/- 3.1 percentage points.”
Sounds very scientific, but the survey wasn’t even close to a pure probability sample. It was an opt-in survey linked from Microsoft’s Safety and Security Center page, Facebook ads and StumbleUpon. The survey results and questions didn’t appear biased, but a dead giveaway that it failed as a representative sample was the gender breakdown of 76% male and 24% female. Had the sample actually been representative, it would have been close to 50/50.
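For readers who want to check the numbers themselves: the ±3.1-point figure is just the standard worst-case margin-of-error formula for a simple random sample, and the same arithmetic shows how implausible a 76/24 gender split is under genuine random sampling. Here's a quick sketch (the sample size and percentages come from the report as quoted above; the formulas assume a simple random sample, which is exactly what this survey wasn't):

```python
import math

n = 1025      # reported sample size
z95 = 1.96    # z-score for a 95% confidence level

# Worst-case margin of error for a pure probability sample (p = 0.5)
moe = z95 * math.sqrt(0.5 * 0.5 / n)
print(f"claimed margin of error: +/- {moe * 100:.1f} points")  # +/- 3.1

# How far is a 76% male share from the roughly 50% expected under
# random sampling of the general population? Measure it in standard errors.
se = math.sqrt(0.5 * 0.5 / n)
z = (0.76 - 0.5) / se
print(f"gender split is about {z:.0f} standard errors from 50/50")
```

The ±3.1 points checks out as a formula, but that formula only applies to a probability sample. A gender split sitting roughly 17 standard errors from 50/50 is, for practical purposes, impossible under random sampling, which is why the breakdown alone is enough to reject the claim.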
Then there was a much-cited study that found that 69 percent of kids had been subjected to cyberbullying, based on a sample of nearly 11,000 young people between 13 and 22. With such a large sample, one might think this would be an incredibly accurate study, but when I looked into the methodology, I saw that “the survey was available in the ‘Ditch the Label’ virtual help desk on Habbo Hotel between the dates of 28th August to 10th September 2013.”
In other words, this was an opt-in survey reached from the page of an anti-bullying organization on a U.K.-based social network. That’s like conducting a crime survey from a police station or trying to estimate the global cancer rate by interviewing people in an oncologist’s waiting room.
Expert advice
It’s been a while since I’ve designed surveys, so I consulted with David Finkelhor, a University of New Hampshire professor who’s director of the school’s Crimes against Children Research Center and an authority on survey design and methodology. Finkelhor said that it’s “incumbent on journalists to find out if there are other surveys or studies on the issue, because frequently there are some that come to other conclusions.”
His other advice:

  • Look critically at the questions, and don’t assume the headline of the press release accurately reflects them.
  • Look at the sample. Is it relevant to the questions asked? Does it truly reflect the population it claims to represent? See if the sample might favor people who are interested in the subject, concerned about it, or experienced with it. Be especially leery of “opt-in” samples where people can volunteer, or where the subject matter is advertised ahead of time.
  • Ask who’s funding the survey. A lot of surveys are funded by advocacy groups or businesses that want to generate some concern about an issue.
  • Ask somebody who knows something about surveying whether this is a good scientific effort. A lot of surveys aren’t.
  • See if the questions were written in an “advocacy fashion” to make something look big or small. Warning signs include vague definitions of the issue, wording such as “anybody you know” that inflates the numbers, or questions asking if something has occurred over a long period of time.

Finkelhor said “it should be a requirement” for organizations that go public with survey data to offer full breakdowns of that data: the actual questions, the response categories and the percentages for each; a description of the survey design, including how the sample was selected and whether respondents were reimbursed; how the questions were answered; and how many of the people asked to participate declined.
Finkelhor tends to put more trust in surveys published in refereed academic journals, but you shouldn’t assume that a survey is accurate just because it’s associated with a university. You need to look at who on that campus actually conducted the survey (I’ve seen some surveys cited that turned out to be undergraduate class projects with extremely small samples) and examine their methodology.
Journalists don’t have to take advanced courses in statistical research to understand surveys, but they do have to apply the same critical eye they’d bring to any other source material. Examine the methodology and the motives, and don’t write an article based only on a press release or a brief summary.
This post by Larry Magid first appeared on the website.