Facebook Fueled Anti-Refugee Attacks in Germany, New Research Suggests


#1

“Their reams of data converged on a breathtaking statistic: Wherever per-person Facebook use rose to one standard deviation above the national average, attacks on refugees increased by about 50 percent.”

“The uptick in violence did not correlate with general web use or other related factors; this was not about the internet as an open platform for mobilization or communication. It was particular to Facebook.”


#2

How could we translate “to one standard deviation above the national average” in layman’s words? Thanks :slight_smile:


#3

In areas where per-person Facebook use was higher than the national average, attacks on refugees increased by about 50 percent.

I think they were being wordy with statistics maths in order to be precise. But given that it’s a long sentence already, it didn’t make it very accessible!


#4

So, we are talking about the places where everybody was using FB more than the national average? I can’t imagine there could be such places (except perhaps very small villages).


#5

I think it’s where the average use of Facebook is above average.


#6

But how much above average?

1% above average doesn’t seem like much to me.

20% above average might push people into behaving in a different way.

IMHO it’s great if the NY Times wants to rely on serious research done by a serious University (and there is no doubt that Warwick is a top-notch University for the Social Sciences).

But I think journalists at the NY Times should also make an effort so that everybody reading their articles can understand what they are talking about…


#7

TL;DR Article claims: Towns saw an increase in anti-refugee attacks if the town’s average daily Facebook use was higher than that of 84% of all people nationally.

That percentage is related to the number of people, not to the average time spent on use of Facebook. (Conditions apply, see below.)


Long version:
I quickly browsed the scientific paper to find what they measured, but realized I would have to read it more or less in full to understand it fully. They have somehow used Germany’s most popular Facebook page (Nutella Germany) to gauge general Facebook use.

So I’ll just speculate :wink:

Suppose they managed to estimate the time spent on Facebook per user per day. Then they computed the average of that, arriving at a national average and an average per town.
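That setup can be sketched in a few lines of Python. This is purely illustrative: the town names and minute counts are invented, and we don’t know what measure the paper actually used.

```python
# A toy version of the hypothetical setup: daily Facebook minutes per user,
# grouped by town. All numbers are invented for illustration only.
from statistics import mean, stdev

usage_by_town = {
    "Town A": [90, 110, 130, 100],
    "Town B": [150, 170, 160, 180],
}

# Pool every user to get the national picture.
all_users = [m for town in usage_by_town.values() for m in town]
national_avg = mean(all_users)   # national average minutes per user
national_sd = stdev(all_users)   # spread of users around that average

# One average per town, to compare against the national figures.
town_avgs = {town: mean(minutes) for town, minutes in usage_by_town.items()}
```

A town whose entry in `town_avgs` exceeds `national_avg + national_sd` would be "one standard deviation above the national average" in this toy setup.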

Now, about the “standard deviation”: of course, the time a person spends on Facebook is usually not the average for the population, but how far off from the average are they?

One way to express that would be as a percentage of the average. Say the average is 2 hours and a person spends 2 hours 30 minutes. Then that would be 25% higher than the average.

That’s easy to understand intuitively. The problem is that it doesn’t say anything about how common it is to be 25% above the average value - and it depends on the average. So 30 minutes is 25% of 2 hours, but only 5% of 10 hours.
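The same 30-minute difference as a quick check, using the two invented averages from the example:

```python
# The same absolute difference (30 minutes) yields very different
# percentages depending on the average it is measured against.
avg_low, avg_high = 120, 600   # 2 hours vs 10 hours, in minutes
diff = 30

pct_of_low = 100 * diff / avg_low    # relative to the 2-hour average
pct_of_high = 100 * diff / avg_high  # relative to the 10-hour average
print(f"{pct_of_low:.0f}% vs {pct_of_high:.0f}%")
```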

Another way is to relate it to the rest of the population. Pick a limit on either side of the average, and count how many people fall within that range. Say 50% of people use Facebook between 1:50 and 2:15 per day.

This is what the standard deviation describes. The standard deviation also factors in how the data is spread around the average, which is often a good thing.

When we don’t know the exact distribution, assuming a normal distribution might be OK. In that case, the range within one standard deviation of the average covers the 68% of people closest to it (34% below, 34% above).

Being more than one standard deviation above the average means a person belongs to the 16% most active Facebook users; equivalently, 84% of people use Facebook less than that person.
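The 68% and 84% figures come straight from the standard normal distribution’s CDF; they hold only under the normality assumption above. Python’s `statistics.NormalDist` makes this easy to verify:

```python
# Where the 68% / 84% / 16% figures come from, assuming normality.
from statistics import NormalDist

z = NormalDist(0, 1)                   # standard normal, in SD units
within_one_sd = z.cdf(1) - z.cdf(-1)   # share within ±1 SD of the average
share_below = z.cdf(1)                 # share using less than avg + 1 SD
share_above = 1 - z.cdf(1)             # the most active tail
```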

In the report, those towns had an average use higher than what 84% of all people (nationally) had. This could be the case if, for example, a town had a small group of extremely active users, or if a larger share of its inhabitants fell within the 16% most active users as seen nation-wide.

Caveats:

  • I have not looked into the actual measure of activity used in the report
  • I have not looked into the actual distribution of the data, so those specific percentages may not apply
  • I have not checked if claims in the article have support in the report
  • Statistical correlation does not mean that one thing caused the other, there could be a hidden cause affecting both things

#8

Interesting questions:

Causation? Correlation? And are we sure we know what is causing what, as the Times story is so forcefully arguing?


#9

The focus of their outrage should be more on Angela Merkel for welcoming the invasion by migration against the majority of citizens’ wishes. The refugees are easier, yet innocent, targets. It’s too bad they can’t hold votes to boot her out mid-term; that could properly redirect the focus of citizens’ discontent, reducing violence.


#10

One standard deviation above the national average is difficult to interpret, I agree. You can think of it like this: suppose you have a town where Facebook use is 1 SD above average – call it Zuckerbergstadt.

If you take a random person from Zuckerbergstadt and a random person from elsewhere, there is roughly a 75% chance that the person from Zuckerbergstadt will have higher Facebook usage than the person from elsewhere. And, of course, a 25% chance that they will have lower Facebook usage.
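The "roughly 75%" figure can be sanity-checked with a back-of-envelope calculation. Assume both people’s usage is normally distributed with the same SD (taken as 1, i.e. working in SD units), and that the Zuckerbergstadt mean sits 1 SD higher; the difference of the two usages is then itself normal, with mean 1 and SD √2:

```python
# P(person from the high-usage town > person from elsewhere), assuming
# both usages are normal with equal SDs and means 1 SD apart.
from math import sqrt
from statistics import NormalDist

diff = NormalDist(mu=1.0, sigma=sqrt(2))  # distribution of the difference
p_higher = 1 - diff.cdf(0)                # P(difference > 0)
```

This comes out just above 0.76, so "roughly 75%" is a fair summary.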

I’m not sure if that makes it easier, though.


#11

Hi rconroy,

is that really a good-enough shorthand for what it means?

If it is, it’s all the more absurd that the journalist from the NY Times did not take the time to explain it in simple terms.


#12

I have to admit that researchers are lazy about interpreting effect sizes in plain language. Although I routinely convert standard deviation differences into person-to-person comparisons, most people don’t because they can’t! Published papers simply use the standard deviation as a measure of effect – and almost no-one understands what the heck a standard deviation is.

I agree – this is really about explaining your findings, and scientists (and science journalists!) are usually bad at it.