NUMBERS ARE PEOPLE • COCK-UP BEFORE CONSPIRACY • CITE PRIMARY SOURCES OR GO HOME


Sunday, 11 October 2015

WE NEED TO TALK ABOUT GREEK SURVEYS

UPDATE 21/11/15: Join me in protesting the Greek government's attempt to discredit and punish staff at the polling unit of the University of Macedonia; the Independent Greeks' conspiracy theories and personal vendettas have hurt too many people already. Sign the petition here.

The days leading up to the second Greek election of 2015 saw a rash of new polls suggesting Syriza and ND were neck-and-neck. No less esteemed an organ than the FT was citing Greek polls to inform its readers that ND's Meimarakis (whose moustache alone should have given him away as a throwback to the party's deep past) had refreshed the party's image and was proving such 'a formidable opponent' for A. Tsipras that ND was in with a chance again.

It was, we now know, bollocks. So were surveys conducted ahead of this summer's referendum, whose errors were made that much more humiliating for pollsters by the fact that the actual result fell outside even the envelope of projections published in previous days. No more successful were predictions of the January election result, which again showed a dead heat to the very end, despite a comfortable lead for Syriza in real life. The graph below shows the evolution of polls, with large circles representing the actual election results.

"ElectionMonthlyAverageGraphGreece2015" by Impru20 - Own work. Licensed under CC BY-SA 3.0 via Commons.

Now readers know how this blog works. I explore cock-up before conspiracy. And I believe in surveys. I deeply respect those who produce them. They are tools with limitations, but they are good tools nonetheless. When surveys start to malfunction our window into the real world narrows, and we become more dependent on the confirmation bias factories in our heads and in our social circles.

So what can account for the errors we've seen? The following explanations have been put forward.
  • non-response (non-contact and refusal): an increasing share of the population mistrusts pollsters, and this attitude is correlated with particular, non-mainstream political views. More importantly, more and more people reject landlines altogether in favour of mobiles and the internet. Finally, social and economic upheaval makes people's contact details less reliable as households fall apart, people emigrate, and domestic life is disrupted. The people affected by these phenomena are systematically different in their politics from those unaffected, and are under-represented in what should be 'representative' samples. There is certainly enough evidence that these influences are important: see, for instance, UK results, US findings on the matter, and some of my own work on emigration. To put it simply, rising non-response is a symptom of social deterioration: reduced fertility rates, reduced neighbourhood cohesion, precarious employment and social isolation. The simulation sketch after this list illustrates how this kind of differential non-response skews the headline numbers.
  • fluid public opinion/unit non-response: it is possible that individuals are becoming increasingly unwilling to discuss their politics specifically with pollsters, either out of mistrust or out of disillusionment, and that this attitude is correlated with a predisposition to vote for non-mainstream parties. For an empirically founded overview of this argument, in Greek, see here. To this explanation should be added the fact that the context of recent Greek elections, and even more so the Greek referendum, was extreme; public opinion may have been genuinely fluid to such an extent that late surges can make a difference. For an argument and data in support, see HuffPo's surprisingly good coverage here. Alternatively, pollsters' failure to allocate 'don't know' responses to a likely preference (e.g. based on respondents' previous voting record, which would usually have favoured Syriza) may have been at fault (an argument to that effect here).
  • abstention: with abstention on the rise, Greek survey results are becoming increasingly sensitive to people's propensity to vote, and small variations in nationwide voting intention can be magnified in real elections.
  • herding: pollsters are worried about being the odd-one-out and weed out or suppress survey results that put them at odds with their peers. Nate Silver makes a good case for this here. Ironically, New Democracy staffers caught on to this theory and spent much of the eve of the last elections briefing opinion formers (of whom I am not one) that ND was indeed ahead in surveys but the pollsters were self-censoring out of fear. They got caught.
  • tampering: pollsters are in cahoots with political parties whom they insist on painting in a favourable light. There were multiple accusations of such tampering in the aftermath of the 2015 elections, reviewed in this excellent post on Dateline: Atlantis. My feeling is that this may be true of some pollsters, but not nearly enough of them to explain the outcomes we've seen (see below on the performance of 'non-establishment' polls).
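To make the first of these mechanisms concrete, here is a minimal simulation sketch in Python. All the numbers (true vote share, response rates) are invented assumptions, not real Greek figures; the point is only to show how an estimate drifts when the people you cannot reach vote differently from the people you can.

```python
# Minimal sketch of non-response bias (illustrative assumptions, not real data).
import random

random.seed(1)

TRUE_SHARE = 0.36        # assumed true vote share of party A in the population
N_POPULATION = 100_000

# Assumption: party A supporters are harder to reach by landline
# (younger, mobile-only, recently moved), so they respond less often.
RESPONSE_RATE = {"A": 0.10, "other": 0.20}

population = ["A" if random.random() < TRUE_SHARE else "other"
              for _ in range(N_POPULATION)]

# Only those who respond end up in the sample.
respondents = [v for v in population if random.random() < RESPONSE_RATE[v]]

raw_estimate = sum(1 for v in respondents if v == "A") / len(respondents)
print(f"true share: {TRUE_SHARE:.3f}, raw poll estimate: {raw_estimate:.3f}")
# The estimate lands well below the true share even though everyone answered
# truthfully: the bias comes entirely from who picks up the phone.
```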
How to deal with these problems?
  • Stick to cross-tabs. The Press Project ran its own crowdfunded survey ahead of the September elections. Despite a stated intention to compare its findings with those of other polls, in the end TPP simply chose to report on cross-tabulations only (full video here and more commentary here). Although this was likely motivated by the relatively small sample size achieved, I would hazard a guess that TPP found themselves faced with the same methodological issues as everyone else. 
  • Invite scrutiny. Pollsters ProRata (who came closest of all to the actual September election results) have opened up their survey datasets to researchers. [Or did they? See Comments section!]
  • Extrapolate from biased samples. Ironically, the innovators here were populist blogs such as Tromaktiko (see a sample here), who quickly realised how unrepresentative their readership is and developed simple rules of thumb in order to work backwards from this to a more 'normal' population. At least one company (the unknown and ungoogle-able Clear Foundations Institute) has tried to combine a telephone survey with a web survey (see here). These approaches so far seem to have failed as badly as the old-fashioned methods, but that's not to say that there aren't valid ways of extrapolating from biased samples (example here).
  • Build the sampling frame or weights matrix around referendum results. Pollsters have repeatedly claimed that their raw data were of the highest quality, but that errors crept in as the data were weighted. In particular, weights based on past electoral preferences became unreliable as the Greek electorate became more fluid and deranged. A solution would be to use the recent referendum as the reference for weighting. The sampling frame approach was, for instance, suggested here. ProRata also appear to have used respondents' referendum voting record in weighting responses (as opposed to, or in addition to, past electoral voting record). A sketch of such referendum-anchored weighting follows this list.
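As promised above, here is a hedged sketch of referendum-anchored weighting: respondents are post-stratified so that their recalled July 2015 referendum behaviour (OXI/NAI/abstain) matches the official result, and voting intention is then tallied with those weights. The sample counts below are invented, and this illustrates the general technique only, not ProRata's actual procedure.

```python
# Post-stratification on recalled referendum vote (illustrative sample counts).
from collections import Counter

# Official July 2015 outcome expressed over the whole electorate:
# OXI 61.3%, NAI 38.7% of valid votes, turnout roughly 62.5%.
TARGET = {"OXI": 0.613 * 0.625, "NAI": 0.387 * 0.625, "abstain": 0.375}

# Hypothetical sample: (recalled referendum vote, current voting intention).
sample = (
    [("OXI", "SYRIZA")] * 240 + [("OXI", "ND")] * 60 + [("OXI", "other")] * 100
    + [("NAI", "ND")] * 180 + [("NAI", "SYRIZA")] * 30 + [("NAI", "other")] * 90
    + [("abstain", "SYRIZA")] * 60 + [("abstain", "ND")] * 40
    + [("abstain", "other")] * 100
)

n = len(sample)
ref_counts = Counter(ref for ref, _ in sample)

# Weight = population share of the referendum cell / its share in the sample.
weights = {ref: TARGET[ref] / (count / n) for ref, count in ref_counts.items()}

intention = Counter()
for ref, party in sample:
    intention[party] += weights[ref]

total = sum(intention.values())
for party, w in sorted(intention.items(), key=lambda kv: -kv[1]):
    print(f"{party}: {100 * w / total:.1f}%")
```

The same logic extends to weighting on referendum vote jointly with the usual demographics; the point is simply that the referendum offers a fresher, less 'deranged' anchor than past general elections.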
An even bigger problem 

Ultimately, political polls may not matter much; I've yet to see convincing evidence that undecided or tactical voters are swayed by them, although private party funding may be influenced.

More to the point, political polls will always be subject to accusations of tampering. In the back of Greek people's minds, they are part of politics and thus to be taken with a grain of salt. But what if similar problems were affecting the surveys used to produce national statistics?

National statistics surveys have an advantage over polls in that they are allowed to be much more expensive than political surveys, they have to follow a common methodology, and are, in some cases, compulsory for the poor bastards chosen to provide a response. And while there are potential incentives to tamper with, say, unemployment rates or poverty rates, tampering with more obscure findings like IT usage statistics is pretty inconceivable.

It's not difficult to find the non-response rates associated with national statistics surveys. ELSTAT publishes a good deal of this information under each theme's 'methodology' section (see an example for the Labour Force Survey here). Additionally, Eurostat summarises the national quality reports for each survey on an annual basis (see an example for EU-SILC here). In the graph below, I've put together figures for some of my favourite national statistics (on employment, household spending, living conditions and IT use). Plotting them all together makes for grim viewing.
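For what it's worth, the assembly step is simple enough to script. The sketch below (pandas/matplotlib) shows the shape of the exercise; the values are placeholders, not the published ELSTAT/Eurostat figures, which have to be copied out of the quality reports by hand.

```python
# Assemble unit non-response rates by survey and year, then plot them.
# Values below are placeholders for illustration only.
import matplotlib.pyplot as plt
import pandas as pd

rates = pd.DataFrame({
    "year": [2008, 2010, 2012, 2014] * 2,
    "survey": ["LFS"] * 4 + ["EU-SILC"] * 4,
    "non_response_pct": [12, 15, 24, 22,    # placeholder LFS values
                         18, 20, 27, 21],   # placeholder EU-SILC values
})

for survey, grp in rates.groupby("survey"):
    plt.plot(grp["year"], grp["non_response_pct"], marker="o", label=survey)

plt.xlabel("Survey year")
plt.ylabel("Unit non-response rate (%)")
plt.title("Non-response in Greek national statistics surveys (illustrative)")
plt.legend()
plt.show()
```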


First of all, it's plain to see that there is a long-term trend towards rising non-response rates, mirroring the experience of pollsters in Greece and abroad. However, this trend clearly accelerated after 2010, the first memorandum year, which gives credence to a 'social breakdown non-response' theory. Second, the years 2011 and 2012 saw a particularly sharp deviation from this trend, which I believe reflects the fact that this was the period of mass disruption, when Greeks' sense of physical safety and food poverty were at their worst. Third, not all surveys have seen non-response develop in the same way. EU-SILC, a survey designed to tease out signs of poverty and its implications, has managed non-response a lot better, and appears to be returning to normal. How is SILC different? I am not sure - bear with me as I try to find out.

The Labour Force Survey provides even more detail on non-response. It shows, for instance, that it's the 'uncontactables' that account for the rise in non-response. It also shows that major urban areas have substantially higher non-response rates. Attica, in particular, has a non-response rate of around 40%, twice its pre-crisis level. This should get ELSTAT worried.


One might recoil in horror from such numbers - we get our unemployment figures from a survey that has 25% non-response? WTF?! It's important to remember, however, that statisticians don't just let non-response be; they contact different households fitting the same general description (e.g. local area, age of reference person) until they fill their quota for each box on their grid. When non-response is random, or displays no systematic pattern that is related to the subject of the survey, it does not bias the resulting figures, even if it's really quite large. Greek election polls had ca. 80% non-response even back when they were accurate (data here).
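A quick back-of-envelope check of that claim, with invented numbers: the scale of non-response on its own does not move the expected estimate; only its correlation with the answer does.

```python
# Expected poll estimate under different non-response patterns (illustrative).
TRUE_SHARE = 0.35  # assumed true support for party A

def expected_estimate(resp_rate_a: float, resp_rate_other: float) -> float:
    """Expected share of party A among respondents, given group response rates."""
    responders_a = TRUE_SHARE * resp_rate_a
    responders_other = (1 - TRUE_SHARE) * resp_rate_other
    return responders_a / (responders_a + responders_other)

# 80% non-response, identical across groups: the estimate stays at 0.35.
print(expected_estimate(0.20, 0.20))   # -> 0.35
# Similar overall non-response, but party A voters respond half as often: biased.
print(expected_estimate(0.13, 0.26))   # -> roughly 0.21
```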

Ultimately, I'm not even sure that the Syriza surge was correlated with the rise of the uncontactables. Electoral data are available here, and a regional breakdown of non-response is available, at least for the LFS. The interesting thing is that the correlation between transience and Syriza vote growth is quite weak.
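The check itself is nothing fancy: line up the change in non-response against the change in Syriza vote share by region and compute a correlation. The regional values below are placeholders; the real inputs come from the LFS quality reports and the Ministry of Interior results files linked above.

```python
# Correlate regional change in LFS non-response with change in SYRIZA vote share.
# Placeholder values only; substitute the real per-region figures.
import numpy as np

delta_non_response = np.array([4.0, 12.0, 20.0, 7.0, 9.0, 15.0, 6.0, 11.0])      # pp change
delta_syriza_share = np.array([28.0, 31.0, 29.0, 27.0, 32.0, 28.0, 30.0, 26.0])  # pp change

r = np.corrcoef(delta_non_response, delta_syriza_share)[0, 1]
print(f"Pearson r = {r:.2f}")
```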




TO BE CONTINUED

Comments:

  1. Nice summary. Thumbs up!

    However pollster ProRata did NOT "open its data to researchers" and surely did not invite scrutiny.

    They just hosted an online "job fair" for PhD candidates in PolSci, who could have some limited access to some survey data (of unknown "grain level") and were required to write up a NARRATIVE that could impress the ProRata board.

    I do hope ProRata embrace your suggestions and open up their microdata to all parties interested, instead of playing tacky PR moves.



    1. Hi anon. I apologise as I did not check the level of limitations involved in ProRata's initiative, and I haven't corroborated your comments either. I have no reason to doubt you and, if true, what you say is very disappointing. However, is there any more detail you can provide on the T&Cs? I will be happy to post/host here.

