Sunday, April 26, 2015

Elections and Polls


How does an election poll work as a business, for example, in the US?
"For pollsters, there's no money in asking questions about elections and releasing the numbers to the media. They do it as a marketing tool to attract clients who want to know what people think about, say, shampoo."

The fact is that commercial polling is where the money is; election polls, even when paid for by news organizations such as newspapers, radio and TV, are not profitable and are often run at a loss.

Opinion polling and market research have been done for a long time. But there is no real way to verify even simple findings, such as what percentage of people use this brand of shampoo or that brand of toothpaste. It is harder still to verify people's opinions on whether they prefer this policy or that, or whether they like a political party or the government, especially in a place where people are still getting used to speaking out.

Predicting election results correctly is about the only way for pollsters to show they know their stuff. But it is not always easy to get the predictions right. A classic example is the 1936 Roosevelt-Landon presidential election in the US. The Literary Digest conducted a postal opinion poll aiming to reach 10 million people, a quarter of the electorate. After tabulating 2.4 million returns it predicted that Landon would win by a convincing 55 per cent to 41 per cent. But the actual result was that Roosevelt crushed Landon by 61 per cent to 37 per cent. In contrast, a small survey of 3,000 interviews conducted by the opinion poll pioneer George Gallup came much closer to the final vote, forecasting a comfortable victory for Roosevelt.
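
A back-of-the-envelope calculation shows why a well-drawn sample of a few thousand can beat millions of self-selected replies. For a genuinely random sample the margin of error depends only on the sample size, roughly plus or minus 1.96 times the square root of p(1-p)/n at 95 per cent confidence, while the bias of a skewed sample does not shrink however many replies come in. The short Python sketch below makes the point; the only figures in it are the ones quoted above, rounded for illustration.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Approximate 95% margin of error for a simple random sample
        of size n estimating a proportion p."""
        return z * math.sqrt(p * (1 - p) / n)

    # Gallup-style sample: about 3,000 randomly chosen respondents
    print("n = 3,000      -> +/- {:.1%}".format(margin_of_error(3_000)))      # about +/- 1.8%

    # Literary Digest: 2.4 million returns, but drawn from car-owner and
    # telephone lists that over-represented wealthier, Landon-leaning voters.
    # The formula says the sampling error is tiny ...
    print("n = 2,400,000  -> +/- {:.2%}".format(margin_of_error(2_400_000)))  # about +/- 0.06%

    # ... yet the Digest missed by roughly 19 points, because selection bias,
    # unlike sampling error, does not shrink as n grows.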

This success made the name of Gallup's American Institute of Public Opinion (AIPO), founded with the goal “impartially to measure and report public opinion on political and social issues of the day without regard to the rightness and wisdom of the views expressed.” It was the real beginning of the claim by pollsters, past and present, that polls "can measure the true will of the people and that, through the polls, the people get a real voice in between elections and on all kinds of issues." There are now reasons to suspect that this is too publicized and idealized a view.
                                                                                                                          
For the history and development of opinion polls we have to look at the history of this industry in the US: it began there and it still has its biggest presence there. The 1940s were the decade in which the institutionalization and professionalization of public opinion research made headway. In 1941 the first university institute, the National Opinion Research Center, was founded at the University of Chicago, followed in 1946 by the American Association for Public Opinion Research, the first professional and academic association. The World Association for Public Opinion Research followed one year later, while Public Opinion Quarterly, the first academic journal in the field, had been appearing since 1937. The development of opinion polling in Europe was somewhat delayed by the effects of the World War.

Yet 12 years after Gallup's acclaimed forecast came the most famous failure: the polls predicting that Republican Thomas Dewey would beat the incumbent Democratic president Harry Truman in the 1948 election. Not only Gallup but also the two other major pollsters, Crossley and Roper, got it wrong.


Candidate       Party        Electoral Votes   Popular Vote %   Gallup Final Estimate   Roper Final Estimate   Crossley Final Estimate
Harry Truman    Democrat     303               49.6%            44.5%                   38%                    45%
Thomas Dewey    Republican   189               45.1%            49.5%                   53%                    50%

"Between 1956 and 2004 the US presidential elections showed an average deviance of only 1.9 percent (based on Mosteller method 3, one of several statistical ways how to calculate the margin of error ... But there have been major disasters for the pollsters in many countries, e.g. the US presidential elections of 1980, the British parliamentary election of 1992, or the German parliamentary election of 2005", and US presidential elections again in 2012. Even with these exceptions the polls were correct in predicting election outcomes most of the time. This is really not surprising, if we note that in mature democracies people have little reasons to lie as to whom they'll vote for and given that the sample of eligible voters is truly representative, and if the voter turnout is big enough, the numbers will always prove to be correct. So, the explanations for prediction failures must find fault with things other than the problem of the deceiving citizenry in those countries.

When an incumbent and an aspiring candidate meet in a presidential election, Americans have a simple explanation for the result: the people let the good one stay and kick the bum out. Maybe some pollsters don't believe in such simple formulas, or else they find it much harder than the people do to tell a good one from a bum.

The International New York Times ("Why Polls Can Sometimes Get Things So Wrong," July 3, 2014) explained:

The science of polling is sound, but if you ask the wrong group of people your poll questions, you can get the wrong answers. Think of it this way: An arrow shot by an expert marksman has some chance of hitting the target depending on the wind, the distance and any number of other things, but if the marksman aims at the wrong target, those other things have nothing to do with why the arrow misses.
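
A toy simulation makes the marksman analogy concrete: however large the sample, if it is drawn from the wrong frame, say only households with listed landlines, the estimate settles on that group's preferences rather than the electorate's. The group sizes and leanings below are invented purely for illustration.

    import random

    random.seed(0)

    # Toy electorate: 70% of voters have listed landlines, 30% do not,
    # and the two groups lean differently (invented numbers).
    def draw_voter():
        listed = random.random() < 0.70
        supports_a = random.random() < (0.46 if listed else 0.60)
        return listed, supports_a

    population = [draw_voter() for _ in range(200_000)]
    true_support = sum(v for _, v in population) / len(population)

    # "Aim at the wrong target": poll only listed-landline households.
    frame = [v for listed, v in population if listed]
    poll = random.sample(frame, 10_000)

    print(f"true support:       {true_support:.1%}")          # about 50%
    print(f"landline-only poll: {sum(poll)/len(poll):.1%}")   # about 46%, however big the sample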

In the 2012 Obama-Romney presidential election Gallup was wrong again, and in a 17-page report it gave four factors that reduced the accuracy of its polling. According to the Huffington Post (June 5, 2013) they were the four points below; a simplified sketch of how survey weighting can go wrong follows the list:

Misidentification of Likely Voters. ... using a procedure developed in the 1950s ... Last year, this likely voter model moved Gallup's estimate of the margin separating Obama and Romney 4 points in Romney's direction.

Under-Representation of Regions. Gallup also weights its data by a variety of factors ... effectively undersampling states that vote more Democratic.

Faulty Representation of Race and Ethnicity. ... Gallup in recent years has used an unusual method to ask about race that distorted the racial composition of its samples when the data were weighted. ... This led to a disproportionate number of people who said they were multiracial, and that in turn distorted the weighting procedure, effectively giving too much weight to some white voters.

Nonstandard Sampling Method. Before 2011, Gallup had selected phone numbers using random digit dialing, or RDD, which calls randomly generated numbers. This is the procedure that most national media polls have used for decades. ... Gallup made a significant change in 2011, when it dropped the RDD methodology for its landline sample, using instead numbers randomly selected from those listed in residential telephone directories. ... But the change came with a downside: Not everyone who has a landline has a listed number. Although Gallup's initial research indicated that cell phone calls would cover the difference, they didn't: The listed sample turned out to be older and more heavily Republican than the RDD sample.
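
To see how weighting enters, here is a deliberately simplified sketch of post-stratification, the kind of adjustment the report describes: each respondent's answer is weighted by how under- or over-represented their group is in the sample relative to its known population share. All the groups, shares and support figures below are invented for illustration; the point is that if a category's population share is itself mis-measured, as with the multiracial category above, the weights and the headline number go wrong with it.

    # Simplified post-stratification: weight = population share / sample share.
    # All shares and support figures below are invented for illustration.
    population_share = {"white": 0.64, "black": 0.12, "hispanic": 0.16, "other": 0.08}
    sample_share     = {"white": 0.72, "black": 0.10, "hispanic": 0.12, "other": 0.06}

    weights = {g: population_share[g] / sample_share[g] for g in population_share}

    # Invented support for candidate A within each group of the sample.
    support = {"white": 0.42, "black": 0.90, "hispanic": 0.68, "other": 0.55}

    unweighted = sum(sample_share[g] * support[g] for g in sample_share)
    weighted   = sum(sample_share[g] * weights[g] * support[g] for g in sample_share)

    print(f"unweighted estimate: {unweighted:.1%}")   # 50.7%
    print(f"weighted estimate:   {weighted:.1%}")     # 53.0%
    # If the population shares fed into the weights are themselves wrong,
    # the weighted estimate is confidently wrong in the same direction.
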
It is evident that, all other things being equal, exit polls, that is, polls taken at a sample of polling stations from a sample of voters just after they have voted, would be much more accurate. But exit polling is not allowed everywhere; Singapore and New Zealand, for example, ban it completely. Their lesser accuracy notwithstanding, pre-election polls are in much demand even where exit polls are allowed, because "they are the basis for campaign strategy by candidates, parties, and interest groups. They are the primary tool that academics and journalists use to understand voting behavior."

It is fair to say that in the US the first day after an election is the beginning of the season for the next round of election polling. The only alertness, tenacity, and single-mindedness among our own citizens comparable to this seems to be their search for cramming masters for their children as soon as the high-school completion exam (which also doubles as the university entrance exam) is over. Maybe we could expect a different kind of polling and marketing industry to develop around this, as we now see a flurry of activity by polling pundits trying to make inroads into Myanmar.


                                                                  


