Exit polls work like this (EXPLAINER: How Exit Polls Work,
Robin Sproul, Nov. 1, 2014, ABC News):
How conducted:
Interviewers
stand outside polling places in precincts that are randomly selected. They
attempt to interview voters leaving the polling place at specific intervals
(every third or fifth voter, for example).
Voters who agree to participate in the poll fill out a short
questionnaire and place it in a ballot box. Interviewers phone in results three
times during the day.
When a voter refuses to participate, the interviewer notes
the gender and approximate age and race of that voter. In this way, the exit
poll can be statistically corrected to make sure all voters are fairly
represented in the final results.
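A minimal sketch of that correction (in Python, with invented tallies; the real adjustment is more elaborate): each completed questionnaire is weighted up so the responding sample matches the full stream of exiting voters, refusals included.

    # Sketch: weight respondents so they match ALL exiting voters
    # (respondents + noted refusals). Tallies are invented.
    from collections import Counter

    # Each record is the (gender, age group, race) the interviewer recorded.
    respondents = [("F", "18-29", "White"), ("M", "65+", "White"),
                   ("F", "30-44", "Black"), ("M", "45-64", "White")]
    refusals = [("M", "65+", "White"), ("M", "65+", "White")]

    resp_counts = Counter(respondents)
    all_counts = resp_counts + Counter(refusals)

    # Weight per respondent in a cell = exiting voters in cell / respondents in cell.
    weights = {cell: all_counts[cell] / resp_counts[cell] for cell in resp_counts}

    for cell, w in sorted(weights.items()):
        print(cell, round(w, 2))  # e.g. ('M', '65+', 'White') gets weight 3.0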
Questions asked:
Typically: who they just voted for in key races; what opinions they hold about the
candidates and important issues; and the demographic characteristics of the voter.
Example
of a 2014 exit poll issue question:
How worried are you about the
direction of the nation’s economy in the next year?
• Very worried
• Somewhat worried
• Not too worried
• Not at all worried
Are
exit polls accurate?
... like any other survey,
are subject to sampling errors. Before news organizations report any exit poll
results or make projections ... they compare results to pre-election polls,
past precinct voting history, and have statisticians and political experts
carefully review the data.
After the polls close, the exit poll results
are weighted using the actual vote count to make the data more accurate. Even
projections that are made without any actual vote data are not based solely on
the results of exit polls.
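One way to picture that reweighting, as a rough sketch only (candidates, counts, and shares below are invented, and the actual procedure is more sophisticated): respondents are weighted so the exit poll's candidate shares reproduce the official returns.

    # Sketch: rescale exit-poll weights so weighted candidate shares
    # match the official vote count (all figures invented).
    poll = [("A", 1.0), ("A", 1.0), ("A", 1.0), ("B", 1.0), ("B", 1.0)]  # (vote, weight)
    official_share = {"A": 0.52, "B": 0.48}  # from the actual count

    poll_total = sum(w for _, w in poll)
    poll_share = {c: sum(w for cand, w in poll if cand == c) / poll_total
                  for c in official_share}

    # Adjustment factor per candidate = official share / exit-poll share.
    adjusted = [(cand, w * official_share[cand] / poll_share[cand]) for cand, w in poll]
    print(adjusted)  # A-voters weighted down, B-voters up, so shares hit 52/48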
Accounting for early and by-mail votes:
In the
2012 election, just over one third of Americans voted before Election Day,
using some form of absentee or early voting. Capturing information about these
voters is challenging, but it is critical to report accurate information about
all voters.
In states with high
numbers of absentee/early voters, telephone polls are conducted to reach those
voters. Data from these telephone polls are combined with the exit poll data to
provide a complete portrait of all voters.
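A hedged sketch of how the two samples might be blended (the respondent counts and the 35 percent early-vote share are assumptions chosen to echo the "just over one third" figure above, not the networks' actual procedure): each sample is weighted to the slice of the electorate it represents.

    # Sketch: combine a telephone poll of early/absentee voters with the
    # in-person exit poll, weighting each sample to its share of all voters.
    early_share = 0.35                 # assumed share who voted early or by mail
    eday_share = 1.0 - early_share

    phone_n = 800                      # early voters reached by phone (invented)
    exit_n = 2400                      # Election Day voters at precincts (invented)

    phone_w = early_share / phone_n    # weight per phone respondent
    exit_w = eday_share / exit_n       # weight per exit-poll respondent

    def combined_share(phone_yes, exit_yes):
        # Weighted proportion answering "yes" across both samples.
        weighted_yes = phone_yes * phone_w + exit_yes * exit_w
        total_weight = phone_n * phone_w + exit_n * exit_w  # equals 1.0
        return weighted_yes / total_weight

    print(round(combined_share(phone_yes=360, exit_yes=1300), 3))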
Reporting exit poll results:
On Election Day, there is a
strict quarantine on any news coming from the early waves of exit poll data
until 5:00 p.m. ET. By about 5:45 p.m., some initial demographic information
about voter turnout will be available on ABCNews.com.
Winners will not be projected until polls are
closed, so announcements come on a state-by-state basis as individual state
polls close. Information will be constantly updated throughout the evening on
ABCNews.com and on all ABC News programs.
The following are the hard questions that need to be asked, not only
by the media but also by the public, to critically understand a poll result (20 Questions A Journalist Should Ask About
Poll Results, http://www.ncpp.org/?q=node/4):
1. Who did the poll?
2. Who paid for the poll and why was it done?
3. How many people were interviewed for the survey?
4. How were those people chosen?
5. What area (nation, state, or region) or what group (teachers, lawyers, Democratic voters, etc.) were these people chosen from?
6. Are the results based on the answers of all the people interviewed?
7. Who should have been interviewed and was not? Or do response rates matter?
8. When was the poll done?
9. How were the interviews conducted?
10. What about polls on the Internet or World Wide Web?
11. What is the sampling error for the poll results? (a worked sketch follows this list)
12. Who’s on first?
13. What other kinds of factors can skew poll results?
14. What questions were asked?
15. In what order were the questions asked?
16. What about "push polls"?
17. What other polls have been done on this topic? Do they say the same thing? If they are different, why are they different?
18. What about exit polls?
19. What else needs to be included in the report of the poll?
20. So I've asked all the questions. The answers sound good. Should we report the results?
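For item 11, a rough rule of thumb can frame the answer. The sketch below assumes a simple random sample and 95 percent confidence; real polls with weighting and clustering carry design effects that make the true error larger.

    # Rule-of-thumb margin of error at 95% confidence for a proportion,
    # assuming a simple random sample (design effects ignored).
    import math

    def margin_of_error(p, n, z=1.96):
        # Half-width of the 95% confidence interval for a proportion p
        # estimated from a simple random sample of n respondents.
        return z * math.sqrt(p * (1 - p) / n)

    # A 1,000-person poll reporting 50% support:
    print(round(margin_of_error(0.5, 1000), 3))  # about 0.031, i.e. +/- 3 points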
Pollsters like to use the word "scientific" liberally. You constantly find the
terms scientific polling or scientific sample of eligible voters (or of voters)
in their writings. That may be because pollsters mostly deal with the public,
and scientific sounds more appealing than probability sample, the standard
description used in the mainstream sample survey literature.
"A fundamental tenet of scientific measurement is that
the measuring device is standardized over different objects being measured"
[Groves 1989, as
cited by Bishop and Mockabee, Survey
Practice, Vol. 4, No. 6, 2011]. It follows that any question should mean
essentially the same thing to all respondents. A question should be worded so
that everyone is answering the same question. Without such standardization you can't know if your
measurements are reliable and valid.
Although most public opinion researchers have
certainly become sensitive to the effects of variations in question wording and
context, they have, with few exceptions, been much less attentive – if not
oblivious – to how the meaning-and-interpretation of survey questions can vary
across respondents and over time even when the wording and context of the
question itself remains identical. Take, for example, the presidential job approval question. What, if anything,
do we know about how respondents interpret the question: “Do you approve or disapprove of
the way Barack Obama is handling his job as president?” Or how he’s handling the economy? What does “handling
his job as president” actually mean to them? What does “the economy” mean?
Does it vary across respondents and over time?
On the other hand, the pollsters' obsession with the qualifier
"scientific" may be traceable to sociological roots. According to
sociologist Howard S. Becker (Criticism
of polls and surveys in American social science, March 24, 2015,
Observatoire des Sondages), in addition to the for-profit commercial branch of
polling, there is the other and more ambitious branch, grown out of academic
survey research, that tried to make a scientific
social science:
We can better understand today’s
polls if we see them as part of a larger movement, designed to create a
“scientific” social science, whose two connected but distinguishable branches
collaborated in the shared effort to legitimate a style of research that came
to be known, variously, as survey research or polling. One branch grew out of
the interest of businesses, and the advertising agencies they supported, in
finding out what their audiences and customers wanted so that they could make
larger profits. The other grew out of the statistical tradition in sociology
which, ... wanted to prove that sociology and related disciplines studying
contemporary society were “real sciences” like physics and chemistry, capable
of producing demonstrably true generalizations and laws by using the rigorous
methods of measurement and statistical and mathematical analysis of those
sciences. ...
On the question of whether this ambition of sociologists in
America succeeded, Becker says:
Many
thought, and still think, that they succeeded, and the proof can be seen in the
pages of the major American sociological journals, where studies in this style
provide the vast majority of articles.
But
the victory was never complete. ... And then the whole enterprise lost substantial ground
as a result of the 1948 presidential election in the United States. ...
This event caused a serious reconsideration of the many
problems of doing accurate polling and making predictions that withstood the
test of reality. The credibility of the whole operation was being openly
doubted. This failure of polling and survey methods affected both the commercial
interests of the big polling organizations and the aspirations and continued
existence of academic research organizations and individual scholars.
The predictive failure of the election polls had important
consequences. In the time between 1936 and 1948, polling had become a large
business, which made its profits by doing surveys designed to help commercial
enterprises—manufacturers, advertisers, radio networks, Hollywood studios—guess
what the buying public would respond to in a way that would make money for
them. Election studies had become what they have remained, the one kind of survey study whose
accuracy can be assessed by comparison with the events it is meant to predict.
The resilient polling industry and the social scientists
have taken such failures to heart and have been looking for richer techniques
to improve their analyses, predictions, and estimates ever since. Methodologically,
polling started out with "straw polls" before 1936:
• Literary Digest had successes with straw polls from 1916 to 1932.
• Literary Digest (with a straw sample of 2.3 million) was ousted by Gallup in 1936 with his quota sample of 3,000 respondents.
• Later, quota sampling gave way to random sampling, or more precisely probability sampling. Today every lookup of the definition of "scientific polling" gives something like "Scientific polling consists of surveying a random sample of the population in order to obtain statistically significant results for an upcoming vote or election".
There could be many shades of scientific polling as far as random selection
goes, and some may not be so scientific. In a recent publication by the Open
Society Foundations (From Novelty to Normalcy: Polling in Myanmar’s Democratic
Transition, March 15, 2015, available at
http://www.opensocietyfoundations.org/sites/default/files/polling-myanmar-democratic-transition-20150318.pdf)
discussing survey sampling in Myanmar, we read:
"At
the local level in Myanmar, households are selected by a random process, much
as they are in countries with a longer research tradition. Most Myanmar research
companies we spoke with described techniques that are standard in international
research: direct multi-stage sampling of geographical areas, then sampling of
spots (villages or other small geographic units), then sampling of dwellings by
a random walk procedure, then identification of qualified individuals in the
dwelling by means of screening questions, and then selection of an individual
... ". (pp. 13-14)
If so, the random process would be marred by the random walk procedure, which
is not that standard. On that point, Designing Household Survey Samples:
Practical Guidelines (ST/ESA/STAT/SER.F/98, United Nations, 2005) says:
22.
Another type of non-probability sampling that is widely used is the so-called “random
walk” procedure at the last stage of a household survey. The technique is often
used even if the prior stages of the sample were selected with legitimate
probability methods. The illustration below shows a type of sampling that is a
combination of random walk and quota sampling. The latter is another
non-probability technique in which interviewers are given quotas of certain types
of persons to interview.
Example
To
illustrate the method, interviewers are instructed to begin the interview
process at some random geographic point in, say, a village, and follow a
specified path of travel to select the households to interview. It may entail
either selecting every nth household or screening each one along the
path of travel to ascertain the presence of a special target population such as
children under 5 years old. In the latter instance each qualifying household
would be interviewed for the survey until a pre-determined quota has been
reached.
It went on to describe the technique, whose purpose appears essentially to be
to avoid listing households. The verdict, however, was that in practice a
random sample may not be realized as its supporters claim, because:
24. ... It usually fails due to (a) interviewer behaviour
and (b) the treatment of nonresponse households including those that are
potentially non-response. It has been shown in countless studies that when
interviewers are given control of sample selection in the field, biased samples
result.
And it is more likely to be biased:
25. ... With the quota sample approach, persons
who are difficult to contact or unwilling to participate are more likely to be
underrepresented than would be the case in a probability sample. In the latter
case interviewers are generally required to make several callbacks to households
where its members are temporarily unavailable. ...
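A minimal sketch of the every-nth-household, screen-until-quota routine the guidelines describe (household data invented); it also makes visible why selection ends up in the interviewer's hands rather than in a sampling frame, which is where the bias creeps in.

    # Sketch of the random-walk-with-quota procedure: start at a random
    # household, visit every nth one along the path of travel, screen it,
    # and stop once the quota is filled. Household data are invented.
    import random

    random.seed(1)
    households = [{"id": i, "child_under_5": random.random() < 0.3}
                  for i in range(200)]  # households along the path of travel

    def random_walk_quota(households, step=3, quota=10):
        start = random.randrange(len(households))
        selected = []
        for offset in range(0, len(households), step):
            hh = households[(start + offset) % len(households)]
            if hh["child_under_5"]:  # screening question
                selected.append(hh["id"])
                if len(selected) >= quota:
                    break
        return selected

    print(random_walk_quota(households))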
In addition to the evolution of scientific
polls out of ad hoc straw polls, Hillygus observed these new trends in polling (The Evolution of Election Polling in the
United States, Public Opinion Quarterly, Vol. 75, No. 5, 2011, http://poq.oxfordjournals.org/content/75/5/962.full.pdf#page=1&view=FitH):
[1] "In
forecasting the election, statistical models and prediction markets appear to
be viable alternatives to polling predictions, especially early in the
campaign."
[2] "In
understanding voting behavior, surveys are increasingly replaced by
experimental designs or alternative measures of attitudes and behaviors."
[3] "In
campaign strategy polls are increasingly second fiddle to massive databases
from voter files and consumer databases, changing the campaign messages that we
see."
She also observed that:
With the
proliferation in polls, we have also seen greater variability in the
methodologies used and the quality of the data. The lack of transparency about
those methodologies has contributed to skepticism about the industry. Coupled
with changes in technology and the information environment, it is perhaps no
wonder that polls have lost some of their luster.
All in all, for those of us not yet captivated by the magical allure of
polling, it would be wise to take a hard look at the messages of both the
polling pundits and their critics. We should also keep our eyes wide open for
up-and-coming technologies and new trends.
A prominent Myanmar historian once said that the purpose of learning history
is to keep ourselves from becoming dumb asses. Learning technology and science,
perhaps, is to make our hearts able to withstand stress or resist temptation a
bit longer, so that our heads may have a bit more time to think.