Did you ever wonder why polls taken at the same general time can differ -- sometimes significantly? The general answer is a difference in samples. One poll's sample might be better than another. That can be true, but it is a little more complicated than that.
Yesterday, I showed you a New York Times / Siena College poll. That poll surveyed Florida voters and showed Hillary Clinton clinging to a slim 1 point lead there. But in addition to publishing their poll, the New York Times did a very interesting thing -- they gave all the raw data of their survey to four other distinguished pollsters, and asked them what that data showed.
Guess what happened. Including their own poll, they came up with five different results -- ranging from a Clinton lead of 4 points to a Trump lead of 1 point (see the top chart). How can that happen when all five used the same raw data sample?
The answer lies in how each of them treats that same raw data. All polls question many more people than they actually report, in hopes of getting an accurate picture of how the general population is thinking. They then weight the responses to match the makeup of the population -- trying to hit the correct percentages of racial groups, age groups, etc.
And they might not even agree on those percentages. Note the second chart. The five pollsters used differing percentages just for the racial groups. You can expect the same with age and other groups.
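To make that concrete, here is a rough sketch in Python of how that kind of demographic weighting works. Every number in it is made up -- the sample, the racial breakdowns, and the two pollsters' assumed electorate compositions are all hypothetical, just to show the mechanics: the same raw responses, weighted to two different assumed electorates, produce two different toplines.

```python
# A minimal sketch of demographic weighting, with entirely hypothetical numbers.
# Each respondent gets a weight so the weighted sample matches a pollster's
# assumed share of each racial group in the electorate.
from collections import Counter

# Hypothetical raw responses: (racial group, candidate preference)
raw_sample = (
    [("white", "Trump")] * 380 + [("white", "Clinton")] * 300 +
    [("black", "Clinton")] * 110 + [("black", "Trump")] * 10 +
    [("hispanic", "Clinton")] * 120 + [("hispanic", "Trump")] * 60 +
    [("other", "Clinton")] * 12 + [("other", "Trump")] * 8
)

def weighted_topline(sample, group_targets):
    """Weight each respondent so groups match the pollster's assumed shares."""
    group_counts = Counter(group for group, _ in sample)
    n = len(sample)
    # Weight = (assumed share of group) / (observed share of group in sample)
    weights = {g: group_targets[g] / (group_counts[g] / n) for g in group_counts}
    tally = Counter()
    for group, choice in sample:
        tally[choice] += weights[group]
    total = sum(tally.values())
    return {choice: round(100 * votes / total, 1) for choice, votes in tally.items()}

# Two pollsters assuming different racial compositions of the electorate
pollster_a = {"white": 0.66, "black": 0.13, "hispanic": 0.17, "other": 0.04}
pollster_b = {"white": 0.62, "black": 0.14, "hispanic": 0.19, "other": 0.05}

for name, targets in [("Pollster A", pollster_a), ("Pollster B", pollster_b)]:
    print(name, weighted_topline(raw_sample, targets))
```

Run it and the two pollsters report different margins, even though not a single response changed -- only the assumptions about what the electorate looks like.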
And it doesn't stop there. They then try to narrow the poll down to "likely voters", and they differ in how to do that (see the bottom chart). Some use the respondent's self-reporting on whether he/she will vote, while others use the voting history of the area being polled. Then they might differ on how to weight those likely voters -- whether to use a traditional method or devise a model of their own. And finally, they might differ on whether to use census numbers or registered-voter numbers as their reference.
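The same point can be sketched for likely-voter screens. Again, the respondents below are invented -- the only thing the example shows is that filtering the same people by what they say versus by their voting history keeps different people, and so produces different results.

```python
# A minimal sketch, with made-up respondents, of two "likely voter" screens:
# one trusts self-reported intention, the other trusts past voting history.
from collections import Counter

# Hypothetical respondents: (choice, says_will_vote, elections_voted_of_last_two)
respondents = [
    ("Clinton", True,  2), ("Clinton", True,  0), ("Clinton", False, 2),
    ("Trump",   True,  2), ("Trump",   True,  1), ("Trump",   False, 0),
    ("Clinton", True,  1), ("Trump",   True,  2), ("Clinton", False, 0),
    ("Trump",   True,  0),
] * 50  # repeat to simulate a larger sample

def topline(voters):
    """Percentage breakdown of candidate preference among the kept respondents."""
    tally = Counter(choice for choice, *_ in voters)
    total = sum(tally.values())
    return {c: round(100 * n / total, 1) for c, n in tally.items()}

# Screen 1: keep anyone who says they will vote
self_report = [r for r in respondents if r[1]]

# Screen 2: keep anyone who voted in both of the last two elections
vote_history = [r for r in respondents if r[2] == 2]

print("Self-report screen:", topline(self_report))
print("Vote-history screen:", topline(vote_history))
```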
As you can see, a poll is much more complicated than just calling people and asking them questions, and decisions must be made at each step in the process.
I showed the above steps just to remind you that polling is not an exact science. It is based on scientific principles, but a bad decision at any point can nullify that science. The truth is that a poll is an educated guess. It can be a very good guess, but it can also be a very bad one. It depends on the skill and honesty of those doing the poll.
Anyone reading this blog regularly will know that I have a weakness for polls. I will continue to bring you the results of many polls, because I do believe they can give us a vague picture of what is happening in an electoral race -- especially when averaged together. But I caution you -- never take any poll as the absolute final truth.