Author Topic: PPP vs. Nate Silver and potentially cooked polls  (Read 16552 times)
Mr. Morden
Atlas Legend
*****
Posts: 44,066
United States


« on: September 18, 2013, 08:51:32 PM »

Here's the original Nate Cohn takedown of PPP's methodology, btw:

http://www.newrepublic.com/article/114682/ppp-polling-methodology-opaque-flawed
Mr. Morden


« Reply #1 on: September 20, 2013, 02:48:24 AM »

Nate Cohn offers more criticism of PPP here:

http://www.newrepublic.com/article/114769/ppp-methodology-results-arent-defense

essentially arguing that it looks like PPP uses ad hoc weighting to get their results closer to the polling average, in cases where other pollsters have already polled the race.
Mr. Morden


« Reply #2 on: September 20, 2013, 08:21:08 PM »

Quote from: Mr. Morden on September 20, 2013, 02:48:24 AM
Nate Cohn offers more criticism of PPP here:

http://www.newrepublic.com/article/114769/ppp-methodology-results-arent-defense

essentially arguing that it looks like PPP uses ad hoc weighting to get their results closer to the polling average, in cases where other pollsters have already polled the race.


Quote
I've had this thought before.

1. Start a fake polling company.
2. Release bogus polls that simply report results close to averages of previously published polls.
3. Be right most of the time (and when you're wrong, everyone else is too).
4. Establish a great reputation as the most accurate pollster.

(Leap of faith)

5. Make money.

:P

You're not the first one to have thought of this:

https://uselectionatlas.org/FORUM/index.php?topic=153727.msg3300429#msg3300429
Mr. Morden


« Reply #3 on: September 21, 2013, 02:13:34 AM »

A few graphs in that latest Cohn piece jump out at me.  First, PPP has zero house effect, which is at least consistent with the idea that they're fitting towards the polling average from other polls in the final days before the election:
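As a rough illustration of what "zero house effect" means here (the numbers below are made up for the example, not from Cohn's data), you can think of it as the pollster's average signed deviation from the polling average across the races it polled:

```python
# Minimal sketch of a "house effect" estimate, with made-up numbers: a
# pollster's average signed deviation (in points of margin) from the
# contemporaneous polling average of each race it polled.  A value near
# zero means the firm tracks the consensus rather than leaning one way.

def house_effect(own_margins, race_averages):
    """Mean signed gap between a pollster's margins and the polling
    average for the same races (positive = consistent partisan lean)."""
    gaps = [own - avg for own, avg in zip(own_margins, race_averages)]
    return sum(gaps) / len(gaps)

# Hypothetical Dem-minus-Rep margins, in points, for five races:
ppp_like = [3.0, -1.0, 5.0, 0.5, 2.5]   # a firm hugging the average
averages = [3.2, -0.8, 4.9, 0.4, 2.3]   # polling average in those races

print(round(house_effect(ppp_like, averages), 2))  # ~0: no house effect
```

A firm with a real partisan lean would show a gap consistently on one side of zero; a firm fitting toward the average shows essentially none, which is what Cohn's graph finds for PPP.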



Second, when they do a statewide poll for both a gubernatorial/senatorial race and a presidential race at the same time, the "error" (how far off their poll is from the actual election result) depends on whether the state is a gubernatorial/senatorial battleground or a presidential battleground:



This suggests that they might be juicing the numbers to fit towards the overall polling average in competitive races, at the expense of creating a greater error in the same state's other races.  This phenomenon does not exist in polls from firms that do live interviews:



Finally, this is a graph that's not specific to PPP, but which covers all robopollsters... it shows that robopolls become more accurate in races where there are also polls from live interviewers:



Which, if taken at face value, would suggest that other robopollsters are juicing their results as well, in order to make themselves look like better pollsters than they really are.
Mr. Morden


« Reply #4 on: September 21, 2013, 02:20:55 AM »

Quote from: Mr. Morden on September 20, 2013, 02:48:24 AM
Nate Cohn offers more criticism of PPP here:

http://www.newrepublic.com/article/114769/ppp-methodology-results-arent-defense

essentially arguing that it looks like PPP uses ad hoc weighting to get their results closer to the polling average, in cases where other pollsters have already polled the race.


Quote
What I gather from this is that the only difference between PPP's methodology and the hocus-pocus of traditional poll weighting practices is that PPP has the opportunity to correct their numbers into something somewhat sensible whenever they realize they have an outlier. Every pollster has an assumption about how they expect things will/should look, and that certainly comes into play when adjusting demographic ratios, or establishing likely voter screens, or merging their cellphone sample or internet panel with the rest of the poll.

It's not scientifically rigorous, no, but claiming to be so is a sham in the first place when you're dealing with response rates in the single digits.

The point is that the hocus pocus that a pollster performs on the data is something that they should establish (at least internally, to themselves) *before* they conduct the poll.  They shouldn't conduct the poll, take a look at the results, and then fudge the weighting around to make sure that the result "looks right", or lines up with other polls.  If they're fudging the methodology like that, which is what's at least being hinted at with the data that Cohn presents, then they're basically committing fraud.

They could well be a below-average pollster that is making themselves look like an above-average pollster by juicing the numbers like this.  And if you take the final graph I gave in my last post at face value, then it's possible that other robopollsters are doing this as well.
Mr. Morden


« Reply #5 on: September 21, 2013, 11:14:25 PM »

In case it isn't obvious why juicing your results, as is being alleged here, is so bad, let me explain with this simplified thought experiment:

Let's suppose you have some hypothetical pollster called QQQ who's juicing their results.  That is, they conduct real polls, but the polls are pretty mediocre.  They're in fact below average pollsters in cases where they're just relying on their own numbers and letting the chips fall where they may.  However, it turns out that QQQ has been juicing their results in the following (exaggerated... not saying that PPP goes this far) way: Whenever there's a race that has been polled by at least three other pollsters in the last month, QQQ throws out their own numbers, and publishes a "poll" which is just the average of everyone else's polls for that race.

Since the average of all public polls is more often than not a decent predictor of the results, then QQQ's pollster ratings will actually look really good, since in most of the races where they're just copying everyone else, they'll be pretty accurate.  Sure, there will also be races where there aren't enough polls from other pollsters to meet their "at least three in the last month" criteria, so QQQ will just publish their own stuff, which isn't that accurate, but their overall "pollster rating" will look pretty good, because the good stuff outweighs the bad.
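The QQQ scheme is easy enough to simulate.  Here's a toy sketch (every parameter here is invented purely for illustration): QQQ's own fieldwork has a 5-point error, other firms have a 3-point error, and QQQ copies the average of the others' polls whenever at least three exist.

```python
import random

# Toy simulation of the QQQ thought experiment above (all numbers invented):
# QQQ's own fieldwork is below average (error sd 5 points), but whenever at
# least three other polls of a race exist, QQQ publishes their average
# instead of its own numbers.  Compare QQQ's published accuracy with what
# its accuracy would be if it published honestly.

random.seed(0)

N_RACES = 1000
QQQ_SD = 5.0    # QQQ's real polling error: below average
OTHER_SD = 3.0  # a typical competing firm's error

qqq_errors, honest_errors = [], []
for _ in range(N_RACES):
    truth = random.gauss(0, 10)                  # true final margin
    n_other = random.randrange(6)                # 0-5 other polls of this race
    others = [random.gauss(truth, OTHER_SD) for _ in range(n_other)]
    own = random.gauss(truth, QQQ_SD)            # what QQQ actually measured
    # QQQ's published number: copy the average whenever it can.
    published = sum(others) / n_other if n_other >= 3 else own
    qqq_errors.append(abs(published - truth))
    honest_errors.append(abs(own - truth))

juiced = sum(qqq_errors) / N_RACES
honest = sum(honest_errors) / N_RACES
print(f"mean error, juiced: {juiced:.2f}; if honest: {honest:.2f}")
```

Averaging several independent polls shrinks the error by roughly the square root of their number, so in the races where QQQ copies, it looks far more accurate than its real fieldwork, and its overall "pollster rating" comes out well above what it deserves.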

So isn't this "OK" for the poll consumer?  If QQQ is giving you numbers that, on average, are a pretty good predictor of the outcome of the race, aren't they "good" pollsters?

No, of course not.  What you, the poll consumer, want from a pollster is an indication of where the race stands.  You want things to work so that, when you see a certain result from a good pollster, it will cause you to change your assessment of where the race stands.  But if that pollster is usually just giving you an average of everyone else's polls, then they're not giving you any new information that wouldn't already exist if they hadn't released the "poll".  And then the rest of the time, they're giving you a "real" poll which is of mediocre quality.

So again, if they're doing this without telling you, then they're basically committing fraud.
Mr. Morden


« Reply #6 on: May 29, 2014, 09:19:29 PM »

More Cohn criticism of the methodology of robo-polls:

http://www.nytimes.com/2014/05/29/upshot/when-polling-is-more-like-guessing.html?_r=0