Overtime Politics Thread (WARNING: Possible fraud)

  Talk Elections
  Election Archive
  2016 U.S. Presidential Primary Election Polls
Author Topic: Overtime Politics Thread (WARNING: Possible fraud)  (Read 73192 times)
BundouYMB
Jr. Member
Posts: 910


« on: April 07, 2016, 08:02:57 PM »


Why does it matter?

Bullsh!t made-up polls are bullsh!t made-up polls, even if the numbers end up closer than other pollsters'. If you look at our predictions threads, some posters come close to predicting the actual results. But if they had set up what they claimed was a polling company and said those predicted numbers came from polls, that wouldn't make them any more correct or the polls any less nonsense.

This was still pure fraud, and I have to laugh at anyone who took it seriously after all the evidence came in. What's embarrassing is that Atlas was the only place where that was the case; even r/sandersforpresident quit citing it.

I don't think anyone took them seriously after Super Tuesday. It's just funny that a number of polls were much worse than some fake pollster.

I mean, a fake pollster *should* do better than quite a few real pollsters if they simply put out polls that are close to the current polling average. If you do that, you'll always beat whoever is on the "wrong" side of the average. I think one of the things that got people initially sniffing around Research 2000 was that their polls were *never* "out there." In fact, virtually all of them tracked extremely close to the average, or, failing that, to whatever you'd expect from the conventional wisdom. In real polling, even slight mistakes in weighting or sampling can produce wildly incorrect results, so occasional junkers are to be expected (especially given the generally low standard of work among American pollsters, but I digress).
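To put some rough numbers on that point, here's a quick sketch (all figures hypothetical, not based on any real race): a "pollster" that just republishes the current polling average will, almost by construction, beat most pollsters who carry real house effects and sampling noise.

```python
import random

random.seed(0)
TRUE_SUPPORT = 52.0  # hypothetical "true" level of support in the race

# Five hypothetical real pollsters, each with a house effect,
# each releasing 20 polls with ~2.5 points of sampling noise
def real_poll(house_effect):
    return TRUE_SUPPORT + house_effect + random.gauss(0, 2.5)

polls = [real_poll(h) for h in (-3, -1, 0, 1, 3) for _ in range(20)]

# The "fake pollster" simply republishes the current polling average
fake_number = sum(polls) / len(polls)

fake_error = abs(fake_number - TRUE_SUPPORT)
real_errors = [abs(p - TRUE_SUPPORT) for p in polls]
beaten = sum(e > fake_error for e in real_errors)
print(f"fake pollster's error: {fake_error:.2f} points")
print(f"it beats {beaten} of {len(polls)} real polls")
```

Since the house effects and noise roughly cancel in the average, the fake number lands near the truth while most individual polls don't, which is exactly why "suspiciously average" results were a red flag with Research 2000.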
BundouYMB
Jr. Member
Posts: 910


« Reply #1 on: April 07, 2016, 08:28:30 PM »



PPP has a history of putting their thumb on the scale, so that isn't an issue limited to fake pollsters.

http://fivethirtyeight.com/features/heres-proof-some-pollsters-are-putting-a-thumb-on-the-scale/

Are you kidding me? I wrote a long reply, and then it told me I was logged out when I tried to post it, so I lost it. >_> Anyways, I'll just summarize what I wrote: while that's true, there's obviously a huge difference between putting out real data with a thumb on the scale during weighting to hedge your bets (data that's still completely usable to statisticians, or really anyone) and putting out fake, made-up data that's totally useless and pollutes real statistical analysis. After all, anyone can re-weight those PPP polls themselves, and the result is the same as if PPP had done it. The data is still totally usable.
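A minimal sketch of what "anyone can re-weight it themselves" means in practice (the crosstab and census shares below are made up for illustration): given a poll's released subgroup results, you can apply your own population weights and get a topline that doesn't depend on the pollster's in-house weighting at all.

```python
# Hypothetical released crosstab: age group -> (respondents, share for candidate A)
crosstab = {
    "18-34": (150, 0.58),
    "35-64": (300, 0.50),
    "65+":   (250, 0.44),
}

# Your own population weights (e.g., taken from census figures) -- assumed values
population_share = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}

# Re-weighted topline: each subgroup's support times its population share
topline = sum(population_share[g] * support for g, (_, support) in crosstab.items())
print(f"re-weighted topline: {topline:.1%}")
```

The point is that as long as the underlying responses are real, any reader can redo this step; with fabricated data, there's nothing underneath to re-weight.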

Also, I pointed out that the reason PPP does this is understandable and sympathetic. There's no way to reliably get a representative sample, and outliers are inevitable, especially when you poll as often as PPP does. Yet all it takes is one crazy outlier, like that Chicago Tribune poll you posted, and you get branded a bad pollster by morons who don't understand statistics (I'm not saying the Chicago Tribune is a good pollster, mind you, just that one crazy result doesn't prove they're a bad one). I mean, PPP gets their money from the candidates who hire them, and those candidates use PPP's polls at fundraisers and the like to argue that their candidacies are viable. PPP can't afford a bad reputation from one outlier. And what they do is still good: they do a lot of polling pro bono, and that's perfectly good data, even if PPP's in-house weighting is bad. It's actually quite an unfortunate situation all around, and I don't think it's totally, or even mostly, the fault of pollsters like PPP who do this. It's mainly that there's a lot of misinformation about polling out there, and it's one of those topics that everyone thinks they understand and few people actually do.
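The "outliers are inevitable" point is just arithmetic: if each correctly-conducted poll lands outside its 95% confidence interval about 5% of the time, a prolific pollster is all but guaranteed some junkers.

```python
# Chance of at least one outlier among n honest polls, where each poll
# independently falls outside its 95% confidence interval with probability 0.05
p_outside = 0.05
for n in (10, 50, 100):
    p_at_least_one = 1 - (1 - p_outside) ** n
    print(f"{n} polls: {p_at_least_one:.0%} chance of at least one outlier")
```

By 100 polls the probability is over 99%, so judging a high-volume pollster by its single weirdest result is statistically meaningless.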
Powered by SMF 1.1.21 | SMF © 2015, Simple Machines
