Talk Elections › Election Archive › 2008 Elections › 2008 U.S. Presidential General Election Polls
Author Topic: ARG...... no longer argh ?  (Read 13485 times)
Gustaf
Moderators
Atlas Star
*****
Posts: 29,779


Political Matrix
E: 0.39, S: -0.70

« on: November 07, 2008, 01:22:36 PM »

I'm working on this right now. I'm only including polls taken during the last week. A bit arbitrary but I have to make the cut-off somewhere.

Anyway, so far:

Zogby: Standard error: 2.66%, McCain bias of 0.73%

ARG: Standard error: 4.17%, Obama bias of 1.78%

SUSA: Standard error: 3.73%, McCain bias of 0.95%

Strategic Vision: Standard error: 1.82%, McCain bias of 2.48%

I note that Strategic Vision does have a bit of a GOP bias but was still pretty accurate. Not based on a lot of polls though. To be continued...
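The per-pollster figures above can be computed along these lines. This is a minimal sketch assuming "bias" means the mean signed error and "standard error" the spread of the per-state errors; the pollster name and numbers below are made-up placeholders, not the thread's actual 2008 data:

```python
import statistics

def pollster_stats(errors):
    """Return (bias, spread) for one pollster's signed errors.

    errors: per-state values of (poll margin - actual margin) in
    points, with positive meaning the poll leaned toward McCain.
    bias   = mean signed error
    spread = population standard deviation of the errors
    """
    return statistics.mean(errors), statistics.pstdev(errors)

# Hypothetical per-state errors, NOT the real Zogby numbers.
zogby_errors = [1.0, -0.5, 2.0, 0.4]
bias, spread = pollster_stats(zogby_errors)
```

Whether to use the population or sample standard deviation (and whether to measure spread around zero or around the mean) is a judgement call the thread doesn't settle; the sketch just picks one.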
Gustaf
« Reply #1 on: November 07, 2008, 02:56:11 PM »

Are those results normalized at all by which states the polls were done in?

For example, I would suspect a pollster which happened to skip over Nevada, Iowa, and Alaska to have a much better average than one which polled all three. 

No, because there isn't really any objective way to normalize like that. The assumptions about which states were hard to poll are based on the polls themselves. I see your point, but it's hard to account for in an effective way.

Overall, the polling was pretty good. They did, however, blow up in the states you mention and also in Arkansas. The main battlegrounds were polled well, though.

Anyway, here is my preliminary list, from best to worst:

Strategic Vision
Quinnipiac
Zogby
CNN/Time
PPP
SUSA
ARG
R2000
Rasmussen
Mason-Dixon

The most surprising result is obviously Ras and MD in last.

In terms of bias:

Republican bias:

SV 2.48%
MD 1.71%
SUSA 0.95%
Zogby 0.73%
Ras 0.68%

Democratic bias:

ARG 1.78%
R2000 1.52%
CNN/Time 0.97%
Quinnipiac 0.81%
PPP 0.53%



Gustaf
« Reply #2 on: November 07, 2008, 02:59:54 PM »

MD blew KY (understated McCain) and NV (overstated McCain) really badly. They also did pretty badly in OH, PA and AZ (pro-Obama in AZ, pro-McCain in the other two).

Rasmussen is really pulled down by blowing Nevada, Arkansas and Alaska. Other than that they were more or less all right in most places.
Gustaf
« Reply #3 on: November 07, 2008, 03:04:48 PM »

There's an objective way to normalize for that, I'd think.  Average error relative to average state error.  Creates a good deal of noise, though, and rewards screw-ups in small states.

That would work if every pollster polled the same states, or maybe if the overall number of polls were big enough. But when I say some pollsters are bad, that is based on those pollsters doing badly in some states. I could of course say that some states are hard to poll because some pollsters did badly in those states, but that is a little too circular for my liking. And I think the regression analysis that could be made would have too little data. Anyway, I feel my work with this stuff is done. If you want to do this analysis, though, be my guest. :)
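The "error relative to average state error" normalization being floated here could look roughly like this; it also makes the circularity visible, since the state "difficulty" comes from the very polls being graded. All records below are hypothetical, not real 2008 numbers:

```python
from collections import defaultdict

# (pollster, state, absolute error in points) - hypothetical data.
polls = [
    ("A", "NV", 6.0), ("B", "NV", 8.0),
    ("A", "OH", 1.0), ("B", "OH", 2.0),
]

# Average error per state, taken as a rough difficulty measure.
by_state = defaultdict(list)
for _, state, err in polls:
    by_state[state].append(err)
state_avg = {s: sum(v) / len(v) for s, v in by_state.items()}

# Score each pollster by its error relative to the state average,
# so a miss in a hard state (NV) counts for less than one in an
# easy state (OH). Scores below 1.0 beat the field average.
rel = defaultdict(list)
for pollster, state, err in polls:
    rel[pollster].append(err / state_avg[state])
score = {p: sum(v) / len(v) for p, v in rel.items()}
```

As noted in the thread, this rewards skipping hard states less, but with few polls per state the ratios get noisy.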
Gustaf
« Reply #4 on: November 07, 2008, 03:06:25 PM »

MD blew KY (understated McCain) and NV (overstated McCain) really badly. They also did pretty badly in OH, PA and AZ (pro-Obama in AZ, pro-McCain in the other two).

Rasmussen is really pulled down by blowing Nevada, Arkansas and Alaska. Other than that they were more or less all right in most places.

Yeah.  Looking at it in a qualitative way, I think Rasmussen was the big winner.  SV continues to be a very strong pollster with a fairly regular GOP bias.  SurveyUSA is OK but too volatile.  ARG sucks even when they're stealing other people's results.

Same old story as '04, it seems, except Mason-Dixon sucks now. :(

I think Zogby is a pretty big story though. He blew Indiana but otherwise did a pretty good job.
Gustaf
« Reply #5 on: November 07, 2008, 05:59:00 PM »

We can do a head to head comparison between any pair of pollsters by focusing just on the states where both competed.  For example, Pollster 1 would beat Pollster 2 by 0.35 if he averaged 0.35% closer to the actual margin in states where both 1 and 2 released a poll in the last week.  This somewhat mitigates the effect of different states being harder to poll. 

What would the results table look like for such a league? 

Why don't you tell me? ;)
Gustaf
« Reply #6 on: November 07, 2008, 06:04:59 PM »

We can do a head to head comparison between any pair of pollsters by focusing just on the states where both competed.  For example, Pollster 1 would beat Pollster 2 by 0.35 if he averaged 0.35% closer to the actual margin in states where both 1 and 2 released a poll in the last week.  This somewhat mitigates the effect of different states being harder to poll. 

What would the results table look like for such a league? 

Ok, I may do a few of those, even though it's a lot more work than my original approach.

Rasmussen v SUSA
2.44%            2.29%
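The head-to-head league being proposed (compare two pollsters only on the states both actually polled) can be sketched like this; the state errors below are invented for illustration and are not the data behind the 2.44% vs 2.29% figures above:

```python
def head_to_head(errs_a, errs_b):
    """Compare two pollsters only on the states both polled.

    errs_a, errs_b: dicts mapping state -> signed error in points.
    Returns each pollster's mean absolute error over the shared
    states, so hard states either count for both or for neither.
    """
    common = errs_a.keys() & errs_b.keys()
    mae = lambda errs: sum(abs(errs[s]) for s in common) / len(common)
    return mae(errs_a), mae(errs_b)

# Hypothetical signed errors by state. AK drops out of the
# comparison because only one of the two polled it.
ras = {"OH": 2.0, "PA": -1.0, "NV": 7.0, "AK": 10.0}
susa = {"OH": 1.0, "PA": 3.0, "NV": 4.0}
ras_mae, susa_mae = head_to_head(ras, susa)
```

This is why the pairwise league is more work than the original approach: every pair of pollsters needs its own restricted comparison.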
Gustaf
« Reply #7 on: November 11, 2008, 01:30:51 PM »

RCP was the real winner in my book, or at least the theory of averaging. IN was about the only state that was a bit off. There is no magical single pollster, but when you average them out, you get a pretty good picture of what's going on.

538 did sort of the same thing, but I don't like Silver's arbitrary, Candidate A has a % chance stuff.

I agree fully.
Gustaf
« Reply #8 on: November 11, 2008, 03:01:15 PM »

Silver's % to Win wasn't arbitrary.  It was based on a statistical operation.  It probably wasn't reflective of the actual chance to win, but that wasn't really the intention.

And, yeah, RCP's punting of R2K was lame.

Well, it was arbitrary in the sense that it weighted different factors according to his somewhat arbitrary judgement calls. That is, if I'm thinking of the right site, whose model I read about.

Gustaf
« Reply #9 on: November 11, 2008, 04:57:46 PM »

Silver's % to Win wasn't arbitrary.  It was based on a statistical operation.  It probably wasn't reflective of the actual chance to win, but that wasn't really the intention.

And, yeah, RCP's punting of R2K was lame.

Well, it was arbitrary in the sense that it weighted different factors according to his somewhat arbitrary judgement calls. That is, if I'm thinking of the right site, whose model I read about.

Can you give me an example?

He definitely did a few weights I disagree with (his undecided break-down and pollster quality weights were based on the primary -- yuck), but I still think his methodology is better than averaging the last three polls.

And RCP's arbitrary exclusion of Research 2000 polls, no matter how many complaints they got, was not just disagreeable but indefensible. (Not the biggest deal in the world, but still, wtf?)

Oh, I'm not saying the Atlas model is the best either. Far from it. But my basic issue with this stuff is that we want to know how people will vote, and polls ask EXACTLY that. Mixing polls with other stuff supposedly capturing voter intention is suspect in my opinion. Now, I haven't lived long, though I believe it's a little bit longer than you. ;) But in that lifetime my definite experience is that JJ's election rules are actually pretty good: factors not captured by polls seldom exist. I can sympathize with Silver's intentions, but polls should be enough in theory, and in practice it seems to me that they usually are. So I prefer a somewhat objective weighting-together of polls. Of course, bias tends to enter into that as well.

Bottom line is that I like to use a rigid model which has not been subjected to any sort of bias or valuing from the constructor of a model (such as "pollster A is 1.345 times better than pollster B").

I don't have specific examples to offer you. But any kind of weighting would be somewhat arbitrary. If you take factors A, B and C and say "A is 50%, B is 35% and C is 15%", that is a bit arbitrary. Reading through how he constructed and weighted his model, my mental note was "interesting, but seems a bit arbitrary". This was like six months ago or something, so I don't recall exactly what I was thinking about. I could look it up, but maybe you get my point anyway? Please note that I'm not specifically disagreeing with something and saying "A should be 42.5% instead", merely that I have no idea what weight A should have, and I'm suspicious of assigning a weight to it.

I'm not sure whether this entire rant makes sense or anything... if it doesn't, I will have to read the damn model again. :(
Gustaf
« Reply #10 on: November 11, 2008, 06:36:16 PM »

I think you're misunderstanding what the regression model was for. The regression model ended up worth less than one crappy poll in most states. It wasn't for picking up trends the polls didn't pick up; it was for picking up the slack when a state wasn't being polled. For instance, the assumption was that the Dakotas -- being highly similar -- would move together. So if SD didn't have a poll for a month, but ND had a few, SD would be adjusted to move along with ND. In some cases, like Indiana, Silver's model was skeptical because the demographics suggested that it shouldn't be close. But eventually the IN regression came into step, and even when it was out of step, it was a difference of a point and a half at most. Why? Because Indiana did have polls, and the model ceded to them.

Silver also provided a raw, trend-adjusted polling model. That included only subsequent national poll movement (downweighted), pollster quality (see previous caveat), and sample size. True, how much sample size is weighed is arbitrary. But we do that kind of thing in our minds anyway, and it always ends up more arbitrary that way.

The RCP average is really the same thing as the Atlas, a "dumb" average.

Ah, it's all coming back to me now. Point accepted on non-poll factors. But I think I was a bit sceptical of the weighting of the polls. I guess it was a general impression of a lot of weights and stuff that didn't seem solid enough. I like to keep as close to the raw data as possible.
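To make the "dumb average vs. weighted model" distinction concrete, here is a sketch under stated assumptions: the exponential recency decay and square-root-of-sample-size weights are my illustrative stand-ins, not Silver's published scheme, and the polls are invented:

```python
import math

# (margin in points, age in days, sample size) - hypothetical polls.
polls = [(5.0, 1, 900), (3.0, 7, 400), (8.0, 14, 1600)]

def dumb_average(polls):
    """The RCP/Atlas-style unweighted mean of the margins."""
    return sum(m for m, _, _ in polls) / len(polls)

def weighted_average(polls, decay_days=7.0):
    """Recency- and sample-size-weighted mean.

    Older polls decay exponentially; larger samples count more via
    sqrt(n). Both choices are illustrative, not 538's actual formula.
    """
    weights = [math.exp(-age / decay_days) * math.sqrt(n)
               for _, age, n in polls]
    return (sum(w * m for w, (m, _, _) in zip(weights, polls))
            / sum(weights))
```

The two functions expose the trade-off in the thread: the dumb average makes no judgement calls at all, while the weighted version is arguably more relevant but bakes in the constructor's choices (here, `decay_days` and the sqrt weighting).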
Gustaf
« Reply #11 on: November 12, 2008, 05:50:47 AM »

I half-agree with you.  I could have lived without the pollster quality weights.  But I like the sample size weights, and the time weights.  Those are things that are hard to mentally quantify on-the-fly.  On those alone, I think it was superior to the RCP average.  Purity is not necessarily optimal.  Besides, both present the pure underlying data.  They just calculate it in different ways ("dumb" average vs. maybe-overcomplicated quantified analysis.)

It's a trade-off between reliability and relevance (yeah, I'm taking an accounting course right now... bleh). Weighting should make it more relevant/accurate, but it does force you to make various judgement calls and thus helps introduce bias. I guess the bottom line is: to what extent did 538 outperform RCP?