Author Topic: ARG...... no longer argh ?  (Read 13480 times)
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« on: November 07, 2008, 02:57:47 PM »

There's an objective way to normalize for that, I'd think.  Average error relative to average state error.  Creates a good deal of noise, though, and rewards screw-ups in small states.
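A minimal sketch of that normalization idea, with made-up pollster errors (not the actual 2008 numbers): score each pollster's error relative to the average error of all pollsters in the same state, so nobody gets penalized just for covering hard-to-poll states.

```python
# Score each pollster relative to the field in the states it polled.
# Data are hypothetical: (pollster, state, absolute error in points).
from collections import defaultdict

polls = [
    ("Rasmussen", "NV", 6.0), ("Rasmussen", "OH", 1.5),
    ("ARG",       "NV", 8.0), ("ARG",       "OH", 4.0),
    ("SurveyUSA", "OH", 2.0),
]

# Average absolute error per state, across all pollsters.
state_errs = defaultdict(list)
for _, state, err in polls:
    state_errs[state].append(err)
state_avg = {s: sum(v) / len(v) for s, v in state_errs.items()}

# Each pollster's average ratio of (own error) / (state average error).
# Below 1.0 means better than the field in the states it polled.
# Caveat from the post: in a lightly-polled state one pollster's
# screw-up dominates the state average, pulling its own ratio back
# toward 1.0 -- the metric is noisy and forgiving there.
scores = defaultdict(list)
for pollster, state, err in polls:
    scores[pollster].append(err / state_avg[state])
normalized = {p: sum(v) / len(v) for p, v in scores.items()}
```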

« Reply #1 on: November 07, 2008, 03:01:40 PM »

MD blew KY (underpolled McCain) and NV (overpolled McCain) really bad. They also did pretty bad in OH, PA and AZ (pro-Obama in AZ, pro-McCain in the other 2).

Rasmussen is really pulled down by blowing Nevada, Arkansas and Alaska. Other than that they were more or less all right in most places.

Yeah.  Looking at it in a qualitative way, I think Rasmussen was the big winner.  SV continues to be a very strong pollster with a fairly regular GOP bias.  SurveyUSA is OK but too volatile.  ARG sucks even when they're stealing other people's results.

Same old story as '04, it seems, except Mason-Dixon sucks now.  Sad

« Reply #2 on: November 07, 2008, 03:05:37 PM »

I think you're right on every count and I'm way too lazy to try other methods.  Tongue  No complaints from here.

« Reply #3 on: November 11, 2008, 02:51:41 PM »

Silver's % to Win wasn't arbitrary.  It was based on a statistical operation.  It probably wasn't reflective of the actual chance to win, but that wasn't really the intention.

And, yeah, RCP's punting of R2K was lame.

« Reply #4 on: November 11, 2008, 03:04:34 PM »

Silver's % to Win wasn't arbitrary.  It was based on a statistical operation.  It probably wasn't reflective of the actual chance to win, but that wasn't really the intention.

And, yeah, RCP's punting of R2K was lame.

Well, it was arbitrary in the sense that it weighted different factors according to his somewhat arbitrary judgment calls, if I'm thinking of the right site and the right model.

Can you give me an example?

He definitely did a few weights I disagree with (his undecided breakdown and pollster quality weights were based on the primary -- yuck), but I still think his methodology is better than averaging the last three polls.

And RCP's arbitrary exclusion of Research 2000 polls, no matter how many complaints they got, was not just disagreeable, but indefensible.  (not the biggest deal in the world, but still, wtf?)

« Reply #5 on: November 11, 2008, 05:44:03 PM »

I think you're misunderstanding what the regression model was for.  The regression model ended up worth less than one crappy poll in most states.  It wasn't for picking up trends the polls didn't pick up. It was for picking up the slack when a state wasn't being polled.  For instance, the assumption was the Dakotas -- being highly similar -- would move together.  So if SD didn't have a poll for a month, but ND had a few, SD would be adjusted to move along with ND.  In some cases, like Indiana, Silver's model was skeptical because the demographics suggested that it shouldn't be close.  But eventually the IN regression came into step, and even when it was out of step, it was the difference of a point and a half at most.  Why?  Because Indiana did have polls, and the model ceded to them.
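The "similar states move together" slack-filling described above can be sketched like this, with entirely made-up margins (Obama minus McCain, in points), not Silver's actual numbers:

```python
# Toy version of the paired-state adjustment: when a state has no
# recent poll, shift its stale margin by the swing observed in a
# demographically similar state over the same period.
def infer_stale_state(stale_old, similar_old, similar_new):
    """Shift a stale margin by the swing seen in a similar state."""
    swing = similar_new - similar_old
    return stale_old + swing

# Hypothetical: SD last polled at -8 a month ago; ND moved from -6
# then to -2 now, a 4-point swing, so SD gets nudged to -4.
sd_estimate = infer_stale_state(stale_old=-8, similar_old=-6, similar_new=-2)
```

Once the stale state starts getting real polls again, a model like this cedes to them, as the post says happened with Indiana.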

Silver also provided a raw, trend-adjusted polling model.  That included only subsequent national poll movement (downweighted), pollster quality (see previous caveat), and sample size.  True, how much sample size is weighted is arbitrary.  But we do that kind of thing in our minds anyway, and it always ends up more arbitrary that way.

The RCP average is really the same thing as the Atlas average: a "dumb" average.
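The contrast can be sketched with hypothetical polls of the form (days_old, sample_size, margin); the half-life and the sqrt-of-sample-size weight are illustrative choices, not Silver's actual parameters:

```python
# "Dumb" last-N average (RCP/Atlas style) versus an average weighted
# by recency and sample size, roughly in the spirit discussed above.
import math

polls = [(1, 1200, 2.0), (5, 600, 5.0), (12, 400, 6.0)]

# Dumb average: every poll counts equally.
dumb = sum(m for _, _, m in polls) / len(polls)

# Weighted: exponential decay with age (7-day half-life, an arbitrary
# choice) times sqrt(sample size), the usual proxy for precision.
def weight(days_old, n, half_life=7.0):
    return 0.5 ** (days_old / half_life) * math.sqrt(n)

wsum = sum(weight(d, n) * m for d, n, m in polls)
weighted = wsum / sum(weight(d, n) for d, n, _ in polls)
```

Here the weighted figure leans toward the big, fresh poll, while the dumb average treats a stale 400-person poll the same as yesterday's 1,200-person one.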

« Reply #6 on: November 11, 2008, 07:46:02 PM »

I half-agree with you.  I could have lived without the pollster quality weights.  But I like the sample size weights, and the time weights.  Those are things that are hard to mentally quantify on-the-fly.  On those alone, I think it was superior to the RCP average.  Purity is not necessarily optimal.  Besides, both present the pure underlying data.  They just calculate it in different ways ("dumb" average vs. maybe-overcomplicated quantified analysis.)