ARG...... no longer argh ?
Author Topic: ARG...... no longer argh ?  (Read 13369 times)

big bad fab (filliatre, Atlas Icon, 13,344 posts)
« on: November 06, 2008, 08:48:23 AM »

I'm a bit afraid ARG wasn't so bad after all, at least in its last polls in the big swing states:
VA, OH, NC, MO, IN, FL, CO.

Either its absolute numbers were good or the gap between Obama and McCain was right....

CNN Opinion Research had the best polls in some states (OH, FL, CO, NV) and some bad ones (VA, NC, IN).

Rasmussen hasn't been very good this time. Neither have Mason-Dixon and Quinnipiac.
Survey USA wasn't very convincing either.

Isn't this the BIG SURPRISE of this election: ARG rather accurate in the end!!!
Wake me up, I must be in the middle of a weird nightmare...

538.com will probably give us an analysis of this, but maybe some home experts here have more precise figures than this vague topic?

Gustaf (Moderator, Atlas Star, 29,770 posts)
« Reply #1 on: November 07, 2008, 01:22:36 PM »

I'm working on this right now. I'm only including polls taken during the last week. A bit arbitrary but I have to make the cut-off somewhere.

Anyway, so far:

Zogby: Standard error: 2.66%, McCain bias of 0.73%

ARG: Standard error: 4.17%, Obama bias of 1.78%

SUSA: Standard error: 3.73%, McCain bias of 0.95%

Strategic Vision: Standard error: 1.82%, McCain bias of 2.48%

I note that Strategic Vision does have a bit of a GOP bias but was still pretty accurate. Not based on a lot of polls though. To be continued...
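
For reference, the two summary statistics above can be sketched like this. The poll and actual margins below are invented for illustration, and "standard error" is read here as the root-mean-square of the signed errors, which is one plausible interpretation of the figures quoted:

```python
import math

# Hypothetical final-week polls: (poll margin, actual margin), Obama minus
# McCain, in percentage points. These numbers are made up for illustration.
polls = [(5.0, 6.3), (11.0, 10.4), (-1.0, 0.3), (4.0, 4.6)]

errors = [poll - actual for poll, actual in polls]

# Mean signed error: positive would be an Obama bias, negative a McCain bias.
bias = sum(errors) / len(errors)

# Root-mean-square error, one plausible reading of "standard error" above.
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

print(f"bias: {bias:+.2f}, RMSE: {rmse:.2f}")
```

With these invented numbers the pollster shows a small pro-McCain bias; the sign convention and the cut-off for which polls to include are exactly the kinds of judgment calls discussed in this thread.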

kevinatcausa (Rookie, 196 posts)
« Reply #2 on: November 07, 2008, 01:24:29 PM »

Are those results normalized at all by which states the polls were done in?

For example, I would suspect a pollster which happened to skip over Nevada, Iowa, and Alaska to have a much better average than one which polled all three. 

Gustaf (Moderator, Atlas Star, 29,770 posts)
« Reply #3 on: November 07, 2008, 02:56:11 PM »

Quote from: kevinatcausa on November 07, 2008, 01:24:29 PM
Are those results normalized at all by which states the polls were done in?

For example, I would suspect a pollster which happened to skip over Nevada, Iowa, and Alaska to have a much better average than one which polled all three.

No, because there isn't really any objective way to normalize like that. The assumption of which states were hard to poll is based on the polls themselves. I see your point but it's hard to account for in an effective way.

Overall, the polling was pretty good. They did, however, blow up in the states you mention and also in Arkansas. The main battlegrounds were polled well, though.

Anyway, here is my preliminary list, from best to worst:

Strategic Vision
Quinnipiac
Zogby
CNN/Time
PPP
SUSA
ARG
R2000
Rasmussen
Mason-Dixon

The most surprising result is obviously Ras and MD in last.

In terms of bias:

Republican bias:

SV 2.48%
MD 1.71%
SUSA 0.95%
Zogby 0.73%
Ras 0.68%

Democratic bias:

ARG 1.78%
R2000 1.52%
CNN/Time 0.97%
Quinnipiac 0.81%
PPP 0.53%




Alcon (Atlas Superstar, 30,867 posts)
« Reply #4 on: November 07, 2008, 02:57:47 PM »

There's an objective way to normalize for that, I'd think.  Average error relative to average state error.  Creates a good deal of noise, though, and rewards screw-ups in small states.

Gustaf (Moderator, Atlas Star, 29,770 posts)
« Reply #5 on: November 07, 2008, 02:59:54 PM »

MD blew KY (underpolled McCain) and NV (overpolled McCain) really badly. They also did pretty badly in OH, PA and AZ (pro-Obama in AZ, pro-McCain in the other two).

Rasmussen is really pulled down by blowing Nevada, Arkansas and Alaska. Other than that they were more or less all right in most places.

Alcon (Atlas Superstar, 30,867 posts)
« Reply #6 on: November 07, 2008, 03:01:40 PM »

Quote from: Gustaf on November 07, 2008, 02:59:54 PM
MD blew KY (underpolled McCain) and NV (overpolled McCain) really badly. They also did pretty badly in OH, PA and AZ (pro-Obama in AZ, pro-McCain in the other two).

Rasmussen is really pulled down by blowing Nevada, Arkansas and Alaska. Other than that they were more or less all right in most places.

Yeah.  Looking at it in a qualitative way, I think Rasmussen was the big winner.  SV continues to be a very strong pollster with a fairly regular GOP bias.  SurveyUSA is OK but too volatile.  ARG sucks even when they're stealing other people's results.

Same old story as '04, it seems, except Mason-Dixon sucks now.  Sad

Gustaf (Moderator, Atlas Star, 29,770 posts)
« Reply #7 on: November 07, 2008, 03:04:48 PM »

Quote from: Alcon on November 07, 2008, 02:57:47 PM
There's an objective way to normalize for that, I'd think.  Average error relative to average state error.  Creates a good deal of noise, though, and rewards screw-ups in small states.

That would work if every pollster polled the same states, or maybe if the overall number of polls was big enough. But when I say some pollsters are bad, that is based on those pollsters doing badly in some states. I could of course say that some states are hard to poll because some pollsters did badly in those states, but that is a little too circular for my liking. And I think the regression analysis that could be made would have too little data. Anyway, I feel my work with this stuff is done. If you want to do this analysis though, be my guest. Smiley

Alcon (Atlas Superstar, 30,867 posts)
« Reply #8 on: November 07, 2008, 03:05:37 PM »

I think you're right on every count and I'm way too lazy to try other methods.  Tongue  No complaints from here.

Gustaf (Moderator, Atlas Star, 29,770 posts)
« Reply #9 on: November 07, 2008, 03:06:25 PM »

Quote from: Alcon on November 07, 2008, 03:01:40 PM
Yeah.  Looking at it in a qualitative way, I think Rasmussen was the big winner.  SV continues to be a very strong pollster with a fairly regular GOP bias.  SurveyUSA is OK but too volatile.  ARG sucks even when they're stealing other people's results.

Same old story as '04, it seems, except Mason-Dixon sucks now.  Sad

I think Zogby is a pretty big story though. He blew Indiana but otherwise did a pretty good job.

Lief 🗽 (Atlas Legend, 44,876 posts)
« Reply #10 on: November 07, 2008, 03:44:06 PM »

I've been looking at the averages, and in swing state polls taken within the last 2 weeks (haven't done non-swing states yet, but with the large margins there, I think there's probably greater room for error), ARG, Zogby and CNN/Time were actually some of the best. Rasmussen and Mason-Dixon had on average the highest amount of error, and they both also had the largest error in McCain's favor. Meanwhile, PPP, which was supposed to be a Democratic hack firm, didn't seem to err in one direction or another.

Final Polls in CO, NV, NM, MT, MN, WI, MI, IA, MO, IN, OH, PA, VA, NC, GA, FL, NH, AZ, CA, NY, NJ.

Average Absolute Error
Rasmussen: 3.42
Survey USA: 2.60      
PPP: 2.54      
Quinnipiac: 1.08      
Research2000: 3.03   
Mason-Dixon: 3.14      
ARG: 2.04      
CNN/Time: 2.18      
Zogby: 1.775

Average Non-Absolute Error (McCain +, Obama -)
Rasmussen: 2.56      
Survey USA: 0.9      
PPP: -0.16      
Quinnipiac: -0.43      
Research2000: -0.69      
Mason-Dixon: 2.68      
ARG: -0.44      
CNN/Time: -0.74      
Zogby: 1.35

kevinatcausa (Rookie, 196 posts)
« Reply #11 on: November 07, 2008, 04:43:07 PM »
« Edited: November 07, 2008, 04:48:15 PM by kevinatcausa »

We can do a head to head comparison between any pair of pollsters by focusing just on the states where both competed.  For example, Pollster 1 would beat Pollster 2 by 0.35 if he averaged 0.35% closer to the actual margin in states where both 1 and 2 released a poll in the last week.  This somewhat mitigates the effect of different states being harder to poll. 

What would the results table look like for such a league? 
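
The head-to-head scheme proposed here is straightforward to sketch. All state margins below are invented, and "Pollster1"/"Pollster2" are placeholder names, not real firms:

```python
# Actual margins (Obama minus McCain) in points; invented numbers.
actual = {"FL": 2.8, "OH": 4.6, "VA": 6.3, "NC": 0.3}

# Final-week poll margins from two hypothetical firms.
polls = {
    "Pollster1": {"FL": 2.0, "OH": 4.0, "VA": 5.0},
    "Pollster2": {"FL": 4.0, "OH": 7.0, "NC": 1.0},
}

def head_to_head(a, b):
    """Mean absolute error of pollster a minus that of pollster b,
    computed only over the states both firms polled.
    Negative means a was closer on average; None means no shared states."""
    common = polls[a].keys() & polls[b].keys()
    if not common:
        return None  # the R2000 vs Quinnipiac situation below
    mae = lambda p: sum(abs(polls[p][s] - actual[s]) for s in common) / len(common)
    return mae(a) - mae(b)

print(head_to_head("Pollster1", "Pollster2"))  # negative: Pollster1 wins this pairing
```

Restricting the comparison to shared states is what mitigates the "hard states" problem, at the cost that some pairings rest on only a couple of states.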

Gustaf (Moderator, Atlas Star, 29,770 posts)
« Reply #12 on: November 07, 2008, 05:59:00 PM »

Quote from: kevinatcausa on November 07, 2008, 04:43:07 PM
We can do a head to head comparison between any pair of pollsters by focusing just on the states where both competed.  For example, Pollster 1 would beat Pollster 2 by 0.35 if he averaged 0.35% closer to the actual margin in states where both 1 and 2 released a poll in the last week.  This somewhat mitigates the effect of different states being harder to poll.

What would the results table look like for such a league?

Why don't you tell me? Wink

Gustaf (Moderator, Atlas Star, 29,770 posts)
« Reply #13 on: November 07, 2008, 06:04:59 PM »

Quote from: kevinatcausa on November 07, 2008, 04:43:07 PM
We can do a head to head comparison between any pair of pollsters by focusing just on the states where both competed.  For example, Pollster 1 would beat Pollster 2 by 0.35 if he averaged 0.35% closer to the actual margin in states where both 1 and 2 released a poll in the last week.  This somewhat mitigates the effect of different states being harder to poll.

What would the results table look like for such a league?

Ok, I may do a few of those, even though it's a lot more work than my original approach.

Rasmussen v SUSA
2.44%            2.29%

kevinatcausa (Rookie, 196 posts)
« Reply #14 on: November 07, 2008, 07:10:14 PM »
« Edited: November 07, 2008, 08:24:47 PM by kevinatcausa »

I'm in the process of doing one right now, though possibly based on a different data set (I included the last poll from every firm which had one with a median date after 10/27).  I'll edit this post once the results are finished.

______________________________________

hmm...the problem I'm running into here is that some pairs only have one or two states in common, leading to very noisy results (quite a few cycles where A beats B beats C beats A).  The straight averages are below (positive means the firm in the row was on average that much closer to the actual margin, negative means the firm in the column was).

Code:
          ARG       Mason/D   PPP       Quinn     Rasmuss   R2000     S.USA     SV       Zogby     Joe
ARG                 1.05      -0.42     -2.5*     0.07      -1.2      -0.87     -1.33*   -0.37     -0.85
Mason/D   -1.05               -0.53     -3*      -0.78      -0.75*    -1.71     -0.33*   -2.83     -1.77
PPP        0.42     0.53                -1*       1.17       0.2       0.9       0.25*   -0.07     -0.14   
Quinn      2.5*     3*         1*                 2.67*      N/A       0.67*     2.67*    0         0.99*
Rasmuss   -0.07     0.78      -1.17     -2.67*              -0.83     -0.12     -0.67    -0.67     -0.99
R2000      1.2      0.75*     -0.2       N/A      0.83                 0.2      -1.5*    -0.2*     -0.2       
S. USA     0.87     1.71      -0.9      -0.67*    0.12      -0.2                 0.8      0.8      -0.88
SV         1.33*    0.33*     -0.25*    -2.67*    0.67       1.5*     -0.8               -2.67*    -0.73*
Zogby      0.37     2.83       0.07      0*       0.67       0.2*     -0.8       2.67*              0.19
Joe        0.85     1.77       0.14     -0.99*    0.99       0.2       0.88      0.73    -0.19

A couple notes:
-As mentioned above, I took all polls with a median day after 10/27 (including a couple 10/27-10/28 polls).  If more than one poll came from the same firm, I took the last.  This does leave out the 10/23-10/28 CNN/Time polls.

-The entries marked with a * in the table above were based on fewer than 5 states and are likely to be rather noisy.  R2000 and Quinnipiac never polled the same state.

-Joe Average is the poll you get by averaging all of the included polls.  Since it's effectively a poll with a larger sample size, it will normally perform better than the average individual component.

Quinnipiac only polled two states, but they were within half a point in Florida and Pennsylvania.  They were one of only two pollsters to perform better on average than Joe Average, the other being...Zogby.

kevinatcausa (Rookie, 196 posts)
« Reply #15 on: November 07, 2008, 08:47:11 PM »
« Edited: November 07, 2008, 08:51:26 PM by kevinatcausa »

One more set of numbers:

For each pollster I took the average of

[(Average Absolute Pollster Error in state) - (Given Pollster's Absolute Error in state)]

over all states in which the pollster polled.  Again, positive numbers are good.  From top to bottom:

Quinnipiac (1.49, 3 states)
Zogby (0.66, 7 states)
PPP (0.49, 15 states)
Research 2000 (0.43, 11 states)
Strategic Vision (0.06, 7 states)
Survey USA (0.05, 14 states)
Rasmussen (-0.29, 19 states)
ARG (-0.43, 14 states)
Mason Dixon (-0.97, 11 states)

This still isn't perfect, and it gives states like Alaska where the pollsters vary widely a disproportionate impact over states where the pollsters are close to each other. 

(EDIT: This is particularly critical in the case of Zogby, who gets a huge bump just from getting Nevada about 5 points closer than anyone else did...)
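
The relative score described in this post can be sketched like this. The states, margins, and one-letter firm labels are all invented; note how the state where the firms diverge wildly ("NV" here) dominates the scores, which is exactly the caveat raised above:

```python
# Invented actual margins and poll margins (Obama minus McCain, in points).
actual = {"NV": 12.5, "FL": 2.8, "OH": 4.6}

polls = {
    "A": {"NV": 8.0, "FL": 2.0},
    "B": {"NV": 5.0, "OH": 4.0},
    "C": {"FL": 5.0, "OH": 6.0},
}

# Absolute error per pollster per state.
abs_err = {p: {s: abs(m - actual[s]) for s, m in margins.items()}
           for p, margins in polls.items()}

# Average absolute error in each state, across every firm that polled it.
state_errs = {}
for errs in abs_err.values():
    for s, e in errs.items():
        state_errs.setdefault(s, []).append(e)
state_avg = {s: sum(v) / len(v) for s, v in state_errs.items()}

# Score = mean over the firm's states of (state average error - own error).
# Positive is good: the firm beat the field in the states it chose to poll.
score = {p: sum(state_avg[s] - e for s, e in errs.items()) / len(errs)
         for p, errs in abs_err.items()}

for p, v in sorted(score.items(), key=lambda kv: -kv[1]):
    print(f"{p}: {v:+.2f}")
```

Here firm A comes out well ahead almost entirely because it was 3 points closer than B in the high-variance NV, mirroring the Zogby/Nevada effect noted in the edit.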

Mr. Morden (Atlas Legend, 44,073 posts)
« Reply #16 on: November 08, 2008, 12:45:58 AM »

ARG was really really really terrible before Super Tuesday.  But then after Super Tuesday, they either magically became decent pollsters, or they just started copying other pollsters' numbers.  Also, ARG has always produced much crazier numbers in polls taken several weeks/months before the election, rather than just before the election, and all of these pollster ratings only assess the final polls from each pollster, rather than the earlier ones.

This is a problem that was pointed out on pollster.com recently.  We would ideally like to be able to accurately track the election, not just produce a good poll the day or two before.  Yet we have no way of testing the accuracy of polls taken weeks in advance of the election, because there's no objective number to compare them against.  A lot of the polls show very divergent results early on, but then converge towards the final numbers on election day.


JohnCA246 (mokbubble, Jr. Member, 639 posts)
« Reply #17 on: November 11, 2008, 10:10:44 AM »

RCP was the real winner in my book, or at least the theory of averaging.  IN was about the only state that was a bit off.  There is no magical single pollster, but when you average them out, you get a pretty good picture of what's going on.

538 did sort of the same thing, but I don't like Silver's arbitrary "Candidate A has a % chance" stuff.

Gustaf (Moderator, Atlas Star, 29,770 posts)
« Reply #18 on: November 11, 2008, 01:30:51 PM »

Quote from: JohnCA246 on November 11, 2008, 10:10:44 AM
RCP was the real winner in my book, or at least the theory of averaging.  IN was about the only state that was a bit off.  There is no magical single pollster, but when you average them out, you get a pretty good picture of what's going on.

538 did sort of the same thing, but I don't like Silver's arbitrary "Candidate A has a % chance" stuff.

I agree fully.

Bacon King (Atlas Politician, Atlas Icon, 18,822 posts)
« Reply #19 on: November 11, 2008, 01:44:28 PM »

Quote from: JohnCA246 on November 11, 2008, 10:10:44 AM
RCP was the real winner in my book, or at least the theory of averaging.  IN was about the only state that was a bit off.  There is no magical single pollster, but when you average them out, you get a pretty good picture of what's going on.

538 did sort of the same thing, but I don't like Silver's arbitrary "Candidate A has a % chance" stuff.

Agree with you that both painted a pretty good picture of the state of the race, but I didn't like RCP's arbitrary removal of certain pollsters and stuff.

Alcon (Atlas Superstar, 30,867 posts)
« Reply #20 on: November 11, 2008, 02:51:41 PM »

Silver's % to Win wasn't arbitrary.  It was based on a statistical operation.  It probably wasn't reflective of the actual chance to win, but that wasn't really the intention.

And, yeah, RCP's punting of R2K was lame.

Gustaf (Moderator, Atlas Star, 29,770 posts)
« Reply #21 on: November 11, 2008, 03:01:15 PM »

Quote from: Alcon on November 11, 2008, 02:51:41 PM
Silver's % to Win wasn't arbitrary.  It was based on a statistical operation.  It probably wasn't reflective of the actual chance to win, but that wasn't really the intention.

And, yeah, RCP's punting of R2K was lame.

Well, it was arbitrary in the sense that it weighed different factors according to his somewhat arbitrary judgement calls. If I'm thinking of the right site and the right model, that is.


Alcon (Atlas Superstar, 30,867 posts)
« Reply #22 on: November 11, 2008, 03:04:34 PM »

Quote from: Gustaf on November 11, 2008, 03:01:15 PM
Well, it was arbitrary in the sense that it weighed different factors according to his somewhat arbitrary judgement calls. If I'm thinking of the right site and the right model, that is.

Can you give me an example?

He definitely did a few weights I disagree with (his undecided break-down and pollster quality weights were based on the primary -- yuck), but I still think his methodology is better than averaging the last three polls.

And RCP's arbitrary exclusion of Research2000 polls, no matter how many complaints they got, was not just disagreeable, but indefensible.  (not the biggest deal in the world, but still, wtf?)

Lief 🗽 (Atlas Legend, 44,876 posts)
« Reply #23 on: November 11, 2008, 04:54:04 PM »

Pollster > RCP.

Gustaf (Moderator, Atlas Star, 29,770 posts)
« Reply #24 on: November 11, 2008, 04:57:46 PM »

Quote from: Alcon on November 11, 2008, 03:04:34 PM
Can you give me an example?

He definitely did a few weights I disagree with (his undecided break-down and pollster quality weights were based on the primary -- yuck), but I still think his methodology is better than averaging the last three polls.

And RCP's arbitrary exclusion of Research2000 polls, no matter how many complaints they got, was not just disagreeable, but indefensible.  (not the biggest deal in the world, but still, wtf?)

Oh, I'm not saying the Atlas model is the best either. Far from it. But my basic issue with this stuff is that we want to know how people will vote. Polls ask EXACTLY that. Mixing polls with other stuff supposedly capturing voter intention is suspect in my opinion. Now, I haven't lived long, though I believe it's a little bit longer than you. Wink But in that lifetime my definite experience is that JJ's election rules are actually pretty good: factors not captured by polls seldom exist. I can sympathize with Silver's intentions, but polls should be enough in theory, and in practice it seems to me that they usually are. So I prefer a somewhat objective weighting together of polls. Of course, bias tends to enter into that as well.

Bottom line is that I like to use a rigid model which has not been subjected to any sort of bias or value judgement from the model's constructor (such as "pollster A is 1.345 times better than pollster B").

I don't have specific examples to offer you, but any kind of weighting would be somewhat arbitrary. If you take factors A, B and C and say "A is 50%, B is 35% and C is 15%", that is a bit arbitrary. Reading through how he constructed and weighted his model, my mental note was "interesting, but seems a bit arbitrary". This was like 6 months ago or something, so I don't recall exactly what I was thinking about. I could look it up, but maybe you get my point anyway? Please note that I'm not specifically disagreeing with something and saying "A should be 42.5% instead", merely that I have no idea what weight A should have and I'm suspicious of assigning a weight to it.

I'm not sure whether this entire rant makes sense or anything...if it doesn't I will have to read the damn model again. Sad