538 Model Megathread
Author Topic: 538 Model Megathread  (Read 83298 times)
‼realJohnEwards‼
MatteKudasai
Jr. Member
***
Posts: 1,867
United States


Political Matrix
E: -6.19, S: -4.87

« Reply #700 on: October 10, 2016, 04:10:19 PM »

Clinton's currently at her best ever in most, if not all, swing states
Iowa???
Logged
Mallow
Jr. Member
***
Posts: 737
United States


« Reply #701 on: October 10, 2016, 04:18:16 PM »
« Edited: October 10, 2016, 04:20:07 PM by Mallow »

He never thought Rhode Island was a swing state. That one poll showing Clinton up by only 3 (with no other polls showing otherwise) did cause Trump to have over a 10 percent chance of winning Rhode Island for a while, which while seemingly ridiculous given past results there, is probably about right given a lack of any other evidence.

That same pollster, Emerson, now has Clinton up by 20 in their latest poll, thereby "fixing" the Rhode Island odds.

It was 25% on the now-cast. lmao. That's completely embarrassing.

Well I'm glad you know better than everyone else that Trump wouldn't have had a 25% chance to win Rhode Island at that exact moment (which is the variable the Now-Cast "predicts").

Seriously, though... without subjectively manipulating the data, you're going to get a wonky result every once in a while in a state without much polling. What would you have them do, make sh**t up?
Logged
IceSpear
Atlas Superstar
*****
Posts: 31,840
United States


Political Matrix
E: -6.19, S: -6.43

« Reply #702 on: October 10, 2016, 04:20:55 PM »

He never thought Rhode Island was a swing state. That one poll showing Clinton up by only 3 (with no other polls showing otherwise) did cause Trump to have over a 10 percent chance of winning Rhode Island for a while, which while seemingly ridiculous given past results there, is probably about right given a lack of any other evidence.

That same pollster, Emerson, now has Clinton up by 20 in their latest poll, thereby "fixing" the Rhode Island odds.

It was 25% on the now-cast. lmao. That's completely embarrassing.

Well I'm glad you know better than everyone else that Trump wouldn't have had a 25% chance to win Rhode Island at that exact moment (which is the variable the Now-Cast "predicts").

Any person with a brain cell knew/knows that, but thanks for being glad for me anyway. Wink

Hillary Clinton had a 70% chance of winning Alabama on August 21st, 2016 at 8:03 AM. Prove me wrong.
Logged
Mallow
Jr. Member
***
Posts: 737
United States


« Reply #703 on: October 10, 2016, 04:30:01 PM »

He never thought Rhode Island was a swing state. That one poll showing Clinton up by only 3 (with no other polls showing otherwise) did cause Trump to have over a 10 percent chance of winning Rhode Island for a while, which while seemingly ridiculous given past results there, is probably about right given a lack of any other evidence.

That same pollster, Emerson, now has Clinton up by 20 in their latest poll, thereby "fixing" the Rhode Island odds.

It was 25% on the now-cast. lmao. That's completely embarrassing.

Well I'm glad you know better than everyone else that Trump wouldn't have had a 25% chance to win Rhode Island at that exact moment (which is the variable the Now-Cast "predicts").

Any person with a brain cell knew/knows that, but thanks for being glad for me anyway. Wink

Hillary Clinton had a 70% chance of winning Alabama on August 21st, 2016 at 8:03 AM. Prove me wrong.

Yes, I understand that we're talking in circles. My point is that their model is based on data points only. If you want a model that has subjective "checks," don't use 538.
Logged
Nym90
nym90
Atlas Icon
*****
Posts: 16,260
United States


Political Matrix
E: -5.55, S: -2.96

« Reply #704 on: October 10, 2016, 04:52:28 PM »

There is a way to check whether he's right or not: see if events that he says have an x% chance of happening actually do end up happening x% of the time.

His predictions have ended up being quite accurate in that regard.
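
To make that check concrete: bucket the forecasts by stated probability and compare each bucket's average prediction with how often the event actually happened. Here's a minimal sketch in Python, with made-up (forecast, outcome) pairs purely for illustration - these are not real 538 numbers:

Code:
forecasts = [
    (0.92, 1), (0.70, 1), (0.25, 0), (0.55, 1), (0.10, 0),
    (0.80, 1), (0.30, 0), (0.60, 1), (0.95, 1), (0.15, 0),
]  # (stated probability, outcome: 1 = happened, 0 = didn't)

bins = {}
for p, outcome in forecasts:
    bucket = round(p, 1)                      # group into ~10%-wide buckets
    bins.setdefault(bucket, []).append((p, outcome))

for bucket in sorted(bins):
    pairs = bins[bucket]
    avg_pred = sum(p for p, _ in pairs) / len(pairs)
    hit_rate = sum(o for _, o in pairs) / len(pairs)
    print(f"~{bucket:.0%} bucket: predicted {avg_pred:.0%}, happened {hit_rate:.0%} (n={len(pairs)})")

The catch, of course, is that you need a lot of forecasts for the buckets to mean anything.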
Logged
Mr. Morden
Atlas Legend
*****
Posts: 44,073
United States


« Reply #705 on: October 10, 2016, 04:55:02 PM »
« Edited: October 10, 2016, 04:58:26 PM by Mr. Morden »

The now-cast relies solely on polls, and not at all on voting history or demographics*, correct?  In that case, of course it's going to give some wonky results for states where there are very few polls, and the polls that do exist there are of poor quality.  I don't see why that's a big problem though.  Who really cares how good the model is at predicting Rhode Island?  The swing states are a lot more important.

In any case, there seems to be some confusion that keeps coming up here about probabilities.  "Is 'the probability of X' correct or not?" is a question that doesn't even make any sense in a vacuum.  The probability depends on the set of information that you're considering.  The model is saying, "XX% of the time that the polls say this, we would see Candidate Y win."  That's pretty much it, since there's very little non-polling information that goes into it.  But because we have access to additional information, like voting history, candidate quality, recent scandals, etc., that the model doesn't have, the potential exists for us to "beat the models".

It's like in a football game, you could have a point where the commentator says "28% of the time when a team is down by this much with this much time left, with this field position, they win."  That can be a useful stat to have.  The viewer instinctively knows that the team in question is the underdog, given the score, the clock, and the field position, but translating that into a probability on the fly isn't something that a normal person can do.  But that doesn't mean that you should treat that 28% as gospel.  If you think team quality, weather or other factors are also important, then you can mentally shift that 28% up or down in your head to get a more "realistic" assessment of the probability.  But that doesn't mean that the initial probability given based on just a few observables isn't a useful thing to know.

Same thing with 538's model.  If you think there's additional information that shifts the probabilities higher or lower, that isn't included in the model, then good for you.  You're free to mentally shift the probabilities up or down.  But that doesn't make the model estimates useless.

*Except to the extent that demographics are involved in computing the correlations between different states' votes, but that only takes you so far.
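
To put a number on the "XX% of the time that the polls say this" framing, here's a rough normal-approximation sketch in Python. The error figures are invented for illustration and aren't 538's actual parameters, but they show how a modest lead with a wide error band - i.e., a sparsely polled state - produces a Rhode Island-style 25%:

Code:
from statistics import NormalDist

def win_probability(poll_lead, error_sd):
    """Chance the true margin favors the leader, if margin ~ Normal(lead, sd)."""
    return 1.0 - NormalDist(mu=poll_lead, sigma=error_sd).cdf(0.0)

# Well-polled state: Clinton +3 with a modest effective error.
print(win_probability(3.0, 3.0))   # ~0.84, i.e. Trump ~16%

# Sparsely polled state where one poll dominates: same +3 lead, wider error band.
print(win_probability(3.0, 4.5))   # ~0.75, i.e. Trump ~25%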
Logged
GeorgiaModerate
Moderators
Atlas Superstar
*****
Posts: 32,703


« Reply #706 on: October 10, 2016, 04:58:10 PM »

There is a way to check whether he's right or not: see if events that he says have an x% chance of happening actually do end up happening x% of the time.

His predictions have ended up being quite accurate in that regard.

This is not measurable.  It's impossible to verify whether (for example) a 92% chance of Obama winning in 2012 was accurate.  (I think the final number was close to that.)  We have a sample of ONE.  To get close to checking whether 92% was accurate, you'd need to rerun the 2012 election dozens of times, which is obviously impossible.
Logged
Mr. Morden
Atlas Legend
*****
Posts: 44,073
United States


« Reply #707 on: October 10, 2016, 05:05:09 PM »

There is a way to check whether he's right or not: see if events that he says have an x% chance of happening actually do end up happening x% of the time.

His predictions have ended up being quite accurate in that regard.

This is not measurable.  It's impossible to verify whether (for example) a 92% chance of Obama winning in 2012 was accurate.  (I think the final number was close to that.)  We have a sample of ONE.  To get close to checking whether 92% was accurate, you'd need to rerun the 2012 election dozens of times, which is obviously impossible.

"Checking" whether the probability of a single event was "correct" or not is nonsensical.  The point is that if you run the same model on many different elections, then you should be able to check if the model is any good by seeing if 92% favorites do indeed win 92% of the time.  The problem is, there are only so many presidential elections to look at since the advent of polling, so you're checking with small number statistics.  You can overcome that problem by looking at each individual state separately, but the states are correlated with each other, so it's a bit messy.

I assume you can also do it for Senate races, which are presumably going to be less correlated.
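
On the "messy because correlated" point: a shared national swing means 50 state races aren't 50 independent tests of the model. A toy simulation in Python (parameters invented for illustration) - the run-to-run spread in how many ~80% favorites actually win is much wider when the states share a common shock:

Code:
import random
from statistics import pstdev

def spread_of_wins(shared_sd, state_sd, lead=4.2, n_states=30, n_sims=2000):
    """Std. dev. of the number of favorites who win across simulated cycles."""
    wins_per_sim = []
    for _ in range(n_sims):
        national = random.gauss(0, shared_sd)          # common shock = correlation
        wins = sum(1 for _ in range(n_states)
                   if lead + national + random.gauss(0, state_sd) > 0)
        wins_per_sim.append(wins)
    return pstdev(wins_per_sim)

random.seed(0)
print("independent states:", round(spread_of_wins(0.0, 5.0), 1))   # narrower spread
print("correlated states :", round(spread_of_wins(3.5, 3.5), 1))   # much wider spread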
Logged
Erc
Junior Chimp
*****
Posts: 5,823
Slovenia


« Reply #708 on: October 10, 2016, 05:10:10 PM »

He never thought Rhode Island was a swing state. That one poll showing Clinton up by only 3 (with no other polls showing otherwise) did cause Trump to have over a 10 percent chance of winning Rhode Island for a while, which while seemingly ridiculous given past results there, is probably about right given a lack of any other evidence.

That same pollster, Emerson, now has Clinton up by 20 in their latest poll, thereby "fixing" the Rhode Island odds.

It was 25% on the now-cast. lmao. That's completely embarrassing.

Well I'm glad you know better than everyone else that Trump wouldn't have had a 25% chance to win Rhode Island at that exact moment (which is the variable the Now-Cast "predicts").

Any person with a brain cell knew/knows that, but thanks for being glad for me anyway. Wink

Hillary Clinton had a 70% chance of winning Alabama on August 21st, 2016 at 8:03 AM. Prove me wrong.

Yes, I understand that we're talking in circles. My point is that their model is based on data points only. If you want a model that has subjective "checks," don't use 538.

There still is some element of subjectivity in the baseline (which carries real weight in their model when there are few or no polls) and in the polls-plus model; of course, they set that baseline earlier in the year based on demographics and previous election results.

My model-based issue with this (and it's a small issue) is that this baseline should have a larger weight (i.e., there should be a narrower prior), so that a crappy poll or two can't swing the probability so much (as in Rhode Island).

The downside is that you'll be slower to pick up on real shifts (e.g. Utah, though that's been polled enough by now, or potentially Alaska), but I think that's a fair tradeoff.
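
For what it's worth, the narrower-prior idea can be sketched as a standard precision-weighted (normal-normal) blend of the baseline and the polling average. The numbers below are illustrative only and this isn't a claim about 538's actual formula:

Code:
def blended_margin(prior_mean, prior_sd, poll_mean, poll_sd):
    """Precision-weighted blend of a baseline margin and a polling-average margin."""
    w_prior = 1.0 / prior_sd ** 2
    w_poll = 1.0 / poll_sd ** 2
    return (w_prior * prior_mean + w_poll * poll_mean) / (w_prior + w_poll)

# Rhode Island-style case: baseline says Clinton +25, one shaky poll says +3.
print(blended_margin(25.0, 15.0, 3.0, 6.0))   # wide prior: ~+6, the one poll dominates
print(blended_margin(25.0, 6.0, 3.0, 6.0))    # narrow prior: +14, the poll is restrained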
Logged
GeorgiaModerate
Moderators
Atlas Superstar
*****
Posts: 32,703


« Reply #709 on: October 10, 2016, 05:17:43 PM »

There is a way to check whether he's right or not: see if events that he says have an x% chance of happening actually do end up happening x% of the time.

His predictions have ended up being quite accurate in that regard.

This is not measurable.  It's impossible to verify whether (for example) a 92% chance of Obama winning in 2012 was accurate.  (I think the final number was close to that.)  We have a sample of ONE.  To get close to checking whether 92% was accurate, you'd need to rerun the 2012 election dozens of times, which is obviously impossible.

"Checking" whether the probability of a single event was "correct" or not is nonsensical.  The point is that if you run the same model on many different elections, then you should be able to check if the model is any good by seeing if 92% favorites do indeed win 92% of the time.  The problem is, there are only so many presidential elections to look at since the advent of polling, so you're checking with small number statistics.  You can overcome that problem by looking at each individual state separately, but the states are correlated with each other, so it's a bit messy.

I assume you can also do it for Senate races, which are presumably going to be less correlated.


All right, that's a fair point as far as it goes; but as you noted, the number of Presidential elections since such models existed is far too small a sample to be useful.
Logged
Mr. Morden
Atlas Legend
*****
Posts: 44,073
United States


« Reply #710 on: October 10, 2016, 05:27:16 PM »

All right, that's a fair point as far as it goes; but as you noted, the number of Presidential elections since such models existed is far too small a sample to be useful.

The election doesn't have to have happened "since such models existed".  You can go back to elections from long before your model existed, and apply the model and see how it performs, as long as you have the polling data from those old elections.  But you're still stuck using elections since *polling* existed.  And more than that, elections for which large numbers of state polls existed.  And I guess that takes you back a few decades, but not more than that.  So yes, a limited sample.

To be clear, going back to past elections to see how the polls predict the final outcome is how the models were constructed in the first place.  However, since you're again stuck with a limited sample, you have the potential for over-fitting.
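
One standard guard against over-fitting a sample that small is to hold each election out in turn: fit the polls-to-results mapping on the other cycles and score it on the one you left out. A minimal sketch in Python - the poll leads and margins here are placeholder values, not a real dataset:

Code:
cycles = {
    2000: (2.0, 0.5), 2004: (1.5, 2.4), 2008: (7.6, 7.3), 2012: (0.7, 3.9),
}  # year: (final poll lead, actual margin) -- placeholders for illustration

def fit_line(points):
    """Ordinary least squares y = a + b*x on a list of (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    b = sum((x - mx) * (y - my) for x, y in points) / sum((x - mx) ** 2 for x, _ in points)
    return my - b * mx, b

for held_out, (lead, margin) in cycles.items():
    train = [xy for year, xy in cycles.items() if year != held_out]
    a, b = fit_line(train)
    print(held_out, "predicted", round(a + b * lead, 1), "actual", margin)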
Logged
NOVA Green
Oregon Progressive
Atlas Icon
*****
Posts: 11,450
United States


« Reply #711 on: October 10, 2016, 05:28:43 PM »

^ You do realize stuff like that is based on the past relationship between poll results (including apparent outlier polls) and actual results?  He's not just making it up.  You can have a serious conversation about how to handle uncertainty, and how to weight new data vs. fundamental data, but it's kind of lame to criticize a counterintuitive model outcome without providing any real criticism of the underlying assumptions or methodology.

This reminds me of the people who argued "Nate wasn't wrong about Michigan...Sanders winning was just the 0.01% event actually occurring!" In other words, Nate and his model can never be wrong under any circumstances. How convenient.

As for the methodology, you don't need a Ph.D. in statistics or political science to realize that giving Donald Trump a 25% chance of winning the state of Rhode Island is absurd. Common sense certainly prevailed over all those complex models that showed a Romney landslide.

Spear---- Dude, we didn't see eye-to-eye much at all during the Dem primaries, and although Nate's model does overemphasize results from states that rarely get polled, there was a reality at the time that poll was taken: the race was close to tied nationally, Rhode Island is full of Italian-Americans (one of Trump's best NE/CA groups), and there were plenty of Bernie holdouts in a state where he won a huge upset during the primaries.

Obviously, the Bernie indies have since come home, and the Anglo-ethnic Europeans and WASPs are done with their brief flirtation with the Trump train.

Clinton has consolidated and expanded her base, and that real-but-surreal, ephemeral moment from a month ago, when Trump was keeping it much closer than expected because many Democrats couldn't yet support Clinton in RI, indies were still assessing the scene, and Republicans were sticking together in a state where only 25% of the population is registered Republican, has passed.
Logged
nclib
Atlas Icon
*****
Posts: 10,304
United States


« Reply #712 on: October 10, 2016, 05:34:38 PM »

I haven't read through this thread, but it seems like the 538 model hasn't really been updated since pussygate, as it still has Utah at a 99.2% chance for Trump.
Logged
GeorgiaModerate
Moderators
Atlas Superstar
*****
Posts: 32,703


« Reply #713 on: October 10, 2016, 05:44:23 PM »

All right, that's a fair point as far as it goes; but as you noted, the number of Presidential elections since such models existed is far too small a sample to be useful.

The election doesn't have to have happened "since such models existed".  You can go back to elections from long before your model existed, and apply the model and see how it performs, as long as you have the polling data from those old elections.  But you're still stuck using elections since *polling* existed.  And more than that, elections for which large numbers of state polls existed.  And I guess that takes you back a few decades, but not more than that.  So yes, a limited sample.

To be clear, going back to past elections to see how the polls predict the final outcome is how the models were constructed in the first place.  However, since you're again stuck with a limited sample, you have the potential for over-fitting.


*concedes gracefully*
Logged
Attorney General, LGC Speaker, and Former PPT Dwarven Dragon
Dwarven Dragon
Atlas Politician
Atlas Superstar
*****
Posts: 31,718
United States


Political Matrix
E: -1.42, S: -0.52

« Reply #714 on: October 10, 2016, 05:54:00 PM »
« Edited: October 10, 2016, 06:06:44 PM by Dwarven Dragon »

The now-cast relies solely on polls, and not at all on voting history or demographics*, correct?  In that case, of course it's going to give some wonky results for states where there are very few polls, and the polls that do exist there are of poor quality.  I don't see why that's a big problem though.  Who really cares how good the model is at predicting Rhode Island?  The swing states are a lot more important.

In any case, there seems to be some confusion that keeps coming up here about probabilities.  "Is 'the probability of X' correct or not?" is a question that doesn't even make any sense in a vacuum.  The probability depends on the set of information that you're considering.  The model is saying, "XX% of the time that the polls say this, we would see Candidate Y win."  That's pretty much it, since there's very little non-polling information that goes into it.  But because we have access to additional information, like voting history, candidate quality, recent scandals, etc., that the model doesn't have, the potential exists for us to "beat the models".

It's like in a football game, you could have a point where the commentator says "28% of the time when a team is down by this much with this much time left, with this field position, they win."  That can be a useful stat to have.  The viewer instinctively knows that the team in question is the underdog, given the score, the clock, and the field position, but translating that into a probability on the fly isn't something that a normal person can do.  But that doesn't mean that you should treat that 28% as gospel.  If you think team quality, weather or other factors are also important, then you can mentally shift that 28% up or down in your head to get a more "realistic" assessment of the probability.  But that doesn't mean that the initial probability given based on just a few observables isn't a useful thing to know.

Same thing with 538's model.  If you think there's additional information that shifts the probabilities higher or lower, that isn't included in the model, then good for you.  You're free to mentally shift the probabilities up or down.  But that doesn't make the model estimates useless.

*Except to the extent that demographics are involved in computing the correlations between different states' votes, but that only takes you so far.


The now-cast and the polls-only models basically rely solely on polls. They use state polls, but also national polls and some demographics to "guess" what state polls would say in places that don't get polled very often. The difference between the two is that polls-only gives a few extra percentage points to Trump (or to Clinton, if Trump is the favorite) because of an uncertainty adjustment - basically allowing for the possibility that the race could completely change between now and the election. The now-cast only accounts for uncertainty about how accurate the polls themselves are, since it's told that the election is today, all the polls have been released, and therefore nothing can change.
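
A toy way to see what that uncertainty adjustment does (Python; the error widths are invented, not 538's): same polling lead, but allowing for drift before Election Day widens the error band, which pulls the win probability back toward 50%.

Code:
from statistics import NormalDist

def win_probability(poll_lead, error_sd):
    return 1.0 - NormalDist(mu=poll_lead, sigma=error_sd).cdf(0.0)

lead = 5.0
print("now-cast-style  :", round(win_probability(lead, 3.0), 2))   # ~0.95
print("polls-only-style:", round(win_probability(lead, 6.0), 2))   # ~0.80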
Logged
Mallow
Jr. Member
***
Posts: 737
United States


« Reply #715 on: October 10, 2016, 06:01:21 PM »

Some good discussion in this here thread. Smiley
Logged
Gass3268
Moderators
Atlas Star
*****
Posts: 27,531
United States


« Reply #716 on: October 10, 2016, 06:02:27 PM »

Clinton up to 45% national for the first time since August 9th in the polls-only.
Logged
emailking
Atlas Icon
*****
Posts: 14,371
« Reply #717 on: October 10, 2016, 06:11:14 PM »

This reminds me of the people who argued "Nate wasn't wrong about Michigan...Sanders winning was just the 0.01% event actually occurring!" In other words, Nate and his model can never be wrong under any circumstances. How convenient.

He can't be wrong about an individual prediction, that's true. He can't be right either, so it's not exactly "convenient."
Logged
Boston Bread
New Canadaland
YaBB God
*****
Posts: 3,636
Canada


Political Matrix
E: -5.00, S: -5.00

« Reply #718 on: October 10, 2016, 07:46:16 PM »

Made another swing map based on 538's polls only model. Red = Clinton does better than Obama in 2012. Blue = worse

Logged
Sorenroy
Jr. Member
***
Posts: 1,701
United States


Political Matrix
E: -5.55, S: -5.91

« Reply #719 on: October 10, 2016, 08:28:34 PM »

Made another swing map based on 538's polls only model. Red = Clinton does better than Obama in 2012. Blue = worse



Margin or percentage?
Logged
Figueira
84285
Atlas Icon
*****
Posts: 12,175


« Reply #720 on: October 10, 2016, 09:08:22 PM »

I haven't read through this thread, but it seems like the 538 model hasn't really been updated since pussygate, as it still has Utah at a 99.2% chance for Trump.

That's not 538's fault. It's just that there haven't been any Utah-specific polls from after the video was released.
Logged
Figueira
84285
Atlas Icon
*****
Posts: 12,175


« Reply #721 on: October 10, 2016, 09:45:17 PM »

For me the biggest surprise on that map is West Virginia.
Logged
Xing
xingkerui
Atlas Superstar
*****
Posts: 30,307
United States


Political Matrix
E: -6.52, S: -3.91

« Reply #722 on: October 10, 2016, 09:53:10 PM »

For me the biggest surprise on that map is West Virginia.

That's probably because polls tend to underestimate eventual margins of victory, which is why most Safe Trump states trend D on that map and most of the trend-R states are Clinton states.
Logged
Mr. Morden
Atlas Legend
*****
Posts: 44,073
United States


« Reply #723 on: October 10, 2016, 10:08:43 PM »

For me the biggest surprise on that map is West Virginia.

That's probably because polls tend to underestimate eventual margins of victory, which is why most Safe Trump states trend D on that map and most of the trend-R states are Clinton states.

The third party vote being bigger this time could also decrease the margin of victory slightly in blowout states.  E.g., suppose you have a state where Trump would otherwise beat Clinton 60%-40%.  If 10% of both candidates' supporters shifted to Johnson+Stein, then it becomes 54%-36%, moving it from a 20 point margin to an 18 point margin.
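
Quick check of that arithmetic in Python:

Code:
trump, clinton = 60.0, 40.0
trump_after, clinton_after = trump * 0.9, clinton * 0.9   # 10% of each side goes third party
print(trump - clinton, trump_after - clinton_after)        # 20.0 -> 18.0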
Logged
Boston Bread
New Canadaland
YaBB God
*****
Posts: 3,636
Canada


Political Matrix
E: -5.00, S: -5.00

« Reply #724 on: October 10, 2016, 10:14:23 PM »

RE: Sorenroy, I only did margin. If I did vote% then Clinton would underperform Obama in most states.
Logged