  Atlas Forum
  Election Archive
  2016 U.S. Presidential General Election Polls (Moderators: AndrewTX, Likely Voter)
  NH-UNH: Clinton +11, Clinton +10 (4-way)
Author Topic: NH-UNH: Clinton +11, Clinton +10 (4-way)  (Read 6195 times)
Alcon
Moderators | Posts: 30,912 | United States
« Reply #75 on: November 07, 2016, 03:19:32 am »

Quote:
You're wary of dismissing a model that shows a counterintuitive result, but you uncritically accept outliers.

I'm honestly trying to figure out whether you do know what you're talking about and I'm misunderstanding, or you're just spouting jargony nonsense.  I'm not saying that to be mean...I'm having trouble telling, and you may be making a great point.

Even if I were to "uncritically accept outliers," I'm not sure why that's mutually exclusive with, or in tension with, accepting that good statistical modeling can sometimes generate counterintuitive results.  If anything, the premise behind including outliers, assuming that's what you mean by "uncritically accepting" them, is that it's better to throw valid (or apparently valid) data points together in the pot and hopefully let methodological quirks, sampling error, etc., cancel each other out.  I also have no earthly idea what point you're making with the Upshot link.
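To make that "cancel each other out" premise concrete, here's a toy simulation (illustration only; the 900-respondent sample size and the repetition counts are arbitrary choices of mine, not anyone's actual model): averaging several polls of the same fixed race shrinks the random scatter by roughly the square root of the number of polls averaged.

```python
import random
import statistics

def poll(true_share, n):
    """One simulated poll of n respondents; returns the observed share."""
    return sum(random.random() < true_share for _ in range(n)) / n

random.seed(0)
TRUE = 0.52
single = [poll(TRUE, 900) for _ in range(200)]            # 200 lone polls
pooled = [statistics.mean(poll(TRUE, 900) for _ in range(10))
          for _ in range(200)]                            # 200 ten-poll averages

spread = lambda xs: max(xs) - min(xs)
# Pooling ten polls shrinks the spread by roughly sqrt(10)
print(spread(single), spread(pooled))
```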

Quote:
Polling results are extraordinarily noisy under the best of circumstances.  If you want empiricism, simulate an election in which underlying voter intentions remain at 52-48, and commission one poll per day for 100 days, each with a 3-point margin of error and no nonrandom error. It looks like an EKG in tachycardia, except less regular. Feel free to adjust those trendlines after every new survey, but you're just chasing noise.
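For what it's worth, that thought experiment takes a few lines to run (a sketch under the stated assumptions; a ±3% margin of error at 95% confidence corresponds to roughly 1,067 respondents):

```python
import random

random.seed(42)
TRUE_SHARE = 0.52      # intentions never move: 52-48 all the way
N = 1_067              # sample size giving roughly a +/-3% MoE at 95%

margins = []
for day in range(100):
    obs = sum(random.random() < TRUE_SHARE for _ in range(N)) / N
    margins.append(100 * (2 * obs - 1))    # two-way lead in points (true: +4)

# Zero real movement, yet the daily "lead" whipsaws over a wide range:
print(f"min {min(margins):+.1f}  max {max(margins):+.1f}")
```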

I'm not an idiot.  I know how statistical distributions work Tongue

Are you somehow under the impression that Silver is just extrapolating trendlines, and not accounting for the obvious fact that small movements are oftentimes simply fluctuations based on statistical noise?  If so, what do you base this belief on?  And, before you ask me on what basis I assume that Silver isn't tempering the pitch of his trendlines based on the (often-likely) possibility they're simply statistical noise, here's why:

1. Because that would require assuming Silver selectively doesn't account for margin of error and the likelihood of statistical noise in this component of his model, while he frequently writes about this elsewhere, and accounts for much more complex "unknown unknowns" like the historical probability of systematic polling error.

2. Because this would make no sense in light of his claim that there's empirical basis to his trendline adjustments model, unless you're arguing that his data set of past trendlines vs. final outcomes coincidentally happened to match the modeling he's now doing that you deem overly aggressive.

Since neither of those seems particularly plausible to me, I don't think I can agree with the assumptions you seem to be making about Silver's model.

Quote:
And of course, no poll is free of nonrandom error, which means that favorable-condition simulations like the one above are less noisy than real-world conditions. To start, much polling "movement" is actually differential nonresponse:

http://www.stat.columbia.edu/~gelman/research/published/swingers.pdf

http://www.columbia.edu/~rse14/Erikson_Panagopoulos_Wlezien.pdf

When you control for nonresponse, you find that polling margins are much more stable than an entertainment website would have you believe:

https://today.yougov.com/news/2016/11/01/beware-phantom-swings-why-dramatic-swings-in-the-p/

http://fivethirtyeight.com/features/most-voters-havent-changed-their-minds-all-year/

Differential nonresponse is an interesting topic, and I have some thoughts on Silver's approach to it.  I don't have time to write them out now.  (Trust me -- this isn't a dodge.  Look at my post history.  I'm a dork and would do it!)

Quote:
TLDR: Unsophisticated people are impressed by the bells and whistles in a model. But all bells and whistles really do is make noise.

I'm a staunch opponent of overfitting and complex models meant to hide weak logic.  I just don't think that all complication is inedible number garnish.
Phony Moderate
Obamaisdabest
Posts: 12,334 | United States
« Reply #76 on: November 07, 2016, 05:20:38 am »
« Edited: November 07, 2016, 05:41:09 am by Phony Moderate »

Even if this poll turns out to be completely on the nose (and maybe it will), this thread is yet another classic Atlas overreaction. It's just another reminder that overenthusiasm combined with not completely taking the facts on board isn't just a Trumpster/BernieBro thing; it's an American thing.

And to repeat, it could be right - just to avoid being called a spoilsport.
rafta_rafta
Posts: 926
« Reply #77 on: November 07, 2016, 05:30:35 am »

It's an outlier, but it shows that HRC is pulling away from the orange fascist.
Fmr President & Senator Polnut
polnut
Posts: 19,527 | Australia
« Reply #78 on: November 07, 2016, 05:47:49 am »

Quote:
It's an outlier, but it shows that HRC is pulling away from the orange fascist.

I think it is an outlier, but I don't think it's a huge one. I never bought the tie idea.
Desroko
Sr. Member | Posts: 346
« Reply #79 on: November 07, 2016, 06:27:53 am »


538 is less likely to be predictive than its peers because it's the outlier, regardless of the methodological problems we'll get into. You don't throw the outlier away, but you have to recognize it for what it is instead of accepting it at face value - which is a problem that 538 itself struggles with, as we'll see below.

The model is a black box when it comes to exact methodology, so no one knows exactly what Silver is doing. But it is very naive, which can be demonstrated fairly easily from its polling database and its probability updates.

1. The trendline adjustments are heavily affected by outliers, even those that clearly did not presage an actual trend. This is visible even among highly rated outfits with low house effects, which in theory will be affected mostly or entirely by the trendline adjustments. Export the polling database to Excel, sort by date, and compute the rolling average: there are sharp changes in the adjustment average that are precipitated by outlier polls that did not actually presage a trend. The USC/LA Times poll did it nearly every time. Prudent trendline adjustments shouldn't fall for these, but the 538 model is too naive and accepts them at face value - likely because it's mean-based and/or has a short memory - and ends up amplifying polling noise that smooths out over the long term and in the median.
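To illustrate the mean-vs-median point (a toy series of my own, not 538's data or code): a short-memory rolling mean chases a single outlier, while a rolling median over the same window ignores it.

```python
import statistics

# Toy series: a stable D+4 race polled repeatedly, with one R-leaning outlier.
polls = [4, 5, 3, 4, 4, -5, 4, 5, 3, 4]   # margins in points; the -5 is the outlier

def rolling(xs, k, agg):
    """Apply agg over every length-k window of xs."""
    return [agg(xs[i - k:i]) for i in range(k, len(xs) + 1)]

mean_trend = rolling(polls, 3, statistics.mean)
median_trend = rolling(polls, 3, statistics.median)

# The short-memory mean lurches three full points toward the outlier;
# the rolling median never budges from +4.
print(mean_trend)
print(median_trend)
```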

2. Single surveys in thinly-polled states produce large swings in probability. See Nov. 4 at 5:18 pm, Nov. 3 at 11:17 am, Nov. 2 at 7:04 pm, Nov. 1 at 1:41 pm, and many more - though my personal favorite is Oct. 23 at 4:41 pm. An Oklahoma result of R+30/33 - almost exactly the 2012 margin - was enough to move the estimate a point and a half by itself. Lol, bullsh**t. Likely because the state correlation is assumed to be too high, and because the model lacks information in non-battlegrounds and thus places too much importance on individual surveys - which is exactly the sort of thing 538 is supposed to prevent. (We won't touch the fact that Trump +30/33 in OK is actually a fairly neutral or pro-Clinton result.) Better modeling would have this information priced in, and would not derive national trends from individual state surveys.
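A sketch of the mechanism being described (toy arithmetic with made-up prior weights and a made-up correlation rho, not 538's actual math): when a thinly-polled state's prior carries little weight, one poll moves the state estimate a lot, and a high assumed inter-state correlation exports most of that move nationwide.

```python
def update_state(prior_mean, prior_weight, poll_margin, poll_weight=1.0):
    """Precision-weighted blend of a state prior with one new poll."""
    total = prior_weight + poll_weight
    return (prior_mean * prior_weight + poll_margin * poll_weight) / total

def exported_shift(old_est, new_est, rho):
    """How much of a state's move a correlated model exports to other states."""
    return rho * (new_est - old_est)

# Thinly-polled state: weak prior, so one poll yanks the estimate,
# and a high assumed rho propagates most of that yank everywhere else.
old = update_state(-31.0, prior_weight=0.5, poll_margin=-31.0)   # baseline R+31
new = update_state(-31.0, prior_weight=0.5, poll_margin=-28.0)   # one R+28 poll
print(exported_shift(old, new, rho=0.9))        # 1.8 points, from a single poll

# Heavily-polled state: a strong prior damps the identical poll.
new_dense = update_state(-31.0, prior_weight=10.0, poll_margin=-28.0)
print(exported_shift(old, new_dense, rho=0.9))  # a fraction of a point
```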

3. Some believe that the model is double-counting national and state polling. So essentially, if we have a national trend of Trump +1 over a week, and a state trend of Trump +1, the model counts it as +2 instead of +1. 538 says it only uses national polling to inform the adjustments and produce the projected vote margin, though outlier polls have swung the probability harder than seems reasonable if that were true. I'm not entirely sure what's going on there, but it would explain a lot.

4. And of course, as pointed out earlier, trends are mostly artifacts of nonrandom polling error and methodological changes (including but not limited to herding), not changes in voter intention among the population. You can ask a single panel over the course of an election, or use control questions, or simply look at changes in demographic response rates in raw polling data, and that becomes abundantly clear. 538 doesn't have an "approach" to this beyond literally modeling this noise.
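A quick differential-nonresponse simulation (the response rates below are illustrative values I made up): hold intentions fixed at 52-48 and vary only each side's willingness to answer the phone, and the observed margin "swings" anyway.

```python
import random

random.seed(7)
# Intentions are fixed all cycle: 52% D, 48% R. Only willingness to respond
# moves with the news.
POPULATION = [1] * 520 + [0] * 480          # 1 = D supporter

def take_poll(d_rate, r_rate, n=1_000):
    """Dial until n people answer; response rate depends on which side."""
    answers = []
    while len(answers) < n:
        v = random.choice(POPULATION)
        if random.random() < (d_rate if v else r_rate):
            answers.append(v)
    return 100 * (2 * sum(answers) / n - 1)  # two-way margin in points

good_news_week = take_poll(d_rate=0.6, r_rate=0.5)   # D's eager to respond
bad_news_week = take_poll(d_rate=0.4, r_rate=0.5)    # D's screening calls

# Same electorate both weeks, yet a large phantom "swing" appears,
# purely from who picks up the phone.
print(f"{good_news_week:+.1f} vs {bad_news_week:+.1f}")
```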

As for why his methods all seem designed to amplify noise and increase variance - clicks. 538 is owned by ESPN, which is under severe financial pressure and which has already shut down Grantland, the closest thing to 538 under its umbrella. Silver is trying to keep his vertical economically viable, and I don't blame him for that much.

Sorry for the late reply. My clients are APAC-based, and I work when they do.
Erich Maria Remarque
LittleBigPlanet
Posts: 3,654 | Sweden
« Reply #80 on: November 07, 2016, 07:20:50 am »
« Edited: November 07, 2016, 07:24:03 am by Little Big BREXIT »


Quote from: Desroko on November 07, 2016, 06:27:53 am
538 is less likely to be predictive than its peers because it's the outlier, regardless of the methodological problems we'll get into. You don't throw the outlier away, but you have to recognize it for what it is instead of accepting it at face value... [snip]


Lol, stop it. At least read their model description or something. It does not amplify noise, no. It does not double-count state and national trends, no.

About this poll: TN Volunteer cherry-picked it, lol. Hillary won't win it in a landslide, not even close, unless there's a polling error across all states and nationally, lol.
GeorgiaModerate
Posts: 10,565
« Reply #81 on: November 07, 2016, 07:40:20 am »


Quote:
TLDR: Unsophisticated people are impressed by the bells and whistles in a model. But all bells and whistles really do is make noise.


I love this statement.  It applies to most engineering projects as well.
Secret Cavern Survivor
Antonio V
Posts: 49,999 | United States
« Reply #82 on: November 07, 2016, 07:49:25 pm »

ANGRY WOMEN WITH A VENGEANCE
Seriously?
Posts: 3,032 | United States
« Reply #83 on: November 09, 2016, 11:27:26 pm »

Pile of state poll junk.
Congressman Dwarven Dragon
Wulfric
Atlas Politician | Posts: 21,621 | United States
« Reply #84 on: November 09, 2016, 11:39:19 pm »

Powered by SMF 1.1.21 | SMF © 2015, Simple Machines

© Dave Leip's Atlas of U.S. Elections, LLC