Author Topic: Nate Bashes Rassy  (Read 5692 times)
muon2
Moderator
Atlas Icon
*****
Posts: 16,802


« on: July 25, 2010, 12:00:27 AM »

What I glean from Silver is that he has deduced the likely activities that Rassy's respondents are engaged in when they take the call. The undertone of the article seems to be that Rassy's sample set might have inherent bias, but there is no evidence of that. The only conclusion I can draw is that the sample does not have the same activity profile as the adult population as a whole.

On the surface I don't see the problem. I get the concept that by restricting call backs and time period, the number of unwanted variables is reduced. I presume that Rassy has appropriate weights to compensate for any skew in the sample due to these attempts to control unwanted variables. If the models that provide the weights are well tested, then the polls should be statistically sound.
Logged
muon2
Moderator
Atlas Icon
*****
Posts: 16,802


« Reply #1 on: August 03, 2010, 10:22:01 PM »

At its core, there is almost a religious issue here.

Most pollsters, myself included, were raised in "The Church of the Random Sample" to use Mark Blumenthal's phrase.

Painfully, kicking and screaming, most pollsters have realized even a decent approximation of a true random sample is essentially impossible to achieve in the modern world.

Many phone pollsters (indeed just about everybody but Gallup) have essentially given up even trying to generate a random sample and have gone to some variation of quota calling. Even Gallup is employing a lot more post-sample weighting than it did before.

Approaches vary, but what ABC news (TNS is their actual call center) does is conceptually pretty typical.

The US Census Bureau divides the population into 48 census "cells" based upon age, income, race, education, geography, etc...  ABC "quota calls" - they screen respondents to see which "cell" they fit into, and they then keep calling till they have filled their "quota" in all the cells...
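To make the mechanics concrete, here is a toy sketch of quota calling. The cells and quota sizes below are made up for illustration; they are not ABC/TNS's actual 48 census cells, and real call centers layer on much more (callbacks, dispositions, within-household selection).

```python
import random

# Illustrative cells and quotas (hypothetical, not ABC/TNS's real design).
CELLS = {("18-34", "male"): 5, ("18-34", "female"): 5,
         ("35-64", "male"): 8, ("35-64", "female"): 8,
         ("65+", "male"): 4, ("65+", "female"): 4}

def screen(respondent):
    """Assign a reached respondent to a cell via screening questions."""
    return (respondent["age_group"], respondent["sex"])

def quota_call(dial_next):
    """Keep dialing until every cell's quota is filled; overflow is screened out."""
    filled = {cell: [] for cell in CELLS}
    while any(len(filled[c]) < CELLS[c] for c in CELLS):
        r = dial_next()                  # reach the next respondent
        cell = screen(r)
        if len(filled[cell]) < CELLS[cell]:
            filled[cell].append(r)       # counts toward that cell's quota
        # else: quota already met, respondent is thanked and screened out
    return filled
```

The key point the post is making shows up directly in the loop: the stopping rule is "all quotas full," not "n random draws," so the finished sample is constructed rather than random.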

What you end up with looks like a random sample, though whether it actually behaves like one is a matter of substantial mathematical debate....

ABC uses selection criteria when calling (i.e., asking for the youngest male initially), but this is done for economic reasons, to try to fill their quota with the fewest possible calls; it is not part of the polling methodology per se.

I guess the question to be asked is: once you have crossed the Rubicon (metaphorically speaking), does it really matter HOW you fill the quota? It's not a random sample anymore; it's a constructed stratified sample.

I was deeply skeptical of Rasmussen at the beginning, but the bottom line is his results, as defined by predicting actual outcomes in actual elections, are exceptionally good.

The "bot" also has the huge advantage of doing sooooo many surveys that he is at the point he has, in the gigantic aggregate, sooooo much information that his random error is basically zero, and all that is left is systemic error that he can (at least conceptually) weight away.  To quote "Uncle Joe Stalin..." - Quantity has a quality all it's own.

Rasmussen does roughly (if you count his separate economic and employment indexes) half a million completed calls a year... he knows exactly how many people he actually reaches in every demographic, racial, income, and other category... he can weight his data to fix it...
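A back-of-envelope calculation shows why random error becomes negligible at that volume: the 95% margin of error for a proportion shrinks as 1/sqrt(n).

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# A single 1,000-respondent poll vs. an aggregate of ~500,000 calls:
# margin_of_error(1_000)   -> about 0.031 (roughly 3 percentage points)
# margin_of_error(500_000) -> about 0.0014 (roughly 0.14 points)
```

At half a million interviews, the sampling error is an order of magnitude smaller than typical house effects, which is exactly the point: what remains is systematic error, which weighting can at least in principle address.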

For those of us (Nate and Mark B are certainly on the list) raised in the "Church of the Random Sample," Rasmussen is an atheist, and it's hard for them to contemplate that maybe there actually is no God.



It fascinates me that notions of weighting that have long been recognized in scientific experiments have been anathema to many in the polling biz. For instance, in particle physics there are way too many individual particle collisions to process in searching for rare phenomena. The experiment will set up the equivalent of a screen that biases the sample towards the events one is looking for, thus minimizing the processing time. Other control samples are used to determine precise weights for different parts of the screened sample so that a correct measurement results.
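The weighting being described is essentially post-stratification: each cell's weight is the ratio of its known population share to its share of the completed sample, so weighted totals reproduce the population margins. A minimal sketch, with made-up numbers for illustration:

```python
# Post-stratification sketch (illustrative shares and counts, not real data).
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
sample_counts    = {"18-34": 150, "35-64": 550, "65+": 300}   # sample skews old

n = sum(sample_counts.values())

# Weight = population share / sample share for each cell.
weights = {cell: population_share[cell] / (sample_counts[cell] / n)
           for cell in sample_counts}

# Weighted cell shares now match the population margins exactly.
weighted_share = {cell: weights[cell] * sample_counts[cell] / n
                  for cell in sample_counts}
```

This is the same logic as the physics case above: the raw (screened or quota'd) sample is deliberately or accidentally biased, and known control totals supply the correction factors.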
Logged
muon2
Moderator
Atlas Icon
*****
Posts: 16,802


« Reply #2 on: August 04, 2010, 07:52:04 AM »
« Edited: August 04, 2010, 01:29:31 PM by The Vorlon »


It fascinates me that notions of weighting that have long been recognized in scientific experiments have been anathema to many in the polling biz. For instance, in particle physics there are way too many individual particle collisions to process in searching for rare phenomena. The experiment will set up the equivalent of a screen that biases the sample towards the events one is looking for, thus minimizing the processing time. Other control samples are used to determine precise weights for different parts of the screened sample so that a correct measurement results.


Have the folks over at CERN found the God Particle yet....?

They had some problems with their magnets and will be taking a year and a half to fix them. Who knows, maybe we'll see it at Fermilab while they're shut down. Smiley

Logged
muon2
Moderator
Atlas Icon
*****
Posts: 16,802


« Reply #3 on: August 05, 2010, 08:57:14 AM »


Have the folks over at CERN found the God Particle yet....?

They had some problems with their magnets and will be taking a year and a half to fix them. Who knows, maybe we'll see it at Fermilab while they're shut down. Smiley



Fermilab can do just under 2 TeV, as I recall? versus 7 (currently) at CERN?

The Higgs boson might be below the 2 TeV range, as I recall? It's been a while since I have looked at this area in any depth.

The mass of the Higgs is well below 2 TeV if it follows anything like the Standard Model. The advantage of higher energy is a higher rate of production.
Logged
muon2
Moderator
Atlas Icon
*****
Posts: 16,802


« Reply #4 on: August 06, 2010, 11:35:41 PM »


The mass of the Higgs is well below 2 TeV if it follows anything like the Standard Model. The advantage of higher energy is a higher rate of production.

Assuming no intermediate particles, 1.4 TeV is more or less the upper limit, as I recall.

Wouldn't it be amazing if the Standard Model ended up totally wrong?

Now that would be fun science!

That's why we do the experiments. Smiley

Now if we could get the pollsters to appreciate variable control and cancellation of systematics in their sampling as well ...
Logged