Subsamples have to abide by the same laws of statistics as the overall sample. Subsamples are not immune from criticism.
If this were just a poll of the 18-24 year-olds, I'd probably agree. Newsflash: it isn't.
If you've paid attention, newsflash, I'm the only guy who defends you somewhat on this issue. Polls with dozens and dozens of subsamples make outliers dozens and dozens of times more likely.
It *IS UNFAIR* to pick through a poll and find things that seem a little off just to validate pre-conceived beliefs about the poll. Every poll will have something a little off. But no poll with a statistically sound methodology will have McCain leading the 18-24 year-olds the way this poll does, given the 100-person sample size. Well, maybe one poll every thousand years.
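To put a rough number on that "one poll every thousand years" claim, here's a quick back-of-the-envelope sketch. The 65/35 true split among 18-24 year-olds is purely an assumed figure for illustration (not from the poll), and it treats the 100-person subsample as a simple random sample:

```python
from math import comb

# Assumed (hypothetical) true two-way support among 18-24 year-olds
n, p_mccain = 100, 0.35

# Exact binomial tail: probability McCain wins 51+ of 100 respondents
p_mccain_leads = sum(
    comb(n, k) * p_mccain**k * (1 - p_mccain)**(n - k)
    for k in range(n // 2 + 1, n + 1)
)
print(f"P(McCain leads a clean 100-person subsample) = {p_mccain_leads:.5f}")
```

Under that assumed split, a clean subsample flips the leader well under 1% of the time, so seeing it in one published poll is at least a red flag.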
I'm not a fan of looking at subsamples either. But if they show something that is basically impossible for them to show, it reveals a methodological weakness. If McCain were winning 5 of the 8 black respondents (63%) in an Iowa poll or whatever, that'd be no big deal. We can look at the sample size, account for the normal error that comes with subsamples, and adjust our criticism to reflect the reasons we usually shouldn't critique them.
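For anyone who wants to see how fast subsample error grows, a quick sketch using the textbook simple-random-sample formula (the sample sizes are illustrative, and this ignores design effects from weighting):

```python
from math import sqrt

def moe95(p, n):
    """95% margin of error for a proportion p in a simple random sample of size n."""
    return 1.96 * sqrt(p * (1 - p) / n)

# Worst case p = 0.5: the error only shrinks with the square root of n
print(f"n=1000 full sample: +/-{moe95(0.5, 1000):.1%}")  # +/-3.1%
print(f"n=100 age group:    +/-{moe95(0.5, 100):.1%}")   # +/-9.8%
print(f"n=8 tiny subgroup:  +/-{moe95(0.5, 8):.1%}")     # +/-34.6%
```

That's why 5-of-8 means nothing, while a leader flip in a 100-person subsample is hard to explain away.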
I agree with you that, in general, criticizing subsamples is bad. Of course they average out in any good poll, and some in every poll will appear off.
But this poll is either one in a gazillion or its methodology is flawed. You can't dismiss that fact under the above rule of thumb. Subsamples are generally useless, but rarely they are incredibly revealing, and this is one of those cases. This is partly why polls publish their subsamples: so we can understand the inner workings of the poll a little better.
I think this should go a step further. What I haven't seen in this discussion is the recognition that the poll is weighted. When a poll weights raw values or preselects respondents to fill quotas, the intent is to produce a more accurate topline. But those adjustments are made at the subsample level, and as a result they can skew those or other subsamples.
For example, let's assume the sample is weighted by age. That does not imply there is any mechanism to weight within each age subsample. Quoting results for the age subsamples is then equivalent to posting a raw, unadjusted poll. Since the pollster has reason to believe the raw numbers would be skewed (that's why the weighting is applied at all), there is every reason to believe a subsample would show an even higher likelihood of skew, given its smaller statistical size.
As another example, suppose the weighting is by party ID and not by age. If the age subsample is broken out, it carries the party ID weights computed for the larger sample. If the true party mix differs by age, that is a clear methodological bias. It could be corrected by applying different party ID weights to different age groups, but I'd want to hear that from the pollster before concluding that a subsample was free of this type of bias.
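Here's a toy sketch of that second scenario. Every number is invented for illustration: young respondents skew 70/30 Dem, the pollster weights the whole sample toward a 45/55 party target, and a single weight per party gets applied to every respondent of that party:

```python
# Hypothetical illustration: a full-sample party-ID weight applied
# uniformly distorts the party mix inside an age subsample.

# Raw sample counts by (age group, party) -- all invented
raw = {
    ("18-24", "Dem"): 70, ("18-24", "Rep"): 30,
    ("25+",   "Dem"): 400, ("25+",  "Rep"): 500,
}

dem_total = sum(c for (a, p), c in raw.items() if p == "Dem")  # 470
rep_total = sum(c for (a, p), c in raw.items() if p == "Rep")  # 530
n = dem_total + rep_total                                      # 1000

# Pollster's assumed party-ID targets for the whole electorate
target = {"Dem": 0.45, "Rep": 0.55}

# One weight per party, applied to everyone in that party
weight = {
    "Dem": target["Dem"] * n / dem_total,
    "Rep": target["Rep"] * n / rep_total,
}

# Weighted party mix *within* the 18-24 subsample
young_dem = raw[("18-24", "Dem")] * weight["Dem"]
young_rep = raw[("18-24", "Rep")] * weight["Rep"]
young_dem_share = young_dem / (young_dem + young_rep)

print(f"raw 18-24 Dem share:      {70 / 100:.3f}")        # 0.700
print(f"weighted 18-24 Dem share: {young_dem_share:.3f}")  # 0.683
```

The young subsample's Dem share gets dragged toward the full-electorate target even if 70/30 was the true mix for that age group, which is exactly the kind of within-subsample bias you'd want the pollster to disclose.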
To conclude, I tend to give subsamples less weight than their statistical size alone might suggest, unless I know how any weighting or sample preselection works at the level of the reported subsamples.