What I gather from this is that the only difference between PPP's methodology and the hocus-pocus of traditional poll weighting practices is that PPP has the opportunity to correct their numbers into something somewhat sensible whenever they realize they have an outlier. Every pollster has expectations about how the results will (or should) look, and those expectations certainly come into play when adjusting demographic ratios, establishing likely-voter screens, or merging a cellphone sample or internet panel with the rest of the poll.
It's not scientifically rigorous, no, but claiming to be so is a sham in the first place when you're dealing with response rates in the single digits.
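For concreteness, the demographic-ratio adjustment mentioned above usually amounts to post-stratification: reweighting each demographic cell so the sample matches assumed population shares. A minimal sketch, with made-up age groups and target shares (the exact cells and targets any given pollster uses are not specified here):

```python
def poststrat_weights(sample_counts, population_shares):
    """Weight each demographic cell so the weighted sample
    matches the assumed population shares."""
    n = sum(sample_counts.values())
    return {
        cell: (population_shares[cell] * n) / count
        for cell, count in sample_counts.items()
    }

# Hypothetical poll of 400 respondents, skewed older than the target.
sample = {"18-34": 60, "35-64": 180, "65+": 160}
targets = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}  # assumed census shares

weights = poststrat_weights(sample, targets)
# Each cell's weighted contribution now equals its target share,
# e.g. 60 * weights["18-34"] / 400 == 0.30.
```

The "expectations" part enters through the choice of targets: pick different assumed population shares and the same raw responses produce different headline numbers.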