I sucked at statistics when I took the required courses at university. But I also developed an appreciation for the kind of bent thinking statisticians sometimes have to do when looking at the data they analyze. Yesterday I saw a remarkably fun example (which, frankly, I probably wouldn't have posted about on its own) related to an older problem first proposed by Martin Gardner, called the Two Children Problem. There's an excellent article about it here (warning: contains a light coating of math and fun thinking about sampling biases).
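If you want to poke at the puzzle yourself before reading the article, here's a minimal Monte Carlo sketch (my own toy code, not from the article) of the classic version: among two-child families with at least one boy, how often are both children boys? The intuitive answer is 1/2; the simulation keeps landing near 1/3, and the gap between the two is exactly where the sampling-bias discussion comes in.

```python
import random

def simulate(trials=100_000):
    """Two Children Problem: P(both boys | at least one boy)."""
    both_boys = 0
    at_least_one_boy = 0
    for _ in range(trials):
        # Each child is a boy ("B") or girl ("G") with equal probability.
        kids = [random.choice("BG"), random.choice("BG")]
        if "B" in kids:
            at_least_one_boy += 1
            if kids == ["B", "B"]:
                both_boys += 1
    return both_boys / at_least_one_boy

print(simulate())  # hovers around 1/3, not the intuitive 1/2
```

Change the condition from "at least one boy" to "the first child is a boy" and the answer snaps back to 1/2, which is the whole trick: how the family got into your sample matters as much as the family itself.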
The reason I'm posting at all is that /. then linked to a fantastic report on Daily Kos about rigged poll results and how the company they had contracted for polling services was defrauding them. That one contains scads of statistical analysis. It also fascinates me because I find tests for randomness fascinating in their own right (and seeing human biases creep in because of our assumptions about randomness is even cooler). It's good reading just to see what bad data looks like and how even simple tests (the first is so simple you could teach it to junior high school students, I think) can make you skeptical.
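To give a flavor of how simple that kind of first-pass check can be (this is a sketch with entirely synthetic numbers, not the actual poll data, and not necessarily the report's exact test): if two reported crosstab percentages are genuinely noisy, whether each one is even or odd should be close to a coin flip, so paired values should share parity only about half the time. Fabricated numbers tend to blow that up.

```python
import random

def parity_match_rate(pairs):
    """Fraction of (x, y) pairs where x and y have the same parity.
    Honest, noisy percentages should land near 0.5."""
    matches = sum((x % 2) == (y % 2) for x, y in pairs)
    return matches / len(pairs)

# Synthetic data for illustration only.
honest = [(random.randint(30, 70), random.randint(30, 70)) for _ in range(1000)]
# "Rigged" pairs always differ by an even amount, so parities always match.
rigged = [(x, x + random.choice([-2, 0, 2])) for x, _ in honest]

print(parity_match_rate(honest))  # near 0.5
print(parity_match_rate(rigged))  # 1.0, a glaring red flag
```

Nothing here needs more than grade-school arithmetic, which is exactly the point: the faker's habits show up long before you reach the fancier statistics in the report.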