PolMeth 2012 Round-Up, Part 2

A Map from Drew Linzer’s Votamatic

Yesterday I discussed Thursday’s papers and posters from the 2012 meeting of the Society for Political Methodology. Today I’ll describe the projects I saw on Friday, again in the order listed in the program. Any attendees who chose a different set of panels are welcome to write a guest post or leave a comment.

First I attended the panel for Jacob Montgomery and Josh Cutler’s paper, “Computerized Adaptive Testing for Public Opinion Research” (pdf; full disclosure: Josh is a coauthor of mine on other projects, and Jacob graduated from Duke shortly before I arrived). The paper applies a strategy from educational testing to survey research. On the GRE, if you answer a math problem correctly, the next question you see is more difficult. Similarly, when testing for a latent trait like political sophistication, a respondent who can identify John Roberts likely also recognizes Joe Biden, so asking the easier question adds little information. Selecting questions adaptively can greatly reduce the number of survey items required to accurately place a respondent on a latent dimension, which in turn can reduce non-response rates and/or survey costs.
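To make the idea concrete, here is a minimal sketch of an adaptive testing loop under a two-parameter IRT model: after each answer, update the estimate of the respondent’s latent sophistication, then ask whichever remaining question is most informative at that estimate. The item names and parameters are invented for illustration; this shows the general technique, not the authors’ implementation.

```python
import numpy as np

# Hypothetical item bank: each political-knowledge item has a
# discrimination (a) and difficulty (b) under a 2PL IRT model.
ITEMS = {
    "identify_joe_biden":    {"a": 1.2, "b": -1.0},  # easy
    "identify_john_roberts": {"a": 1.5, "b": 0.5},   # harder
    "identify_house_whip":   {"a": 1.8, "b": 1.5},   # hardest
}

GRID = np.linspace(-3, 3, 121)      # grid over latent sophistication
prior = np.exp(-0.5 * GRID**2)      # standard-normal prior
prior /= prior.sum()

def p_correct(theta, a, b):
    """2PL probability of a correct answer at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def next_item(posterior, asked):
    """Pick the unasked item with maximum Fisher information at the posterior mean."""
    theta_hat = np.dot(GRID, posterior)
    def info(item):
        p = p_correct(theta_hat, item["a"], item["b"])
        return item["a"] ** 2 * p * (1 - p)
    candidates = {k: v for k, v in ITEMS.items() if k not in asked}
    return max(candidates, key=lambda k: info(candidates[k]))

def update(posterior, item, correct):
    """Bayesian update of the ability distribution after one response."""
    p = p_correct(GRID, ITEMS[item]["a"], ITEMS[item]["b"])
    posterior = posterior * (p if correct else 1 - p)
    return posterior / posterior.sum()

# Simulated respondent who knows Biden and Roberts but not the House whip
posterior, asked = prior.copy(), []
for answer in [True, True, False]:
    item = next_item(posterior, asked)
    asked.append(item)
    posterior = update(posterior, item, answer)
    print(item, "-> estimated sophistication:", round(np.dot(GRID, posterior), 2))
```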

Friday’s second paper was also related to survey research: “Validation: What Big Data Reveal About Survey Misreporting and the Real Electorate” by Stephen Ansolabehere and Eitan Hersh (pdf). This was the first panel I attended that provoked a strong critical reaction from the audience. There were two major issues with the paper. First, the authors contracted out the key stage in their work, validating survey responses by cross-referencing them against other data sets, to a private, partisan company (Catalist) in a “black box” way, meaning they could not explain much about Catalist’s methodology. At a meeting of methodologists this is very disappointing, as Sunshine Hillygus pointed out. Second, their strategy for “validating the validator” involved purchasing a $10 data set from the state of Florida, deleting a couple of columns, and seeing whether Catalist could fill those columns back in. Presumably they paid Catalist far more than $10 to do this, and the same public file is available to Catalist just as cheaply, so I don’t see why passing that test would be difficult at all. Discussant Wendy Tam Cho was my favorite of the day, as she managed to deliver a strong critique while maintaining a very pleasant demeanor.
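Mechanically, that exercise amounts to something like the sketch below: withhold a few columns from the purchased file, have the vendor fill them in from its own database, and measure agreement. Every file and column name here is hypothetical, and nothing in it reflects Catalist’s actual matching procedure, which is exactly the problem.

```python
import pandas as pd

# Hypothetical file and column names; the real Florida file and the
# vendor's return format were not described in enough detail to be literal.
truth = pd.read_csv("florida_voter_file.csv")        # the $10 state file
withheld_cols = ["party_registration", "race"]       # columns deleted before sending

# Send the file with the withheld columns removed.
truth.drop(columns=withheld_cols).to_csv("sent_to_vendor.csv", index=False)

# ... vendor matches the records against its national database
# and returns its guesses for the missing columns ...
returned = pd.read_csv("returned_from_vendor.csv")

# Score the vendor's guesses against the withheld ground truth.
merged = truth.merge(returned, on="voter_id", suffixes=("_true", "_vendor"))
for col in withheld_cols:
    agreement = (merged[f"{col}_true"] == merged[f"{col}_vendor"]).mean()
    print(f"{col}: {agreement:.1%} of withheld values recovered")
```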

In the afternoon, Drew Linzer presented on “Dynamic Bayesian Forecasting of Presidential Elections in the States” (pdf). I have not read this paper, but I thoroughly enjoyed Linzer’s steady, confident presentation style. The paper is also accompanied by a neat election forecast site, which is the source of the graphic above. As of yesterday morning, the site predicted 334 electoral votes for Obama and 204 for Romney. One of the great things about this type of work is that it is completely falsifiable: come November, the forecast will be right or wrong. Jamie Monogan served as the discussant and helped keep the mood light for the most part.
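Linzer’s model is far richer than this (the paper combines state polls with a structural baseline and lets state-level opinion evolve over time), but the basic step of turning state-level uncertainty into an electoral-vote forecast can be conveyed with a toy Monte Carlo simulation. All of the posterior means, standard deviations, and “safe” totals below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2012)

# Toy posterior summaries of Obama's two-party vote share in a few
# battleground states: (posterior mean, posterior sd, electoral votes).
states = {
    "FL": (0.50, 0.02, 29),
    "OH": (0.52, 0.02, 18),
    "VA": (0.51, 0.02, 13),
    "CO": (0.51, 0.02, 9),
}
SAFE_OBAMA_EV = 237   # invented stand-in for states treated as certain wins
SAFE_ROMNEY_EV = 232  # everything else

n_sims = 10_000
obama_ev = np.full(n_sims, SAFE_OBAMA_EV)
for mean, sd, ev in states.values():
    share = rng.normal(mean, sd, n_sims)       # simulated Obama two-party share
    obama_ev = obama_ev + (share > 0.5) * ev   # award the state's EVs when he carries it

print("Expected Obama electoral votes:", obama_ev.mean())
print("P(Obama wins at least 270 EV):", (obama_ev >= 270).mean())
```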

Jerry Reiter of the Duke Statistics Department closed out the afternoon with a presentation on “The Multiple Adaptations of Multiple Imputation.” I was unaware that multiple imputation was still considered an open problem, but this presentation and a poster by Ben Goodrich and Jonathan Kropko (“Assessing the Accuracy of Multiple Imputation Techniques for Categorical Variables with Missing Data”) showed me how wrong I was. Overall it was a great conference and I am grateful to all the presenters and discussants for their participation.