
Gary King

Gary King is the Albert J. Weatherhead III University Professor at Harvard University, based in the Department of Government (in the Faculty of Arts and Sciences). He also serves as Director of the Institute for Quantitative Social Science. King and his research group develop and apply empirical methods in many areas of social science research, focusing on innovations that span the range from statistical theory to practical application.

King's work is widely read across scholarly fields and beyond academia. He was listed as the most cited political scientist of his cohort; among the group of "political scientists who have made the most important theoretical contributions" to the discipline "from its beginnings in the late-19th century to the present"; and on ISI's list of the most highly cited researchers across the social sciences. His work on legislative redistricting has been used in most American states by legislators, judges, lawyers, political parties, minority groups, and private citizens, as well as the U.S. Supreme Court. His work on inferring individual behavior from aggregate data has been used in as many states by these groups, and in many other practical contexts. His contributions to methods for achieving cross-cultural comparability in survey research have been used in surveys in over eighty countries by researchers, governments, and private concerns. King led an evaluation of the Mexican universal health insurance program, which included the largest randomized health policy experiment to date. He has reverse engineered Chinese censorship and worked on a wide range of other projects. The statistical methods and software he develops are used extensively in academia, government, consulting, and private industry. He is a founder of, and an inventor of the original technology for, Learning Catalytics (acquired by Pearson), Crimson Hexagon, and Perusall, among others.

Big Data is Not About the Data!

The spectacular progress the media describes as "big data" has little to do with the data. Data, after all, is becoming commoditized, less expensive, and an automatic byproduct of other changes in organizations and society. More data alone doesn't generate insights; it often merely makes data analysis harder. The real revolution isn't about the data; it is about the stunning progress in the statistical and other methods of extracting insights from the data. He illustrates these points with a wide range of examples from his research, including forecasting the solvency of Social Security; reverse engineering Chinese censorship; estimating causes of death in developing countries; automated text analysis of billions of social media posts; and how humans are horrible at choosing keywords and what to do about it, among others.
