Jensen Series Lecture - Dr. Brian Uzzi

Friday, November 9, 2018 - 1:15pm

Speaker: 
Dr. Brian Uzzi

The Department of Sociology is happy to present our Jensen Speaker, Brian Uzzi, on Friday, November 9, 2018. His talk, "An Artificial and Human Intelligence Approach to the Replication Problem in Science," will begin at 1:15 pm in the Zener Auditorium (130 Soc/Psych).

The abstract of his talk:

Artificial intelligence (AI) systems have been shown to have superhuman learning capabilities [1, 2] and the ability to alter human performance in complex decision-making situations [3]. However, AI systems have yet to be applied to pressing scientific problems that have eluded traditional methods [4]. Here, we test the ability of an AI system to address the replication problem in science. While complete replicability is neither expected nor necessary for advancing science [5], replication research has found a disturbing result: in top journals across disciplines, significantly more papers fail than pass replication tests, and non-replicating results are cited as frequently as replicating results, creating concern that the literature may be unduly affected by weak ideas [6]. We developed a novel method for estimating replicability that combines human and artificial intelligence. Using data on 100 studies that underwent rigorous manual replication tests, we trained a neural network and machine learning AI model to estimate a paper's likelihood of replicability based only on the paper's text, the part of a paper hardest for humans to quantify. We then tested the model's generalizability on 245 out-of-sample studies from over 100 journals and diverse disciplines. Our analysis finds: (i) the AI model yields robustly generalizable estimates of replicability across disciplines that are more accurate and have higher confidence than prevailing approaches; (ii) while statistics are typically used to evaluate a paper's replicability, the narrative text has more explanatory power than the reported statistics, suggesting that the AI model detects unique and pertinent scientific information that humans overlook or find hard to process; (iii) analysis of the mechanisms behind the model's explanatory power indicates that conspicuous features of scientific research (such as word or persuasion-phrase frequencies, writing style, discipline, journal, authorship, or topics) do not explain the results. Rather, the network of linguistic relationships contains information that predicts replicability, implying that combinations of human and machine intelligence can advance theory and inform many scientific problems.
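For readers curious what a purely text-based replicability estimator might look like in practice, the Python sketch below is a minimal, hypothetical illustration, not Uzzi's actual model: it fits a simple TF-IDF plus logistic-regression classifier on papers labeled by their manual replication outcome and then scores unseen papers. The training texts, labels, and model choices are placeholders for illustration only.

# Minimal sketch of a text-based replicability estimator (hypothetical;
# not the neural network model described in the talk).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: narrative text of studies that underwent
# manual replication tests, with 1 = replicated, 0 = did not replicate.
train_texts = [
    "We conducted a preregistered experiment with a large sample ...",
    "An exploratory pilot study suggested a marginal effect ...",
]
train_labels = [1, 0]

# TF-IDF features over the narrative text, followed by a linear classifier
# whose output probability serves as an estimated likelihood of replication.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

# Score out-of-sample papers from other journals or disciplines.
new_texts = ["We report a surprising effect in a small convenience sample ..."]
print(model.predict_proba(new_texts)[:, 1])  # estimated replication probability

In a real setting, the training set would be the 100 manually replicated studies mentioned in the abstract and the evaluation set the 245 out-of-sample papers; the point of the sketch is only to show the shape of a text-in, probability-out pipeline.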


Zener Auditorium, Rm 130 Soc/Psych

Contact

Dr. Craig Rawlings