Trinity Communications
“I study what makes or breaks science,” said Daniel Scott Smith, assistant professor of Sociology.
Smith studies the social foundations of science, focusing on peer review — the process by which a scientific paper is evaluated before it is accepted for publication in a journal.
Is this research new? Is it accurate? Does it make a valuable contribution to the literature? Does it generate new ideas and open new paths of research?
These are all questions one might ask when evaluating new science for publication. But when peer reviewers have conflicting — or complementary — views about the qualities of a paper, it can result in varying degrees of certainty about the validity of the science. Traditionally, that’s been framed as a problem.
“Many people might think that, if the science is valid, that should be in plain sight for all to see, but I explore the ways through which notions of validity, accuracy and novelty are diversely interpreted,” Smith said. “How do we as people, as practitioners of science, deal with diversity or plurality in what otherwise seems to be a rather standardized, abstract realm of evaluation?”
As a sociologist, Smith is interested in how different viewpoints can lead to uncertainty and bias, but also better-quality science and even learning among scientists. He’s developing this line of work as a co-principal investigator of a new project funded by the NSF that traces the evolution of technology as inventors and examiners interact over the course of the patent review process at the United States Patent and Trademark Office.
Smith’s research on evaluation in science and technology also interfaces with another major arena of his research: generative AI and machine learning, which Smith uses to classify texts, reconstruct scientific arguments, and generate data to train open-source models.
“Some may turn to generative AI as a solution to the burden on peer reviewers. It's more efficient and less expensive,” he said.
But Smith doesn’t believe that AI can replace human judgment, and he highlights the need for interaction in scientific evaluation.
“I think when we resort to full automation, we lose the transformative and self-determining capacity of humans in making impactful decisions and in engaging the self-correcting process that defines us as scientists. You can prompt different bots or agents to take on different perspectives and offer complementary evaluations of a given piece, but should we think that those are as good as peers — as your own colleagues pushing you to do better work that has greater value for society? Who should define that, if not us?”
These questions go far beyond Smith’s immediate research agenda: they are central to the future of science.
In addition to Smith, the Department of Sociology is welcoming Assistant Professor Wenhao Jiang to the faculty this year.