This piece was originally printed in the Tempe Normal Noise magazine at ASU. It presents a small part of a debate going on within contemporary science. Discussions are being held in many different mediums, from public press, to workshops and conferences, and publications. Here, some opinions in this debate are voiced through a conversation between fictionalized scientists, all of whom are in the early stages of their careers. The conference, studies and numbers mentioned in this piece are real. While this specific conversation never happened in Tempe, it could have easily taken place at any scientific conference in the world.
It was 11:30 p.m. on a Thursday. “I should probably be heading home soon anyways,” Phil thought, as he ordered another Coffee Kolsch at the bar. He had another busy day at the conference tomorrow. The Conference on Complex Systems, or the CCS, had descended on the industrial southwest corner of Tempe, bringing with it more than 600 scientists from all over the world. While 600 researchers would be a barely noticeable blip on the scale of ASU’s 83,000 enrolled students, the intellectual impact of the conference was difficult to overstate. Complex systems science represents a huge conglomerate of new research directions from many disparate fields. With keynote talks about the nature of social norms, the connection between the global food distribution network and the Arab Spring, and recent progress toward a predictive theory of evolutionary biology, the conference was interdisciplinary to the core. Complex systems is a field of science defined in the negative: it strives to be what traditional scientific disciplines cannot be, to remove the borders between schools of thought, to integrate knowledge from seemingly unrelated fields into a more comprehensive view of the world. It has ambitious goals.
The patio at Casey Moore’s was surprisingly deserted for a Thursday night. Phil took a second to enjoy the cool breeze as he stepped back outside. It was the first of October, and it felt like the first night of fall; maybe the summer was finally ending. When he got back to the table, his friends and fellow conference-goers were debating the quality of American craft brews compared to traditional Belgian ales. The conversation meandered a bit: there was a brief mention of the refugee crisis in Europe, and some questions about what exactly Donald Trump means for American politics. Eventually a topic of substance emerged: the pervading notion that you must “publish or perish” in contemporary science. It is an issue that can be seen in almost every branch of academia, but in science it is particularly prevalent, or so it seemed to this group of scientists. Doing “good” science is really difficult, and a single scientific project could, and sometimes should, take years to complete. For an early-career scientist, publication count is perceived not only as a measure of personal success but also as a measure of value to the scientific community. Despite the general consensus that publication count is not the only measure of a scientist’s success, most professional opportunities, from post-doctoral positions, fellowships, and grants to tenure applications, still depend heavily on it.
“It’s all bullshit anyways,” said Roth, “look at █████ ██████, he publishes a bunch of noise, really fast, and everyone thinks he’s brilliant because one in twenty of his publications is good.” Roth was making a sound point. Not only does a publication count ignore the quality of the science being done, it isn’t even necessarily a good representation of how much work was done by a particular scientist. Publications typically have multiple authors, and there is no universal standard for how much work qualifies someone to be listed as an author on a paper. This means that a well-connected scientist can contribute to many different projects and be listed as an author on all of them, even without doing much for any one of them. That wasn’t the heart of the matter, though. The real problem, the one Roth was concerned with, was more practical. When funding sources care more about the quantity of publications than the quality of results, there is a massive incentive to spread good results over as many publications as possible. This ultimately dilutes the content of any single journal article while simultaneously creating many more articles to read.
Recently, Physical Review E (the journal specifically designated for statistical and nonlinear physics) announced that it has published more than 50,000 articles since its inception in 1993. That raises the question: what does that number mean for human understanding? In just over 20 years, a single journal (of which there are many more) has generated more content than any human could read and comprehend in a lifetime—let alone over the course of a graduate program. How did science end up with this kind of system?
The first scientific journals were founded over 350 years ago. Before that time, scientific knowledge was embodied either in massive tomes hidden in university libraries or in personal letters written between colleagues. At the time of their inception, and for a long time after, scientific journals were extremely impressive endeavors: collecting manuscripts from scientists all over the world (i.e. Europe[1]), redistributing them for peer review, and then publishing and distributing the results. Each step in the process was no easy task in the days before steam engines, railways, or automobiles. As such, these publishers charged a steep price for access to their knowledge distribution network. Depending on the field, publishers today can charge anywhere from a few hundred dollars to upwards of 5,000 dollars for a single title. Before the advent of the internet, this kind of pricing might not have seemed so extreme; in fact, for serious researchers and institutions, it was a good deal. You would have expected, however, that scientific publishing would undergo a massive restructuring as the internet emerged and began contributing to almost every human endeavor. After all, scientists were some of the earliest adopters of the Internet[2].
Unfortunately, the basic business model from 300 years ago is still alive and well in the information age. The premise is this: scientists work diligently in their area of expertise, and when they find results that are new, exciting, contradictory, or otherwise interesting, they compose a manuscript describing the experiment and its results. This manuscript is then submitted to a journal, which sends it out to other specialists in that field. These peer specialists review the article based on its clarity, its scientific merit, and, importantly, its perceived relevance to the larger field. If the article is deemed clear, scientifically sound, and interesting, it is accepted and published in the journal. Often articles will be revised several times before being accepted at a given journal. Once a piece is accepted, it becomes the intellectual property of the publisher, and other scientists must pay the publisher for access to those results. At no point in this process does the publisher pay any of the scientists involved. In any other industry this model would not only fail, it wouldn’t even make it past the boardroom of any respectable company.
You might think these scientific publishers are simply generous: they perform all these essential functions for the scientific community and only seek to recoup their costs. Unfortunately, that is not the case. According to a study published in PLoS One, the largest scientific publisher, Reed-Elsevier, makes a profit margin of about 39%. That is a higher margin than the Industrial & Commercial Bank of China (29%) and Hyundai Motors (10%); in fact, it is on par with Pfizer (42%). Those companies represent the highest-margin organizations in the banking, automobile, and drug industries, respectively. This is a company which sells access to content it doesn’t create, which has outsourced its quality control to its users, and which has essentially no marginal cost: the cost of sharing one PDF is the same whether it is downloaded 10 times or 10,000,000 times. Yet in the 21st century, journals are still able to charge a premium for access to their articles. Profit margins like this are driven predominantly by the fact that scientists are not the customers in this model; university libraries are. University and institution libraries are essentially captive audiences in this system. Researchers desperately need access to current publications in order to make progress, which is a big reason to associate with an institution in the first place. University libraries don’t decide which journals to purchase based on traditional supply and demand; instead, the decisions are made based on budget allocations, which are typically independent of both supply and demand. “Publishing has become too large of a process,” complained Louie, “one of the original goals of journal publishers, besides implementing peer review, was to sort the relevant discoveries out of all the unimportant ones. But with the number of subfields these days, the opinions of editors are less and less important.
“Particularly for interdisciplinary scientists, the opinion of their peers should matter more than that of editors. Isn’t that the point of peer review?”
“One of the motivations for the whole Open Access push was to remove that responsibility from the publishers,” responded Sonya, “other scientists, not journal editors, should be deciding what is and isn’t relevant to the field.” Open Access is a model pioneered by PLoS (Public Library of Science), an online-only publisher. All of the articles in PLoS journals are open to the public, for free. The catch is that PLoS (and other Open Access journals) operate on a pay-to-publish model: rather than charging readers to view content, they charge scientists to publish their work. PLoS One has quickly become one of the biggest journals in the world, but its model is still extremely controversial. Importantly, PLoS One does not judge articles based on their perceived importance, only on their academic merit and clarity. PLoS argues that this helps create an unbiased forum, where the scientific community, not the publishers, determines what work is important and what is not. Others, like Harvard biologist John Bohannon, argue that this model is profit-driven, that accepting more, less significant manuscripts is only a means to make more money, and that it ultimately slows down progress. “I definitely think that interdisciplinary fields should get behind a lot of the open access ideas,” said Rosco, “but it’s crazy to expect early-career scientists to pay $1500 just to publish a paper! Considering you might have 3-5 before you graduate, how is anyone supposed to afford that?”
“It’s not just about us though,” retorted Sandy, “there’s a moral aspect to this too. Open Access makes sure that everyone has access to information. That might not be important for physics or geology, but for medicine and science that can affect public policy, it’s really critical that people have access to the most up-to-date science.” While the Open Access model has been heralded by some, like the Cambridge mathematician Timothy Gowers[3], as an appropriate model for the information age, it has some serious legitimacy issues. Despite PLoS One being one of the largest journals in the world, PLoS publications are still seen as suspect by some researchers. Early-career scientists often worry about having too many PLoS publications on their CVs. In some specializations, PLoS is seen as a place where papers are submitted after being rejected by more traditional outlets. To make matters worse, in 2013 John Bohannon published a study in Science (a journal which is not open access) showing that almost half of open access journals had little to no serious peer review process. The study generated papers which seemed plausible but were actually flawed in very obvious ways. The papers were submitted to 307 open access journals all over the world and were accepted by 147. Typically the papers were accepted without revision or comments from editors. While many larger journals rejected the paper outright—PLoS included—the study demonstrated the real danger of journals where authors pay to publish, rather than readers paying for access.
Using publication records to measure academic success has worked for over 300 years, but it seems to be slowly taking control out of the hands of scientists and putting it instead in the hands of publishers. The success of publishers as corporations doesn’t contradict the success of science; after all, technology and human understanding have advanced remarkably in those three centuries. But it doesn’t seem to be the right model in the age of Wikipedia. It doesn’t make sense for contemporary science to be so beholden to organizations which don’t seem necessary in the information age. In the last 40 years, there has been a dramatic consolidation of academic publishers: the five largest publishers in the natural and medical sciences now control over 50% of yearly citations, compared with less than 20% in 1970, according to “The Oligopoly of Academic Publishers in the Digital Era,” a study published in PLoS One. This has occurred alongside increased profit margins for those publishers and rising pressure on scientists to publish. “It’s messed up when Nobel laureates like Peter Higgs admit that they wouldn’t have been able to survive their early careers in today’s academic climate,” said Louie, “he thinks the pressure to constantly publish would’ve prevented him from coming up with the Higgs field!”
“Does science even proceed on journal publications?” Phil asked the group. Everyone glanced around; some shoulders shrugged. He wasn’t even exactly sure what he meant by that question, but it seemed that any metric for the success and impact of a scientist should be connected in a clear way to the advance of scientific understanding. “Science proceeds by tutorials!” exclaimed Rosco, “I don’t think anyone understood maximum entropy or information theory until Simon gave that tutorial!” He was referencing a tutorial from earlier in the week of the conference, which helped researchers from all different fields understand the core concepts of information theory and how simple aspects of it can be applied to almost any field. Roth chimed in, “Even if we all agree publications are not the right benchmark, what else are we supposed to use? Twitter followers?” Phil sighed and replied, “I’m sick of that being the end of the conversation! Computer science uses ‘invited talks’[4] as a metric, why don’t other fields?”
“Computer science does that because it is a very young field, relatively speaking, and because historically it has been about developing algorithms; the proof was always in the software, so a paper wasn’t necessary,” said Sonya, “it wouldn’t change anything, though. Computer science conferences have all the same problems as publishers in other fields.”
“Well then, can’t we use some kind of combination of papers, talks, and public engagement?” responded Phil.
“It’s pointless to be mad about this,” said Louie, “people who have tenure decide these sorts of things, and the people with tenure got there by getting published, so they won’t want to change this system, even if it’s broken.”
Phil slammed his fist down on the fake wood of the table. “You should be mad about this! If you think something is wrong about science, then you should be mad about it!” There was a brief silence. Everyone stared at him. Sonya and Louie started laughing. “Wow man! You’re so passionate, I love it!” said Louie. Phil took a deep breath and looked over his shoulder; he needed to be careful, he had seen people kicked out of this bar for less. “My point is just that ‘publish or perish’ disproportionately affects interdisciplinary researchers, who must spend more time synthesizing facts and concepts rather than generating new ones. If we, as interdisciplinary researchers, think there is a better way to go about doing interdisciplinary science, we should at least be talking about it, if not yelling about it.”
“That’s all conditioned on the fact that we are interdisciplinary researchers,” said Roth, “complexity science has its own legitimacy issues. We need to prove it’s worth something before we go all crazy trying to reform the entire scientific community.” Phil kept his mouth shut; he completely disagreed with everything Roth had just said, but talking about it more would get him so riled up that he might get kicked out of the bar. He finished his beer and stood up. “Anyone want another round?” he asked as he walked towards the bar. Sandy raised her hand. “What are you drinking?” Phil asked. “I forget, just order me an IPA, they are better here than in London.” He walked to the bar and ordered two Lagunitas, thinking that she should taste a California classic before she left the States. By the time he sat back down it was 1:40 a.m., and the conversation had meandered away; Sandy and Roth were now poring over the beauty of the AdS/CFT duality[5].
Phil didn’t bring up any other issues with science that night, but inside he was still fuming. He couldn’t understand how anyone could see these issues and not be mad about them. 2 a.m. rolled around, and Sandy needed help finishing her beer as they were herded out of the bar. Phil caught an Uber home from there. He considered trying to carry on a more normal conversation with the driver, but it was beyond him. He reflected on how insane science seemed. How had science distinguished itself from other human endeavors? How different could it really be from art, literature, or business? After all, it was a flawed system, built out of the work of flawed individuals. On its face, it seemed absurd that any human endeavor would ever attempt to make objective claims about reality. Peer review isn’t perfect, and it never has been. On the other hand, he thought as he pulled out his cell phone, science really does seem to be different from other schools of thought. The people who put a man on the moon, and the people looking for cures for cancer, don’t consider themselves artists or lawyers while they are doing that kind of work. The current system, in spite of all of its flaws, has allowed the world to progress in unprecedented ways. People today are healthier, more productive, and more connected than at any other point in human history, in large part because of scientific and technological breakthroughs which would’ve been impossible without the peer review process. Phil wondered if there was actually a better way to proceed with science. Was it possible that all the issues he and his peers had with the current model were simply the unavoidable cost of any system of peer review?
[1] It is impossible to separate the history of science from the history of western society.
[2] In fact, the World Wide Web as we know it was pioneered by Tim Berners-Lee while working for CERN, the same organization which discovered the Higgs boson a couple of years ago.
[3] Gowers has famously led a boycott against the Elsevier Publishing Group, which is still ongoing. Most of his supporters are mathematicians; Elsevier has never owned a prestigious mathematical journal.
[4] Invited talks are typically keynote presentations at conferences or workshops, where the speaker has been specifically invited by the organizers.
[5] The AdS/CFT duality is a unifying result, first proposed in the late 1990s, connecting quantum field theory to string theory.