Publishers: we may do stupid things sometimes, but give us scientists some credit

Yesterday evening, I attended a Meetup organized by Impact of Science at the Alexander von Humboldt Institute for Internet and Society in Berlin. The topic was the highly publicized Projekt DEAL negotiations with Elsevier et al. and the future of Open Access research in Germany and beyond.

The speakers were Dr. Nick Fowler, Chief Academic Officer and Managing Director of Research Networks at Elsevier, and Prof. Dr. Gerard Meijer, Director of the Fritz Haber Institute of the Max Planck Society and a member of the German science council.

Each speaker gave their perspective on the past, present, and future of the negotiations. To me, it was refreshing to hear the “other side’s” take on the matter. Dr. Fowler highlighted some important issues that made the negotiations more complicated and difficult. Until then, I admit, I had mostly been exposed to the viewpoint of the proponents of Projekt DEAL. So I started the evening thinking “OK, this is good. I’m learning some new stuff, and some of these counter-arguments make sense.”

That feeling didn’t last long. After Prof. Meijer’s talk, we went into the discussion round. I asked Dr. Fowler to comment on what kind of value the big publishers add to an article. The speakers had briefly touched on the issue earlier, and the consensus from both sides (including, it seemed, at least the general position of Projekt DEAL, though not that of Prof. Meijer himself, as he explained later) was that publishers do add value to articles. I wanted to know what kind of value is added, how it is quantified, and how it relates to the price that researchers and institutes pay publishers for processing gold open access articles or subscribing to paywalled ones.

Do publishers improve the articles that are submitted to their journals?

Dr. Fowler explained how they have evidence that articles published by Elsevier are of higher quality than the rest of the published literature. He used that as proof that their publishing system improves articles, and casually swept aside the idea of added value – the difference between the quality of submitted manuscripts and their published counterparts.

Hold on, you might say. Did he just try to pull a fast one on a room full of scientists? I mean, scientists may have stuck with a flawed and exploitative publishing system for decades, but we know a potential confounding factor (and an inappropriate surrogate outcome) when we see one. Elsevier is the largest, best-known scientific publisher on the planet. Of course they publish the highest-quality research, because they probably get the highest-quality submissions.

Scientists know a potential confounding factor when they see one.

Whether or not value is added during the submission and review process is not a minor detail; it’s at the core of the problem, and therein lies the solution. If the system that publishers provide does add value, how much of that is attributable to peer review, something that researchers do for free (a point emphasized during the discussion by Prof. Meijer)?

I’m not saying publishers don’t add value, but as a scientist, I want to see some evidence. Mind you, I wasn’t asking Dr. Fowler for a randomized controlled trial of their article processing system versus some generic processing-type intervention. I probably would have been satisfied with some As-Seen-On-TV-style before-and-after photos. Instead, the audience got a response that proved nothing.
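To make the confounding point concrete, here is a minimal sketch in Python, with entirely made-up quality scores and two hypothetical publishers (nothing here is real data): a publisher that simply attracts better submissions will publish better-looking articles even if its process adds nothing at all, and only a before-and-after comparison of the same manuscripts would reveal that.

```python
import random

random.seed(1)

def simulate(submission_mean, value_added, n=10_000):
    """Return (mean submitted quality, mean published quality) for one hypothetical publisher."""
    submitted = [random.gauss(submission_mean, 1.0) for _ in range(n)]
    published = [q + value_added for q in submitted]  # value added by the publishing process
    return sum(submitted) / n, sum(published) / n

# Publisher A attracts stronger submissions but its process adds nothing;
# publisher B gets weaker submissions and adds nothing either.
sub_a, pub_a = simulate(submission_mean=7.0, value_added=0.0)
sub_b, pub_b = simulate(submission_mean=5.0, value_added=0.0)

print(f"Cross-publisher comparison of published quality: A={pub_a:.2f} vs B={pub_b:.2f}")
print(f"Before-and-after comparison for publisher A: {pub_a - sub_a:+.2f} value added")
```

In this toy scenario, publisher A “wins” the cross-publisher comparison by a wide margin while adding exactly zero value – which is precisely why the before-and-after photos matter.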

Sense and simplicity in science

I recently finished Atul Gawande’s book The Checklist Manifesto, which I highly recommend. It’s all about how very simple measures can have profound outcomes in fields as diverse as aviation, construction, and surgery.

What struck me most about it wasn’t the author’s endorsement of using basic checklists to ensure things are done right in complex scenarios. It was Dr. Gawande’s insistence on testing the influence of everything, including a piece of paper with 4 or 5 reminders stuck to his operating theatre wall, that I found inspiring.

Why bother collecting evidence for something so apparently simple, so clearly useful, at all?

Talk of the town

Ischemic stroke, caused by the blockage of an artery in the brain by a blood clot, is as complex as anything in medicine. In fact, for such a common and debilitating illness, we have surprisingly few treatments at hand. Until recently, only two had been proven to help patients who suffered a stroke: giving them a drug that dissolves the clot and keeping them in a “stroke unit” where they receive specialised care that goes beyond what is offered in a general neurology ward.

But that all changed last year. The lectures and posters at the 2015 European Stroke Organisation conference in Glasgow, which I attended, were dominated by one thing. A new treatment for acute ischemic stroke had emerged – mechanical thrombectomy.

In the four months leading up to the conference, a number of large clinical trials had proven that this intervention worked wonderfully. Literally everyone at the conference was talking about it.

Isn’t that obvious?

Mechanical thrombectomy involves guiding a tube through a blood vessel (usually an artery in the groin) all the way up through the neck and into the brain, finding the blocked artery, and pulling out the clot. Just let that sink in for a moment. In the midst of stupendous amounts of research since the mid-90s into convoluted pathways leading to brain damage after stroke, fancy molecules that supposedly protect tissue from dying, and stem cells that we’ve banked on repairing and replacing what’s been lost, the only thing that’s worked so far is going in there and fishing out the clot. That’s all it takes.

After returning to Berlin, I told a former student of mine about the news. “Well, duh?” she responded, just a bit sheepishly. My first instinct was to roll my eyes or storm out yelling “You obviously know nothing about how science works!” But is this kind of naïveté all that surprising? Not really. Somehow we’re wired to believe that if something makes sense it has to be true (here’s a wonderful article covering this). As a scientist, do I have any right to believe that I’m different?

Science is not intuitive.

To paraphrase part of a speech given recently by Dr. Gawande, what separates scientists from everyone else is not the diplomas hanging on their walls. It’s the deeply ingrained knowledge that science is not intuitive. How do we learn this? Every single day, common sense takes a beating when put to the test of the scientific method. After a while, you just kind of accept it.

The result is that we usually manage to shove aside the temptation to follow common sense instead of the evidence. That’s the scientific method, and scientists are trained to stick to it at all costs. But we don’t always – I mean if it makes such clear and neat sense, it just has to be true, doesn’t it?

Never gonna give you up

The first few clinical trials showed that thrombectomy had no benefit to patients, which just didn’t make sense. If something is blocking my kitchen pipes, I call a plumber, they reach for their drain auger and pull it out, and everything flows nicely again. Granted, I need to do so early enough that the stagnant water doesn’t permanently damage my sink and pipes, but if I do, I can be reasonably sure that everything will be fine. But in this case, the evidence said no, flat out.

It works, I’ve seen it work and I don’t care what the numbers say.

Despite these initial setbacks, the researchers chased the evidence for the better part of a decade and spent millions of dollars on larger trials with newer, more sophisticated equipment. I wonder if what kept them going after all those disappointing results was this same flawed faith in common sense. It works, I’ve seen it work, and I don’t care what the numbers say – you hear such things from scientists pretty often.

Another important flaw in the way researchers sometimes think is that we tend to explain the outcomes of “negative” studies in retrospect, looking for mistakes far more scrupulously than we did before the studies started. I don’t mean imperfections in the technique itself (there’s nothing wrong with improving how a drug or surgical tool works and then testing it again, of course). I’m talking about things that are less directly related to the outcome of an experiment, like the way a study is organised and designed. These factors can be tweaked and prodded in many ways, with consequences that most researchers rarely fully understand. And this habit tends, in my opinion, to propagate an unjustified faith in the authority of common sense.

There’s good evidence to suggest that the earlier mechanical thrombectomy trials were in some ways indeed flawed. But I still think this example highlights nicely that the way scientists think is far from ideal. Of course, in this case, the researchers turned out to be right – the treatment made sense and works marvellously. It’s hard to overemphasise what a big deal this is for the 15 million people who suffer a stroke each year.

Deafening silence

More than a year has passed since the Glasgow conference, and this breakthrough has received little attention from the mainstream media. Keep in mind, this isn’t a small experiment on some obscure and outrageously complex intervention that showed a few hints here and there of being useful. This is an overwhelming amount of evidence proving that thrombectomy is by far the best thing to happen to the field of stroke for almost two decades. And not a peep. In fact, if you’re not a stroke researcher or clinician, you’ve probably never even heard of it.

Now, if you read this blog regularly, I know what you’re thinking: I rant a lot about how the media covers science, and now I’m complaining that they’re silent? But doesn’t it make you wonder why the press stayed away from this one? I suppose it’s extremely difficult to sell a story about unclogging a drain.

The best thing to happen to the field of stroke for almost two decades.

The thin line between art and science

Over the years, my mind has been moulded to think that when it comes to how living things work, there are no absolute truths. In medical practice, this mentality is usually revered; however, the same cannot necessarily be said about basic science. I sometimes find myself having to defend why I question everything – even the ‘facts’ I am taught come with a mental post-it note in the back of my mind reminding me that the ubiquity of exceptions is the only rule I accept. Today’s fact is tomorrow’s misguided observation, and what applies to one person may be irrelevant for the next. This post is my personal attempt to find the reason behind this difference in thinking between those who practice clinical medicine and those who do basic science.

When I was in medical school, countless professors would tell us how medicine is more art than science. We were told that, although we did not understand the difference then, we would soon realize it once we graduated and began to practice. Now, one year into my master’s course in medical neuroscience, buried under a never-ending pile of experimental data, I find myself thinking about what they said.

Clinical medicine involves much more than memorizing facts or remembering the details of drugs and diseases. It requires a special type of skill – an analytical and critical mindset that is unique to the profession. Recalling long lists of symptoms and signs is one thing, but the ability to form connections between various observations, synthesize this knowledge, and turn it into something practically useful and beneficial to patients is what lies at the heart of medical practice. Going from the first encounter with a patient to managing them as a whole is akin to solving a puzzle, and it is why medicine is often described as an art.

Experimental medicine is entirely different. I’m not saying that it lacks creativity or relies less on the analysis and criticism of the observations at hand, but the discipline is firmly based on hypothesis testing – it depends on the use of statistics to make somewhat rigid yes-or-no decisions. I am currently reading Claude Bernard’s An Introduction to the Study of Experimental Medicine, and Bernard’s representation of physiology and its study is entirely distinct from the way clinical medicine works. Observations are either present or not; they always mean something specific and can be manipulated to influence other observations in a predictable and consistent manner. Clinical medicine seems to me much more, for want of a better word, flexible – symptoms and signs are often out of place, and there are always exceptions to every rule (an idea that Bernard criticizes in his book, referring to practicing physicians in particular).

Of course, the way experimental medicine works is by simplifying everything to its most basic components – cause and effect are isolated and studied in a bubble, so to speak, excluding other factors that influence the processes under study. I wouldn’t dare criticize this approach, but it is one that has no place in medical practice. The human body is an infinitely complex interplay of systems that interact with the external environment (an idea proposed and propagated by none other than Bernard himself). Thus, it is easy to imagine that those who deal with human beings as a whole and those who aim to dissect the intricate details of how the body works in health and disease do so in completely different ways. Let’s not forget that the immediate aims of these disciplines are quite different, so it only makes sense that their approaches differ too.

The question comes up: does its complex nature make medicine an art? There are other, possibly more obvious, reasons to consider. Depending on their specialty, doctors rely to varying degrees on manual skills – be it something as simple as giving an intravenous injection or performing delicate surgery. This may be part of what places medicine in the realm of art. However, one may argue that this also applies to basic scientists – stabbing a neuron with a tiny micropipette without damaging it probably requires at least as much manual dexterity as removing a tumour from the spinal cord (I have attempted neither, but that’s my impression!). Of course, we have to consider that the stakes are higher in the latter.

Thus far, I have practiced clinical medicine for only one year, but my memories of working in the hospital bring me to the same conclusion as my former professors. Sitting in the emergency room as patient after patient piled in, what I had learned from lectures was of limited benefit. What truly made a difference was experience – the further I was into a specialty rotation, the more able I was to take appropriate action and manage patients effectively. Recalling medical facts becomes less of an issue when faced with a clinical problem – second nature kicks in and takes over, usually for the better.

Here are a few quotes that I found relevant to the topic:

“Take care not to fancy that you are physicians as soon as you have mastered scientific facts; they only afford to your understandings an opportunity of bringing forth fruit, and of elevating you to the high position of a man of art.”
“Medicine consists of science and art in a certain relationship to each other, yet wholly distinct. Science can be learned by anyone, even the mediocre. Art, however, is a gift from heaven.”
Armand Trousseau

“No human investigation can be called true science without passing through mathematical tests.”
“Those who are enamoured of practice without science are like a pilot who goes into a ship without rudder or compass and never has any certainty where he is going. Practice should always be based upon a sound knowledge of theory.”
Leonardo da Vinci (he was both a scientist and an artist)

“Even in the hands of the greatest physicians, the practice of medicine is never identified with scientific (laboratory) medicine, but is only an application of it.”
Rudolf Virchow

“The main cause of this unparalleled progress in physiology, pathology, medicine and surgery has been the fruitful application of the experimental method of research.”
William H. Welch