Why most of “clinical” imaging research is methodological – and why that’s OK

When people ask me what kind of research I did during my PhD (and indeed what kind I do now), I tell them I did MRI methods research. But what I do is very different to the image that comes up in people’s minds when I tell them this. I don’t build radiofrequency coils for MRI scanners, nor do I write MR sequences or even develop new analysis methods. I spent the majority of my PhD making small, incremental changes to the way MRI data is acquired and analyzed, and then testing how these changes affect how we measure certain things in the brain.

This type of research exists within what I consider a spectrum of phases of clinical research (NB: this has nothing to do with the phases of clinical trials of interventions – it’s only “clinical” in the sense that the research is being done on humans):

1. New methods are developed

2. They are validated by testing how well they perform in certain settings, and improvements are made accordingly (followed by more validation).

3. Then, when they’re good and ready (this can take years), they’re used to answer clinical or biological questions.

People often neglect the second phase – the validation, improvement, and re-validation. It’s sometimes completely overlooked, but arguably the bigger problem is that it’s often conflated with the third – the testing of clinical or biological hypotheses. The line between these phases is often blurred, and when, as a researcher, you try to emphasize the separation of the two, it’s considered pedantic and dull.

Several types of scenarios exist – for example, you can have a method that measures a phenomenon X in a different way to an established method, or you can have an operationalized measurement of phenomenon X (i.e. an indirect measurement, almost like a surrogate marker). The key question has to be: am I measuring what I think I’m measuring? This can be answered by directly comparing the performance of the new method to a more established one, or by testing whether the method gives you the results you would expect in a biological or clinical situation that has been previously well studied and described.
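Just to illustrate the first option – the direct comparison against an established method – here’s a minimal sketch in Python of a Bland-Altman-style agreement check. Everything in it is my own illustrative assumption (the cortical-thickness scenario, the numbers, the function name); it’s a sketch of the general idea, not a recipe from any particular study:

import numpy as np

def bland_altman(new_method, established, n_sd=1.96):
    # Per-subject disagreement between the two paired measurements
    diff = np.asarray(new_method, dtype=float) - np.asarray(established, dtype=float)
    bias = diff.mean()                    # systematic offset of the new method
    half_width = n_sd * diff.std(ddof=1)  # half-width of ~95% limits of agreement
    return bias, (bias - half_width, bias + half_width)

# Hypothetical example: cortical thickness (mm) measured by both methods in 20 subjects
rng = np.random.default_rng(0)
truth = rng.normal(2.5, 0.2, 20)
established = truth + rng.normal(0, 0.05, 20)
new = truth + 0.1 + rng.normal(0, 0.05, 20)  # suppose the new method reads 0.1 mm high

bias, (low, high) = bland_altman(new, established)
print(f"bias = {bias:.3f} mm; 95% limits of agreement = ({low:.3f}, {high:.3f}) mm")

A small bias with tight limits of agreement supports the claim that the two methods measure the same thing; a large or variable disagreement sends you back to the improvement-and-revalidation loop.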

For the record, I think the second option, although indirect, is completely valid – taking a method that’s under development and testing if it reflects a well-established biological phenomenon (if that’s what it’s meant to reflect) – that still counts as validation (I’ve done this myself on several occasions – e.g. here, here, and here). But the key thing is that it has to be well-established. Expecting a method you’re still trying to comprehensively understand to tell you something completely – or even mostly – new makes no sense.

Unfortunately, that’s often what’s expected of this kind of research. It’s expected from the researchers doing the work themselves, from their colleagues and/or supervisors, from manuscript reviewers, as well as from funding agencies. The reason is simple (albeit deeply misguided), and it confronts researchers working on improving and validating methods in clinical research all the time: people want you to tell them something new about biology or pathophysiology. They very often don’t want to hear that you’re trying to reproduce something established, even if it is with a new method that might be better for very practical reasons (applicability, interpretability, etc.). This has presented itself to me over the years in many ways – reviewers bemoaning that my studies “provide no new biological insights” or well-meaning colleagues discouraging me from writing my PhD dissertation in a way that makes it sound “purely methodological” (“you need to tell the reader something new, something previously unknown”).

The irony is that, in the years I’ve spent doing (and reading) imaging research, I’ve become fairly convinced that the majority of clinical imaging studies should fall into the second category mentioned above. However, such work is often mixed up with, and presented as though it belongs to, the third category. Researchers use new method Y to test a “novel” hypothesis, and interpret the results assuming (without proper evidence) that method Y is indeed measuring what it’s supposed to. I notice this when I read papers – the introduction talks about the study as if its aim were to test and validate method Y, and the discussion ends up focusing on all the wonderful new insights we’ve learned from the study.

To be clear, I’m in no way saying that the ultimate goal of a new method shouldn’t be its use in studies of the third category. Validate, improve, validate, then apply with the hope of learning something new – that should clearly be the goal of any new method meant for clinical use. But we shouldn’t expect both to be done simultaneously. Instead, we need to acknowledge the clear separation between the types of clinical research and their respective goals, and to recognize that not all research is new and exciting in terms of what it tells us about biology or pathophysiology.

Sense and simplicity in science

I recently finished Atul Gawande’s book The Checklist Manifesto, which I highly recommend. It’s all about how very simple measures can have profound outcomes in fields as diverse as aviation, construction, and surgery.

What struck me the most about it wasn’t the author’s endorsement of using basic checklists to ensure things are done right in complex scenarios. Instead, it’s Dr Gawande’s insistence on testing the influence of everything, including a piece of paper with 4 or 5 reminders stuck to his operating theatre wall, that I found inspiring.

Why bother collecting evidence for something so apparently simple, so clearly useful, at all?

Talk of the town

Ischemic stroke, caused by the blockage of an artery in the brain by a blood clot, is as complex as anything in medicine. In fact, for such a common and debilitating illness, we have surprisingly few treatments at hand. Until recently, only two had been proven to help patients who suffered a stroke: giving them a drug that dissolves the clot and keeping them in a “stroke unit” where they receive specialised care that goes beyond what is offered in a general neurology ward.

But that all changed last year. The lectures and posters at the 2015 European Stroke Organisation conference in Glasgow, which I attended, were dominated by one thing. A new treatment for acute ischemic stroke had emerged – mechanical thrombectomy.

In the four months leading up to the conference, a number of large clinical trials had proven that this intervention worked wonderfully. Literally everyone at the conference was talking about it.

Isn’t that obvious?

Mechanical thrombectomy involves guiding a tube through a blood vessel (usually an artery in the groin) all the way up through the neck and into the brain, finding the blocked artery, and pulling out the clot. Just let that sink in for a moment. In the midst of stupendous amounts of research since the mid-90s into convoluted pathways leading to brain damage after stroke, fancy molecules that supposedly protect tissue from dying, and stem cells that we’ve banked on repairing and replacing what’s been lost, the only thing that’s worked so far is going in there and fishing out the clot. That’s all it takes.

After returning to Berlin, I told a former student of mine about the news. “Well, duh?”, she responded, just a bit sheepishly. My first instinct was to roll my eyes or storm out yelling “You obviously know nothing about how science works!”. But is this kind of naïveté all that surprising? Not really. Somehow we’re wired to believe that if something makes sense it has to be true (here’s a wonderful article covering this). As a scientist, do I have any right to believe that I’m different?

Science is not intuitive.

To paraphrase part of a speech given recently by Dr Gawande, what separates scientists from everyone else is not the diplomas hanging on their walls. It’s the deeply ingrained knowledge that science is not intuitive. How do we learn this? Every single day common sense takes a beating when put to the test of the scientific method. After a while, you just kind of accept it.

The result is that we usually manage to shove aside the temptation to follow common sense instead of the evidence. That’s the scientific method, and scientists are trained to stick to it at all costs. But we don’t always – I mean if it makes such clear and neat sense, it just has to be true, doesn’t it?

Never gonna give you up

The first few clinical trials showed that thrombectomy had no benefit to patients, which just didn’t make sense. If something is blocking my kitchen pipes, I call a plumber, they reach for their drain auger and pull it out, and everything flows nicely again. Granted, I need to do so early enough that the stagnant water doesn’t permanently damage my sink and pipes, but if I do, I can be reasonably sure that everything will be fine. But in this case, the evidence said no, flat out.


Despite these initial setbacks, the researchers chased the evidence for the better part of a decade and spent millions of dollars on larger trials with newer, more sophisticated equipment. I wonder whether what kept them going after all those disappointing results was this same flawed faith in common sense. It works, I’ve seen it work and I don’t care what the numbers say – you hear such things from scientists pretty often.

Another important flaw in the way researchers sometimes think is that we tend to explain the outcomes of “negative” studies in retrospect, looking for mistakes far more scrupulously than we did before the studies started. I don’t mean imperfections in the technique itself (there’s nothing wrong with improving on how a drug or surgical tool works, then testing it again, of course). I’m talking about things that are less directly related to the outcome of an experiment, like the way a study is organised and designed. These factors can be tweaked and prodded in many ways, with consequences that most researchers rarely fully understand. And this habit tends, in my opinion, to propagate the unjustified faith in the authority of common sense.

There’s good evidence to suggest that the earlier mechanical thrombectomy trials were in some ways indeed flawed. But I still think this example highlights nicely that the way scientists think is far from ideal. Of course, in this case, the researchers turned out to be right – the treatment made sense and works marvellously. It’s hard to overemphasise what a big deal this is for the 15 million people who suffer a stroke each year.

Deafening silence

More than a year has passed since the Glasgow conference, and this breakthrough has received little attention from the mainstream media. Keep in mind, this isn’t a small experiment on some obscure and outrageously complex intervention that showed a few hints here and there of being useful. This is an overwhelming body of evidence proving that thrombectomy is by far the best thing to happen to the field of stroke in almost two decades. And not a peep. In fact, if you’re not a stroke researcher or clinician, you’ve probably never even heard of it.

Now, if you read this blog regularly, I know what you’re thinking: I rant a lot about how the media covers science, and now I’m complaining that they’re silent? But doesn’t it make you wonder why the press stayed away from this one? I suppose it’s extremely difficult to sell a story about unclogging a drain.


Nostalgia

One of my friends recently asked me whether I missed working with patients. He also asked if the MSc program I’m currently enrolled in at the University of Edinburgh helped remind me of what it’s like to be a doctor.
The first question was easy for me to answer – I do miss working with patients. Research is great, and to this day I stand by the reasons why I chose to pursue a few years of research-based training after finishing my primary medical qualifications. However, bedside practice is irreplaceable and unique. For me, it’s the perfect mix of challenge and reward. I’m enjoying all the new things I’m learning about neuroscience, but I still look forward to the day I hang my stethoscope around my neck and return to the wards and clinic once more.
As for the second question, I had to think about that for a little while. It’s true that I joined the program in Edinburgh to keep my clinical medicine knowledge up to date, but I truly feel that it has surpassed my expectations. To be completely honest, before joining the program I had thought (like many other people do) that ‘part-time’ and ‘flexible’ translated to ‘easy’. In fact, the opposite is true. Not only does a part-time program like this require an enormous amount of motivation and time management, but it also tests an individual’s resilience.
I frequently find myself solving cases at three in the morning – just like I used to do while I was a medical intern – after a long day of lectures or lab work. Now, I don’t know if the program is designed to work in this way, but this kind of pressure does in fact remind me of what it’s like to be a doctor. Most, if not all, of my colleagues enrolled in the program at Edinburgh are practicing clinicians, so perhaps they don’t see the program the same way as I do. Being a doctor is a challenge, and working under pressure is a part of everyday life for any clinician. I think the program does a great job of reminding me of that.

The thin line between art and science

Over the years, my mind has been moulded to think that when it comes to how living things work, there are no absolute truths. In medical practice, this mentality is usually revered; however, the same cannot necessarily be said about basic science. I sometimes find myself having to defend why I question everything – even the ‘facts’ I am taught come with a mental post-it note in the back of my mind reminding me that the ubiquity of exceptions is the only rule I accept. Today’s fact is tomorrow’s misguided observation, and what applies to one person may be irrelevant for the next. This post is my personal attempt to find the reason behind this difference in thinking between those who practice clinical medicine and those who practice basic science.

When I was in medical school, countless professors would tell us how medicine is more art than science. We were told that, although we did not understand the difference then, we would soon realize this after we graduated and began to practice. Now, one year into my master’s course in medical neuroscience, buried under a never-ending pile of experimental data, I find myself thinking about what they said.
Clinical medicine involves much more than memorizing facts or remembering the details of drugs and diseases. It requires a special type of skill – an analytical and critical mind-set which is unique to the profession. Recalling long lists of symptoms and signs is one thing, but the ability to form connections between various observations, synthesize this knowledge and turn it into something practically useful and beneficial to patients is what lies at the heart of medical practice. Akin to solving a puzzle, going from the first encounter with a patient to being able to manage them as a whole is why medicine is often described as an art.
Experimental medicine is entirely different. I’m not saying that it lacks creativity or relies less on the analysis and criticism of the observations at hand, but the discipline is firmly based on hypothesis testing – it depends on the use of statistics to make somewhat rigid yes-or-no decisions. I am currently reading Claude Bernard’s An Introduction to the Study of Experimental Medicine – Bernard’s representation of physiology and its study is entirely distinct from the way clinical medicine works. Observations are either present or not; they always mean something specific and can be manipulated to influence other observations in a predictable and consistent manner. Clinical medicine to me seems much more, for want of a better word, flexible – symptoms and signs are often out of place, and there are always exceptions to every rule (an idea that Bernard criticizes in his book, referring to practicing physicians in particular).
Of course, the way that experimental medicine works is by simplifying everything to its most basic components – cause and effect are isolated and studied in a bubble, so to speak, excluding other factors that influence the processes under study. I wouldn’t dare criticize this approach, but it is one that has no place in medical practice. The human body is an infinitely complex interplay of systems that interact with the external environment (an idea proposed and propagated by none other than Bernard himself). Thus, it is easy to imagine that those who deal with human beings as a whole and those who aim to dissect the intricate details of how the body works in health and disease do so in completely different ways. Let’s not forget that the immediate aims of these disciplines are quite different, and so it only makes sense that their approaches differ too.
The question comes up – does its complex nature make medicine an art? There are other, possibly more obvious, reasons to consider. Depending on their specialty, doctors rely to varying degrees on manual skills – be it something as simple as giving an intravenous injection or as demanding as performing delicate surgery. This may be part of what places medicine in the realm of art. However, one may argue that this also applies to basic scientists – stabbing a neuron with a tiny micropipette without damaging it probably requires at least as much manual dexterity as removing a tumour from the spinal cord (I have attempted neither, but that’s my impression!). Of course, we have to consider that the stakes are higher in the latter.
Thus far, I have only practiced clinical medicine for one year, but my memories of my time working in the hospital bring me to the same conclusion as my former professors. Sitting in the emergency room as patient after patient piled in, what I had learned from lectures was of limited benefit. What truly made a difference was my experience – the further I was into a specialty rotation, the more able I was to take appropriate action and manage patients effectively. Recalling medical facts becomes less of an issue when faced with a clinical problem – second nature kicks in and takes over, usually for the better.
Here are a few quotes that I found relevant to the topic:

“Take care not to fancy that you are physicians as soon as you have mastered scientific facts; they only afford to your understandings an opportunity of bringing forth fruit, and of elevating you to the high position of a man of art.”
“Medicine consists of science and art in a certain relationship to each other, yet wholly distinct. Science can be learned by anyone, even the mediocre. Art, however, is a gift from heaven.”
Armand Trousseau

“No human investigation can be called true science without passing through mathematical tests.”
“Those who are enamoured of practice without science are like a pilot who goes into a ship without rudder or compass and never has any certainty where he is going. Practice should always be based upon a sound knowledge of theory.”

Leonardo Da Vinci (he was both a scientist and an artist)

“Even in the hands of the greatest physicians, the practice of medicine is never identified with scientific (laboratory) medicine, but is only an application of it.”
Rudolf Virchow

“The main cause of this unparalleled progress in physiology, pathology, medicine and surgery has been the fruitful application of the experimental method of research.”
William H Welch

A prelude to Poland

In anticipation of my upcoming trip to Krakow, where I will be attending a neuroscience forum (NEURONUS 2013), I thought I would post about the rich history of medicine in Poland.
While I am tempted to begin with Copernicus (1473-1543), perhaps a slightly more recent review of Polish achievements in medicine is more relevant. In my field of neurology, two names immediately come to mind when Poland is mentioned – the German psychiatrist Alois Alzheimer, who spent the later stages of his career at the University of Breslau (now Wroclaw), and the French neurologist of Polish descent Joseph Babinski, who described the abnormal plantar reflex occurring after damage to the pyramidal tract. Others who are more intimately linked to Poland are the neurophysiologist Napoleon Cybulski, who discovered adrenaline, and Samuel Goldflam, who helped describe the autoimmune neuromuscular disorder myasthenia gravis in the late nineteenth century. Goldflam studied under the neurology legends Carl Friedrich Otto Westphal (German) and Jean-Martin Charcot (French) but spent most of his life in Warsaw. Edward Flatau is another name worth mentioning; he studied in Moscow under such great names as Sergei Korsakoff and worked with the famed anatomist Heinrich von Waldeyer-Hartz. Flatau made major contributions to our knowledge of migraines, the spinal cord, and pediatric neurology.
Although I always unconsciously tend to make things all about neurology, Poland’s contribution to medicine extends far and wide across all disciplines. A few of the most noteworthy pioneers include Albert Sabin (a Polish-born American), who developed the now widely used oral polio vaccine; Andrew Schally (also a Polish-born American), who received the Nobel Prize in Medicine for his work on peptide hormones in the brain (he received an honorary doctorate from the Jagiellonian University, which is hosting the NEURONUS forum); and Tadeusz Krwawicz, an ophthalmologist who pioneered the use of cryosurgery in cataract extraction.
So, my next post will (hopefully) be from the exciting city of Krakow! 🙂

Introspection: How can neurology help anyone?

Neurology. An infinitely fascinating yet somewhat frustrating field of medicine. People often see neurologists as doctors who spend a lot of time thinking about what could be wrong with the patient, analyzing signs and symptoms and localizing the site of disease clinically with impressive accuracy, only to have little they can actually do to treat what they’ve just discovered. Thinkers but not doers – that’s the general view among other doctors, and sometimes among patients too. The question is, can you blame them? Even though neurology is the field of medicine I have chosen to dedicate my life to, skimming through a neurology textbook or spending the day on the neurology ward can be depressing.
Now, I’m definitely no neurology expert, but it seems there are simply no curable neurological diseases. That’s OK, an optimist may say, since curable diseases in medicine are few overall – infectious diseases aside, of course. Endocrinology, cardiology, rheumatology – whichever specialty you pick, fully curable diseases are few and far between. But how about treatment? Alleviating patients’ symptoms and reducing the progression of disease are certainly not the same as cures, but at least these other specialties can say they do something to help. We don’t even have that in neurology.
Perhaps it’s the general complexity of the brain, and how little we know about the way it works, that contributes to the lack of effective treatments to tackle its diseases. It’s an attractive idea to blame this; however, the amount of knowledge gained about the brain over the past few decades is vast, and the resulting advances in treatment have been relatively modest. It was once thought that the central nervous system cannot regenerate, and thus what is lost is lost forever. We now know that that’s not entirely true, or at least that it’s not as simple as previously thought.
Ask a neurologist and they will say – how about stroke? Hmmm, I thought so too until recently. In medical school I was taught that thrombolysis (dissolving the blood clot that is blocking the brain vessels and essentially starving the dying brain tissue) is a revolutionary and effective treatment for stroke. Digging a little deeper, it seems that’s not really the case. About 4% of people with stroke actually get thrombolytic therapy, once you sift out all the people who don’t receive it because they came to the hospital too late or because they have one or more of the many contraindications to the treatment. Four percent. The number of people you need to treat for one patient to benefit in a certain way from the treatment (calculated from early studies) seems to be around eight. Eight people for one person to benefit. Out of four percent. That’s really sad.
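To make that concrete, here’s a back-of-the-envelope calculation using the rough figures above (both of which are approximations): out of every 1,000 people who suffer a stroke, about 4% – that’s 40 people – actually receive thrombolysis. With a number needed to treat of around eight, roughly 40 / 8 = 5 of them benefit in that certain way. That’s 5 in every 1,000 stroke patients, or 0.5%.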
Please don’t get me wrong – I appreciate the efforts that were made by others for us to reach even this modest benefit from stroke treatment. It is, after all, better than nothing. But this is a recurring theme in neurology – treatments that are riddled with unwanted effects, or that are simply not good enough in terms of combatting the disease. Parkinson’s disease? Drugs such as L-dopa that quickly lose their ability to improve symptoms, and eventually cause effects that can be worse than the disease itself. Multiple sclerosis? No standard therapy that reduces the number of attacks over a long period (natalizumab is an exception, but it’s complicated – very complicated). I could go on.
Now, I’m not a neurologist and, like I said, I’m therefore certainly no expert on the matter, but this is my general impression. I’m sure lots of people far more experienced than I am can challenge me on these observations, but there’s no denying that neurology is far behind its fellow specialties in treating its maladies. Anyway, I would hate to point out faults without talking about how we can perhaps change this in the future.
What needs to change? I’m not really sure, but it’s interesting to think about it. More research? Neuroscience can’t really claim to be a neglected field of research these days, but it had been for a long time. Maybe that’s why it’s so far behind. Do we need more research, just to make up for lost time? Perhaps the complexity of the brain in itself demands more of our attention. If that’s the answer, or part of it at least, then it seems I made the right choice by choosing to supplement my career in clinical medicine with scientific research. Do we need more neurologists or more researchers? Or do we need more neurologists who do a considerable amount of research? What can I, having chosen this career path, do to help?
It’s a cliché to say that I became a doctor to help people. I think it has even become a cliché to state that it’s a cliché. The truth is, that was a big part of my decision to enter medical school. Among other things (scientific curiosity, respectability, etc), the thought of helping people is perhaps the most rewarding and motivating aspect for everyone in the medical field. Treating patients and watching as they improve, sometimes dramatically, is something that every doctor needs in order to cope with the harsh mental and physical demands of our jobs.
The question now arises – why am I preoccupied with this? Am I losing faith in my career before it actually begins? Am I assuming prematurely that a life in neurology will be unrewarding and disheartening? I shudder at the thought of someone reading this blog years from now and using it as a reason not to give me that neurology training job (assuming that anyone at all reads it, of course!). I would like to think, however, that this introspection on my part will make me better at my job. After all, logic dictates that the combination of curiosity and an appreciation of the faults of the status quo represents fertile ground upon which to strive for improvement. At least that’s what I hope.

Nature’s role in modern medicine

Whether as patients or healthcare workers, it’s easy to overlook the origins of the drugs used to treat common diseases. In the era of recombinant technology and generally complex ways to design, test, and use medicines, it’s refreshing when we come across a drug that is derived from nature in a simple yet brilliant way. There are countless examples of such stories in the scientific literature, as well as in the mainstream media. Below are a few that I found particularly memorable.
I distinctly remember being asked a question during my pharmacology final oral exam in medical school. It was the very last question the examiner asked me, and I was taken slightly aback by how simple it was and how little I had thought about it before. He had grilled me for a good fifteen minutes about angiotensin converting enzyme (ACE) inhibitors, a class of drugs used for treating hypertension among other conditions. Then, as I felt the exam drawing to a close, he asked: ‘How were ACE inhibitors discovered?’. Now, for a pharmacy student this question would be no problem at all, as they spend a lot of their time studying how drugs are derived and synthesized. For me, however, preoccupied with a wide range of subjects – the clinical aspects of drug use being just one of them – it was something I had hardly ever thought about.
ACE inhibitors were derived in the 1960s from the venom of the Brazilian pit viper, Bothrops jararaca. The venom leads to a severe drop in blood pressure by blocking the renin-angiotensin-aldosterone system. It is noteworthy that this selective mechanism of action means that ACE inhibitors may not lower blood pressure effectively in everyone. However, the drugs have several other benefits, including protecting the kidneys in diabetes and improving heart function in patients with heart failure.
Another interesting drug discovery story is that of exenatide, an antidiabetic agent licensed for use in 2005. This drug was isolated from the saliva of a lizard (the Gila monster) and has been shown to stimulate insulin release from the pancreas. Unlike other drugs with the same action, exenatide only increases insulin secretion when glucose levels are high and therefore does not lead to hypoglycemia. It also has numerous other beneficial effects, including promoting weight loss.
My favorite, however, is the story behind a new thrombolytic treatment for stroke. The drug, now called desmoteplase, is derived from the saliva of the vampire bat Desmodus rotundus. It is still in the testing phases of development (phase III trials) but has already shown great promise. It stays in the body for a longer time than other thrombolytics, is more selective in its action, and does not lead to neurotoxicity. It may represent a breakthrough in the treatment of stroke, which is currently a highly debated and complicated issue.


Complexities

A clinician is complex. He is part craftsman, part practical scientist, and part historian.

A quote by Thomas Addis, a pioneer in the field of nephrology, who was born in Edinburgh and studied medicine there, as well as at the Charité in Berlin. One of his major contributions to clinical medicine was his emphasis on examining patients’ urine both with the naked eye and microscopically – which is now standard practice.