Why most of “clinical” imaging research is methodological – and why that’s OK

When people ask me what kind of research I did during my PhD (and indeed what kind I do now), I tell them I did MRI methods research. But what I do is very different to the image that comes up in people’s minds when I tell them this. I don’t build radiofrequency coils for MRI scanners, nor do I write MR sequences or even develop new analysis methods. I spent the majority of my PhD making small, incremental changes to the way MRI data is acquired and analyzed, and then testing how these changes affect how we measure certain things in the brain.

This type of research exists within what I consider a spectrum of phases of clinical research (NB: this has nothing to do with the phases of clinical trials of interventions – it’s only “clinical” in the sense that the research is being done on humans):

1. New methods are developed

2. They are validated by testing how well they perform in certain settings, and improvements are made accordingly (followed by more validation).

3. Then, when they’re good and ready (this can take years), they’re used to answer clinical or biological questions.

People often neglect the second phase – the validation, improvement, and re-validation. It’s sometimes completely overlooked, but arguably the bigger problem is that it’s often conflated with the third phase – the testing of clinical or biological hypotheses. The line between these phases is often blurred, and when, as a researcher, you try to emphasize the separation of the two, it’s considered pedantic and dull.

Several types of scenarios exist – for example, you can have a method that measures a phenomenon X in a different way to an established method, or you can have an operationalized measurement of phenomenon X (i.e. an indirect measurement, almost like a surrogate marker). In either case, the key question has to be: am I measuring what I think I’m measuring? This can be answered by directly comparing the performance of the new method to a more established one, or by testing whether the method gives you the results you would expect in a biological or clinical situation that has previously been well studied and described.

For the record, I think the second option, although indirect, is completely valid – taking a method that’s under development and testing whether it reflects a well-established biological phenomenon (if that’s what it’s meant to reflect) still counts as validation (I’ve done this myself on several occasions – e.g. here, here, and here). But the key thing is that the phenomenon has to be well established. Expecting a method you’re still trying to comprehensively understand to tell you something completely – or even mostly – new makes no sense.

Unfortunately, that’s often what’s expected of this kind of research. It’s expected by the researchers doing the work themselves, by their colleagues and/or supervisors, by manuscript reviewers, as well as by funding agencies. The reason is simple (albeit deeply misguided), and researchers working on improving and validating methods in clinical research are confronted with it very often: people want you to tell them something new about biology or pathophysiology. They very often don’t want to hear that you’re trying to reproduce something established, even if it is with a new method that might be better for very practical reasons (applicability, interpretability, etc). This has presented itself to me over the years in many ways – reviewers bemoaning that my studies “provide no new biological insights” or well-meaning colleagues discouraging me from writing my PhD dissertation in a way that makes it sound “purely methodological” (“you need to tell the reader something new, something previously unknown”).

The irony is that, in the years I’ve spent doing (and reading) imaging research, I’ve become fairly convinced that the majority of clinical imaging studies should fall into the second category mentioned above. However, such studies are often mixed up with, and presented as though they belong to, the third category. Researchers use new method Y to test a “novel” hypothesis, and interpret the results assuming (without proper evidence) that method Y is indeed measuring what it’s supposed to be measuring. I notice this when I read papers – the introduction talks about the study as if its aim is to test and validate method Y, and the discussion ends up focusing on all the wonderful new insights we’ve learned from the study.

To be clear, I’m in no way saying that a new method shouldn’t ultimately be taken into studies of the third category. Validate, improve, validate, then apply with the hope of learning something new – that should clearly be the goal of any new method meant for clinical use. But we shouldn’t expect both to be done simultaneously. Instead, we need to acknowledge the clear separation between these types of clinical research and their respective goals, and to recognize that not all research is new and exciting in terms of what it tells us about biology or pathophysiology.

Clinical trials and the RRRR cycle


I just got back from one of the world’s largest stroke meetings, the European Stroke Organisation Conference (ESOC), held this year in Gothenburg, Sweden. The overwhelming focus of the conference is on groundbreaking large clinical trials, reports of which dominate the plenary-like sessions of the schedule. One thing I’ve noticed about talks on clinical trials is how, every year, the speakers go to great lengths to emphasize some (positive) ethical, methodological, or statistical aspect of their study.

This is the result of something that I like to call the RRRR cycle (pronounced “ARRARRARRARR” or “rrrr” or “quadruple R” or whichever way won’t scare/excite those in your immediate vicinity at that moment in time). It usually starts with loud opposition (reprimand) to some aspect of how clinical trials are run or reported. This usually comes from statisticians, ethicists, or more methodologically inclined researchers. Eventually, a small scandal ensues, and clinical researchers yield (usually after some resistance). They change their ways (repentance) and, in doing so, become fairly vocal about what they’re now doing better (representation)*.

Examples that I’ve experienced in my career as a stroke researcher thus far are:

  • Treating polytomous variables as such instead of binarizing them (the term “shift analysis” – in the context of outcome variables – is now an indispensable part of the clinical stroke researcher’s lexicon; see the sketch after this list).
  • Pre-specifying hypotheses, especially when it comes to analyzing subgroups.
  • Declaring potential conflicts of interest.
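
To make the first of these concrete, here is a minimal sketch (in Python) of the difference between a dichotomized analysis and a “shift”-style analysis of an ordinal outcome. The simulated mRS distributions, the sample sizes, the 0–2 vs 3–6 dichotomization, and the use of a Mann–Whitney U test as a simple stand-in for a shift analysis (trials more commonly use proportional odds regression) are all assumptions of mine, purely for illustration.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical modified Rankin Scale (mRS, 0-6) outcomes for two trial arms.
    # The distributions and sample sizes are made up purely for illustration.
    control = rng.choice(np.arange(7), size=300,
                         p=[.05, .10, .15, .20, .20, .15, .15])
    treated = rng.choice(np.arange(7), size=300,
                         p=[.10, .15, .20, .20, .15, .10, .10])

    # "Traditional" dichotomized analysis: good outcome = mRS 0-2 vs 3-6.
    good_ctl, good_trt = int((control <= 2).sum()), int((treated <= 2).sum())
    table = [[good_trt, len(treated) - good_trt],
             [good_ctl, len(control) - good_ctl]]
    chi2, p_dichotomized, _, _ = stats.chi2_contingency(table)

    # "Shift"-style ordinal analysis: compare the whole mRS distribution.
    # A Mann-Whitney U test is a simple stand-in here; trials more commonly
    # use proportional odds (ordinal logistic) regression.
    u, p_shift = stats.mannwhitneyu(treated, control, alternative="two-sided")

    print(f"dichotomized p = {p_dichotomized:.3f}, shift p = {p_shift:.3f}")

The point is not the specific p-values, but that the ordinal analysis uses all seven categories of the outcome rather than throwing most of that information away.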

Most of these practices are quite fundamental and may have been standard in other fields before making their way to the clinical trial world (delays might be caused by a lack of communication across fields). Still, it’s undoubtedly a good thing that we learn from our mistakes, change, and give ourselves a subtle pat on the back every time we put what we’ve learned to use.

The reason I bring this up is that maybe soon someone** could start making some noise about one of the following issues, which in my opinion come up way too often:

  • (Mis-)interpreting p-values that are close to 0.05, and how this is affected by confirmation bias.

  • Testing whether groups are “balanced” in terms of baseline/demographic variables in trials using “traditional” statistical methods (which test for a difference) instead of equivalence testing (see the sketch below).
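
On that second point, here is a rough sketch (again in Python, with made-up data) of the difference between the usual “test for a difference” approach and an equivalence test (TOST, two one-sided tests) on a single baseline variable such as age. The simulated values, the ±2-year equivalence margin, and the pooled-variance formulation are illustrative assumptions on my part, not recommendations for any particular trial.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical baseline variable (e.g. age in years) in two randomized arms;
    # the values and the +/-2-year equivalence margin are made-up assumptions.
    arm_a = rng.normal(70, 10, size=200)
    arm_b = rng.normal(71, 10, size=200)

    # "Traditional" check: a t-test for a difference. A non-significant result
    # here does NOT show that the groups are comparable.
    t_stat, p_difference = stats.ttest_ind(arm_a, arm_b)

    # Equivalence check (TOST): two one-sided tests against a pre-specified
    # margin; equivalence is claimed only if BOTH one-sided tests are significant.
    low, upp = -2.0, 2.0                       # equivalence margin in years
    n1, n2 = len(arm_a), len(arm_b)
    diff = arm_a.mean() - arm_b.mean()
    pooled_var = ((n1 - 1) * arm_a.var(ddof=1) +
                  (n2 - 1) * arm_b.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
    dof = n1 + n2 - 2
    p_lower = stats.t.sf((diff - low) / se, dof)   # H1: true diff > lower margin
    p_upper = stats.t.cdf((diff - upp) / se, dof)  # H1: true diff < upper margin
    p_equivalence = max(p_lower, p_upper)

    print(f"difference p = {p_difference:.3f}, TOST p = {p_equivalence:.3f}")

A non-significant difference test is routinely read as evidence of balance, which it isn’t; the TOST p-value is the one that actually addresses that question, and only relative to a margin you have to justify in advance.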

As the ESOC meeting keeps reminding me, a lot can be done in a year. So I’m pretty optimistic we can get some of these changes implemented by ESOC 2019 in Milan!


* If you think this particular acronym is unnecessary or a bit of a stretch, I fully agree. I also urge you to take a look at this paper for a list of truly ridiculous acronyms (all from clinical trials of course).

** I would, but I’m not really the type – I’d be glad to loudly bang the drums after someone gets the party started, though.


Introspection: How can neurology help anyone?

Neurology. An infinitely fascinating yet somewhat frustrating field of medicine. People often see neurologists as doctors who spend a lot of time thinking about what could be wrong with the patient, analyzing signs and symptoms and localizing the site of disease clinically with impressive accuracy, only to have little they can actually do to treat what they’ve just discovered. Thinkers but not doers – that’s the general view among other doctors, and sometimes among patients as well. The question is, can you blame them? Neurology is the field of medicine to which I have chosen to dedicate my life, and yet skimming through a neurology textbook or spending the day on the neurology ward can be depressing.
Now, I’m definitely no neurology expert, but it seems there are simply no curable neurological diseases. That’s ok, an optimist may say, since curable diseases in medicine are few overall. Infectious diseases aside of course. Endocrinology, cardiology, rheumatology, whichever specialty you pick, fully curable diseases are few and far between. But how about treatment? Alleviating patient symptoms and reducing the progression of disease are certainly not the same as cures, but at least these other specialties can say they do something to help. We don’t even have that in neurology.
Perhaps it’s the general complexity of the brain, and how little we know about the way it works, that contribute to the lack of effective treatments for its diseases. It’s an attractive explanation; however, the amount of knowledge gained about the brain over the past few decades is vast, and the resulting advances in treatment have been relatively modest. It was once thought that the central nervous system cannot regenerate, and thus that what is lost is lost forever. We now know that that’s not entirely true, or at least that it’s not as simple as previously thought.
Ask a neurologist and they will say – how about stroke? Hmm, I thought so too until recently. In medical school I was taught that thrombolysis (dissolving the blood clot that is blocking a brain vessel and essentially starving the dying brain tissue) is a revolutionary and effective treatment for stroke. Digging a little deeper, it seems that’s not really the case. Only about 4% of people with stroke actually get thrombolytic therapy, once you sift out all the people who don’t receive it because they came to the hospital too late or because they have one or more of the many contraindications to the treatment. Four percent. The number of people you need to treat for one patient to benefit in a certain way from the treatment (calculated from early studies) seems to be around eight. Eight people for one person to benefit. Out of four percent. That’s really sad.
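To put those two rough numbers together (a back-of-the-envelope calculation using the approximate figures above, not a precise epidemiological estimate):

    # Back-of-the-envelope arithmetic with the rough figures quoted above.
    treated_fraction = 0.04        # ~4% of stroke patients receive thrombolysis
    nnt = 8                        # ~8 treated patients for 1 to benefit
    benefit_if_treated = 1 / nnt   # ~12.5% of treated patients benefit
    overall_benefit = treated_fraction * benefit_if_treated
    print(f"{overall_benefit:.1%} of all stroke patients benefit")  # ~0.5%

Under those assumptions, roughly one in two hundred stroke patients ends up benefiting from the treatment.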
Please don’t get me wrong, I appreciate the efforts that were made by others for us to reach even this modest benefit from stroke treatment. It is, after all, better than nothing. But this is a recurring theme in neurology – treatments that are riddled with unwanted effects, or are simply not good enough in terms of combatting the disease. Parkinson’s disease? Drugs such as L-dopa that quickly lose their ability to improve symptoms, and eventually cause effects that can be worse than the disease itself. Multiple sclerosis? No standard therapy which reduces the number of attacks over a long period (Natalizumab is an exception, but it’s complicated – very complicated). I could go on.
Now, I’m not a neurologist and, like I said, certainly no expert on the matter, but this is my general impression. I’m sure lots of people far more experienced than I am can challenge me on these observations, but there’s no denying that neurology is far behind its fellow specialties in treating its maladies. Anyway, I would hate to point out faults without talking about how we can perhaps change this in the future.
What needs to change? I’m not really sure, but it’s interesting to think about it. More research? Neuroscience can’t really claim to be a neglected field of research these days, but it had been for a long time. Maybe that’s why it’s so far behind. Do we need more research, just to make up for lost time? Perhaps the complexity of the brain in itself demands more of our attention. If that’s the answer, or part of it at least, then it seems I made the right choice by choosing to supplement my career in clinical medicine with scientific research. Do we need more neurologists or more researchers? Or do we need more neurologists who do a considerable amount of research? What can I, having chosen this career path, do to help?
It’s a cliché to say that I became a doctor to help people. I think it has even become a cliché to state that it’s a cliché. The truth is, that was a big part of my decision to enter medical school. Among other things (scientific curiosity, respectability, etc), the thought of helping people is perhaps the most rewarding and motivating aspect for everyone in the medical field. Treating patients and watching as they improve, sometimes dramatically, is something that every doctor needs in order to cope with the harsh mental and physical demands of our jobs.
The question now arises – why am I preoccupied with this? Am I losing faith in my career before it has actually begun? Am I assuming prematurely that a life in neurology will be unrewarding and disheartening? I shudder at the thought of someone reading this blog years from now and using it as a reason not to give me that neurology training job (assuming that anyone at all reads it, of course!). I would like to think, however, that this introspection on my part will make me better at my job. After all, logic dictates that the combination of curiosity and an appreciation of the faults of the status quo is fertile ground upon which to strive for improvement. At least that’s what I hope.