So until a few years ago, I reluctantly admit, I was one of those who read the title, abstract and intro, skimmed quite diagonally through the methods and results, and only started paying attention at the discussion. On one hand I got to read a lot more articles per unit time, but my analysis was rudimentary at best…
That was until a colleague, good friend, judoka and microbiologist extraordinaire, Peter Barriga, started to shine some light into my epidemiological darkness while teaching me some judo. So while waiting for his textbook to come out, here are a few principles I've found very interesting and revealing, but also somewhat frightening: in part, they explain the lack of strength and consistency found in much of the medical literature…
So let's look at publication bias. This refers to the likelihood that a study will be published in a major medical journal. Not surprisingly, journals are generally more interested in positive studies than negative ones. After all, who would want to read a journal where more than half the studies concluded with "well, this didn't work…"? It would feel like a waste of reading time.
Now, let's look at our beloved p value, a threshold (0.05) we are culturally in love with. What does it really mean? Roughly, that there is a 1 in 20 or smaller chance of seeing a result this extreme if the drug actually does nothing. So say a popular drug for sepsis is studied by 20 teams: even if the drug truly has no effect, the same study done 20 times could yield 1 "positive" and 19 negative results – by chance alone.
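To make that concrete, here is a minimal sketch in Python (a hypothetical illustration with made-up numbers, not data from any actual sepsis trial): it simulates 20 independent trials of a drug with zero true effect and counts how many come out "significant" at p < 0.05 purely by chance.

```python
# Hypothetical simulation: 20 teams each run a trial of a drug with NO
# real effect. How many trials cross p < 0.05 by chance alone?
import random
from math import sqrt, erf
from statistics import mean, stdev

random.seed(1)  # fixed seed so the sketch is reproducible

def two_sample_p(a, b):
    """Approximate two-sided p-value for a difference in means (z-test)."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = abs(mean(a) - mean(b)) / se
    # Standard normal survival function via erf, doubled for two sides
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

false_positives = 0
for trial in range(20):  # 20 independent teams, same useless drug
    control = [random.gauss(0, 1) for _ in range(100)]
    treated = [random.gauss(0, 1) for _ in range(100)]  # identical distribution: no effect
    if two_sample_p(control, treated) < 0.05:
        false_positives += 1

print(f"'Positive' trials out of 20: {false_positives}")  # about 1 expected by chance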
The question then becomes: which study gets picked by a big journal for publication? One of the 19 negative studies, or… the positive one?
Fortunately, in the information age, study registries now exist where all studies – including negative ones – can be found, so that anyone interested enough in a particular topic can dig up all the data and form an accurate assessment. But is that the case for most physicians? Or do most just pick up the big titles in the big journals…?
Hmmm… So I think it is incumbent on all of us to examine the main things we do in our practice and make sure we have carefully looked at the available data surrounding them, rather than just blindly applying guidelines, recipes, or whatever our seniors and mentors are doing or have shown us.
More to come on how to make our practice GEBM (good evidence-based medicine) rather than just EBM…
Philippe
Philippe,
You have highlighted an extremely important issue.
You may be interested in this campaign I am supporting: http://www.alltrials.net/
Absolutely, Marco, and thanks for the link! Open access to all trials, especially "negative" ones, is extremely important!
P
The big journals do seem to like negative studies when they are contrary to current practice; so what they really like are studies that may change practice. That is why systematic reviews are important tools, and why we should not just go by the last study published. The critical p-value is also a myth: the level of significance should be evaluated in the context of the risk/harm-to-benefit ratio if that result were to be adopted.
Yes! Excellent comment, thank you for clarifying: indeed, they look for "game-changers" rather than simply positive or negative results. Systematic reviews are excellent so long as they can include data that was not necessarily published – e.g. via http://www.alltrials.net – so as not to simply re-emphasise the bias.
The use of the risk/harm to benefit ratio is critical indeed.