Andrew Althouse
Apr 30, 2018 · 76 tweets
OK, folks. Let’s all sit down for a quick chat about this article the “Annals of Medicine” coughed up a few weeks ago about RCT’s: tandfonline.com/doi/full/10.10…
This will be my longest TWEETORIAL yet, but that’s because we have a LOT to talk about here
Several others have discussed this recently on Twitter and I welcome their thoughts as well (@statsepi, @stephensenn, @briandavidearp, @Prof_Livengood, @learnfromerror)
I suspect most of them would prefer not to give this article a second thought other than “Ugh, these tired old arguments again?”
But I do worry about the influence of a poorly-informed article with a flashy title (“Why all randomised controlled trials produce biased results”) going viral with the like-n-share treatment
And when I see this getting shared / covered / promoted by a credible source like BMJ
I’d like to issue some clarifications for those who may not spot the issues quite so easily
Doug Altman and @stephensenn already issued a brief comment in the BMJ, hoping to minimize the damage (bmj.com/content/361/bm…)
I’d like to expand on their response, though, and hopefully disseminate this across another audience of readers in #medtwitter
Most of what I say here will not be original content. Several of the foremost statisticians and trialists have written about most of this before.
One would hope that an LSE post-doc writing an article with such a definitive-sounding title would have read something of the history of the subject
Before we go further, let me say this: I do not know the author. He is probably a very nice person. Most people are. He is also probably a very smart person. I am not under the impression that LSE is a factory of dimwits
Unfortunately, being a very nice person, and even being a very smart person, does not guarantee that one is sufficiently qualified to offer an intelligent opinion on all subjects
And in this particular case, it appears that the author has stepped outside his particular bounds of expertise, and in doing so has written a potentially inflammatory and damaging article based on misguided beliefs about the conduct and intent of RCT’s
So let’s go through a few key points:
NO ONE HAS EVER SAID THIS. WE ALL KNOW THAT TRIALS HAVE ASSUMPTIONS, BIASES AND LIMITATIONS!
Also: this is the “first study to examine that hypothesis”
Um, I am sorry to burst your bubble, but people have written about trial methodology once or twice before
The study identifies a number of “novel and important assumptions, biases and limitations not yet thoroughly discussed in existing studies”
Protip to all you kids out there
If you want to write an article with a provocative title in a major research area, it’s usually a good idea to read one or two things about that research area
Again: NO ONE SAYS TRIALS ARE EXEMPT FROM THESE THINGS! If anything, trialists are MORE STRICT about this stuff than anyone
Oh, good, let’s pick 10 trials and use that to make a sweeping conclusion about “Why all randomized controlled trials produce biased results”
Again, I think we’re arguing a strawman here. EVERYONE involved in trials thinks that their strengths and limitations should be carefully scrutinized and discussed.
Yes, we know randomization is largely infeasible for answering some scientific questions. That’s not an argument against using it where we CAN answer specific questions
It’s almost like the author hasn’t read anything about clinical trials since the 1990’s
But hey, why actually READ about such innovative clinical trials when you’re busy writing a piece claiming they can’t do something?
Kind of got a point here. But any experienced trialist knows to comment on this. Trial findings ALWAYS need to be kept in appropriate context of the patients/population that were actually enrolled in the trial
Oh dear. So many people believe this is a thing. Folks, this isn’t a thing. Take it from @stephensenn
No, randomization is not guaranteed to ensure a balanced distribution. Nor is that a condition for valid statistical inference from trials.
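To make that concrete, here is a minimal simulation sketch (my own illustration, not from the thread): within any single 1:1 randomized trial, a baseline covariate shows visible chance imbalance, yet across repeated trials the treatment-effect estimate is unbiased — exact balance was never the requirement for valid inference.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n=100):
    """Randomize n patients 1:1; return the baseline-age imbalance
    and the estimated treatment effect (true effect = 0)."""
    age = rng.normal(60, 10, n)                 # baseline covariate
    arm = rng.permutation(n) < n // 2           # simple 1:1 randomization
    outcome = 0.1 * age + rng.normal(0, 1, n)   # outcome depends on age; no true treatment effect
    imbalance = age[arm].mean() - age[~arm].mean()
    effect = outcome[arm].mean() - outcome[~arm].mean()
    return imbalance, effect

results = np.array([one_trial() for _ in range(2000)])
imbalances, effects = results[:, 0], results[:, 1]

# Individual trials routinely show nonzero chance imbalance in baseline age...
print(f"typical |age imbalance| in one trial: {np.abs(imbalances).mean():.2f} years")
# ...but the effect estimate is centered on the truth (zero) across trials.
print(f"mean estimated effect over 2000 trials: {effects.mean():.3f}")
```

The covariate, sample sizes, and effect structure here are arbitrary choices for illustration; the point survives any reasonable substitution.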
Oh, for the love of…re-randomization (the way it’s described here) is only possible if we have all of the units being randomized available at the same time
Raise your hand if you’ve ever been part of a trial where this was done, or even remotely feasible
Someone actually did give an example the other day; it CAN happen; it’s also EXTREMELY rare in medical trials
Remember, this is published in Annals of Medicine. And it brings up 10 trials from the field of medicine. Anyone care to guess how many of those 10 trials this would have been remotely feasible for?
The acute ischemic stroke trial enrolled from January 1991 to October 1994
Those poor people from January 1991. They’d have to wait three years to get their randomization!
That’s a long time to wait for stroke treatment
OK. This is, like, kind of half partway right. Underpowered trials can be a problem. But small trials aren’t necessarily more BIASED than large trials. They produce a less precise estimate of treatment effect. That is not the same thing as bias.
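A small simulation (again my own sketch, not the author’s) illustrates the precision-vs-bias distinction: estimates from small and large trials are both centered on the true effect, but the small-trial estimates scatter far more widely.

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.5

def estimate(n):
    """Run one two-arm trial with n patients per arm; return the effect estimate."""
    treated = rng.normal(true_effect, 1, n)
    control = rng.normal(0, 1, n)
    return treated.mean() - control.mean()

small = np.array([estimate(20) for _ in range(5000)])
large = np.array([estimate(500) for _ in range(5000)])

# Both sets of estimates are centered on the true effect: no bias either way...
print(f"mean estimate, small trials: {small.mean():.3f}")
print(f"mean estimate, large trials: {large.mean():.3f}")
# ...but the small trials are much less precise (wider spread of estimates).
print(f"SD of estimates, small: {small.std():.3f}, large: {large.std():.3f}")
```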
Um. No. Those are not the same thing. Those are not even close to the same thing.
The probability of 317 heads / 307 tails in 624 tosses is NOT EVEN CLOSE to the same calculation as the probability of the trial results described here
If you think they are the same, let me know and I’ll work out the math for each & post here
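For the coin-toss half of that comparison, the arithmetic is plain binomial (the trial-side calculation depends on the actual trial design and numbers, which aren’t reproduced here, so this is only one side of the contrast):

```python
from math import comb

# P(exactly 317 heads in 624 fair-coin tosses): a plain binomial probability,
# C(624, 317) * 0.5^624.
n, k = 624, 317
p_exact = comb(n, k) * 0.5**n
print(f"P(317 heads / 307 tails in 624 tosses) = {p_exact:.4f}")
```

Analyzing a trial’s treated-vs-control event counts is a different calculation entirely (it conditions on the trial’s design and compares two arms), which is the thread’s point.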
Yes. We should report the primary outcome and secondary outcomes. All good trials do this. If the incidence of adverse events counterbalances the other benefits, it will be commented on. Straw man.
Um. Some trials are “efficacy” trials (does this work under a tightly controlled setting) while some are “effectiveness” (does this work in the real world, the way it’s going to be used?) - and both have value.
Sigh. We have statistical methods to account for this.
While the average treatment effect is often the primary finding reported, a modern generation of trials is working on ways to provide tailored estimates of treatment effect to specific groups / patients
Further, I believe the ATE is generally transportable (as @f2harrell is fond of saying)
“Here are a bunch of other words about things vaguely related to trials”
And we’re back to this again. Sigh. Zero of the 10 trials in the, uh, “systematic review” thingy here could have done this. And, remember kids, it’s still not a necessary condition for valid statistical inference from RCT’s.
OK. Now I’ve actually exhausted the other points I had. In summary:
Trials are not unimpeachable. They are indeed subject to assumptions, biases, and limitations. They also provide the strongest evidence we have for many medical questions.
If you want to “come at” trials, please, at least use actual valid criticisms. Inventing a term and putting it in italics does not make it a valid criticism.
For a closing laugh, consider the irony of someone calling out trials for small sample sizes & problems with generalizability who wrote a paper with “all trials” in the title and included only 10 trials for critical review