Please be advised that @graemeleehickey and others are more expert than I am in the direct, real-world application of such models, but here I am, so whatever. Read it, or don’t.
Suppose you’re just reading along in the #COAPT primary paper, found here:
“Analysis of the primary effectiveness end point of all hospitalizations for heart failure was performed with a joint frailty model to account for correlated events and the competing risk of death.”
You ask “What’s a *joint frailty* model? What’s wrong with regular old Kaplan-Meier curves and Cox models?”
Well: sometimes we’re interested in the effect our intervention has on a specific endpoint, like hospitalization for heart failure (HHF), that a) patients may experience more than once and/or b) patients may cease to be at risk for because something else happened to them (e.g., death)
In traditional survival analyses with time-to-event data (Kaplan-Meier, Cox models) we follow the patient until i) they have the event or ii) are “censored” and no longer at risk for the event
Patients may be censored for several reasons: i) the study ends, ii) patient withdrawal or loss to follow-up, and most importantly iii) they experienced some competing event which means they are no longer at risk (if HHF is the primary endpoint, a patient who has died is no longer “at risk” of HHF)
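For concreteness, here is a minimal sketch of that “regular” approach in Python using the lifelines package (my choice of tool, not the paper’s); the data are made up purely for illustration:

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Toy data: time to first HHF (months); event = 1 if HHF observed,
# event = 0 if censored (study end, withdrawal, or death -- all pooled together)
df = pd.DataFrame({
    "time":    [3, 8, 12, 15, 20, 24, 24, 28, 30, 30],
    "event":   [1, 0,  1,  0,  1,  0,  0,  1,  1,  0],
    "treated": [1, 0,  1,  0,  0,  1,  1,  0,  0,  1],
})

# Kaplan-Meier curve: every censored patient is handled the same way,
# under the assumption that censoring is non-informative
km = KaplanMeierFitter().fit(df["time"], df["event"])

# Cox proportional hazards model for the treatment effect (first event only)
cox = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cox.print_summary()
```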
“Doesn’t a regular survival analysis account for censoring though?”
Yes, but with an assumption of *non-informative* censoring, meaning that the censoring is not related to the probability of experiencing the event
With an outcome like “hospitalization for heart failure” there is a difference between a patient censored because they died (no longer at risk for HHF) versus a patient censored because the study ended (also no longer at risk for HHF, but with different implications than “death”)
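To see why that matters, compare treating death as ordinary censoring with treating it as a competing event. A small sketch, again with lifelines and invented data (the Aalen-Johansen fitter is the usual non-parametric competing-risks estimator; COAPT itself used the joint frailty model discussed below, not this):

```python
import pandas as pd
from lifelines import KaplanMeierFitter, AalenJohansenFitter

# Made-up data: event codes 0 = censored (study end / withdrawal), 1 = HHF, 2 = death
durations  = pd.Series([3, 5, 8, 10, 12, 15, 18, 20, 24, 30])
event_type = pd.Series([1, 2, 1,  0,  2,  1,  0,  2,  1,  0])

# Option A: treat death as if it were ordinary censoring (Kaplan-Meier)
km = KaplanMeierFitter().fit(durations, event_type == 1)

# Option B: treat death as a competing event and estimate the
# cumulative incidence of HHF (Aalen-Johansen)
aj = AalenJohansenFitter().fit(durations, event_type, event_of_interest=1)

# Comparing 1 - KM survival with the Aalen-Johansen cumulative incidence
# shows that the KM approach overstates the probability of HHF
# whenever competing deaths occur.
```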
Also, the aforementioned “regular” survival analysis only follows a patient until their first event, but that can throw away a lot of useful information & doesn’t capture the full burden when the endpoint is something like HHF (which can happen multiple times to some patients)
“What about Andersen-Gill models…” – KEEP YOUR SHIRT ON, we’ll get there
Anyways, that’s the groundwork for why we need something a little different than your traditional Kaplan-Meier-curves-with-a-regular-old-Cox-model approach for the #COAPT primary endpoint
Now, I’ll paraphrase (er, copy) a few tweets from @graemeleehickey describing the joint frailty model
We begin with a recurrent events model (Andersen-Gill) with a frailty term (random effect)
[important: the random effect / frailty term allows you to model correlations between events of the same patient by using a random component for the hazard function]
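To make the recurrent-events piece concrete: Andersen-Gill is essentially a Cox model fit to “start-stop” (counting-process) data, so a patient contributes one row per at-risk interval. A rough Python sketch with lifelines’ CoxTimeVaryingFitter, which handles the start-stop layout; note it does NOT include the frailty term or a model for death (those are exactly what the joint frailty model adds, and in practice you’d reach for something like R’s frailtypack for that):

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Counting-process layout: a patient with two HHFs and follow-up to 30 months
# contributes three rows: (0, 5], (5, 11], (11, 30]
ag = pd.DataFrame({
    "id":      [1, 1,  1, 2, 2,  3,  4],
    "start":   [0, 5, 11, 0, 9,  0,  0],
    "stop":    [5, 11, 30, 9, 30, 30, 22],
    "event":   [1, 1,  0, 1, 0,  0,  1],   # 1 = HHF at the end of the interval
    "treated": [1, 1,  1, 0, 0,  1,  0],
})

# Andersen-Gill-style fit: Cox partial likelihood over all intervals,
# so every HHF counts, not just the first one
ctv = CoxTimeVaryingFitter().fit(
    ag, id_col="id", event_col="event", start_col="start", stop_col="stop"
)
ctv.print_summary()
```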
We also have a second model for the failure process (death) that includes the frailty term
The models are linked by sharing the random effect. It is a joint model for 2 separate but correlated event processes.
Does that confuse you? Probably! I don’t really have a better way to explain it without lots of mathiness and symbols. If anyone else does, I am delighted to hear/see it.
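For the symbol-tolerant, here is roughly what one common parameterization looks like (e.g., a gamma frailty; the notation varies across papers, so treat this as a sketch rather than the exact COAPT specification):

```latex
% Patient i has an unobserved frailty u_i (e.g., gamma-distributed with
% mean 1 and variance \theta), shared by both processes.

% Recurrent-event (HHF) hazard:
r_i(t \mid u_i) = u_i \, r_0(t) \, \exp(\beta^\top X_i)

% Death hazard:
\lambda_i(t \mid u_i) = u_i^{\alpha} \, \lambda_0(t) \, \exp(\gamma^\top X_i)
```

The shared u_i is what induces correlation among a patient’s repeated HHFs, and alpha links the two processes: alpha > 0 means patients prone to HHF are also at higher risk of death.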
“Gee Andrew. Shouldn’t we use these instead of Kaplan-Meier / Cox models for basically any endpoint like hospitalization when mortality is a competing risk?”
It would seem so! Maybe there’s a good reason not to…again, would love to hear from someone who has studied these in more depth.
Here are some technical papers on joint frailty models:
Suggest consulting your friendly neighborhood statistician if you are studying an endpoint such as HHF or something else which has the conditions named above: a) multiple occurrences and b) a competing risk likely to lead to informative censoring on your primary endpoint
(THREAD) As requested/discussed yesterday, here are a few thoughts on post-hoc power
It is not uncommon for reviewers to ask for a “post hoc power” calculation. The most common reasons people ask about this are:
i) the main findings aren’t significant, and they want to know either a) what was the “observed power” (which we’ll discuss in a moment) or b) “given the observed effect size, how large would your study have needed to be for significance”
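Mechanically, those two requests usually boil down to something like this (a sketch using statsmodels and made-up numbers, with a two-sample t-test for simplicity):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Made-up example: observed standardized effect size d = 0.2,
# n = 50 per arm, two-sided alpha = 0.05
observed_d, n_per_arm, alpha = 0.2, 50, 0.05

# (a) "observed power": power to detect the observed effect size
#     with the sample size you actually had
observed_power = analysis.power(effect_size=observed_d, nobs1=n_per_arm,
                                alpha=alpha, alternative="two-sided")

# (b) sample size per arm that would have given 80% power
#     for that same observed effect size
needed_n = analysis.solve_power(effect_size=observed_d, power=0.8,
                                alpha=alpha, alternative="two-sided")

print(observed_power, needed_n)
```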
I've been a mere spectator to the Wansink scandal, but I think the cautionary tale is worth amplifying across the fields of statistics, medicine & the entire research community. Thus far, the discussion *seems* mostly confined to psychology researchers & some statisticians.
I think it’s important to spread this story across all research for those who may not be aware i) what has happened and ii) why it’s a big deal.
I’m going to link several tweets, threads, and articles at the end of the thread. In the first 11 Tweets, here is the “short” version for those unaware of the Wansink scandal:
A few months ago, the Annals of Medicine published a controversial piece titled “Why all randomized controlled trials produce biased results.” Topic: not a bad idea - we should examine trials carefully. Execution: left something to be desired.
We have penned a reply that covered some of the most problematic misstatements, with helpful input from @coachabebe, @GSCollins, and @f2harrell
After some emails between ourselves, Krauss, and the journal, he chose to revise the original piece in response to some of our comments
While I am generally a fan of Ioannidis & believe he raises valid points here & elsewhere, this piece is more than a little ironic. As of September 12, 2018, he has authored 58 papers published this year (and it's no fluke - 2017: 64, 2016: 78, 2015: 74, etc...)
I do think the article raises some valid points about authorship, and I have certainly seen abuses (in both directions: undeserved authorships granted to people barely involved in the work, and screwjobs that denied people who deserved an authorship their appropriate credit)
People who run up large numbers of authorships (excluding outright nefarious conduct, like publishing in sham journals) are most likely senior members of a lab or group who lead many studies that are then written up by more junior colleagues.
(THREAD) There seems to be some...I'll call it tension between trialists and critics that occasionally burbles up on Twitter. As someone who kinda-sorta sits on both sides of this fence, I have a few thoughts.
[Quoted tweet by @THilalMD - view original on Twitter]
1) I agree with the statement that you don't have to be a trialist to critique a trial. You do need to know what you're talking about, but you don't have to "be a trialist" to have methods/statistical knowledge that allows you to comment on trials.
2) However, I also sympathize with the trialist who points out the difficulty of actually designing, conducting, running, and analyzing the thing(s). They're challenging even when everything goes right. And everything goes right...well, never. That never happens.