Andrew Althouse
Oct 4, 2018 · 26 tweets · 6 min read
(THREAD)

By @rwyeh request, I bring you this brief introduction of joint frailty models and their application in the #COAPT trial...
Please be advised that @graemeleehickey and others are more expert than I am in the direct, real-world application of such models, but here I am, so whatever. Read it, or don’t.
Suppose you’re just reading along in the #COAPT primary paper, found here:

nejm.org/doi/full/10.10…

when you encounter this bumfuzzle:
“Analysis of the primary effectiveness end point of all hospitalizations for heart failure was performed with a joint frailty model to account for correlated events and the competing risk of death.”
You ask “What’s a *joint frailty* model? What’s wrong with regular old Kaplan-Meier curves and Cox models?”
Well: sometimes we’re interested in the effect our intervention has on a specific endpoint (hospitalization for heart failure) that a) patients may experience more than once and/or b) patients may cease to be at risk for entirely because something else happened first (death)
In traditional survival analyses with time-to-event data (Kaplan-Meier, Cox models) we follow the patient until i) they have the event or ii) are “censored” and no longer at risk for the event
Patients may be censored for several reasons: i) the study ends, ii) withdrawal or loss to follow-up, and most importantly iii) they experienced some competing event which makes them no longer at risk (if HHF is the primary endpoint, a patient who has died is no longer “at risk” of HHF)
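For intuition on how censoring enters the “regular” analysis, here is a minimal Kaplan-Meier sketch in plain Python (illustrative only — in practice you’d reach for a proper survival package). Censored patients simply leave the risk set without contributing an event:

```python
# Minimal Kaplan-Meier estimator (illustrative sketch).
# Each subject is (time, observed): observed=True means the event occurred
# at `time`; observed=False means the subject was censored at `time`.

def kaplan_meier(data):
    """Return [(event_time, survival_probability)] at each distinct event time."""
    data = sorted(data)
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for time, obs in data if time == t and obs)
        removed = sum(1 for time, obs in data if time == t)
        if deaths > 0:
            # survival drops only at event times...
            surv *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, surv))
        # ...but censored subjects still shrink the risk set
        n_at_risk -= removed
        while i < len(data) and data[i][0] == t:  # skip ties at time t
            i += 1
    return curve
```

Note the assumption baked in here: leaving the risk set (censoring) is treated the same whether the patient was administratively censored or died — which is exactly the problem the thread is about.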
“Doesn’t a regular survival analysis account for censoring though?”
Yes, but with an assumption of *non-informative* censoring, meaning that the censoring is not related to the probability of experiencing the event
With an outcome like “hospitalization for heart failure” there is a difference between a patient censored because they died (no longer at risk for HHF) versus a patient censored because the study ended (also no longer at risk for HHF, but with different implications than “death”)
Also, the aforementioned “regular” survival analysis only follows a patient until their first event, but that can throw away a lot of useful information & doesn’t capture the full burden when the endpoint is something like HHF (which can happen multiple times to some patients)
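To make the recurrent-events idea concrete, here’s a hypothetical sketch (the function and field names are my own, not anything from the trial) of reshaping one patient’s multiple HHF times into the (start, stop, status) counting-process intervals that Andersen-Gill-type models operate on:

```python
# Sketch: turn one patient's repeated HHF event times into counting-process
# rows. Each event gets its own (start, stop, status=1) interval, plus a
# final censored (status=0) interval out to the end of follow-up.

def to_counting_process(patient_id, event_times, followup_end):
    """Return a list of (patient_id, start, stop, status) rows."""
    rows = []
    start = 0
    for t in sorted(event_times):
        rows.append((patient_id, start, t, 1))   # status 1 = HHF event
        start = t
    if start < followup_end:
        rows.append((patient_id, start, followup_end, 0))  # censored tail
    return rows
```

So a patient hospitalized on days 30 and 90 and followed to day 365 contributes three rows, not one — all of their events stay in the analysis instead of being thrown away after the first.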
“What about Andersen-Gill models…” – KEEP YOUR SHIRT ON, we’ll get there
Anyways, that’s the groundwork for why we need something a little different than your traditional Kaplan-Meier-curves-with-a-regular-old-Cox-model approach for the #COAPT primary endpoint
Now, I’ll paraphrase (er, copy) a few tweets from @graemeleehickey describing the joint frailty model
We begin with a recurrent events model (Andersen-Gill) with a frailty term (random effect)
[important: the random effect / frailty term allows you to model correlations between events of the same patient by using a random component for the hazard function]
We also have a second model for the failure process (death) that includes the frailty term
The models are linked by sharing the random effect. It is a joint model for 2 separate but correlated event processes.
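For those who do want a little mathiness: one common formulation (a gamma-frailty sketch along the lines of what the R package frailtypack fits — not necessarily the exact specification used in #COAPT) looks like this, with patient i’s frailty ω_i shared by both hazards:

```latex
% Joint frailty model: recurrent events + terminal event, shared frailty
\begin{align*}
\text{Recurrent HHF:} \quad & r_{ij}(t \mid \omega_i) = \omega_i \, r_0(t) \, \exp(\beta^\top X_i) \\
\text{Death:}         \quad & \lambda_i(t \mid \omega_i) = \omega_i^{\alpha} \, \lambda_0(t) \, \exp(\gamma^\top X_i) \\
& \omega_i \sim \mathrm{Gamma}(1/\theta,\, 1/\theta), \qquad \mathbb{E}[\omega_i] = 1, \quad \mathrm{Var}(\omega_i) = \theta
\end{align*}
```

Here θ captures between-patient heterogeneity (the correlation among a patient’s repeated HHF events), and α governs how strongly the recurrent-event process is linked to the death process: a sicker-than-average patient (ω_i > 1) has both more hospitalizations and, when α > 0, a higher hazard of death.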
Does that confuse you? Probably! I don’t really have a better way to explain it without lots of mathiness and symbols. If anyone else does, I am delighted to hear/see it.
“Gee Andrew. Shouldn’t we use these instead of Kaplan-Meier / Cox models for basically any endpoint like hospitalization when mortality is a competing risk?”
It would seem so! Maybe there’s a good reason not to…again, would love to hear from someone who has studied these in more depth.
Suggest consulting your friendly neighborhood statistician if you are studying an endpoint such as HHF (or anything else with the conditions named above): a) multiple occurrences and b) a competing risk likely to lead to *informative* censoring on your primary endpoint
All questions may be referred to @graemeleehickey
(also, #cardiotwitter and #medtwitter, you should probably be following Graeme if you’re not already)

