Here are ten things I’ve changed my mind about in the last few years of being a scientist
1. P-values are bad.
Nope. P-values are good for what they’re designed to do. Just because they’re (often) misused doesn’t mean that we should abandon them.
2. Bayes factors will save us from the misuse of p-values
No. Bayes factors *can* be useful, but they’re not always the solution to p-value limitations.
3. Everything should be pre-registered.
Uh-uh. A study isn’t inherently “bad” if it’s not pre-registered. But it’s more likely to be bad if it *pretends* to be.
4. Work your arse off and you’ll land papers and grants.
No siree. I’ve seen plenty of researchers with far more talent than me have a long string of rejections. LUCK + hard work = success
5. When your colleagues win, you lose.
Nah. Unless you’re competing for the exact same grant, a success for your lab mate is a success for you. Don’t you want to be in a successful lab?
6. Keep your ideas to yourself, people will steal them.
Probably not. Ideas are everywhere, but very few people have the resources and persistence to follow through and execute them. Get feedback early! Better to say something wrong in a preprint than a published paper...
7. Presentations that aren’t directly related to your research are a waste of time.
Incorrect. Some of my best research ideas have come from these ‘unrelated’ talks. There’s always *something* you can take away from a talk.
8. Don’t bother with Open Access articles.
Untrue. That frustration I get when I can’t access an article from an obscure journal my rich uni doesn’t happen to have a subscription to is what most academics experience all the time. If you can’t afford #OA, then preprint.
9. #Rstats is hard, don’t bother learning it if SPSS does everything you need.
False. One of my best academic decisions was taking the time to learn R. Its flexibility and reproducibility far outweigh the occasional frustration.
Also, GIFs.
10. Twitter is a waste of time, you’re better off writing manuscripts
Nope. I changed my mind about these nine other things from stuff I read on Twitter, and now I write better manuscripts.
This thread seems to have caught some interest... Here’s a list of a *few* people that I’ve learnt lots from on Twitter:
You’ve worked hard putting together a presentation so why limit it to the people sitting in your talk?
Here are a few tips for repurposing your talk for sharing on social media
1. Add your twitter handle on your intro slide and encourage your audience to tweet. Limiting the text on your slides will also encourage tweets. If your text isn’t tweetable then there’s too much text.
Even if people are reading it, they’re not listening to you
What’s better than big blocks of text?
Images!
Use @unsplash for a huge library of free and high quality images that don’t require distracting attribution text at the bottom of your slide
If you’re an academic you need a website so that people can easily find info about your research and publications. Here’s how to make your own website for free in an under an hour using the blogdown package in #Rstats [THREAD]
So why use blogdown? Sure, there are several free options available to start your own blog (e.g., Medium). However, you generally can’t list your publications or other information easily on these services. Also, who knows where these services will be in a few years?
There are also some great point-and-click services available (e.g., Squarespace). However, you need to pay about $10 a month for these services, and they’re generally not well suited for academic webpages.
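To give a feel for how little is involved, here’s a minimal sketch of the blogdown workflow in R. The theme name is just an example (any Hugo theme works), and this assumes you’re starting from a fresh project in RStudio:

```r
# Minimal blogdown setup sketch (run interactively).
# The theme below is only an example; swap in any Hugo theme you prefer.
install.packages("blogdown")
blogdown::install_hugo()      # blogdown builds sites with Hugo under the hood
blogdown::new_site(theme = "gcushen/hugo-academic")  # scaffold a site in the current project
blogdown::serve_site()        # live preview that rebuilds as you edit
```

From there, editing the config file and adding your publications as content pages is mostly copy-paste.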
Every paper has open data if it presents a scatterplot.
1) Download WebPlotDigitizer automeris.io/WebPlotDigitiz…
2) Load a scatterplot screenshot
3) Select each datapoint
4) Download the .CSV file with each datapoint
We used this tool in a recent meta-analysis to extract correlation coefficients from papers that didn't report coefficients (only scatterplots), which is a common issue in meta-analysis sciencedirect.com/science/articl…
We validated the use of WebPlotDigitizer in our sample by looking at studies in our meta-analysis that reported BOTH correlation coefficients and scatterplots, finding high precision
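The validation logic above can be sketched in base R. This simulates the idea rather than using our actual data: we generate “true” scatterplot points, add a little click-error noise to stand in for digitisation, and check that the extracted correlation stays close to the reported one:

```r
# Hypothetical sketch of validating digitised scatterplot data.
# We simulate "digitised" points by adding small click-error noise.
set.seed(1)
n <- 100
x <- rnorm(n)
y <- 0.5 * x + rnorm(n, sd = sqrt(1 - 0.5^2))  # true correlation around .5

r_reported  <- cor(x, y)                 # what the paper would report
digitised_x <- x + rnorm(n, sd = 0.02)   # small digitisation error on each click
digitised_y <- y + rnorm(n, sd = 0.02)
r_extracted <- cor(digitised_x, digitised_y)

abs(r_extracted - r_reported)            # tiny discrepancy: extraction is precise
```

In practice you’d replace the simulated points with the .CSV that WebPlotDigitizer exports.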
Funnel plots are often used to assess publication bias in meta-analysis, but these plots only visualise *small study* bias, which may or may not include publication bias. Here's a guide on making contour-enhanced funnel plots in #Rstats, which better visualise publication bias
First, some background... Publication bias is a well-known threat to meta-analysis. For instance, researchers might shelve studies that aren’t statistically significant, as journals are unfortunately less likely to publish these kinds of results.
Researchers might also use questionable research practices, such as p-hacking, to nudge an effect across the line to statistical significance.
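A contour-enhanced funnel plot along these lines can be sketched with the metafor package. This assumes metafor is installed and uses its built-in BCG vaccine dataset purely as example data:

```r
# Sketch: contour-enhanced funnel plot with metafor's example BCG dataset.
library(metafor)

# Compute log risk ratios and their variances from the 2x2 tables
dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)

# Fit a random-effects meta-analysis
res <- rma(yi, vi, data = dat)

# Shaded contours mark the p < .10, p < .05, and p < .01 significance regions
# (refline = 0 centres the contours at the null rather than at the estimate).
# A pile-up of studies just inside the p < .05 band hints at publication bias.
funnel(res, level = c(90, 95, 99),
       shade = c("white", "gray65", "gray85"), refline = 0)
```

The key difference from a standard funnel plot is that the contours flag *where* statistical significance begins, so you can see whether study scarcity tracks significance (suggesting publication bias) or just small sample size.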
Here are ten things that I HAVE NOT changed my mind about in the past few years of being a scientist (thanks to @mareberl for the suggestion)
1. Meta-analysis is a useful means of synthesizing research
Meta-analysis cops a lot of flak. But like ANY statistical tool, meta-analysis needs to be correctly applied. It still sits on top of the evidence pyramid osf.io/yq59d/
2. Presentation skills are undervalued
It’s likely that one day your chances of landing a job/grant will ride on a presentation, so take EVERY OPPORTUNITY to practice. Your research won’t “speak for itself”, no matter how good it is.
Meta-analyses are often used as a gold standard measure of evidence. But how much trust should you place in a meta-analysis outcome? Here are a few things you should look out for next time you read one [THREAD]
1. Don’t just check whether the authors state they followed PRISMA/MARS reporting guidelines, check whether they ACTUALLY did.
SPOILER ALERT: Very few do. The ones that do *typically* include a checklist in the supplement
2. Was the analysis protocol pre-registered? There is SO much analytical flexibility in meta-analysis, so this is a crucial point. Sometimes all it takes is a small tweak of study exclusion criteria to tip a summary effect size over the line to p = .048 (or closer to p = .05)