23 Comments
Jul 26 · Liked by Tommy Blanchard

My fiancée's little sister was staying with us for a few months to get back on her feet. One day she tells me, "science says it's not healthy to eat dinner as late as we do [8:30 pm]".

As a scientist, it's just a quietly infuriating thing to be told, for various reasons you touch upon. It probably originated from a Facebook post sharing the headline of a news article that references a scientific study which almost certainly makes no such claim. Even if the study is properly constructed and well done (almost certainly not the case), it is doubtful it could establish any strong proof of a single health outcome as a function of time-of-dinner. It is even more doubtful the authors constructed some sort of generalized health metric by which several health indicators could be judged simultaneously (sleep quality, heart rate, BMI, etc.) and then found a statistically valid relationship between time-of-dinner and this metric. And even if they accomplished all of this, surely the overall effect size of eating dinner at 8:30pm vs. 6:30pm is absolutely minuscule in comparison to other health choices, like what is being eaten or how much you exercise.
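The effect-size point is easy to make concrete. Below is a toy simulation with entirely made-up coefficients (not from any study): a "health score" driven mostly by diet and exercise, with a deliberately tiny penalty for eating dinner late. The standardized gap between early and late eaters ends up dwarfed by the overall spread.

```python
import random
import statistics

random.seed(0)

# Made-up model: health depends strongly on diet and exercise,
# and only trivially (coefficient 0.05) on eating dinner late.
n = 10_000
health, late = [], []
for _ in range(n):
    diet = random.gauss(0, 1)          # diet quality (standardized)
    exercise = random.gauss(0, 1)      # exercise level (standardized)
    eats_late = random.random() < 0.5  # 8:30pm dinner instead of 6:30pm
    h = diet + exercise - 0.05 * eats_late + random.gauss(0, 1)
    health.append(h)
    late.append(eats_late)

early_mean = statistics.mean(h for h, l in zip(health, late) if not l)
late_mean = statistics.mean(h for h, l in zip(health, late) if l)
sd = statistics.stdev(health)

# Cohen's-d-style standardized group difference: tiny relative to the noise
effect_d = (early_mean - late_mean) / sd
print(f"standardized dinner-time 'effect': d = {effect_d:.3f}")
```

Even with the effect built in by construction, the standardized difference comes out around d ≈ 0.03, the kind of effect you'd never notice in your own life.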

author

Oof, yeah, great example of exactly the kind of thing I was thinking of when writing this

Jul 25 · Liked by Tommy Blanchard

I keep trying to explain these things to my psychiatrist. I’m not sure why an actual doctor has so much trouble understanding that his patients are beautiful, unique snowflakes ( 😆 ) who might not be able to be satisfactorily flattened into a single DSM diagnosis (I’m a horse, not a chair!), who might not be able to enjoy the headline-level certainty that his prescription will work, or who fall outside the probability curve on a “very rare” side effect.

What I’m saying isn’t “checkmate scientists”—I have no doubt that psychiatry often works exactly as intended—it’s just frustration that my doctors so often categorically state that something unusual “can’t” be happening.

I guess they have to overcorrect for people who use Dr Google and make all of these errors… but it does nothing for my faith and trust in a profession that I desperately depend on for stable, healthy integration with society.


https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8684817/

"74.9% of psychiatric inpatients had at least one medical comorbidity, including 57.5% of people ages 18–24."

Note that these are inpatients, but the findings should lead one to expect at least a lesser comorbidity rate among outpatients, not zero.


Can you tell me more about how this relates to what I wrote? I think co-morbidity means something similar to “more than one diagnosis,” but I’m not sure how that expands on my angry lil rant, and that’s making me think that I might be remembering incorrectly.


Sure, sorry about that.

Basically just that physicians have no excuse for assuming a person only has one thing "wrong" with them. Even without getting into personal genetics that can affect the way a drug works, odds are pretty decent that a patient has another mental or physical disorder that could complicate treatment.

Jul 25 · Liked by Tommy Blanchard

Great point and very well explained. Thank you.

Not exactly to your point, but it brings to mind this website https://www.tylervigen.com/spurious-correlations which shows how, if you look through enough phenomena, you're almost bound to find high correlations. Reproducibility is much easier in the "hard" sciences than in the humanities or even something like nutritional health, which makes it more likely for those random correlations to seem relevant, because they're all we have.
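You can reproduce that spurious-correlation effect in a few lines. This sketch (arbitrary sizes: 200 series, 10 observations each) generates pure noise and then hunts for the strongest pairwise correlation; with ~20,000 pairs to search, something impressively "correlated" always turns up.

```python
import random

random.seed(1)

def corr(x, y):
    # Pearson correlation coefficient, computed from scratch
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# 200 completely unrelated random "phenomena", each observed 10 times
series = [[random.gauss(0, 1) for _ in range(10)] for _ in range(200)]

# Search every pair for the strongest correlation
best = max(
    abs(corr(series[i], series[j]))
    for i in range(len(series))
    for j in range(i + 1, len(series))
)
print(f"strongest correlation among pure noise: r = {best:.2f}")
```

The winning pair means nothing, of course; it's the multiple-comparisons machine that the Tyler Vigen site runs on real-world data.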


This is why mechanisms are so important.


Holy wow could we somehow make it obligatory for the entire internet to read and internalize this. I feel it might significantly improve quality of average online debate.


Probably not, and even if you did, people wouldn't read it. And even if they did, it wouldn't affect their responses for very long, if at all. Alright, alright, I know it was a rhetorical question and you knew that. But what we can do is build the preschool and K-12 education system around this, constantly drilling it in, example by example, year after year. At first it wouldn't have a big effect, but over time, with consistency, I think it would have a meaningful one. This is basically teaching kids how to think, and I don't think it is unreasonable or naive to expect our educational system to teach kids how to think.


Yes, absolutely. Even if we could just remove the elements of school that openly support a lack of logical thought, that would already help so much. The whole ‘just do it and don’t ask too many questions’ vibe of school definitely isn’t a great base for making future thinkers…

Jul 25 · edited Jul 25 · Liked by Tommy Blanchard

I immediately thought of this post on the topic of extreme weather events and climate change and how trends can be over-simplified and exaggerated https://www.liberalpatriot.com/p/turning-down-the-temperature-on-extreme

P.S. ‘Credences’ seems like another good word to invoke as a synonym for thinking in distributions.

Jul 25 · Liked by Tommy Blanchard

Absolutely devoured this post. Seems to me humans deeply crave absolutes but the universe is mostly not built that way. Except math?


Constructive proofs in math :)

I suspect the craving for absolutes is primarily a side effect of uncertainty avoidance (and "computational" cheapness given the human cognitive/emotional/sensory/etc. computational substrate and reward architecture).

Of course :) uncertainty avoidance loads fairly heavily (though with what value of eta-squared or Pillai's trace, a first-approximation multivariate effect-size measure, like Cohen's d for locally near-linear effects, I couldn't say).

I was trying to think of an experiment that might _distinguish_, with a decent effect size, between uncertainty avoidance and decision cost/complexity, with little initial success (hey, I'm an engineer, not a cognitive psych type).


"I was trying to think of an experiment that might _distinguish_, with a decent effect size, between uncertainty avoidance and decision cost/complexity"

I think you'd want an unsolvable forced-choice dichotomy that is sold to the person asked to do it as very important, but allows them a personal opt out (if you can't do it we'll get someone else to figure it out). I guess that the uncertainty avoidant would be more likely to come to a decision instead of opting out. There would be some confounds, but something along these lines would probably be okay.


That's really good experimental design. Clever!

Jul 28 · Liked by Tommy Blanchard

Great post, Tommy! Informative for non-scientists. Geez, an R² of 0.6 is like hitting the jackpot for a community ecologist. 0.3 is high for us in most cases. There is so much noise in ecology, particularly community ecology (interactions among species and the environment), that it's impossible to get nice clean fits to regression lines.

author

Oh I know -- I would have been thrilled with a 0.6 in any of my research. With the single-neuron recording I did, we routinely had correlations so low we would collect ~400 data points to have a hope of reaching the magic 0.05 significance threshold, which really makes you wonder if we were capturing anything important about what these neurons were doing.
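That trade-off between sample size and correlation strength follows directly from the standard t-test for a Pearson correlation. A minimal sketch (using the usual t = r·√(n−2)/√(1−r²) formula and the normal-approximation cutoff of 1.96 for p < 0.05, two-sided) shows how a weak r = 0.1, which explains only 1% of the variance, becomes "significant" once n reaches ~400:

```python
import math

def t_stat(r, n):
    # t statistic for testing a Pearson correlation against zero
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

r = 0.10  # a weak correlation: r^2 = 1% of variance explained
for n in (30, 100, 400):
    t = t_stat(r, n)
    verdict = "significant" if t > 1.96 else "not significant"
    print(f"n={n:4d}: t = {t:.2f} ({verdict} at ~0.05, two-sided)")
```

Significance arrives from sheer sample size while the variance explained stays at 1%, which is exactly the worry about whether anything important is being captured.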

Jul 28 · Liked by Tommy Blanchard

I really appreciate your clarity. I used to teach Theory of Knowledge to high school students, and I really wish I had had this article to give them to read!

author

Wow this is so nice to hear, thank you!


This is partly why I argue for a move away from the over-reliance on p-values we see in psychology journals. Researchers need a much better understanding of the tools they use to communicate their findings.

Very interesting read! Thank you!


Thanks for this. Could be a good post to assign undergraduates if I ever teach a critical thinking course.


"We learn concepts over time through seeing lots of examples"

"But note how much information we lose when we boil everything down to those statements. We throw away everything we know about how variable and noisy those trends are in exchange for a simple statement that there is a trend."

I like clusters, and think they generally aren't used enough. It takes more words to describe clusters, but they also communicate more information.
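A quick illustration of why cluster summaries carry more information than a single statistic (made-up data, e.g. imagining "responders" and "non-responders" to some treatment): the overall mean of a bimodal sample can land where almost no individual actually sits.

```python
import random
import statistics

random.seed(2)

# Two hidden groups with very different typical values
group_a = [random.gauss(-2, 0.5) for _ in range(100)]  # e.g. non-responders
group_b = [random.gauss(+2, 0.5) for _ in range(100)]  # e.g. responders
data = group_a + group_b

overall_mean = statistics.mean(data)  # near 0: describes almost nobody
cluster_means = (statistics.mean(group_a), statistics.mean(group_b))

print(f"single summary:  mean = {overall_mean:.2f}")
print(f"cluster summary: means = {cluster_means[0]:.2f}, {cluster_means[1]:.2f}")
```

The single number throws away the structure; the two cluster means take a few more words but actually describe the people in the data.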

"let's just try to appreciate that our knowledge often relies on messy data, and try to do more thinking in distributions."

Statistically speaking you're an outlier, so this blog doesn't exist.

-----

"Shout-out to the one person blasting 700mg of caffeine a day who falls asleep within 2 minutes every night."

Am I reading the graphs wrong or did no one have a zero reading on caffeine intake? Caffeine is a habituating substance. I'd assume that minor changes in caffeine intake would have noticeable effects on buzz. Timing might also be important, as delaying a standard caffeine break by 30 minutes might delay sleep onset by the same 30 minutes.

As an anecdote, back when I was working two jobs and going to school while averaging 3.5 hours of sleep per day*, the third 16 ounce energy drink of the day seemed to signal my body to go to sleep (as I became noticeably tired immediately after drinking it).

* I used to have difficulty going to sleep before this period of my life, but during it, and even decades after, I have only rarely had any problem falling asleep.
