22 Comments
Geoffe:

I keep trying to explain these things to my psychiatrist. I’m not sure why an actual doctor has so much trouble understanding that his patients are beautiful, unique snowflakes ( 😆 ) who might not be able to be satisfactorily flattened into a single DSM diagnosis (I’m a horse, not a chair!), who might not be able to enjoy the headline-level certainty that his prescription will work, or who fall outside the probability curve on a “very rare” side effect.

What I’m saying isn’t “checkmate scientists”—I have no doubt that psychiatry often works exactly as intended—it’s just frustration that my doctors so often categorically state that something unusual “can’t” be happening.

I guess they have to overcorrect for people who use Dr. Google and make all of these errors… but it does nothing for my faith and trust in a profession that I desperately depend on for stable, healthy integration with society.

Anonymous Skimmer:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8684817/

"74.9% of psychiatric inpatients had at least one medical comorbidity, including 57.5% of people ages 18–24."

Note that these are inpatients, but the findings should lead one to expect a somewhat lower comorbidity rate among outpatients, not a rate of zero.

Geoffe:

Can you tell me more about how this relates to what I wrote? I think co-morbidity means something similar to “more than one diagnosis,” but I’m not sure how that expands on my angry lil rant, and that’s making me think that I might be remembering incorrectly.

Anonymous Skimmer:

Sure, sorry about that.

Basically just that physicians have no excuse for assuming a person only has one thing "wrong" with them. Even without getting into personal genetics that can affect the way a drug works, odds are pretty decent that a patient has another mental or physical disorder that could complicate treatment.

Lidija P Nagulov:

Holy wow, could we somehow make it obligatory for the entire internet to read and internalize this? I feel it might significantly improve the quality of the average online debate.

Gordon Raup:

Probably not, and even if you did, people wouldn't do it. And even if they did, it wouldn't affect their responses for very long, if at all. (Alright, alright, I know it was a rhetorical question, and you knew that.) But what we can do is improve the preschool and K-12 education system to be built around this, constantly drilling it in, example by example, year after year. At first it wouldn't have a big effect, but over time, with consistency, I think it would have a meaningful effect. This is basically teaching kids how to think, and I don't think it is unreasonable or naive to expect our educational system to teach kids how to think.

Lidija P Nagulov:

Yes, absolutely. Even if we could just remove the elements of school that openly support a lack of logical thought, that would already help so much. The whole ‘just do it and don’t ask too many questions’ vibe of school definitely isn’t a great base for making future thinkers…

Jason:

I immediately thought of this post on the topic of extreme weather events and climate change, and how trends can be over-simplified and exaggerated: https://www.liberalpatriot.com/p/turning-down-the-temperature-on-extreme

P.S. ‘Credences’ seems like another good word to invoke as a synonym for thinking in distributions.

Beth Anderson:

Absolutely devoured this post. It seems to me humans deeply crave absolutes, but the universe is mostly not built that way. Except math?

Bayesian:

Constructive proofs in math :)

I suspect the craving for absolutes is primarily a side effect of uncertainty avoidance (and "computational" cheapness given the human cognitive/emotional/sensory/etc. computational substrate and reward architecture).

Of course :) uncertainty avoidance loads fairly heavily, though with what value of eta-squared or Pillai's trace (a first-approximation multivariate effect-size measure, like Cohen's d for locally near-linear effects), I couldn't say.

I was trying to think of an experiment that might _distinguish_, with a decent effect size, between uncertainty avoidance and decision cost/complexity, with little initial success (hey, I'm an engineer, not a cognitive psych type).
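Since Cohen's d came up: it's just a standardized mean difference, cheap to compute by hand. A minimal pure-Python sketch (the two groups and their numbers are made up, purely illustrative):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference in group means divided by the pooled SD."""
    na, nb = len(group_a), len(group_b)
    # Pool the two Bessel-corrected variances, weighted by degrees of freedom.
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical ratings from two groups; d says how far apart
# the group means are in pooled-SD units.
avoiders = [4.1, 3.8, 4.5, 4.0, 3.9, 4.3]
tolerators = [3.2, 3.9, 3.5, 3.0, 3.6, 3.4]
print(round(cohens_d(avoiders, tolerators), 2))  # → 2.31
```

With real data you'd reach for a stats library, but the arithmetic really is this small.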

Anonymous Skimmer:

"I was trying to think of an experiment that might _distinguish_, with a decent effect size, between uncertainty avoidance and decision cost/complexity"

I think you'd want an unsolvable forced-choice dichotomy that is sold to the person asked to do it as very important, but allows them a personal opt out (if you can't do it we'll get someone else to figure it out). I guess that the uncertainty avoidant would be more likely to come to a decision instead of opting out. There would be some confounds, but something along these lines would probably be okay.

Bayesian:

That's really good experimental design. Clever!

Jonathan Tonkin:

Great post, Tommy! Informative for non-scientists. Geez, an R² of 0.6 is like hitting the jackpot for a community ecologist. 0.3 is high for us in most cases. There is so much noise in ecology, particularly community ecology (interactions among species and the environment), that it's impossible to get nice clean fits to regression lines.

Tommy Blanchard:

Oh, I know -- I would have been thrilled with a 0.6 in any of my research. With the single-neuron recording I did, we routinely had correlations so low that we would collect ~400 data points to have any hope of clearing the magic 0.05 significance threshold, which really makes you wonder whether we were capturing anything important about what those neurons were doing.
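That ~400 figure falls straight out of the standard significance test for a Pearson correlation, t = r * sqrt(n - 2) / sqrt(1 - r^2). A rough sketch, using a fixed large-sample critical value of 1.96 as an approximation to the exact t threshold:

```python
import math

def min_n_for_significance(r, t_crit=1.96):
    """Smallest sample size n at which a Pearson correlation of r clears
    the two-tailed p < 0.05 bar, using t = r*sqrt(n-2)/sqrt(1-r**2)
    against a fixed large-sample critical value (~1.96)."""
    n = 3
    while abs(r) * math.sqrt(n - 2) / math.sqrt(1 - r ** 2) < t_crit:
        n += 1
    return n

# A weak correlation needs hundreds of points to look "significant";
# a strong one needs barely a dozen.
print(min_n_for_significance(0.1))  # → 383
print(min_n_for_significance(0.5))  # → 14
```

Note this says nothing about whether an r of 0.1 matters, only about when it stops looking like chance.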

Heidi McDonald:

I really appreciate your clarity. I used to teach Theory of Knowledge to high school students, and I really wish I had had this article to give them to read!

Tommy Blanchard:

Wow this is so nice to hear, thank you!

Julio Gruñón:

"This is a post about epistemology disguised as a statistics lesson. " Take my subscription already.

When reviewing research, effect size and representativeness of the sample are the two things I look for first.

Daniel Reich:

This is partly why I argue for a move away from the over-reliance on p-values we see in psychology journals. Researchers need a much better understanding of the tools they use to communicate their findings.
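To make that concrete: a p-value conflates effect size with sample size, so with enough data even a negligible effect clears p < 0.05. A toy sketch using a simple two-group z-test with known SD (all numbers made up):

```python
import math

def two_sample_p(mean_diff, sd, n):
    """Two-tailed p-value for a difference between two group means,
    via a z-test with known per-group SD and n subjects per group."""
    se = sd * math.sqrt(2.0 / n)   # standard error of the difference
    z = mean_diff / se
    return math.erfc(abs(z) / math.sqrt(2.0))

# The same tiny effect (0.02 SD) at three sample sizes:
for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9,}: p = {two_sample_p(0.02, 1.0, n):.4f}")
```

The effect never changes; only n does, and the p-value swings from ~0.89 to essentially zero. Reporting the effect size alongside p avoids that illusion.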

Very interesting read! Thank you!

Pageturner:

Thanks for this. Could be a good post to assign undergraduates if I ever teach a critical thinking course.

Anonymous Skimmer:

"We learn concepts over time through seeing lots of examples"

"But note how much information we lose when we boil everything down to those statements. We throw away everything we know about how variable and noisy those trends are in exchange for a simple statement that there is a trend."

I like clusters, and I think they generally aren't used enough. Describing clusters takes more words, but those words also communicate more information.
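A toy example of the point: for bimodal data, one grand mean describes almost nobody, while two cluster centers describe everyone. A tiny 1-D two-means sketch (data made up; real work would use a proper clustering library):

```python
from statistics import mean

def two_means_1d(xs, iters=20):
    """Minimal 1-D k-means with k=2: returns the two cluster centers.
    Initializing at the extremes keeps both clusters non-empty here."""
    c1, c2 = min(xs), max(xs)
    for _ in range(iters):
        a = [x for x in xs if abs(x - c1) <= abs(x - c2)]
        b = [x for x in xs if abs(x - c1) > abs(x - c2)]
        c1, c2 = mean(a), mean(b)
    return c1, c2

xs = [1.8, 2.1, 2.0, 1.9, 2.2, 7.9, 8.1, 8.0, 7.8, 8.2]
print(round(mean(xs), 1))   # grand mean: 5.0, near no actual point
print(two_means_1d(xs))     # cluster centers near 2.0 and 8.0
```

"The values cluster around 2 and around 8" costs a few more words than "the mean is 5," but it's the only one of the two that describes the data.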

"let's just try to appreciate that our knowledge often relies on messy data, and try to do more thinking in distributions."

Statistically speaking, you're an outlier, so this blog doesn't exist.

-----

"Shout-out to the one person blasting 700mg of caffeine a day who falls asleep within 2 minutes every night."

Am I reading the graphs wrong, or did no one have a zero reading on caffeine intake? Caffeine is a habituating substance, so I'd assume that minor changes in caffeine intake would have noticeable effects on buzz. Timing might also be important, as delaying a standard caffeine break by 30 minutes might delay sleep onset by the same 30 minutes.

As an anecdote, back when I was working two jobs and going to school while averaging 3.5 hours of sleep per day*, the third 16 ounce energy drink of the day seemed to signal my body to go to sleep (as I became noticeably tired immediately after drinking it).

* I used to have difficulty going to sleep before this period of my life, but during it, and even for decades afterward, I have only rarely had any problem falling asleep.

User (comment deleted, Jul 26)

Tommy Blanchard:

Oof, yeah, great example of exactly the kind of thing I was thinking of when writing this

User (comment deleted, Jul 25)

Anonymous Skimmer:

This is why mechanisms are so important.
