Reader Questions: Politics, Dunning-Kruger, and Procrastination
Answering reader questions from a recent chat thread
Recently I solicited questions from readers in a Substack chat thread. It was cool to see what topics readers wanted to hear from me on, so I figured I would package my answers up a little more nicely for a post. This post is a bit more eclectic and "off the cuff" than my usual posts.
Jack Sezer: What's your gut reaction to the current US political landscape?
I tend to avoid politics on this newsletter for two reasons: 1) It's not what I want to write about, and 2) I don't have any particular insight into the machinations of the political world. I'm not a political scientist or economist. While I follow the news (or rather, I don't, but it's impossible to avoid hearing about it if you want to interact with other humans in any form) and I have opinions, I don't have any insider information or expertise that any reasonably intelligent person reading the news wouldn't have.
If you want political opinions without expertise, those are easy to find: just open any social media site or have a conversation with basically anyone.
Regardless, I've been asked for my reaction, so I'll give it. To be upfront with my leanings: things are very not good.
I'm not going to go into some of the things I think are going poorly (e.g. tariffs and trade-wars are quite clearly bad—you can read about that anywhere), but there is one area I might have more expertise than the average person, and that's with science funding. Of all the things going on, it might not be the most important, but to me it's indicative of the overall governing approach. Recently there have been three developments at the National Institutes of Health (NIH), the agency that funds basically all of US life sciences and medical research, that people might not be aware of (note, things are happening so fast and I can't keep up, so these might be out of date by the time this is published):
There have been massive layoffs at the NIH and other government science institutions, hitting researchers studying everything from cancer to Alzheimer's. The cuts have targeted "probationary employees", those who are new to their position, which includes people who were recently promoted. Over a thousand employees were laid off. These don't seem to have been well-thought-out, careful cuts, but a slash-and-burn removal of anyone who could easily be let go.
Grant money is being held up. Grants are the lifeblood of science—if you don't have money, you can't pay scientific staff or buy equipment or reagents. If grants are held up, science is held up. There have been multiple attempts to hold up the money by the Trump administration. They initially attempted to simply freeze all grants, which was challenged and reversed by a federal judge. The administration has instead blocked the NIH from holding grant review meetings, again blocking grant money. Amid all the uncertainty, universities have had to cut back and in some cases rescind PhD admissions because they're not sure they will have the funds to train the students. The uncertainty around funding itself is causing damage.
The administration attempted to dramatically reduce indirect costs paid to universities through grants—basically the part of the grant that pays the university room and board for the labs they host (you can read more about what they are and why they're important here from esteemed scientist and one of my PhD committee members, Dick Aslin). This has been put on hold for now through a judge's order, but introduces additional uncertainty into the future of science funding.
The overall picture is pretty clear: The Trump administration is trying to dramatically reduce the US investment in science, and they're doing it fast instead of carefully. The impact won't be noticeable to the government's budget deficit, but could decimate US science.
With the various cuts to other agencies, it seems like a similar thing is happening to everything from foreign aid to housing and urban development. I'm all for increasing government efficiency, but you don't make a machine more efficient by haphazardly tearing parts out. I won't dwell on it, but it appears this general approach of dramatic, uncareful changes is being taken to other parts of governance as well.
Cool Librarian: Is the Dunning-Kruger effect real?
Pheew, okay, we're back to something a bit more fun.
The Dunning-Kruger effect is that those with limited expertise in a domain tend to overestimate their abilities (fitting that I'm talking about this right after politics—interpret that comment as you will). It's often trotted out as a way of calling someone clueless but seeming scientific about it.
There is a real, reproducible effect that people who do worse on various tests have larger overestimations of how well they did. Interpreting this finding is harder.
The standard story is about metacognition—people with lower knowledge in an area are also ignorant of how little they know. This accords with a lot of real life experience, like how you don't really know how hard writing is until you've tried it enough and become good enough to objectively appraise how shitty your own writing is (I speak very much from experience).
But that metacognition interpretation is a controversial and probably wrong interpretation of the data in the studies that purport to show it.
Most studies of the Dunning-Kruger effect look at performance on some kind of objective test. Have people perform an exam, then ask them how they think they did. You get the classic effect shown in this plot for IQ:
However, we know people are imperfect at rating their abilities. There's an average correlation of about r=0.3 (1.0 being a perfect correlation and 0 being no correlation) between how people rate themselves and how they perform across a wide range of tasks.
We also know that people are overly optimistic about their own abilities. Only 5% of people rate themselves as below average on IQ, for example.
These two facts alone can explain the classic Dunning-Kruger observation, without introducing an additional "those who are bad at a skill also lack the ability to rate their skill" component! We know that because that's how the plot above was generated—on simulated data with just those two factors. This is just a statistical artifact due to regression to the mean.
To make this a bit more intuitive, imagine subjective ratings of IQ are just random numbers, so there's absolutely no relationship between objective IQ and subjective IQ. Since people overestimate their average IQ, we generate the random numbers so the average subjective IQ is 125. Now take the people with the lowest objective IQ. They will have the biggest overestimate in their IQ, because they will, on average, say their IQ is 125, but their average objective IQ is the lowest in the bunch. Those that have the highest IQ will seem to have a small difference between their objective and subjective IQ because their average estimates are also 125, but they have an objective IQ closer to the optimistic 125. When comparing Objective IQ to a random number, if we include some "over-optimism" in the random number, we'll reproduce the Dunning-Kruger effect.
In the actual research, this is basically what's happening—the relationship between subjective and objective IQ isn't completely random, but it's pretty weak because people of all levels are just bad at estimating their abilities. This randomness pulls the data into the pattern Dunning and Kruger made a big deal about. It isn't a lack of metacognition of those at the low ability end causing the effect, but just over-optimism and crappy subjective judgments.
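To see how little it takes to produce the classic pattern, here's a small simulation (a sketch with made-up parameters, not the actual Dunning-Kruger data). We generate objective IQ scores, then self-ratings that correlate with them at only r = 0.3 and are inflated to average 125. That's it—no "the incompetent can't judge themselves" term anywhere—yet the bottom quartile ends up overestimating the most, purely by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Objective IQ: normally distributed, mean 100, sd 15.
objective = rng.normal(100, 15, n)

# Subjective self-rating: only weakly related to objective ability
# (r = 0.3) and inflated so the average self-rating is 125.
noise = rng.normal(0, 15, n)
subjective = 125 + 0.3 * (objective - 100) + np.sqrt(1 - 0.3**2) * noise

# Split people into quartiles by objective score and compare the
# average overestimate (subjective minus objective) in each quartile.
order = np.argsort(objective)
overestimates = []
for i, q in enumerate(np.array_split(order, 4)):
    over = (subjective[q] - objective[q]).mean()
    overestimates.append(over)
    print(f"Quartile {i + 1}: mean overestimate = {over:+.1f}")
```

Running this, the lowest quartile overestimates by roughly 38 points and the highest by only about 12, even though every quartile draws its self-ratings from the same inflated, mostly-random process.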
A host of researchers have pointed out this statistical issue with the original research on the Dunning-Kruger effect, but it hasn't been taken as seriously as it should have been (psychologists really need better statistical training). There are statistical tests that can take the issues above into account and look for a genuine Dunning-Kruger effect. The only attempts I've seen to use them have concluded that "The Dunning-Kruger effect is (mostly) a statistical artefact".
Regardless of where you come down on the existence of the metacognitive component, the difficulty of clearly proving it in the lab suggests to me the effect at the very least is probably not as pervasive as popular culture would make you think. But it's still true we're pretty bad at estimating how good we are at stuff, and those that are bad at something will be most likely to overestimate their abilities, so maybe the interpretation doesn't matter that much.
Priscilla Zorrilla: Why might people be holding themselves back knowingly?
This is a pretty wide-open question with a lot of possible directions, but it made me think about procrastination. We all do it at some point, and we're often putting off the very thing we care most about. What's the deal?
There's no single answer for all procrastination, but there's a general framework that's useful: it's an issue of emotional management and temporal motivation. You're asking yourself to do something boring, hard, uncertain, and/or anxiety-provoking, in order for some possible reward in the future. Instead you do something that is more certain and immediately rewarding, like daydreaming, scrolling your social feed, or eating delicious dill pickle corn puffs.
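The temporal-motivation side of this can be made concrete. Steel and König's temporal motivation theory roughly formalizes the pull of an activity as (expectancy × value) / (1 + impulsiveness × delay). The numbers below are made up purely for illustration, but they show why the sure, immediate thing wins in the moment:

```python
def motivation(expectancy, value, impulsiveness, delay):
    """Rough temporal-motivation-theory utility of acting on a task now.

    A reward's pull is scaled down by how uncertain it is (expectancy),
    how far away it is (delay), and how impulsive we are.
    """
    return (expectancy * value) / (1 + impulsiveness * delay)

# A big but uncertain reward weeks away (finishing the essay)...
essay = motivation(expectancy=0.5, value=100, impulsiveness=1.0, delay=30)

# ...versus a small, certain reward available right now (the feed).
feed = motivation(expectancy=1.0, value=5, impulsiveness=1.0, delay=0)

print(f"essay: {essay:.2f}, feed: {feed:.2f}")
```

Even though the essay is worth twenty times more to you, the discounting from uncertainty and delay makes the feed more attractive in the moment—which is also why shrinking the delay (small next steps) or the competing option's certainty (phone in another room) helps.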
Think about the things you procrastinate on: maybe it's a personal writing project you care deeply about, a boring work task, or studying for a test. The results of your efforts are often unclear—will you actually do better on the test? Will anyone care about your writing? Either way, the outcome won't happen for a while, so mustering the motivation to do it right now for some possible future reward is hard. Besides, writing might mean confronting that your writing isn't that good, and studying might make you realize you don't know the subject matter well.
We procrastinate to manage the negative feelings we have about the task at hand. Those negative feelings can arise from a lot of different things, like just the inherent unpleasantness of the task, but it can also come from self-doubt and anxiety.
I avoid getting into trite self-help stuff, but it feels weird to bring up the causes of procrastination and not at least gesture towards plausible ways to deal with it. Full disclaimer: I am human and procrastinate.
You're attempting to do a task that's hard or unpleasant now and comparing it to activities that would be fun right now. You can put some friction in front of the activities that are fun—for example, leaving your phone in another room or using an app to lock yourself out of social media. Now you've brought the attractiveness of the fun activities down. The other thing you can do is try to bring the attractiveness of the important task up.
The way to make an unattractive task more rewarding depends a lot on the context. For myself, a lot of my procrastination is due to the ambiguity of the steps in the task. Writing is self-directed, and it isn't always obvious what the next thing I have to do is—should I brainstorm more ideas, try to outline what I'm going to say next, or flesh out the connection between idea A and idea B? Sitting down to do a task without a clear next step is hard, because to even get started I am asking myself for an upfront tackling of a cognitively difficult task: figure out what needs to be done and do it, without being warmed up on what it was I was trying to do with the task. That's hard, so I avoid getting started.
I've found with this kind of procrastination, if I notice that this is what's going on, I can recognize that my real next task is simple: figure out the next step. Explicitly making that my first task gives me focus and reminds me that rereading what I've written so far is part of the work. I find once I'm in the groove, it's easier to keep going, so as long as I can get over that hump of not knowing what to do next, my urge to procrastinate recedes.
That's it for now. Have another question you want me to consider covering? Let me know in the comments!
Please hit the ❤️ “Like” button if you enjoy this post; it helps others find this article.
You can also check out the subscriber chat:
On the Dunning-Kruger Effect: have you seen "The Dunning-Kruger Effect is Autocorrelation" (https://economicsfromthetopdown.com/2022/04/08/the-dunning-kruger-effect-is-autocorrelation/)? Whatever the arguments, the process where one puts random signal through the statistical pipeline and recovers the D-K effect at the tail end of it seems pretty damning to my mind.
To be certain I'd have to run the scripts myself. The author seems to me the kind that does that already: does his own replication, and publishes open source, open data. I'm not motivated enough to put the effort into this. But given you were motivated enough to blog about it, maybe you will be less lazy than me, replicate, and report what you find.
Thanks for writing on Substack for all of us to read. :-)
Didn't expect to respond, butttt - Procrastination seems less apt to occur these days than it used to. I am 84 - no telling when I won't be here to 'do the deed', to 'engage'. I sense there is a stronger possibility that I may not be here tomorrow morning to respond to your (anyone's) commentary. Therefore, I am less apt to procrastinate. Love your commentary, Tommy! Makes me thoughtful.