Stop paying attention to academics.
Especially those who research how to do things in the real world.
I’d like to point you to two pieces that came out in the last week; one of them, Psychology, a Failed Discipline, is quoted at length below.
Both of them build on a point I’ve been making for a number of years: academics in the social sciences cannot be trusted. If you want to achieve real-world results, you should look elsewhere.
This is for four main reasons:
1. Academic research is unreliable, dominated by poor research practices and fraud.
2. Reproducibility studies have shown that roughly half of behavioral science findings fail to replicate, and even when studies do replicate, the effect sizes are typically about half of those originally reported.
3. Academics operate in a distorted environment that bears little resemblance to the real world, which limits their understanding of how real-world systems work.
4. The incentive environment encourages them to make unreliable, exaggerated claims.
This means that by reading and listening to academic behavioral scientists, you are more likely to worsen your understanding of the world than to improve it. I genuinely believe, given everything we know about the discipline, that academic behavioral science is anti-knowledge.
So what does this mean?
Two (main) things:
1. Stop reading books published by academic behavioral scientists.
2. Only listen to real-world practitioners:
   - product designers
   - marketers
   - entrepreneurs
   - salespeople
   - applied behavioral scientists (particularly those who work full-time at companies)
   - etc.
Some of my favorite quotes from Psychology, a Failed Discipline:
One of the most prominent paradigms in psychology, popularized by Daniel Kahneman’s seminal book “Thinking, Fast and Slow,” revolves around the concepts of biases, nudging, and priming. Kahneman, in collaboration with Amos Tversky, laid the groundwork for the field now known as behavioral economics.
This line of research studies how “irrational” people behave in various scenarios and suggests that subtle cues can be used to “nudge” or “prime” individuals toward making more “rational” decisions. It operates on the assumption that human behavior is predominantly governed by unconscious and irrational forces, a concept Kahneman refers to as System 1, which eludes our direct control.
Daniel Kahneman explains this view as follows:
When I describe priming studies to audiences, the reaction is often disbelief. This is not a surprise: System 2 believes that it is in charge and that it knows the reasons for its choices. Questions are probably cropping up in your mind as well: How is it possible for such trivial manipulations of the context to have such large effects? Do these experiments demonstrate that we are completely at the mercy of whatever primes the environment provides at any moment? Of course not. The effects of the primes are robust but not necessarily large. Among a hundred voters, only a few whose initial preferences were uncertain will vote differently about a school issue if their precinct is located in a school rather than in a church—but a few percent could tip an election.
The idea you should focus on, however, is that disbelief is not an option. The results are not made up, nor are they statistical flukes. You have no choice but to accept that the major conclusions of these studies are true.
Human intuition is therefore not flawed as suggested by conventional bias research. In real-world scenarios, where infinite samples don’t exist and sample sizes don’t exactly match the pattern length, people’s intuitions about probability patterns are accurate: HHTH occurs more often than HHHH. The “mathematically correct” answer is mostly wrong in practice. Instead of calling this result a “bias” or “irrational”, we should question the artificial nature of the experiment.
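This is easy to check for yourself. Below is a minimal Python simulation (my own sketch, not from the article; the sequence length of 10 flips and the trial count are arbitrary choices). Because HHHH overlaps with itself and therefore clusters into fewer sequences, HHTH appears at least once in noticeably more finite sequences, even though the expected number of occurrences of each pattern is identical:

```python
import random

def hit_rate(pattern: str, n_flips: int, trials: int = 100_000) -> float:
    """Estimate the probability that `pattern` appears at least once
    in a sequence of `n_flips` fair coin flips."""
    hits = sum(
        pattern in "".join(random.choice("HT") for _ in range(n_flips))
        for _ in range(trials)
    )
    return hits / trials

for pattern in ("HHTH", "HHHH"):
    print(pattern, hit_rate(pattern, n_flips=10))
```

With 10 flips, the estimate lands near 0.38 for HHTH versus roughly 0.25 for HHHH — which is exactly the intuition the laboratory setup punishes as a “bias.”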
Contrary to Scott Alexander, I hold that cognitive biases can only be “demonstrated” in a laboratory setting, that they offer little insight into the actual workings of our brains and have little to no explanatory value, and that being unprepared for these “biases” does not lead to negative consequences in real life.
This broad pattern of calling people’s actions and choices “biased” when they do not agree with some famous mathematical theory explains much of what Gigerenzer calls “The Bias Bias.” The underlying assumption of such labeling is that deviations from theoretical models indicate “irrationality,” potentially leading to adverse outcomes. This assumption forms the basis for the argument in favor of “nudging” individuals towards certain behaviors.
The preference for global as opposed to indexical knowledge in the social sciences can be understood as part of the famous physics envy. The creation of universal knowledge that holds under essentially all conditions is something physicists have been celebrated and admired for. Unfortunately, psychology is not well positioned to create this kind of knowledge.
By trying to remove indexicality to create global knowledge, researchers created the flawed “bias” field we discussed above: Choices that are “rational” when considered in context become “irrational” when researchers try to remove that context.
He outlines three primary practices employed by scientists prior to the advent of statistics: (1) They focused on identifying large-scale effects, as their methods were not refined enough to detect subtle ones; (2) They concentrated on exploring phenomena that were visibly apparent and undeniably occurring, yet lacked clear explanations; (3) They formulated bold, testable hypotheses about the functioning of the world.
The introduction of modern statistical techniques changed their approach because it allowed them to study tiny effects that may not actually exist. Researchers can then search for all the scenarios where an effect appears or disappears, creating “a perpetual motion machine powered entirely by an inexhaustible supply of p-values.”
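To make that concrete, here is a toy simulation (my own sketch, not from the article; the number of subgroups, the sample sizes, and the seed are arbitrary). It runs twenty comparisons on pure noise, with no real effect anywhere, and counts how many clear the conventional p < 0.05 bar; on average about one of them will:

```python
import random
import statistics

def two_sample_p(a: list, b: list) -> float:
    """Two-sided p-value for a difference in means, using a normal
    approximation (adequate for the large samples used here)."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - statistics.NormalDist().cdf(abs(z)))

random.seed(1)
N_SUBGROUPS = 20  # hypothetical "scenarios" a researcher might test
significant = 0
for _ in range(N_SUBGROUPS):
    control = [random.gauss(0, 1) for _ in range(100)]
    treated = [random.gauss(0, 1) for _ in range(100)]  # same distribution: no true effect
    if two_sample_p(control, treated) < 0.05:
        significant += 1

print(f"{significant} of {N_SUBGROUPS} null comparisons reached p < 0.05")
```

A researcher who only reports the comparisons that “worked” never runs out of fuel for the machine.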
I don’t think that academic psychology can be salvaged. Some of this critique has been around, and ignored, for at least 70 years. Even in cases where an effect has been widely discredited, it continues to be cited. For example, Kahneman mentions in his new book Noise that “If judges are hungry, they are tougher,” referencing the famous study by Danziger et al. that purportedly showed how susceptible judges are to hunger. The effect is fake: Danziger and his colleagues simply overlooked the non-random scheduling of parole hearings.
However, everyone should have rejected the study without any additional analysis by simply looking at the effect size: