This is an extract from the OPIP book. Previously, A(lice) and B(obby) discussed the importance of unlearning to make progress, simplicity and unification in physics, the role of assumptions and refutations, the challenge of identifying the right problems, and the need for entirely new approaches.
A: You mentioned before that we’ll probably need entirely new approaches. If so, then the new solutions will also be surprising, right?
B: Yes, it’s guaranteed they will be. There’s a simple reason for that: if they weren’t surprising, we would have already found them.
A: Why?
B: Look at it the other way: if something isn’t surprising and is in line with our intuition, it’s more or less obvious. With so many people thinking about these problems, we would have found the obvious solutions a long time ago. It’s like chess: the moves that only grandmasters play often surprise us; otherwise, we’d all be grandmasters.
A: So we should be open to surprising theories?
B: Yes, but it goes further than that. There are several levels of surprise. One type of surprise triggers a response such as “Oh. I didn’t expect that. Fair enough, let’s continue.” And then, there’s a different type of surprise, more of a “No, impossible. I don’t believe that. You’re nuts.” Whatever the big new theory in physics will be, it will have to trigger the latter reaction.
A: Why?
B: The logic is the same as before. If the solutions didn’t trigger strong emotional resistance, or sometimes even denial, they would be easier to find, and we would have found them already.
A: “Denial”?
B: Yes, denial is real. It’s not just a river in Egypt.
A: That sounds like it’s running quite deep.
B: Yes, the solutions we need will probably get under our skin and go against our “nature,” in the same way that previous discoveries did [as discussed elsewhere in the book]. It won’t be just solving mathematical equations in a comfy chair. For sure, it will go beyond merely combining surface-level factors into something new. That would be a bit too easy. In business, you see a lot of ideas that are just a mash-up of two existing ideas, or a slight tweak of an existing concept (e.g., “Tinder for dogs”). Those hardly ever do well. Just copying something won’t get you very far.
A: Oh, forgery can be very lucrative.[1] In business, too, the strategy of copying other companies can be very successful. Think of White Castle vs. McDonald’s, AltaVista vs. Google, MySpace vs. Facebook, and many others. Or maybe you’ve heard the anecdote about Charlie Chaplin anonymously entering a “Charlie Chaplin look-alike” contest and coming in third place.
B: Those are all nice examples of when copying can be successful, but every analogy breaks down at some point. In physics, great ideas are novel to a high degree, and they come with a new way of looking at things. Those paradigm shifts[2] are difficult not only because they are unfamiliar to us, but because they go against our deepest worldviews and, with that, also against our value systems. Those psychological challenges are the real obstacles to progress.
A: Okay, so there is a strong psychological force working against the new ideas.
B: Right. If you haven’t been making progress on the sea of wisdom for a while, maybe it’s not the fault of your boat’s engine, but that you’re going against a strong current.
A: There’s Bobby the wannabe poet again.
B: Metaphors can help to convey a point. Let me take some less poetic examples to elaborate on the point that the solutions we’re looking for have to be surprising. In Agatha Christie’s murder mystery play The Mousetrap, London’s longest-running West End show, the question is who the murderer is (a classic “whodunit”). The answer isn’t only surprising, but it’s surprising on a different level.
A: You’re not going to spoil it and tell me who the murderer is, right?
B: Of course not. Let’s take chess as another parallel. Sometimes, there are solutions that are more than just surprising. The best example I can think of is Alexei Shirov’s move in his game against Veselin Topalov at Linares in 1998. Give the critical position to someone who hasn’t seen it before and, assuming they are mere mortals, they won’t find the right move. Next, tell them that the solution is surprising. Most likely, they still won’t find it. So far, nothing unusual. My point is the following: if you then tell them, “Let me help you by going the other way. What move is most likely not the solution?” many would still not find it, even though it’s the obviously worst-looking move in the position. This is what I mean by surprising on a different level: the solution doesn’t just look bad; it never even gets on the table to be assessed.
A: You mean like a blind spot?
B: Yes. You can be as good at assessing solutions as you want, but you don’t have a chance to pick the right ones if they never get into your scope in the first place.
A: So if we want to be the “grandmasters of science,” we need to widen our scope?
B: Right. Sticking with the example of chess, it reminds me of Evgeny Bareev’s quote, “When you play Kasparov, the pieces start to move differently.” That could also point to blind spots. For instance, if you’re attacking a piece and your opponent makes a logical defensive move that also creates a threat, that threat is easy to overlook. Good chess players have fewer of those blind spots. The even better ones play on their opponents’ blind spots.
A: How can we make sure we don’t fall victim to blind spots in physics?
B: It already helps if we’re aware that we need to widen the scope. Moreover, maybe we can draw more specific conclusions from what we just discussed. In both the Mousetrap and the chess examples, the solution wasn’t only surprising; it was the one we would have thought of last. Or, in other words, it was the solution we ruled out first. That may serve as a guide again. As mentioned before, we should start wondering again about widely accepted truths that often go unquestioned. Those are very likely humanity’s blind spots.
A: So every time something seems obviously right, we should be aware that there’s a risk of a blind spot?
B: Yes, and the same can happen when things appear obviously wrong. I touched on one example earlier: some people argue in an obviously flawed and pseudoscientific way, coming to very strange conclusions (e.g., “everything is consciousness”). You look at their way of reasoning, and correctly conclude that it’s irrational. But then you make the mistake of assuming that whatever they conclude must be false.
A: And that conclusion could be wrong, as they may have hit the truth by coincidence?
B: Right. Imagine you lived in a time when almost everyone thought the Earth was flat. But there was this one guy who claimed that the Earth was round. He justified it by saying that he just liked round shapes, so the Earth must be round too. Plus, God told him so in a dream.
A: Conclusive evidence!
B: Right. Now, what effect does this have on the level-headed individual who argues rationally? There’s a high risk that they dismiss the possibility of a round Earth because of such flawed reasoning. This gets reinforced by society; as soon as you start arguing for a round Earth, people will immediately put you into the same category as the lunatic, even though that’s entirely unjustified.
A: That sounds like yet another case where social dynamics and psychology play a role in the progress of physics.
B: Yes, and there’s another psychological aspect to this. Let’s continue the story. Eventually, the Earth is proven to be round (for the right reasons). It goes without saying that the irrational person will feel completely vindicated (“I told you!”). We have a psychological issue with that. We’re working so hard to get to the truth, exercising so much self-control to stick to logic and reason. And then we get beaten by the irrational lunatic who, pardon my French, talks out of his butt?
A: I don’t think we got beaten.
B: Of course, we didn’t; their statements were worthless, or even worse than that, as I’ll explain in a second. But that’s how it will feel, as we must admit that they were right, albeit for entirely wrong reasons.
A: How is this related to blind spots?
B: The irrational person turned it into a blind spot. It would have been much easier for society to get to the truth without such flawed and distracting reasoning. Even though they were right in the end, their reasoning did society a huge disservice.
A: So you’re saying it’s not about the outcome, but the path to it.
B: Right. We should value sound, correct, and clean thinking in itself. It’s natural to have our eyes glued to the outcome, as that’s what counts at the end of the day. However, this approach carries the risk of assessing incorrect thinking as positive merely because it led to a positive outcome, and vice versa. If you take all your savings to a casino and bet them on number 4 at the roulette table, your decision is still utterly wrong, even if number 4 comes up.[3]
A: Really? It made me rich.
B: In this case it did, but with that type of thinking, you won’t stay rich for long. One-off coincidences can always happen, but it’s about honing our thinking skills, as only that will bring the big payoffs in the long run. Incorrect thinking is dangerous and potentially fatal.
A: I agree it’s not good, but “fatal”?
B: In some cases, it can be. For example, the Aztecs believed they needed to sacrifice humans to ensure the sun kept rising. One might say, “Well, it worked!” but that confuses correlation with causation, with fatal results. We don’t have to go through all types of thinking mistakes now. However, it’s crucial to remain reflective about our thinking; in the times we’re living in, it’s our main asset.
A: Let’s get back to our main thread: how blind spots could hamper progress in physics. Are there any other ways they can come about?
B: Another common scenario is when we believe we’ve already found the solution. After all, if we’ve found it, why keep looking? It reminds me of the saying “Good is the enemy of great,” which is often emphasized in motivational speeches and self-development literature.
A: Any examples in physics for that?
B: Let me make a bold statement: every good theory in physics could create blind spots. Take the geocentric model, Newton’s laws, or Bohr’s model of the atom: their effectiveness in explaining certain phenomena may also carry the risk that we stop looking for other theories that do it better. Let’s not settle for the good but keep pushing for the great.
The book continues by Alice proposing to avoid blind spots by conducting more experiments, leading Bobby to explore the question of whether future progress in fundamental physics is more likely to come from the theoretical or experimental side. To read it, get the OPIP book.
—
[1] For example, watch the documentary “Made You Look: A True Story About Fake Art” (2020) on Netflix.
[2] The term “paradigm shift” was introduced by Thomas Kuhn in his influential 1962 book “The Structure of Scientific Revolutions.” In it, Kuhn challenges the traditional view of scientific progress as a steady, cumulative acquisition of knowledge. Instead, he posits that science undergoes periodic paradigm shifts in which a prevailing scientific framework is replaced by a radically different one. The book’s release was a landmark event and triggered controversial debates across many disciplines.
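[3] A rough sketch of the arithmetic behind the roulette point, assuming a standard European wheel (37 pockets) and the usual 35-to-1 payout on a single-number bet: each unit staked returns, on average, (1/37) × 35 + (36/37) × (−1) = −1/37 ≈ −2.7%. The bet loses money in expectation, which is why the decision is wrong no matter which number happens to come up.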