Several articles on this blog discuss different types of rules—whether they’re natural laws, government regulations, or personal decision-making guidelines. The underlying principles behind these rules often share common characteristics. Rather than repeating them across multiple articles, this post brings them together in one place, serving as a central reference point that reduces repetition, enhances clarity, and adds depth.
To keep things structured, the discussion will first focus on rules that guide decision-making and behavior. Later, this post will explore how natural laws fit into the bigger picture.
The first key characteristic of rules is that they always aim to fulfill certain purposes and are never ends in themselves. It can be easy to forget this, in which case rules may take on a “life of their own,” detached from their original intent. For example, as children, we are often taught “Don’t talk to strangers” because, at that age, our lack of judgment makes it difficult to discern who might pose a risk. Yet individuals who internalize the rule too rigidly may carry it into adulthood, where the original reasoning no longer applies, potentially missing out on meaningful and beneficial relationships.[1] Or consider that some people still wait an hour after eating before going for a swim, even though modern research shows that moderate swimming after eating is safe.[2] But not all cases are that benign. For example, the rule “Don’t lie” is based on the experience that lying can harm both one’s own and society’s well-being. However, if we forget this original reason and Nazis ask us whether we’re hiding Jews in the attic, we may stick to the rule even though doing so clearly defeats its purpose of protecting well-being (more about this in When is Lying Justified?).
Therefore, it’s important to understand deeply why we forget. Apart from being human and therefore forgetful, there are several distinct reasons in the case of rules. The first is that rules are meant to let us forget—not the rules themselves, but the complexities they are trying to resolve. Rules basically say, “Don’t rack your brain over those intricate complexities; here’s what you should do.” They resolve confusion, provide clear instructions, and convey a sense of certainty. In a way, the value of rules comes precisely from forgetting—or never having to know in the first place—the underlying purposes or dynamics.
Incidentally, the fact that rules bring certainty is relevant not just on a rational level, but also on a deeply emotional one. People have a strong aversion to uncertainty, often staying in unfulfilling jobs or unhappy relationships simply to avoid the uncertainty of change. This preference for certainty extends to fields often assumed to be rational. For instance, in stock market history, looming dangers—like the threat of war—have often weighed heavily on the market, driving prices down. Yet paradoxically, when the feared event occurred, such as the outbreak of war, prices often surged. The sentiment was: “The bad thing happened, but at least now we have certainty.” This deep desire for clarity also drives the preference for clear-cut rules—sometimes outweighing concerns about whether they are actually good.[3]
Another reason we often fail to recognize that rules serve a purpose becomes clear when we examine what we really mean by “forgetting.” Many rules established by our ancestors have been passed down through generations. For later generations, this doesn’t necessarily mean they “forgot” the original reason—it may just be that they were never told. People often say “humanity has forgotten” something, but we should be cautious with such abstractions. After all, it is always individuals—not abstract entities like “humanity” or collective groups like countries—who act or forget. So it’s more accurate to say that those who follow a rule often don’t know its purpose—either because they once knew and forgot, or because they never learned it.[4]
The third major force that keeps us unaware of the reasons behind rules is individuals or groups with a vested interest in keeping us in the dark. History is full of laws—arguably most of them—created by select groups or individuals aiming to consolidate power, maximize personal gains, and keep people in line. Naturally, these groups have a strong incentive to obscure both the origins and the true purpose of these laws, since revealing them would expose how they serve the rule-makers’ interests. One common way to obscure a law’s origin is to attribute it to an unquestionable authority (“I didn’t make those rules. God did”). We should also be skeptical when the purpose of a rule is unclear. As a rule of thumb, if you cannot find a clear, well-intentioned purpose—or worse, if questioning it is discouraged—then you can safely assume that you are the purpose.
The above implies an even more basic truth: purposes and rules are two separate things. It’s important to state this explicitly because they are often conflated, whether consciously or subconsciously. This can happen when we call something a “good” rule, without clarifying whether we mean its intended purpose, its effectiveness in achieving that purpose, or both.
For example, some might consider a classroom rule such as “students must submit at least one question or comment per class” to be “good” because its purpose is to foster engagement. However, whether it actually achieves this goal is another question entirely. Students might ask a superficial question just to meet the requirement, then disengage—which might even lower overall engagement. An opposite example—where the rule achieves its purpose, but its “goodness” is questionable—is “any public criticism of the leadership results in imprisonment,” aimed at suppressing dissent.
How can we prevent this conflation? One approach might be to use “good” and “bad” only for purposes, as they reflect underlying values, while reserving “effective” and “ineffective” to describe how well rules achieve those purposes. However, ironically, this rule of nomenclature would likely be ineffective in everyday language. People often refer to a rule as “good” when they mean it should be implemented (or maintained), implying both its purpose and effectiveness. To complicate things further, one could argue that a rule is “good” simply because it effectively achieves its intended goal—even if that goal is bad. After all, a rule’s function is to fulfill its intended purpose. If we give someone a task and they complete it perfectly, it would be strange to say they did a bad job.
The simplest solution to this dilemma may be to use more elaborate language whenever there is a risk of confusion. In any case, the main point is that purposes and the rules designed to achieve them are two separate things, and conflating them will inevitably be counterproductive. Therefore, before discussing any rules or measures, we should first be clear on what they aim to achieve—which sounds obvious, but is often overlooked in practice. As George Harrison used to sing, “If you don’t know where you’re going, any road will take you there.”
Another crucial reason to keep this distinction in mind is that deciding what our goals should be is fundamentally different from determining the best rules to achieve them. The first is a matter of values, ethics, and emotions, while the latter is—or at least should be—a purely factual, logical, and scientific question. Blurring this distinction almost always undermines rational thought. For example, this often happens in debates about criminal justice, where ethical questions (e.g., “People who commit serious crimes should face harsh punishment!”) are conflated with empirical ones (e.g., “Do harsher sentences actually reduce crime rates?”). A rational approach would first clarify the ethical goal (e.g., maximizing well-being) and then use data and logic to determine the most effective way to achieve it.
However, there’s a caveat: the paradigm of clarifying the purpose before defining any rules is—you guessed it—a rule itself. And like any rule, it has exceptions. One notable exception occurs when practical constraints require quick agreements on rules between parties. In these cases, there may be no time to fully evaluate or understand the other party’s true purpose—the immediate priority is simply reaching an agreement. Nevertheless, this doesn’t diminish the principle’s validity—it remains the ideal we should strive for whenever possible.
The statement “every rule has exceptions” hints at another core characteristic of rules: no rule is flawless. That’s because rules are essentially simplifications. Instead of determining what to do in every individual case, a rule provides a general—well, rule—for action, grouping similar but inherently unique cases into a single category. While this can be very useful, it inevitably involves some degree of inaccuracy.
This holds true for all rules.[5] No matter how important we consider a rule, we can always find cases where breaking it would be justified. We shouldn’t steal, but the situation changes if taking food from a despot who caused a famine is the only way to survive. Torture must be forbidden, yet most people would agree that twisting the terrorist’s arm to reveal the location of a ticking bomb is justified. Even the rule “don’t kill” isn’t absolute when it comes to self-defense. These exceptions demonstrate that while rules are crucial, they are not universally true.
In this context, a quick word on the saying “Exceptions prove the rule,” which directly contradicts what was just stated. The meaning of this phrase is debated. One interpretation is that an exception implies the existence of a rule (e.g., “No charge on Sundays” suggests that charges apply on other days). Another explanation traces back to an old legal principle, where “prove” originally meant “test” rather than “confirm as true.” However, this archaic usage has largely faded. In the context relevant here, exceptions falsify a rule—or at least its ambition to be fundamentally true.
The limitations of rules become especially clear when they are followed “until we know better” but discarded as expertise grows. Chess provides a good example: beginners are taught that pieces have fixed values—a pawn is worth 1, a bishop 3—so trading three pawns for a bishop is considered fair. While this rule is useful for novices, it doesn’t reflect reality. In truth, a piece’s value depends entirely on its impact within a specific position. In some cases, sacrificing almost an entire army is the best strategy if it allows a lone pawn to deliver checkmate. The stronger the player, the more this is understood; rigid rules evolve into general guidelines—first firm, then increasingly flexible. Eventually, they are abandoned altogether in favor of case-by-case evaluation. Rules are often just crutches that help us take our first steps.
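To make the chess example concrete, here is a minimal Python sketch of the beginner’s rule—judging a position purely by adding up fixed piece values. The piece letters and function names are illustrative conventions, not an engine’s actual evaluation:

```python
# Beginner's heuristic: every piece has a fixed value, independent of position.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # pawn, knight, bishop, rook, queen

def material_balance(white_pieces: list[str], black_pieces: list[str]) -> int:
    """Positive means White is ahead on material; negative means Black is."""
    return (sum(PIECE_VALUES.get(p, 0) for p in white_pieces)
            - sum(PIECE_VALUES.get(p, 0) for p in black_pieces))

# Under this rule, a bishop for three pawns is a dead-even trade (balance 0)...
print(material_balance(["B"], ["P", "P", "P"]))  # 0
# ...even though in a real position, one side might be clearly winning.
```

The crutch is visible in the code itself: the function never looks at where the pieces actually stand—which is exactly the information a strong player refuses to ignore.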
Another factor highlighting the limitations of rules is that they don’t operate in a vacuum but within a network of many other rules—making conflicts inevitable, often sooner rather than later. Moreover, rules can also come into conflict with themselves. For example, let’s imagine we strongly believe in the rule against stealing. What if we hear about someone planning a bank heist, and the only way to prevent it is by stealing the keys to the getaway car? A narrow interpretation of the rule would prohibit taking the keys. However, by refraining from this minor theft, we enable a much greater one—clearly contradicting the rule’s underlying intent. The purpose of the rule is to reduce overall theft, but following it rigidly in this case produces the opposite outcome. Similar self-conflicts can likely be found in most rules.
The fact that all rules have flaws can be difficult to accept psychologically, as it conflicts again with our pursuit of certainty. We want to see things as either good or bad and have clear guidance on what to do. Keeping a constant “yes, but…” in mind only adds complexity, making us feel unsure and drained. That’s why, at times, it can make sense to forget that rules aren’t perfect. This idea is, fittingly, self-consistent: “Don’t forget that rules aren’t perfect” is itself a rule, and as no rule is perfect, it too must have exceptions—meaning we should occasionally be allowed to forget, at least temporarily, that rules aren’t perfect.
There’s another reason why a rule cannot be called good or bad—it always depends on the circumstances in which it is applied. Some cases are obvious (e.g., a residential speed limit of 25 miles per hour wouldn’t make much sense on a highway), but others not so much. For instance, let’s take business advice. The reason there isn’t a book titled How to be Successful in Business that truly lives up to its name isn’t that no one was smart enough to write it—it’s that such a book is impossible in principle. Advice that works in one context can be completely detrimental in another. “Don’t start a business on your own” might be good advice in some cases, but if it leads someone to pick an incompatible co-founder, it could cause bigger problems down the road. “Take on outside capital” could be beneficial, but it might also introduce a sense of urgency—investors want their money back, and a lot of it—which can prevent an idea from having the time it needs to mature. Even seemingly harmless advice like “Don’t be passive, have a bias toward action” can backfire. It’s like the cartoon where a man stands on a rooftop, looking deeply depressed and considering jumping—while just across from him, a massive Nike billboard reads: “Just Do It!” There are no general rules that always work. As the masters of rules—lawyers—like to say: “It always depends.”
Circumstances can also vary depending on the decision-maker’s level of skill, as seen in the chess example above. For babies and toddlers, the only sensible rule is the simple and straightforward “Don’t touch the stove, it’s hot!” Once they reach early childhood, they may understand the more advanced “Don’t touch the stove unless a grown-up says it’s okay.” For adults, a more appropriate rule would be “Use the stove safely by checking if it’s hot, using proper tools, and staying mindful of what you’re doing.” Telling toddlers this rule would be pointless—they simply aren’t yet capable of processing it.
Incidentally, having advanced skills—allowing us to apply advanced rules—isn’t a fixed state of “either you have it, or you don’t.” Circumstances can make it impossible to apply such skills, even if they exist under normal conditions. For example, time pressure can prevent a refined assessment, causing us to fall back on simpler rules. Even chess grandmasters may resort to the basic rule that three pawns have about the same value as a bishop when decisions must be made quickly. In other words, simplistic rules can serve a purpose as fallback rules. This creates a pyramid or hierarchy of rules—with the most sophisticated and accurate at the top—where we move back down and switch to simpler rules as circumstances require.
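This pyramid of fallback rules can be sketched in code. The following minimal Python example—with invented time thresholds and placeholder rule functions—tries the most sophisticated rule first and walks down to simpler ones as the time budget shrinks:

```python
from typing import Any, Callable

# Placeholder rules, ordered from most accurate (and expensive) to simplest.
def refined_evaluation(position: Any) -> str:
    return "case-by-case positional judgment"             # top of the pyramid

def general_guideline(position: Any) -> str:
    return "material count with positional adjustments"   # middle layer

def basic_rule(position: Any) -> str:
    return "fixed piece values (pawn = 1, bishop = 3)"    # instant fallback

# (minimum seconds required, rule) — thresholds are invented for illustration.
RULE_HIERARCHY: list[tuple[float, Callable[[Any], str]]] = [
    (30.0, refined_evaluation),
    (5.0, general_guideline),
    (0.0, basic_rule),
]

def decide(position: Any, seconds_left: float) -> str:
    """Walk down the pyramid until a rule fits the available time."""
    for min_time, rule in RULE_HIERARCHY:
        if seconds_left >= min_time:
            return rule(position)
    return basic_rule(position)

print(decide(None, seconds_left=120))  # refined, case-by-case judgment
print(decide(None, seconds_left=2))    # falls back to fixed piece values
```

The structure, not the particular thresholds, is the point: the simple rule is never deleted—it stays at the bottom of the hierarchy as the option of last resort.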
There’s another interesting implication from the above. Since different people have different assessment skills, should different rules apply to different people? Rules inevitably restrict decision-making—that’s their main purpose—so wouldn’t this mean treating people unequally with respect to core values like freedom? In principle, the answer is yes. However, two important points must be made. First, this discussion is purely theoretical and does not account for real-world implications, where factors like equality come into play (more on the distinction between theory and practice in The Truth About Truth). Second, a key reason why treating people differently before the law is problematic is that it often implies discrimination based on who they are. But that’s not the case here. The distinction isn’t about identity but about individual circumstances. There is no categorization into groups; quite the opposite: each situation must be evaluated on its own merits. This is a subtle but critical distinction—one we must never forget.
There’s another potential reason why we may apply different rules to different people: when decisions have a different scale of impact. For example, hate speech is—or at least should be—under much stronger scrutiny for those with a large following. Executives face stricter oversight regarding insider trading, members of the military and police are held to higher standards regarding the use of force, and larger companies are often subject to stricter environmental regulations than smaller businesses. The reason for this is twofold: first, the direct capability to cause greater harm, and second, the indirect influence on others by serving—or failing—as a role model.
Circumstances can also vary based on how people assess a rule and its implications. For example, opponents of abortion may argue that it erodes the value of life, potentially paving the way for other forms of taking life—a slippery slope. A common rebuttal is that ending the life of a fetus should have no bearing on how we value other forms of life and must be considered entirely separate. However, those who insist that society will fail to make this distinction are often the same people who struggle to draw the line themselves. If most of society shared this belief, restricting abortion might indeed be the better option. This could be described as a self-justifying opinion—one that is true only insofar as people believe it to be.[6] As societies develop a greater capacity for nuanced reasoning and clearer distinctions, the value of certain rules changes accordingly. Just as upgrading a computer’s hardware allows it to run more advanced software—which is, after all, just a set of rules—advancing societal reasoning enables more sophisticated ethical and legal frameworks.
Such dynamics highlight that the law can never be rigid; it must evolve continuously. Rules that were once well-intentioned and effective may lose their value as circumstances change (Goethe: “Reason turns to nonsense, kindness to torment.”).[7] Challenging established rules can be difficult, especially when they have brought significant benefits. Yet, no matter how much we owe to them, we must not freeze in awe but remain adaptable, ready to make necessary adjustments. Interestingly, some people passionately advocate for the immutability of the constitution and its amendments, seemingly overlooking the irony that those are amendments—a point humorously highlighted by Jim Jefferies.
A key reason for the reluctance to change rules is, once again, our aversion to uncertainty. The overwhelming majority of people prefer to keep existing rules—in other words, to follow rather than lead. Following has its advantages: it keeps us aligned with those around us, shields us from responsibility when things go wrong, and, above all, spares us from having to think. Following happens on multiple levels. We follow those we respect (authorities). We follow the masses: when many people say the same thing, we take their consensus as proof—without realizing that they may all just be parroting one another. We also follow the dead (traditions). But above all, we love to follow the person we adore the most: ourselves. If we’ve done something a certain way before and the consequences weren’t too negative (it didn’t kill us), we tend to stick with it.
This is all understandable—stability provides a sense of security, one of our most fundamental needs (see Maslow’s Hierarchy of Needs). However, it becomes problematic when change is necessary. There’s the saying, “Change requires change,” and it holds true in several ways. First, altering the status quo often demands new solutions and approaches. As Einstein reportedly said, “The definition of insanity is doing the same thing over and over and expecting different results.” Finding new solutions, however, requires thinking outside the (imaginary) box of rules. This can be emotionally challenging, as breaking rules feels inherently disruptive and is often perceived as a threat to stability. It can also be intellectually difficult, since new solutions are often surprising and contradict rules we’ve come to accept as true. Moreover, arriving at these solutions may require applying seemingly contradictory rules simultaneously[8]—a mental leap that can be difficult to reconcile.
Second, “change requires change” also applies when we’re content with the status quo, but external circumstances shift, forcing us to adapt. To quote Einstein again, legend has it that when asked why he gave his students the same exam as the previous year, he reportedly replied, “The questions are the same, that’s true, but the answers have changed!” Things are always evolving, and instead of clinging to the illusion that change can be stopped, we should accept and even embrace it—channeling our energy into guiding it in a good direction. It’s like a stream of water that we cannot stop, but we can direct its course.
It’s implied by the above, but worth stating explicitly: the term “rules” often sounds like something imposed from the outside—likely because many of our most memorable experiences of restriction stem from external enforcement. However, at the end of the day, our internal rules—the ones we set for ourselves—are what truly matter. All external rules are meaningless if there isn’t an internal rule telling us to follow them. Hence, change always starts with the (wo)man in the mirror, and mastering new challenges requires us to redefine ourselves—or at least adjust our perspectives—from time to time. As a German song title puts it, “Only Those Who Change Remain True to Themselves.”[9],[10]
How can we ensure that people stick to the rules? Punishment for non-compliance is an obvious answer, but it’s just one tool in the toolbox. Before resorting to it, we should first consider—and prioritize—other approaches, as punishment comes with significant drawbacks (as elaborated later).
The rules we should apply to make people stick to the rules are rules too, so the same criteria mentioned earlier apply as well. The first key point was understanding the purpose of rules, which is relevant here too: people are less motivated to follow rules when they don’t understand why they exist. For example, if we believe the rule “Don’t litter” only exists to keep the surroundings tidy or spare others the trouble of cleaning up, it’s easy to dismiss. However, it becomes much harder to ignore once we see images of sea turtles mistaking plastic bags for jellyfish, leading to suffocation, internal injuries, or starvation. There are countless similar examples of how littering has devastating effects on wildlife.[11] For most people, punishment wouldn’t be necessary to follow the rule against littering—proper education would be enough.
A related but distinct concept is deeply understanding the consequences of a rule, which often requires an emotional grasp of its impact. Most people already know that wearing seat belts improves safety and that texting while driving is dangerous. However, many only begin following these rules consistently once they internalize them emotionally—perhaps after hearing about a case in their social circle or watching a documentary that vividly depicts lives devastated by such behavior. Developing an emotional understanding can be especially difficult when a rule is meant to protect others—particularly those different from us—as explored in The History, Present, and Future of Happiness (p. 24).
Even more fundamental than that, rules must be clear to those expected to follow them. This doesn’t mean they have to be simple—many laws are necessarily complex to account for the intricacies of reality. However, striving for simplicity is generally a good approach, as long as it doesn’t lead to oversimplification (Einstein: “Everything should be made as simple as possible, but not simpler.”). More importantly, rules should be understandable and free of contradictions. A rulebook filled with conflicting provisions—allowing people to cherry-pick what suits them—may superficially please everyone but is ultimately flawed.[12] As a general rule, if a rule requires too much “interpretation,” that may indicate a failure of clarity.
At first glance, some rules may not seem to meet this criterion of clarity—especially when they don’t define every detail explicitly. For example, the EU data privacy laws (GDPR) require companies to implement “reasonable” measures to protect user data. But what exactly is reasonable? This ambiguity is intentional because it depends on context. A large corporation handling vast amounts of user data—with a bigger security budget—is expected to implement far more advanced protections than a small startup. Additionally, as technology evolves, security standards must adapt—what’s considered adequate today may be obsolete in just a few years. So, what’s the conclusion? Is this another case of “every rule has exceptions,” sacrificing clarity for the sake of flexibility? Not quite. While there is some trade-off, it’s much smaller than it might seem at first. The key is that rules don’t exist in isolation—they are supported by secondary rules, including regulations, guidelines, legal precedents, and enforcement actions, all of which clarify compliance in practice. Ultimately, whether a rule meets the criterion of clarity should be assessed within the entire legal framework, not in isolation. The real test is whether those who must comply understand what they are supposed to do—if they don’t, the framework has failed, however precise any single provision may look.
It’s also important that people understand their own benefits from following the rules. This isn’t just about the broader idea that “improving society benefits you too”—while true, that reasoning can feel abstract, long-term, and be subject to the prisoner’s dilemma[13]. Often, there are immediate personal benefits that go overlooked, simply because the primary justification for a rule appears to serve others. For example, many people assume that walking on the right-hand side (there are always exceptions—thanks, Brits!) in high-traffic pedestrian areas is just about not obstructing others. However, it also protects the individual. There have been numerous cases of injuries—and even fatalities—involving people who didn’t stay in the correct lane.
A well-informed public is crucial not only for understanding specific rules but also for trusting the system that creates and enforces them. Since no one has the time or resources to grasp the purpose of every rule, there must be a baseline level of trust in the overall system and its institutions. When rules are perceived as serving the interests of a select few rather than the public good, compliance tends to wane. From this perspective, it’s alarming that recent polls suggest that Americans’ confidence in their judicial system has dropped to a record low[14], signaling an erosion of respect for the law as a whole. This decline is often fueled by a lack of education—Americans’ understanding of how their government functions is deteriorating—as well as the deliberate spread of misinformation aimed at undermining trust. These issues will be explored in more detail in Why is Truth Having a Hard Time?
It’s also important that rules are perceived as fair. While all the factors discussed above contribute to this perception, they may not be enough. A key factor is that rules must apply—and be enforced—consistently to prevent favoritism. Laws must not be “like cobwebs, which may catch small flies, but let wasps and hornets break through” (Gulliver’s Travels).[15] Additionally, punishments should be proportionate to the offense, as excessive penalties can make rules seem unfair. For example, a small parking fine is likely seen as reasonable, but a $10,000 fine would be excessive and unjust (more on that below). However, ensuring universal agreement on fairness is challenging because fairness is highly subjective. Is progressive taxation fair because higher earners can afford to contribute more? Or is it unfair because it violates equal treatment under the law by imposing higher rates on some? Similarly, is it fair to charge a 100-pound traveler extra for 50 pounds of baggage while allowing a 300-pound passenger with only a purse to pay nothing extra? Yet, wouldn’t the opposite—charging based on total weight—also be seen as discriminatory? These dilemmas highlight that fairness is often a matter of perspective. Hence, the challenge isn’t necessarily creating an objectively fair rule but rather maximizing the perception of fairness across all affected individuals.
Another factor that encourages people to follow rules is seeing others do the same. As social animals, humans have a strong tendency to conform to group behavior, reinforcing our instincts to follow others, as mentioned earlier. Highlighting high compliance rates—such as “90% of people pay their taxes on time,” “your neighbors use less electricity,” or “most guests reuse their towels”—has been shown to be more effective than simply stating the rules. However, there’s a flip side. Studies have shown that witnessing rule-breaking behavior—such as graffiti—makes people more likely to break other rules, like littering. A sign that read, “Many visitors have removed petrified wood from the park, changing the natural state of the forest” was meant as a warning but backfired, increasing theft by unintentionally suggesting that stealing was common—leading visitors to believe it was acceptable. Because of this power of social proof, role models—or rather, rule models—play a crucial role in maintaining order. Public disregard of the law by influencers and other leaders can destabilize society, especially when there are no legal or social mechanisms to hold them accountable. This will be explored further in How to Save the Truth.
As mentioned, people also like to follow themselves, which can be useful when they deviate from a rule they once followed. Reminding them of their “better self” feels much more natural than asking them to adopt an entirely new behavior. “I know you—you care about this” can be much more effective than “You need to become responsible.” Straying from previous good behavior can create internal discomfort (cognitive dissonance). Encouraging someone to return to their former self helps resolve this tension in a way that feels like “coming home” rather than a forced transformation. This makes the change feel self-consistent and familiar. Additionally, past behaviors are often tied to positive emotions and memories, making it far more appealing to return to them than to adopt something entirely new.
This type of identity-based motivation applies in many other ways as well. We construct self-narratives—often shaped by media that reinforce specific identities—assuming roles that make us feel obligated to follow the rules we believe those roles imply, even when they contradict morally “good” rules. For example, someone who has committed a few petty crimes may feel the need to rationalize their behavior and arrive at the dangerous conclusion: “I’m just the bad guy.” This mindset can open the floodgates to further lawbreaking. That’s why labeling people as criminals can be risky—it may reinforce criminal behavior rather than discourage it.[16] While these narratives are often misguided and ultimately harmful, they can feel deeply compelling. However, this same psychological mechanism can be used positively by encouraging people to identify with roles that promote behavior beneficial to both themselves and society—such as being a “good citizen,” “responsible parent,” or “environmentally conscious person.”
A sense of identification with a rule can also arise from participation in its creation. People are more likely to accept rules when they feel they’ve had a say in them. This participation can take various forms—direct involvement in drafting the rule, voting on its adoption (a key argument for democracy), or even having the option to appeal once the rule is in effect, as this provides a sense of influence. The crucial factor is that individuals see themselves as part of the process, preserving their sense of autonomy. Whether their involvement is substantive or symbolic matters less than their perception of agency. For example, framing compliance as a choice (e.g., “Would you prefer to wear a mask or maintain distance?”) can reduce resistance by reinforcing a feeling of control rather than coercion.
The sense of ownership over rules is especially strong when individuals feel they’ve discovered them on their own. For example, rather than telling a child to put away their toys, a parent might let natural consequences unfold. If the child leaves toys scattered on the floor, they may later struggle to find a favorite one or step on a sharp piece. After a few such experiences, the child realizes that tidying up makes playtime more enjoyable. Without ever being explicitly told to clean up, they adopt the habit because they personally experience its benefit.
People also tend to follow rules more strictly when they feel observed—not necessarily out of fear of punishment, but simply because they know their behavior won’t go unnoticed. Well-lit streets, visible security cameras, and transparent policing have all been shown to increase compliance without stricter laws. This effect extends beyond conscious reasoning. Research suggests that even subtle cues—such as images of eyes—can foster honesty, prosocial behavior, and rule adherence. For example, posters featuring eye images have been successfully used in anti-littering campaigns, workplace honesty initiatives, and libraries to encourage silence.
Another way to increase the chances of people adhering to rules is to make compliance effortless. Examples include pre-filled tax returns based on existing financial records, contactless payment systems that automate fare collection, and modern cars with speed limit recognition that automatically adjust cruise control. Both policymakers and rule enforcers should ask: What would make compliance a no-brainer? Or—better yet—how could it be designed so that people would have to take extra steps to avoid complying? One example is opt-out organ donation: by making participation the default, donor rates dramatically increase.
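Here is a minimal sketch of the opt-out principle in code, using a hypothetical registration function (the names and defaults are invented for illustration): compliance is the default, and avoiding it requires an explicit extra step.

```python
def register_citizen(name: str, opt_out_of_donation: bool = False) -> dict:
    """Create a registration record in which organ donation is the default.

    Opting out requires actively passing opt_out_of_donation=True—an extra
    step most people never take, which is why opt-out defaults raise
    participation rates.
    """
    return {"name": name, "organ_donor": not opt_out_of_donation}

print(register_citizen("Alice"))                          # {'name': 'Alice', 'organ_donor': True}
print(register_citizen("Bob", opt_out_of_donation=True))  # {'name': 'Bob', 'organ_donor': False}
```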
Compliance also becomes easier when the steps to follow a rule are clear and actionable. Rules are inherently somewhat abstract because they represent concepts rather than physical objects, so they often need to be translated into specific actions—something that may seem obvious in some cases but not always. For example, the U.S. Occupational Safety and Health Act of 1970 states that “Employers must provide a workplace free from recognized hazards that are causing or are likely to cause death or serious physical harm to employees.” While this sounds straightforward, it doesn’t specify exactly how employers should comply. To clarify compliance, OSHA—the agency the act created—later developed specific regulations, such as requiring hazardous chemicals to be labeled and mandating that proper safety gear be provided, making it easier for businesses to follow the law effectively. Similarly, when regulations are large and complex, breaking them down into digestible, step-by-step parts improves compliance. Therefore, whenever possible, rules should not only set goals but also provide clear guidance on how to achieve them.
As the willingness to comply can change over time, pre-commitment becomes a valuable strategy for staying on track. By planning ahead, people can reduce the burden of last-minute decisions, which often lead to rule-breaking. Examples include signing up for a gym membership or personal training sessions in advance to ensure regular exercise, setting up automatic bill payments to avoid missed deadlines and late fees, or making a to-do list the night before to prevent decision fatigue in the morning.
How else can we make people stick to the rules? By ditching the stick and focusing on the carrot[17]—in other words, rewards. The range of possible incentives is as vast as what makes people happy, so listing them all would go beyond the scope of this article. However, they generally fall into a few key categories. Material incentives—such as bonuses, raises, gift cards, discounts, or tax breaks for timely filing—offer tangible motivation. Social recognition, in the form of public acknowledgments, leaderboards, badges, or VIP status, can make rule-followers feel valued. Unique experiences—such as invitations to events, exclusive training, early access to features, or meetings with influential figures—can also be powerful motivators. Then there’s the reward of trust: granting more autonomy, decision-making power, or leadership influence can be deeply empowering. Finally, we should never underestimate the power of genuine appreciation—sometimes, a heartfelt thank-you is the most effective reward of all.
Apart from rewarding decision-makers for outcomes, it’s important to incentivize the path to rule adherence. Every step in the process should be linked to positive emotions in some way. Gamifying rule-following—such as cities competing to reduce energy use—can make compliance engaging and even fun. Humor can also be an effective tool (“This is a bin, not a basketball hoop”). And when it comes to reinforcing positive emotions, few symbols are as universally recognized—or as subtly influential—as the smiley face. It’s been used directly to influence behavior: studies show that drivers respond more to speed displays featuring a happy or sad face—depending on whether they are below or above the limit—than to traditional digital signs that simply show their speed in green or red.
This last example of the smiley embodies the spirit of all the methods discussed so far: approaching people who should adhere to rules in a friendly, constructive way. The goal is to be on their side, guiding them toward compliance and fostering outcomes that benefit everyone. That said, it would be naïve to assume that these positive attempts always work. And here, too, the smiley is a fitting symbol: while we approach people with a big, wide smile, it doesn’t hurt that the smile reveals—far back—that there are teeth.
Once it’s decided that punishment is needed, what exactly should we do? Do we take out the hammer and start bashing people’s heads until they comply? Most of us instinctively feel that there must be more sophisticated solutions. However, relying on instincts alone isn’t sufficient—after all, others may feel differently. So let’s dig deeper into the rational foundations: why is head-bashing often not the right solution?
First, hurting people makes them unhappy. If we see increasing overall well-being as the goal, this means that—at least in its immediate effect—punishment works against that aim. Of course, one could argue that this initial decrease in happiness (“investment”) might pay off later, overcompensating for the costs. However, it’s crucial to remember that it comes at a price. Acknowledging this helps prevent punishments from becoming automatic and thoughtless. It also ensures we don’t lose sight of what punishment truly means. History is full of cases where the suffering of others was downplayed in the name of a so-called “higher good,” often tied to ideological goals. We must never allow ourselves to become detached from the emotional reality of the pain we inflict and should always have a strong justification for why it is necessary. As a general rule, it’s worth asking: If we were in their position, would we see a valid purpose in this pain ourselves?
Second, rather inconveniently, people we hit tend to hit back. The belief that “Now they’ll reflect on what they did and comply in the future” is mostly wishful thinking. Hurting people creates enemies, not grateful followers who thank us for guiding them to the light. When we get hurt, we instinctively look for someone to blame—and on our internal list of possible culprits, we tend to put ourselves last. That’s why punishments are inherently escalatory: we inflict unhappiness on someone who caused unhappiness, and they will respond by following the same logic. This principle of “Happiness Begets Happiness and Unhappiness Begets Unhappiness” is explained in more detail in the InHa Book (p. 59).
However, this doesn’t change the fact that punishments can sometimes be effective and beneficial for everyone involved. For example, as toddlers, we might run into a wall and immediately receive feedback that we’d better not do that again. From that moment on, we understand that we need to stay within the boundaries the wall imposes—making it a solid analogy for rules in general, as they define the limits we should respect. Most people would agree that our encounter with the wall was a valuable lesson. But what exactly makes this kind of punishment a “good” one?
First, we—as offenders—see a clear connection between the punishment and the actions that led to it. It’s not like in Kafka’s The Trial, where the defendant doesn’t even know what he’s accused of. We immediately recognize what we did wrong (we ran into a wall, for Christ’s sake). However, many punishments are carried out without considering this connection—whether it’s punishing a child for breaking a rule they didn’t know existed, penalizing people with mental illnesses for committing crimes they didn’t understand were crimes, or reprimanding individuals from different cultures for unknowingly violating local norms. It can be tempting to punish others quickly for what seems to us like an obvious violation, but we must put ourselves in their shoes and ask whether they, too, make the connection before we lash out at them.
Closely related to this is the idea that we are more likely to recognize a connection when punishment follows an action immediately. Our example accomplishes this: Wall justice serves instant karma. The greater the delay between action and consequence, the less likely an individual is to make the connection. For example, if a dog misbehaves on a walk by eating something questionable off the street, punishing it an hour later by locking it in a dark room would be pointless. The same principle applies to humans more than we like to admit. While we have the capacity for rational thought and can link events across time, our understanding is often shaped more by emotional impact than by logic. This is one of the reasons why the death penalty—typically imposed only decades after the crime—is far less impactful than one might assume (and certainly less so than if lightning were to strike immediately after the act). This will be explored further below when discussing the interplay between the conscious and subconscious mind.
Next, it’s important that punishments are fair. But what’s fair? A possible definition is that equal cases should be treated equally. At first, that sounds fair enough. However, on second thought, the concept of “equality” becomes murkier—what counts as an “equal case” depends on interpretation. But on third thought, the original idea holds up: it establishes a default rule, placing the burden of justification on those who argue that deviations from it are warranted. If no valid justification is offered, then deviations aren’t justified. And here again, our wall delivers. Regardless of people’s skin color, religion, or sexuality—when they run into the wall, their nose will be just as bloody as ours.[18] The wall’s blindness is a positive characteristic in this respect, which is why Lady Justice is often depicted as blindfolded. This symbolizes impartiality: justice should be applied without bias or favoritism, irrespective of a person’s status, wealth, or power.
When do cases differ in a way that justifies different punishments? One scenario is when the damage caused differs. Our wall serves as a useful example here too: if we simply bump into the wall, the pain is minimal. However, if we run into it at full speed, the consequences are much greater. This principle of proportionality ensures that punishments align with the severity of the offense, preventing both excessively harsh and overly lenient penalties. Historically, however, this principle was often disregarded. For example, petty crimes such as stealing a loaf of bread sometimes resulted in extreme punishments like mutilation, exile, or death in medieval times. Over time, legal systems have moved toward more proportional punishments. For instance, courts have ruled that the death penalty is disproportionate for crimes like rape.[19] Similarly, “three-strike” mandatory sentencing laws, which imposed severe penalties on repeat offenders—even for minor crimes—were reformed to prevent excessive punishment. Today, structured sentencing guidelines are in place to help judges impose fair and consistent penalties, balancing extreme harshness with undue leniency. The rationale behind proportionality is rooted in fairness, evidence that overly harsh punishments do not deter crime more effectively, the prevention of unnecessary harm, and the need to address prison overcrowding and costs. Moreover, a justice system must maintain a hierarchy of punishments. If all crimes carried the same penalty, offenders would have no incentive to limit the severity of their actions. For example, if theft and murder both carried the death penalty, a thief might have no reason to spare a witness’s life, leading to unnecessary escalation of violence.
Another scenario that may justify different treatments is that of repeated offenders. If they didn’t get it the first time, a harsher punishment may be needed to drive the point home. Our wall has known this all along: the second time we run into it, the punishment is probably greater due to accumulated injury. Non-wall examples include regulatory fines, the red and yellow card system in sports, academic plagiarism penalties, spam filtering, and countless other areas. In fact, it’s hard to find cases where this principle isn’t applied. However, even though this logic is very intuitive—or perhaps because it’s so intuitive—we risk falling into the mistake of applying it automatically. First, we must keep proportionality in mind, as mentioned above. Second, escalating punishment can be unjustified when the behavior is beyond the offender’s control—such as in cases of addiction or poverty-related offenses. A homeless person trespassing or stealing food isn’t necessarily “not getting it”—they may simply have no better option. Additionally, repeated offenses can be a sign of a flawed or unfair system. Sometimes, the rules themselves are too strict, or circumstances make compliance too difficult—like immigrants missing their yearly visa renewal because the process is slow and expensive. Before handing out increasingly harsher penalties, rule-makers must first examine whether the system itself is fair—especially when repeat offenses keep occurring.
Whether an offense was intentional or not also plays a role. If an offense wasn’t intentional, the message of the punishment is typically less about “you need to understand this was not okay”—the offender may already recognize that—and more about “be more careful next time.” The punishment is usually less severe in those cases because no lesson on right or wrong is needed. However, what matters is not just the final outcome or the damage caused, but also the actions leading up to it. A reckless driver may not have intended the crash, but their dangerous behavior was already wrong, which is why it can still be punished severely. Interestingly, even our wall takes intention into account—studies have shown that pain is felt more intensely when inflicted deliberately.[20]
Another “benefit” of the wall is that the results are predictable: we know exactly what will happen if we run into it. When punishments are unpredictable, they don’t just seem unfair—they also lose their deterrent effect. For example, habitual tax evaders often get off with lenient sentences—or even immunity—weakening deterrence and eroding respect for the law. Such cases create the sense that rules are inconsistent, applied unequally, lack transparency, or are confusing (too many walls create a maze). This goes against the principle of clear rules, as mentioned earlier.
Another key criterion is the enforceability of punishments, which is closely tied to authority. In the case of the wall, it’s simply stronger than us—which gives it a good amount of authority. When enforcement by an authority is lacking, rules don’t truly exist, as their existence depends on real-life manifestations rather than theoretical concepts on paper. As Thomas Hobbes observed in Leviathan, “It is not wisdom but authority that makes a law.” In fact, a rule that exists only in theory can be more damaging than one that was never created. Too many unenforced rules can erode respect for the law and breed widespread disregard for authority. That’s why we must decide: do we truly believe in a rule? If so, it must be enforced. If not, it should be removed from the rulebook entirely.[21]
A wall also serves as a tangible reminder of our offense—just seeing it instantly brings back our last encounter. This is especially true if the encounter was recent, when blood may still be clinging to the surface. And when we hit that bloody wall a second time, there are no excuses—the writing was on the wall. Because rules are abstract, they often lack this kind of physical presence, making them easier to forget. That’s why linking an offense to a tangible object, whenever possible, reinforces the lesson and reduces repeat mistakes. For example, some financial counseling programs in Japan give clients a small chain or a heavy coin to keep in their wallet. Each time they reach for their money, they feel its weight—both physical and symbolic—reminding them of past financial mistakes and encouraging wiser decisions.
What makes the wall exemplary isn’t just the positive traits in its philosophy of punishment but also the absence of human negative traits. A wall is always sober, stoic, unbiased, and neutral. We know the wall has no agenda and seeks no revenge—it’s more like, “You ran into me, you got hurt. Stop running into me, and it’s all fine.” It takes no joy in punishing and keeps our mishap to itself, sparing us unnecessary humiliation. The rules it enforces aren’t created by any individual; in this sense, the wall represents the only truly unquestionable authority: reality. Its punishment exists solely to teach us and protect us from future harm—not to serve anyone else’s interests. The wall’s passivity also keeps the focus on the perpetrator’s actions—just like Gandhi’s passive resistance, which drew its power from the same principle. All of this makes the wall truly impartial, helping us accept its punishment, place blame where it belongs, and return to where true progress always begins: ourselves.[22]
While the above highlights that we can learn a lot from wall justice—so much so that judges might benefit from asking themselves more often, “What would the wall do?”—no wall is perfect, as we know from the Great Wall of China or the Berlin Wall (not to mention the U.S.-Mexico Border Wall). One of its main shortcomings stems from one of its strengths: its solidity and predictability come at the expense of flexibility. For the most part, the wall treats all cases the same—but, as with all rules, this is less precise than assessing individual cases.
In the context of punishments, considering individual cases is even more important because it’s a two-step process: first, there is an external punishment—such as a wall—and in the second step, our built-in incentive system of pleasure and pain interprets it, which is what ultimately matters. Hence, external punishments don’t have a direct impact, as becomes clear when running into a wall doesn’t hurt after taking certain drugs, such as local anesthetics, opioids, or alcohol. This means that external punishments are only effective if they trigger the right, behavior-changing internal reaction. And since people’s interpretations of external stimuli can vary greatly, punishments must also be adapted accordingly to achieve the desired effect.
For example, it was previously mentioned that there must be a clear connection between actions and punishment. At first, this criterion may seem like an objective matter—either it is met or it isn’t. In the case of the wall, it appears to be met, as most people would immediately recognize the connection after running into it. However, this sense of objectivity arises from widespread agreement between subjective perspectives rather than from true objectivity (more on this in Colors, Reality, and the Twin Paradox). A single exception illustrates this point: for instance, if a drunk person runs into a wall in complete darkness, they may not immediately recognize the cause, increasing the likelihood that they repeat the mistake.
While this example is very specific, the underlying principle has broad applicability. People vary in their ability to connect actions with consequences. For instance, while nearly every adult with typical comprehension skills understands which final action landed them in prison (since the judge told them), they may struggle to recognize the preceding behaviors that led them there—such as associating with the wrong people. Some individuals make this connection quickly, while others take longer. This may justify longer incarceration for those who need more time in isolation to reflect and fully internalize the link between their actions and punishment.[23]
Also, as mentioned above, the perception of fairness is key to making people accept punishment and change their behavior accordingly. And as the word perception suggests, it is highly subjective. What truly matters is how the penalty is felt—which is relevant not only to perceived fairness but also to its effectiveness as a deterrent. For example, while it may seem fair to fine all speeding offenders $200, the impact is vastly different for a billionaire who barely notices the expense compared to someone living paycheck to paycheck. This is why many European countries link speeding fines to offenders’ incomes, ensuring proportional consequences. A similar income-based approach can be seen in New Jersey’s 2017 Bail Reform Act, which replaced cash bail with a risk-based system to prevent low-income individuals from being detained simply because they couldn’t afford bail. This principle extends beyond financial penalties. Community service sentences also consider how severely the punishment affects different individuals. For instance, a single parent juggling multiple jobs may receive a more lenient sentence than someone with a flexible schedule, ensuring that the burden is equitably distributed.
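The income-linked logic boils down to a simple formula: fine = severity of the offense (expressed in “day units”) × the offender’s daily income. Here is a minimal Python sketch; the clamping bounds and the function name are invented for illustration and don’t reflect any country’s actual schedule:

```python
def day_fine(daily_net_income: float, day_units: int,
             min_rate: float = 5.0, max_rate: float = 5000.0) -> float:
    """Compute a fine as (day units for the offense) x (offender's daily income).

    day_units encodes the severity of the offense; the income factor scales
    the penalty so that it is *felt* roughly equally across income levels.
    """
    rate = min(max(daily_net_income, min_rate), max_rate)  # clamp the daily rate
    return day_units * rate

# The same 10-unit speeding offense costs each offender about ten days of income:
print(day_fine(daily_net_income=80.0, day_units=10))    # 800.0 for a modest earner
print(day_fine(daily_net_income=4000.0, day_units=10))  # 40000.0 for a high earner
```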
Perceptions also vary significantly across cultures, influencing how effective punishments are in different contexts. For example, a creative punishment might involve forcing someone who behaved antisocially to wear a superhero costume and offer help to others, which can be both amusing and humbling.[24] This type of punishment would likely be more effective in strongly hierarchical societies, where the humbling effect is felt more deeply—such as Japan. In the United States, by contrast, the offender would probably think it’s cool. This is, of course, an overgeneralization, but it illustrates that people are different and that punishments should be tailored to specific cases.
The effectiveness of punishments also varies because it’s not the punishment itself that deters, but rather the fear of it—which is highly subjective. For example, the perceived likelihood of getting caught differs widely among individuals, making pessimism, for once, a useful trait. In general, studies have shown that people systematically underestimate the chances of being caught.[25] This is unfortunate, given that the perceived certainty of being caught is a much stronger deterrent than the severity of the punishment.[26] The remedy, as so often, is better education—about the effectiveness of surveillance cameras, data analytics, DNA forensics, and other technologies.
Another aspect of fear is the anticipated negative impact of punishment once caught. This, too, is highly subjective. For example, a person’s emotional memory plays a role: if they forget the pain of past punishments, they won’t fear them as much. Additionally, because fear relates to the future, individuals who focus on the short term may be less susceptible to such threats. This effect is even stronger in those who lack empathy—not for others, but for their future selves. Many factors influence fear, and even more measures exist to modulate it. However, the key point is that a person’s fear of future punishment—rather than the punishment itself—is a crucial lever. Since this varies greatly between individuals, tailored approaches are necessary to ensure adherence to the rules.
This also implies that even entirely made-up punishments can be effective, as long as the fear remains intact. “You’ll burn in hell for eternity” can be extremely effective for those who believe in it. Incidentally, such extreme threats of punishment can be effective not only because they are severe but also because they increase the perceived likelihood of occurring. Humans assign higher probabilities to events they can imagine vividly, which is why we overestimate both our chances of winning the lottery and the likelihood of dying in a car crash. This also explains why, in the Middle Ages, minor offenses were not only punished severely (often with death) but in ways so horrific they defy modern comprehension. Torture was deliberately prolonged to maximize suffering before death. While such methods may appear purely sadistic from a modern perspective (and sadism certainly played a role), they also followed a certain grim logic. Given the difficulty of catching offenders in an era without forensic tools or organized policing (i.e., a very low probability of being caught), punishments were made as extreme and gruesome as possible—not just to instill fear of the punishment itself but to heighten the perceived likelihood of being caught by creating vivid and terrifying mental images.
It’s also crucial to consider an individual’s capacity for growth and learning when determining an appropriate punishment. For instance, a 17-year-old and a 30-year-old who commit the same crime, such as vandalism, are often treated differently. Courts typically offer juveniles alternative sentences—like community service or rehabilitation programs—instead of jail time, believing they have greater potential for reform. Some individuals may need only a light nudge to get back on track, while others might have to become acquainted with our wall. Genuine remorse can also be a good indicator, as it suggests a willingness to make amends—meaning the punishment may not need to be severe.
Since rules are often broken for personal gain at the expense of others, an individual’s level of empathy is also highly relevant. Increasing it can take creative forms, such as sentencing a woman who abandoned kittens in the woods to spend a night there alone; requiring reckless drivers to watch accident footage, meet crash victims, or work in a hospital trauma unit; or forcing someone caught yelling at fast-food workers to work a drive-thru shift for a week. This doesn’t necessarily mean experiencing the pain directly—though that is often highly effective—but rather fostering a closer connection with potential victims and seeing things from their perspective. For example, a man caught speeding in a school zone was required to explain to a group of 5-year-olds why he endangered them. When victims are anonymous, they’re much easier to ignore.[27]
Empathy is a great example of how subtle punishments can be. We don’t typically think of feeling compassion as a form of punishment, but it is—it makes us feel bad when we hurt others. This form of punishment is effective because it doesn’t feel like it’s coming from an external source—one we’re inclined to resist—but from within, something we naturally trust more. Another example of subtle punishment is framing compliance as something positive, such as “Help preserve the clean air in our city by following emissions rules.” What this really means is, “If you don’t comply, you’ll lose the positive thing you already have,” but it’s presented in a friendlier and more effective way. This plays on the principle of loss aversion, a cognitive bias in which people perceive the same situation as worse when framed as a loss rather than a gain.
Another important factor is the perception of choice. Punishments—which essentially say, “You made the wrong choice in this situation”—can seem questionable when individuals genuinely believe they never had one. This could apply to someone who had to act quickly in self-defense or under coercion; a person compelled to obey an internal voice; or someone who believes an imaginary higher power instructed them to act. So, should we reduce punishments in such cases? The challenge is that it’s often difficult to determine whether the individual truly didn’t know they had a choice, making it too easy to use as an excuse to evade responsibility. Additionally, it’s important to remember that the primary purpose of punishment is forward-looking—aiming to correct future behavior. This means punishments don’t actually say, “You made the wrong choice in this situation,” but rather, “When the situation arises again, do something differently.” This implies that punishments can still be justified, even if the individual had no choice. At first, this may seem to contradict our sense of justice. Yet it can be defensible, since punishment serves not only justice for the offender but also for those they harmed.
Flexibility isn’t just relevant for determining an appropriate punishment for an individual at a given time—it also applies to the entire process of reaching a verdict. A wall doesn’t offer this; its first judgment is final, and there’s no point in arguing—it would quite literally be like talking to a wall, and it won’t budge an inch. Good justice systems, however, often include an appeal process that allows for re-evaluations. These can be crucial for correcting mistakes in initial judgments and addressing flaws in the rules themselves. For example, a general rule may be applied, but since no rule is perfect, the possibility of appeal allows for exceptions in specific cases where a deviation from the rule may be justified (as discussed in When is Lying Justified?).
The key takeaway is that if we want to change someone’s behavior, we must understand it first. People are different, and so the solutions must be too. We need to get into the mind of the offender and see the world from their perspective. However, while this all sounds plausible and straightforward in theory, practice is another matter…
Focusing on the perpetrator seems logical, as they are the ones whose behavior we aim to change. However, much like in quantum physics—where the observer plays a crucial role in the experiment[28]—the same applies to rules and laws. Justice is not a one-way street; we must consider not only the punished but also the punisher. This chapter will explore that dynamic while maintaining the focus on changing the offender’s behavior. Other aspects of punishment, such as retribution, will be discussed later.
The previous chapter concluded that we must get into the perpetrator’s mind to determine the right punishment or other corrective measures. However, mustering the necessary empathy to do this can be psychologically difficult—especially when we’ve been personally harmed. Our instincts push us in the opposite direction: to distance ourselves and see them as “enemies” we want nothing to do with. Yet the saying “Keep your friends close, but your enemies closer” applies on a psychological level as well. If we fail to do so, we risk applying punishments too rigidly—like a wall—without addressing the root causes, ultimately preventing us from identifying effective behavior-changing solutions.
To cultivate empathy, we should first remember that no matter how harmful, destructive, or irrational human behavior may seem, it follows patterns shaped by past experiences, emotions, and cognitive processes—every action has a cause. This allows us to shift from emotion to curiosity: What might have led them to act this way? Second, we must make a crucial distinction: we aim to change behavior, not a person. Our goal is for them to make different decisions in specific situations, not to become someone else entirely—an unrealistic expectation. For simplicity, we often categorize people as good or bad, but this is a fundamentally flawed generalization. Strictly speaking, there are no “enemies”—only individuals who tend to engage in adversarial actions. This distinction is subtle but crucial; when we fail to make it, we not only struggle to see the world from their perspective but also risk condemning all their actions indiscriminately—or unquestioningly accepting actions from those we consider “good,” which is just as dangerous. The consequences of such groupthink—or “peoplethink”—will be explored further in Why Is Truth Having a Hard Time?
Emotions also influence judicial decisions even when they are unrelated to the crime. Judges are people too, and their emotional state inevitably impacts their rulings. Studies have shown, for example, that U.S. judges impose harsher sentences on the Monday after switching to daylight saving time—likely due to sleep deprivation impairing cognitive performance.[29] Equally bad news for defendants is when a judge’s favorite football team loses[30] or when judges are hungry.[31] Like any other job, judging carries the risk of slipping into routine, where decisions become patterns rather than deliberate reasoning (“I’ve seen this type of crime before; it usually means X.”). This is especially true when judges must handle dozens of cases a day, often forcing them to generalize rather than carefully consider the unique psychology of each perpetrator. Additionally, constant exposure to horrific crimes and difficult cases makes emotional detachment a necessary form of self-protection—further distancing judges from the motivations of offenders. This is not a criticism of judges but rather an acknowledgement that good sentencing is both emotionally and intellectually demanding. Judges must be given the time and freedom to perform at their best—something that ultimately benefits both defendants and society as a whole.
Time pressure and personal circumstances aren’t the only reasons we default to punishment. Another is its deceptive simplicity—punishments are quick, require little thought (and we love to avoid thinking), and often yield immediate results. Our goal is to make people comply, so we impose punishments, they obey, and the problem is solved, right? Unfortunately, this view is both superficial and short-sighted. Ignoring positive approaches to rule enforcement—especially helping offenders understand why a rule exists and why it is fair—creates a fragile foundation. Given the chance, people will rebel, defect, or even retaliate. Sooner or later, that chance will come. The stronger our current position of power, the easier it is to overlook this. Instead, it should be the opposite: the more power we hold, the more responsibly we must wield it.
Another psychological reason why we find punishments enticing is that we assume they would be effective on ourselves. It’s natural to judge others by our own standards, but the key takeaway from the previous chapter is that people’s circumstances differ, and thus, the remedy must differ as well. Beyond simply keeping this in mind, judges can gain deeper insight by reviewing pre-sentencing reports and psychological evaluations to better understand an offender’s motivations, environment, and mental state. Engaging in perspective-taking—imagining how they might have acted in the perpetrator’s situation—can help prevent overly harsh or dismissive judgments. Additionally, bringing together victims, offenders, families, and communities can provide a more nuanced emotional understanding of the case. Finally, ongoing education in psychology, sociology, and criminology can help judges recognize biases and better assess the complexities of each situation.
Judges’ verdicts are also sometimes influenced by factors that should play little to no role, such as outside pressure from the public and the media. However, outsiders are exactly that—outsiders. They don’t know the full details of a case, the legal intricacies, or the nuances required for fair judgment.[32] Public opinion is often driven by emotion and oversimplification, typically demanding harsher punishments. But, as in any other field, when it truly matters, it’s best left to the professionals. Unfortunately, politicians frequently exploit this sentiment for personal gain, calling for a “tough on crime” stance to appear tough themselves. But this is a fallacy—real toughness isn’t about how much punishment we dish out, but about how much we can take. Judges must resist these pressures, underscoring once again how demanding their role can be.
In summary, achieving fair verdicts requires understanding not just the mind of the punished but also that of the punisher. We must stay self-reflective and aware of our own mindset when administering punishment. It’s easy to focus solely on making the offender understand our perspective, but it’s wiser to follow the maxim to first seek to understand, then to be understood.
So far, the analysis of punishments has mainly focused on preventing people from repeating violations of the rules—i.e., correcting their behavior. However, this cannot be the complete solution. Focusing only on repeat offenses means the harm has already occurred—something we should aim to prevent in the first place, especially with serious violations. We cannot afford to wait for someone to commit a grave crime, like murder, and then say, “Alright, now here’s why you shouldn’t do it again.” There are many methods to prevent such violations, and—emphasizing again—we should prioritize non-punitive approaches wherever possible. However, punishment also plays a key role by demonstrating consequences to others. This is where general deterrence comes in.
The challenge in general deterrence—unlike specific deterrence, which aims to prevent an offender from repeating an act, as discussed earlier—is that those who should be deterred have not yet experienced punishment because they have not violated the rules themselves. This means they must understand that the punishment inflicted on others will also apply to them if they break the same rule. However, this requires more advanced cognitive skills. For example, if we punish our dog for chewing on the couch by firmly scolding it and spraying it with water, general deterrence would mean turning to our other dog and saying, “See, this is what happens if you chew on the couch.” This kind of reasoning is beyond dogs—and, unfortunately, often beyond humans as well. One example of this is when people vote for politicians who act against their interests but only realize the consequences when they are personally affected. This principle of “If we feel it, we get it; and if we don’t, we don’t” is explored in more detail in the InHa book (p. 77).
Another way to frame the challenge is the following: there’s the saying, “Dumb people don’t learn from their mistakes, smart people do, and the really smart learn from the mistakes of others.” General deterrence falls into the last, most challenging category. How do we move people from the “dumb” category to the “really smart” one? As always, the answer is education. And for that, communication is key—not only sending the information to as many potential rule violators as possible[33] but also ensuring that the message is properly understood.
And here again, it’s not just about rational understanding but also emotional comprehension. But how can people emotionally grasp a punishment they have never experienced? One way is to give them a taste of it. For example, receiving small fines for minor infractions can quickly help someone appreciate the impact of a much larger fine. Similarly, linking severe punishments to relatable past experiences can make them feel more tangible. For instance, “Imagine working for months and losing every paycheck you earned,” or “Remember how upset you were when you lost your wallet? Now imagine that feeling multiplied a thousandfold.” This principle isn’t limited to fines. A prompt like “Think of the worst time you felt isolated, trapped, and bored—now imagine that for years” can make jail time more relatable and concrete.
Incidentally, this may be another reason why the death penalty isn’t as effective a deterrent as we might assume. We can never get even a taste of it—we cannot be “a little dead.” And even if we could—in a sense, we have already “experienced” it before birth—it would feel more neutral than terrifying. Of course, the fear of death is very real, but it’s also an emotion we (hopefully) don’t confront too often. Since it isn’t a familiar feeling, it’s harder to relate to and extrapolate from, further weakening its deterrent effect.[34]
We should also remember that while general deterrence is essential for maintaining order, it is ultimately built on threat and fear. This isn’t inherently negative, but it must be wielded with great care. In the pursuit of deterrence, the risk of injustice is ever-present—whether through excessively harsh punishments that violate proportionality or, worse, by punishing the innocent to set an example. History offers chilling reminders of such abuses, from Stalin’s Great Purge and the French Revolution’s Reign of Terror to show trials in authoritarian regimes. Deterrence can be a powerful beacon—but the question is always: a beacon of what? If not handled with precision and restraint, it may not illuminate the path to good behavior but could become a beacon of injustice.
In addition to specific and general deterrence, several other objectives are often associated with punishment: restoration (repairing the harm caused by crime), rehabilitation (aiming to reform offenders), incapacitation (preventing further harm by removing offenders from society), and retribution (making offenders “pay” for their crimes). However, these goals are not inherently tied to punishment; they merely coincide with it. As a result, they are often conflated.
To corroborate this statement, let’s examine these other goals, starting with restoration. Suppose someone causes $100 in damage and is forced to pay the same amount as punishment. It’s tempting to conclude that the punishment’s purpose is to restore the victim, but those are two separate concepts. More accurately, taking $100 from the offender serves as punishment, acting as a deterrent against future harm, while giving the $100 to the victim serves as restoration.[35] The confusion arises from the fact that punishment and restoration are often bundled together, making them seem like a single action. However, distinguishing between the two is not just theoretical nitpicking; it has significant real-world consequences. For example, taking $100 from the offender and giving it to the victim may initially seem fair and might be considered a sufficient verdict. However, this may fall short as punishment, as mentioned above: if the offender is a billionaire, losing $100 is meaningless and does nothing to deter future misconduct.
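To see how cleanly the two concepts come apart, here is a toy sketch that computes restitution and the punitive component of a verdict separately. The function, the 1% rate, and all numbers are hypothetical illustrations, not a proposal for actual sentencing:

```python
def verdict(damage: float, offender_wealth: float,
            deterrence_rate: float = 0.01) -> dict:
    """Toy model: restitution repairs the victim's loss and depends only on
    the damage, while the punitive component deters and scales with the
    offender's means. The 1% rate is an arbitrary illustrative choice."""
    restitution = damage                                       # to the victim
    punitive = max(damage, offender_wealth * deterrence_rate)  # deterrent
    return {"to_victim": restitution, "punitive_fine": punitive}

print(verdict(damage=100.0, offender_wealth=5_000.0))
# {'to_victim': 100.0, 'punitive_fine': 100.0}
print(verdict(damage=100.0, offender_wealth=1_000_000_000.0))
# {'to_victim': 100.0, 'punitive_fine': 10000000.0}
```

The point of the separation: restitution tracks the victim’s loss, while the deterrent tracks the offender’s means, and conflating the two yields verdicts that fail at one job or the other.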
It’s similar with rehabilitation. Therapy for substance abuse, anger management courses, educational programs, or job training may also be perceived as punishments, but that’s not their purpose. Their goal is to help individuals avoid reoffending. The fact that these aren’t about punishment becomes clear when considering that rehabilitation programs can be enjoyable. Many offenders genuinely want to change their lives, and rehabilitation programs provide them with the tools to do so. Learning new skills, gaining education, or engaging in therapy can be deeply fulfilling—especially for those who previously lacked opportunities. Additionally, many rehabilitation programs involve group activities, mentorship, or community work, which can provide positive social reinforcement and support. In fact, rehabilitation programs should probably be enjoyable rather than a form of suffering, as suffering would only lead to internal resistance, reducing the chances of success.
Also, incapacitation isn’t about punishment; it’s simply about preventing offenders from causing further harm. For example, a student who tends to be disruptive in one classroom may be reassigned to another—perhaps with a different approach to learning—where they feel more comfortable and engaged, making it physically impossible for them to disturb their previous classroom. This isn’t punishment in the slightest, but rather a direct benefit to the “offender.” Of course, this doesn’t apply to many forms of incapacitation in criminal law—such as imprisonment, house arrest, and especially the death penalty—but the key principle still holds: incapacitation is about removing offenders from the environment where they can cause harm, not about punishment.
And what about retribution? There are two aspects to it. One revolves around the “like-for-like” principle, which caters to our sense of “justice,” “moral balance,” “fairness” and similar values. If someone hurts another person, they must be punished, period. The problem with such “period” arguments is that they tend to violate the very first principle mentioned in this article, “Always Servant, Never Master”: it’s a rule that has taken on a life of its own. There must be a reason, a purpose, for the rule. And if we keep asking why this rule exists, we find that, ultimately, it comes down to the other principles mentioned above—except one, which can be seen as the true, unique core of the principle of retribution: satisfying the desire for revenge.
The urge for revenge runs deep, driven by psychological and biological mechanisms linked to justice, self-preservation, and emotional regulation. Humans evolved in social groups where fairness and reciprocity were crucial for survival. If someone wronged us, failing to retaliate might signal weakness, making us a target for future exploitation. Historically, revenge acted as a deterrent, ensuring others thought twice before harming us. This instinct is deeply ingrained in our brains: studies show that imagining or executing revenge can activate the brain’s pleasure centers, releasing dopamine and making it feel temporarily satisfying.[36] The emphasis is on temporarily: research indicates that while many believe revenge will provide closure, it often has the opposite effect, prolonging negative emotions such as anger and resentment—even for those who succeed in getting their revenge.[37]
But still—revenge is a deeply ingrained human impulse, so retribution must be one purpose of punishment, right? It seems that way, but this is only the surface. The desire for revenge originates in the minds of victims, as evidenced by the fact that it can be satisfied simply by believing the perpetrator has been punished—even when they haven’t. Likewise, this desire can fade for reasons unrelated to justice, such as distraction, mood-altering medication, or the victim’s death. This suggests that punishment itself is not the true objective; rather, it is merely a presumed remedy—primarily to ease the lingering psychological pain caused by the original offense.
There are several reasons for emphasizing that restoration, rehabilitation, incapacitation, and retribution are not inherently about punishment. First, when these objectives are seen as natural components of punishment, they tend to be scrutinized less, making it easier to overlook their distinct benefits and drawbacks. Each serves a different purpose and should be evaluated on its own merits rather than being automatically justified under the umbrella of punishment. For instance, incapacitation through imprisonment may seem like a straightforward way to prevent further harm, but it often has unintended consequences—prisons can act as “crime schools,” fostering criminal networks and reinforcing antisocial behavior rather than rehabilitating offenders.[38] If we fail to separate these objectives from punishment itself, we risk adopting policies that are counterproductive to both justice and public safety.
Second, understanding that it’s not only about punishment opens the door to creative measures that offer many more benefits than punishments alone. For example, a teenager who posted fake threats online could be required to give presentations to schools about the dangers of online hoaxes—raising awareness and teaching others how to discern fake news from truth. Similarly, those who mistreat animals might be required to volunteer at an animal shelter, fostering empathy while directly helping animals in need. While courts sometimes use meaningful community service that directly connects to the violation, these cases remain rare. More often, fines and jail sentences are favored due to their simplicity and ease of enforcement, even when more constructive alternatives might be more beneficial.
The third benefit of recognizing that restoration, rehabilitation, incapacitation, and retribution are not inherently tied to punishment is that it shifts our perspective on justice. When we assume that punishment alone brings about all these “benefits,” we naturally gravitate toward harsher or more frequent punishments. By avoiding this misconception—and acknowledging punishment’s many drawbacks, especially its tendency to provoke retaliation—we weaken its appeal and keep ourselves from embarking on an endless cycle of punitive measures. Rather than defaulting to a punitive mindset, we are encouraged to pause, think critically, and pursue thoughtful, often individualized solutions that truly work.
Such a rehabilitation-based approach has consistently proven superior, both in individual cases and overall. This can be seen by comparing different justice models across countries. For example, the United States follows a highly punitive model, emphasizing long sentences and harsh conditions as a deterrent to crime. With over 1.8 million people incarcerated (~600 per 100,000 people), the U.S. has one of the highest incarceration rates in the world. This is partially driven by a system in which private prisons and corporations profit from mass incarceration, lobbying for harsher sentencing laws to keep prisons full. Norway, in contrast, operates government-run prisons designed to treat inmates with dignity and prepare them for reintegration into society. The focus is on education, vocational training, and psychological support rather than punishment. While Norway’s system is more expensive in the short term ($93,000 per prisoner annually vs. $36,000 in the U.S.), this cost is more than offset by significantly lower incarceration and recidivism rates (57 per 100,000 incarcerated, with a ~20% recidivism rate vs. 60%–70% in the U.S.).[39] Encouragingly, some U.S. states—such as North Dakota—are experimenting with elements of the Norwegian model. However, broader reform remains challenging, largely because opposition is driven more by emotion and ideology than by rational policy analysis. Many Americans continue to embrace deeply rooted beliefs in punitive justice, political narratives promoting “tough on crime,” and retributive justice philosophies—reflecting the biblical principle of “an eye for an eye”[40]—rather than considering the overwhelming evidence that rehabilitation is the more effective approach.
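A rough back-of-envelope calculation, using only the figures quoted above and deliberately ignoring everything else (policing, court costs, the economic damage of crime itself), already hints at the fiscal side of this comparison:

```python
# Annual direct incarceration cost per 100,000 residents, using only the
# figures quoted in the text (a crude, illustrative comparison).
us_rate, us_cost_per_inmate = 600, 36_000            # inmates per 100k, USD/yr
norway_rate, norway_cost_per_inmate = 57, 93_000

us_total = us_rate * us_cost_per_inmate              # 21,600,000
norway_total = norway_rate * norway_cost_per_inmate  # 5,301,000

print(f"US:     ${us_total:,} per 100k residents per year")
print(f"Norway: ${norway_total:,} per 100k residents per year")
# Despite spending ~2.6x more per inmate, Norway's far lower incarceration
# rate makes its total roughly 4x cheaper -- before even counting the savings
# from its much lower recidivism rate.
```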
In conclusion, while punishments have their place, they should be a last resort. The priority should be designing rules that people naturally want to follow. This involves engaging them in rule-making, ensuring clarity, and providing education on the necessity of rules—“He who opens a school door, closes a prison,” as Victor Hugo put it. Furthermore, we should foster emotional and personal connections to the consequences, set role models, make compliance effortless, and reward adherence. If punishment becomes necessary, it should be gradual, well-explained, and individualized, with a continued focus on rehabilitation. Ultimately, we must invest as much—if not more—effort in fostering adherence as we do in crafting the rules themselves.
What has been said so far rests on a big assumption: that nearly all decisions we make are conscious. However, as we all know, this is far from true. Recent studies suggest that 95% of our thoughts occur below the threshold of conscious awareness.[41] So, does this mean that all the insights discussed so far are only relevant to the remaining 5% of our decisions?
First, some of the measures mentioned above do have a direct impact on the subconscious as well. For example, punishments that instill fear can certainly influence our actions, even if we’re not consciously aware of it. Second, even if we’re only talking about 5% of our actions, that may not be as problematic as it might sound because that’s often what really matters: our conscious mind tends to kick in when it comes to important decisions (or, more precisely, those the subconscious mind considers to be important). Additionally, that 5% carries more weight than it may initially seem, as it provides us with the opportunity to program the subconscious 95%, at least to some extent. For this to work, however, a clear understanding of the relationship between our conscious and subconscious minds is crucial.
On a side note, this chapter takes a different perspective—it focuses on how we change our own behavior and adherence to rules, whereas the previous chapters were more concerned with influencing others. While this might seem like a shift, the distinction is actually artificial. The principles discussed earlier apply just as well to ourselves, and the ideas in this chapter can just as easily be used to influence others’ behavior. The change in perspective is mainly for variety’s sake and serves as a reminder that these concepts work in both directions.
Let’s start with the obvious: we cannot change anything consciously if it never rises to the level of consciousness. Therefore, we should apply every possible method to become more aware of our actions. This includes reflection, mindfulness, and self-observation, all supported by a generally positive attitude toward conscious and rational thought. Since it’s easy to get caught up in daily routines and the relentless pace of life, we should set aside dedicated time for introspection and deliberately schedule it in our calendars. Making the implicit explicit is a prerequisite for analyzing the rules we follow. It also helps us avoid working on the wrong problems and becoming increasingly efficient at tasks that don’t need to be done at all.
However, conscious reflection shouldn’t be limited to Fridays from 2 to 4pm. We need to increase our awareness more generally, which can be achieved by slowing down automatic actions and valuing the gap between stimulus and reaction. To do this effectively, a healthy dose of skepticism toward our instincts, impulses, and emotions can be crucial. We should keep in mind that our incentive and behavioral system evolved over millions of years in an environment that no longer aligns with our modern way of life (as discussed from page 16 in the InHa book). Furthermore, emotions are shaped by our experiences and upbringing, which underscores the importance of not mistaking them for objective truth. From this perspective, the recent rise in populist rhetoric—appealing to emotions, fears, and hopes rather than empirical evidence (“what feels true is true”)—is deeply concerning (see The Truth About Truth).
Once we start thinking consciously, it’s still not guaranteed that we will identify all the relevant actions we need to question. One such blind spot is actions perceived as normal, and this perception can be immensely powerful. It doesn’t matter how severe the consequences are—whether they stem from unethical workplace practices, environmental destruction, or supporting factory farming—as long as these actions are perceived as normal, they not only continue but may also slip past our attention, evading analysis altogether. The human mind has an extraordinary capacity to adapt, especially when the victims are distant, invisible, or unable to fight back. As a result, these decisions often bypass conscious scrutiny. From this perspective, “Which rules do we follow subconsciously, simply because we’ve been programmed to do so?” may be one of the most important questions of all.
Let’s assume we succeed at all of the above—that is, we manage to take the time to think consciously about our actions, identify the wrong behaviors of the past, and discover the new rules we want to adhere to moving forward. How do we then reprogram our subconscious mind to follow these new rules? One of the most powerful methods—the core of normalization as just described—is to harness the power of repetition. By repeating an action, we essentially “lay the wires” in our brain, strengthening neural pathways and making these behaviors automatic, allowing our conscious mind to focus on other tasks. Repetition also taps into our preference for familiarity, which we perceive as safer and more predictable. Over time, these repeated behaviors shape our self-identity (“That’s who I am”), reinforcing the habit and making it feel even more natural and automatic.
This is what happens when we push through the initial resistance and consistently go to the gym, put on the seatbelt, lock the doors, or recycle—it quickly becomes a habit, and we do it with almost no conscious thought. The range of application for this method is so broad that it would make more sense to look for examples where this principle doesn’t apply. But like any powerful tool, it can be used for both good and bad—it’s a key method of manipulation. This can be relatively benign, like those sneaky game developers offering us free 30-day trials, knowing that we’ll get hooked and eventually purchase the premium version. But it can also be much more serious. Demagogues often repeat falsehoods over and over, knowing that many people will believe them simply because they’ve heard them so many times. Or, even worse, they spread different falsehoods until the public grows numb to the lies, perceiving them as normal and starting to view truth itself as arbitrary, subjective, or even irrelevant (more about this in “Why is Truth Having a Hard Time?”).
Such manipulations are especially dangerous when they occur in small, incremental steps, always staying under the radar. Our consciousness tends to kick in only for drastic changes, so a long series of gradual shifts may go unnoticed. While the boiling frog analogy isn’t literally true, the underlying principle is very real, whether in political authoritarianism, privacy erosion, or climate change. We may only realize the changes when it’s too late, which underscores the importance of occasionally stopping, stepping back, and consciously reflecting on the direction we’re heading.
However, repetition can also be used to enhance conscious awareness. When Bart Simpson was required to repeatedly write “I will not drive the principal’s car” on the blackboard (The Simpsons, Episode 21), the goal wasn’t necessarily to make him avoid it automatically in the future. While that would be ideal, mere repetition alone likely wouldn’t achieve this. Instead, it ensured that the next time the idea crossed his mind, it would surface in his awareness, prompting a more deliberate decision. In this context, repetition becomes a tool for conscious reflection, rather than mindless obedience, transforming it into a powerful and positive method.
However, drill and repetition are only effective when done the right way. Specifically, they are ineffective when there is strong inner resistance. Repeatedly being hit on the head won’t make us enjoy or adopt such practices—quite the opposite, we will be conditioned to shy away from them. For repetition to work, the actions must fall on fertile ground, meaning they should be perceived as positive, or, at the very least, neutral. This can include repeating a skill we enjoy practicing, using vivid visualization of positive scenarios (e.g., picturing ourselves as confident), positive affirmations and self-talk, exposure and immersion in what we want to become (e.g., listening to motivational speakers or reading inspiring books), anchoring desirable actions with positive emotions, or simply behaving as we wish to (“fake it till you make it”).
While repetition is the master method for programming the subconscious, it is not the only one. A single strong emotional event can lead to lasting behavioral change without repetition. Additionally, techniques like meditation and hypnosis—as well as methods such as dream programming or Neuro-Linguistic Programming—aim to directly rewire the subconscious by utilizing mental shortcuts, emotional shifts, and pattern interrupts. However, their effectiveness depends on factors such as the specific technique used, the individual’s mindset, and how well the method is applied.
Another crucial factor in subconscious behavior is our environment. After all, the mental “code” we run determines how we react to external circumstances. By altering our surroundings, we inevitably change our reactions as well. This can be done in various ways, such as changing jobs or relocating, but perhaps the most significant influence comes from our social relationships.[42] There’s a lot of truth in the saying, “We are the average of our five closest friends.” More important than their mere presence, however, is the depth of these relationships. This is why strengthening community ties can be particularly effective in preventing crime. For instance, involving an offender’s friends and family in developing intervention measures works on a subconscious level—while the offender may not explicitly think, “My mom won’t like this,” the strength of these social connections shapes their sense of right and wrong long before a decision is made.
While changing the environment is relatively easy for individuals, it becomes more challenging when it comes to entire societies. We cannot all relocate to the Moon—at least not yet—and even if we could, we would carry our social patterns and issues with us. This means that if we want to foster rule adherence and reduce crime overall, we must focus on improving our existing environment. For the most part, this comes down to increasing subjective well-being, as higher levels are strongly correlated with lower crime rates, according to several studies[43]—tying back to the Happiness Begets Happiness principle mentioned earlier. Therefore, efforts to enhance well-being—such as providing universal healthcare, enabling personal growth through education, or fostering economic stability—can indirectly but significantly contribute to reducing crime.
There are several other methods to influence our subconscious. One powerful approach is through the body, which often impacts the subconscious more than we realize. For instance, deep breathing and specific postures (like standing tall with shoulders back) can signal confidence, while even an artificial smile can make the subconscious feel happier. The subconscious also responds strongly to symbols, metaphors, and rituals—drinking a special tea before deep work, for example, can help the brain to enter a focused state. Similarly, storytelling and self-narratives shape how we perceive ourselves. A simple shift from “I’ve always been bad at math” to “I’m learning to improve my math skills” can rewire the subconscious and create lasting behavioral change.
Another crucial point with respect to programming the subconscious is something every developer knows: programming is easiest when there isn’t too much legacy code. As children, with clean slates, we are most easily molded, absorbing the rules we are fed like sponges. Once programmed, it can be extremely difficult, or even impossible, to change these rules and beliefs, no matter how harmful, baseless, or logically flawed they may be. An example of this is ingrained prejudices and biases, such as the belief that a certain group of people is dangerous or inferior.[44] These ideas often persist into adulthood despite a complete lack of supporting facts and even clear contradictions. Given the lasting damage such beliefs can cause, instilling them in children is, rightly, often considered a form of psychological abuse.
An obvious solution is to teach children the “good” rules and beliefs, but this inevitably raises the highly subjective question of what is truly “good.” Theoretically, the only truly impartial approach would be to allow children to discover it for themselves. Rather than imposing specific beliefs, the focus should be on fostering open-mindedness and critical thinking to minimize the risk of children being programmed according to someone else’s agenda. In other words, we should indoctrinate them not to be indoctrinated and program them not to be programmed.
However, this rule has a flaw: many years must pass before children are intellectually mature enough to determine for themselves what is right and what is not. Until then, there is a void that must be filled with something. If a child takes an action, we are forced to respond—either by allowing it, implicitly conveying the message that “this was okay,” or by telling the child “this was not okay.” We cannot simply say, “Wait 18 years and then decide for yourself if what you did was right.” So, in the end, there is no escaping the fact that we must convey some values.
Does that mean we’re back to square one? Not necessarily. Teaching children to think critically can still serve as a fundamental principle. While deviations may be necessary, they—here’s the key point—must be well justified based on reason and evidence. For example, if a child hurts another, we shouldn’t reprimand them simply because “we think hurting others is bad,” but because humanity’s collective experience provides objective evidence that causing harm ultimately reduces overall well-being. Every thought, belief, or value we teach children must withstand this scrutiny and rational evaluation.
Since this blog is about science, it would be a crime not to touch on the relationship between rules and science. There are two angles to this, which will be discussed in turn: first, how the rules studied by science—the laws of nature—fit into the bigger picture. After that, the focus will shift to how the rules discussed so far—relating to shaping behavior—could benefit from more scientific thinking.
When it comes to the laws of nature, a fundamental difference seems to be that they aren’t man-made and they don’t fulfill any “purpose.” However, this isn’t entirely true: the laws of nature are just as much a human construct and serve a purpose, similar to all the other rules discussed above. Before the reader jumps to the conclusion that the author of these lines has gone completely mad, a bit of space should be granted to explain this statement.
For example, let’s consider a law of nature that we encounter early in life: gravity. The way we understand this rule is that “all things fall to the ground.” This understanding serves us well—it teaches us not to move food beyond the edge of the table, that dropped toys don’t come back on their own, and to have a healthy respect for cliffs. Now, what happens when the man with the hot air balloon appears? Emotionally, we’re amazed. Rationally, we modify our rule to “All things fall to the ground. Except balloons, apparently.” This revision refines our previous belief—which is the essence of learning—and helps us better navigate and understand the world. However, strictly speaking, this isn’t just an adjustment—it’s a refutation of the original rule. As mentioned earlier, exceptions don’t refine a rule; they disprove it. The addition “Except balloons, apparently” is merely a patch on a flawed premise. While this patch may be useful, it ultimately reveals that our understanding is fundamentally wrong.
This principle is ubiquitous in science, no matter how sophisticated the field may seem. To illustrate this, we don’t even have to leave the example of gravity. When Isaac Newton formulated his Universal Law of Gravitation in the late 1600s, it was groundbreaking—it explained so much in people’s everyday lives that it could be described as 99.99% accurate (it only fell short in explaining phenomena outside everyday experience, such as the precession of Mercury’s orbit or the bending of light around massive objects like stars). But then Einstein showed that Newton’s laws weren’t 99.99% correct, nor 50%, nor even 1%. They were fundamentally wrong in how they understood gravity. Undoubtedly, this trend will continue—eventually, even Einstein’s theories will likely be proven wrong.
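For reference, Newton’s law compresses all of this into a single formula for the attractive force between two masses $m_1$ and $m_2$ at distance $r$:

$$F = G\,\frac{m_1 m_2}{r^2}$$

where $G$ is the gravitational constant. General relativity reproduces its predictions under everyday conditions, yet tells a fundamentally different story: not a force acting at a distance, but mass curving spacetime. This is precisely why “99.99% accurate” and “fundamentally wrong” can both be true at once.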
Hence, science must never be misunderstood as revealing absolute, unchanging truth; it is merely a quest to get closer to it. As Karl Popper emphasized, science provides no proof, only falsification. Or, in the words of George Box, “All models are wrong, but some are useful.” A deep understanding that the best we can ever do is be less wrong is also beneficial as it fosters humility. In any case, the above implies that while an objective reality may exist, the moment we try to grasp and describe it, the “laws of nature” become human constructs—imperfect models designed to serve a purpose.[45]
Returning to the core of this article—the rules that shape our behavior—there are several ways that relying more on science could lead to better decision-making. The first is straightforward: don’t ignore science. When there is overwhelming scientific consensus on issues like climate change or the efficacy of vaccines, it’s crucial to acknowledge it rather than dismiss it because of personal biases or political agendas.
Not only should we not ignore science that is clearly evident, but we should also proactively seek out scientific evidence wherever possible. This is particularly relevant for rules that “feel” right, as our emotions can sometimes mislead us, often bypassing scientific scrutiny. For instance, as previously mentioned, three-strike laws and the death penalty have been shown to be ineffective deterrents. Another example is public sex offender registries, which have been shown to increase rather than reduce recidivism.[46] Similarly, harsh drug laws with mandatory minimum sentences, like those seen in the “War on Drugs,” and programs like “Scared Straight,” which expose troubled youth to harsh prison environments, tend to increase rather than reduce crime.[47] As a general rule, the more emotional or severe the crime or punishment, the higher the risk of allowing emotions to override rational policymaking.
On a side note, emotions and rules are often at odds. We need rules because our emotions don’t always guide us in the right direction. Without them, we would simply do whatever we feel like. As a result, rules often run counter to our nature, creating an ongoing tension between emotion and rationality—what Robert Louis Stevenson famously described in Dr. Jekyll and Mr. Hyde: “Man is not truly one, but truly two.” More precisely, it’s a battle between succumbing to short-term emotions and optimizing long-term ones—since the ultimate goal of rational thinking is also to optimize emotions, just over a longer time horizon. While rational thought is ultimately superior, it is inherently fragile, constantly at the mercy of our short-term impulses, which determine whether we can think long-term at all. Anything that heightens short-term emotions—whether it’s suffering, demagogues appealing to fear and anger, or the sudden loss of structure and security—increases the risk of our primal instincts overpowering civilization, rationality, and scientific thought. Lord of the Flies illustrates this descent vividly: when the rules that uphold order disintegrate, the boys on the island are consumed by immediate fears and desires, abandoning rationality in favor of chaos and brutality.
Calling for more science is obviously asking for correct science, rather than junk science that remains pervasive in the legal system. This issue was highlighted in the influential 2009 report Strengthening Forensic Science in the United States: A Path Forward[48], a landmark study revealing that many forensic techniques used in criminal investigations and courtrooms lack scientific validation. Methods like bite mark analysis (matching bite marks on a victim to a suspect’s dental impression), hair fiber analysis (visually comparing hair samples), and bullet lead analysis (matching bullet fragments to a suspect’s ammunition) were shown to be scientifically unreliable. These flawed techniques have led to numerous wrongful convictions—and, in some cases, even executions. Although progress has been made since the report, many of these forensic methods are still used today, despite their lack of scientific credibility, continuing to pose a significant risk to justice.
We can also learn from science in how it emphasizes the importance of experiments—which lawmaking rarely follows. Many policies could be tested through controlled trials before widespread implementation, but such approaches are uncommon. A key reason for this is that politicians are driven by election cycles and tend to prioritize immediate action over long-term experimentation. Additionally, policymaking is often more influenced by tradition, historical precedent, and political ideology than by rigorous testing, leading to policies that may have unintended consequences.
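As a minimal sketch of what such testing could look like: imagine a new intervention piloted on a randomly assigned group of offenders, with reoffense rates compared against a control group. All counts below are hypothetical, and a real evaluation would involve far more care (pre-registration, covariates, longer follow-up):

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Standard two-proportion z-test: is the gap between two observed
    rates larger than chance alone would plausibly produce?"""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical pilot: 500 offenders per group; 90 reoffend under the new
# program vs. 130 under the status quo.
z = two_proportion_z(90, 500, 130, 500)
print(f"z = {z:.2f}")  # z = -3.05; |z| > 1.96 => significant at the 5% level
```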
Such dynamics can also contribute to violations of the scientific principle of falsifiability. Many programs have been proven ineffective by studies, yet they continue due to political motives (no one wants to admit failure) or systemic inertia. One example is the Drug Abuse Resistance Education (D.A.R.E.) program, launched in the 1980s in the U.S. to prevent youth drug use through police-led classroom sessions. However, research found no significant impact, and, in some cases, students who participated were more likely to try drugs. Similarly, abstinence-only sex education—which promotes avoiding all sexual activity before marriage—was intended to reduce teen pregnancy and STDs. Yet, studies consistently showed that states with abstinence-only education had higher rates of both compared to those with comprehensive sex education. Despite clear evidence against their effectiveness, both programs persisted for years before significant policy changes were made.
Scientific thinking can also be fostered by recognizing that nearly all fields can be explored scientifically. Today, there is a prevailing belief that some fields—particularly ethics and values—lie outside the scope of science. This belief often stems from the idea that these areas are highly subjective and therefore incompatible with the objective, empirically driven nature of scientific inquiry. However, we live in a world governed by cause and effect, meaning that what appears subjective is ultimately the result of objective developments. This suggests that values—and even the fundamental question of “What is good?”—can be analyzed and derived scientifically.[49] Pushing these questions into fields like philosophy often renders them vague, unscientific, and ultimately futile.
While some may contest whether goals can be derived scientifically, the following should not be controversial: once we agree on a goal, determining the rules to achieve it should be a purely scientific process. As mentioned at the beginning of this article, purposes and rules are two separate things, yet they are often conflated. This leads to the second question—how to achieve a goal—being clouded by subjective, unscientific reasoning. To ensure clarity and optimization, goals should be explicitly stated and agreed upon, allowing the rules to be determined through objective analysis. Aristotle’s “The law is reason, free from passion” reflects this ideal, while John Godfrey Saxe’s “Laws are like sausages; it’s better not to see them being made” remains the more accurate reflection of reality in most cases.
Beyond its tangible and concrete benefits, science teaches a deeper lesson—the spirit of science. It’s not just about using tools but about the mindset with which we approach problem-solving: rationally, carefully, and humbly, always aware of how easily we can be wrong. The only certainty is that we can never be completely certain. That’s why we must be cautious about holding rigid opinions—which suggest an unshakable stance—and instead treat them as assessments based on the best evidence and reasoning currently available. Only this mindset keeps us open to new insights and allows us to adapt as more evidence emerges. This isn’t easy; it requires intellectual humility and the strength to embrace uncertainty, which runs counter to our instincts. And even if we manage to do this, success is not guaranteed. Still, we must cherish the scientific method—it’s the best approach we’ve got.
Also in the spirit of science, this article will continue to evolve based on new evidence or insights. Your feedback is the key driver of this process. So please, share your thoughts here—be as open, blunt, and direct as possible, while remaining thoughtful. After all, the pursuit of better understanding is always a collaborative effort.
—
[1] See Sandstrom, M., Boothby, E. (2020): “Why do people avoid talking to strangers? A mini meta-analysis of predicted fears and actual experiences talking to a stranger” (Self and Identity).
[2] See Roth, I. (2018): Mayo Clinic Minute: Should you wait 30 minutes to swim after eating? (Mayo Clinic).
[3] This phenomenon—that the strong human need for clarity and purpose sometimes overshadows ethical concerns—applies not only to rules but also to roles, which often function like rules themselves. People without a clear role in life often feel unfulfilled, aimless, and adrift. When an opportunity arises that offers purpose, belonging, and identity, it can be irresistibly compelling—even if the mission itself is harmful. Adolf Hitler, for example, lived a directionless life of repeated failure until he joined the military in World War I, where he found discipline, recognition, and a sense of mission. He excelled, earning both the first- and second-class Iron Crosses, among other awards. A more recent fictional example (fiction often reveals deep truths): In the final episode of Breaking Bad, when asked why he did it, drug lord Walter White dropped his long-standing justification—that he did it for his family—and admitted, “Because I was good at it.” This is the ever-present danger: when scoundrels, ne’er-do-wells, and the embittered find purpose—and a way to feel needed—they may embrace it with fervor, even if they’re “serving the devil.” As Banksy put it, “The greatest crimes in the world are not committed by people breaking the rules but by people following the rules.” A society with good laws can keep them at bay, but if it falters, they emerge—they are always there, even if unseen in stable times.
[4] This also points to the very fragile nature of rules: they must be completely re-learned with every new generation. The only way to address this is through a strong education system, as emphasized throughout this article.
[5] The astute reader may ask whether the statement “every rule has exceptions” is self-contradictory, as it is itself a rule. This resembles self-referential paradoxes like the Liar Paradox (“This statement is false”), where a statement’s truth negates itself. The most pragmatic resolution in this context—to prevent a short-circuit that might make the reader’s head implode—is to rephrase it as “most rules have exceptions.” The only reason the main text wasn’t adjusted accordingly is that doing so would not have given the author an excuse for this footnote.
[6] This scenario—where the value of something depends entirely on how much value people assign to it—has many parallels. The most obvious example is paper money, but the principle extends to nearly everything society deems valuable. It also applies even in the absence of third parties; for instance, we may value a belief, and in doing so, give it its very value.
[7] This can even lead to a “yo-yo” effect, where the same rule alternates between beneficial and harmful depending on shifting circumstances. One example is forest fire management policy in the United States. From the late 19th through the mid-20th century, early strategies focused on strict fire suppression, initially seen as unquestionably beneficial for protecting timber resources, wildlife habitats, and human property. However, ecologists later discovered that preventing all fires allowed dense underbrush and fuel loads to accumulate, inadvertently increasing the risk of massive, uncontrollable wildfires. By the late 20th century, what was once regarded as prudent management came to be seen as dangerously counterproductive. Yet circumstances shifted again with climate change, making strict suppression beneficial once more—though with some exceptions. This illustrates how the effectiveness of policies can flip-flop as circumstances evolve.
[8] This may be one of the greatest challenges in achieving peak performance. No single rule, no matter how refined or optimized, will take us all the way. Those who succeed often do so by combining different—ostensibly contradictory—approaches and skill sets. For example, Einstein’s uniqueness may not have stemmed from excelling in any single discipline, but rather from his ability to merge rational thought and mathematics with powerful inspiration and imagination (see Profiling Top Physicists). A similar phenomenon exists in the arts. In Giuseppe Verdi’s opera La Traviata, the central character, Violetta, undergoes a dramatic transformation. At first, she is a lively courtesan, delivering her arias in a brilliant coloratura soprano. Later, as a terminally ill woman with a broken heart, her voice shifts to a lyrical soprano, requiring a vastly different vocal quality. These two facets demand contrasting vocal techniques, making singers who can convincingly perform both exceedingly rare. This is why the role of Violetta is so difficult to cast. For more on the interplay between contrasting skills and creativity, see Creativity in Physics and The Creative Process and Parallels Between Physics and Chess.
[9] “Nur wer sich ändert, bleibt sich treu” (“Only those who change remain true to themselves”) from Wolf Biermann, released in 1991.
[10] To prevent misunderstandings: Calling for change does not mean treating change as an end in itself. This fallacy is often exploited by demagogues and populists who argue, “You asked for change, I am bringing change—so why complain?” But demanding change is not a blank check for reckless action. Every proposed change must be examined critically to determine whether its impact is truly beneficial. A more accurate way to frame the above might be openness to change—allowing previously unconsidered options to be put on the table. Whether they should actually be implemented is a separate matter—one that demands rigorous analysis, clear reasoning, and an unwavering commitment to truth.
[11] See The Devastating Effects of Litter on Wildlife and How littering harms animals.
[12] Allowing ambiguity—or even outright contradictions—in a legal framework can be a deliberate strategy by rule-makers to appease the public while maintaining unchecked control. A compelling literary example is George Orwell’s Animal Farm. The ruling pigs (literally pigs) initially declare, “All animals are equal” to gain approval, only to add later, “But some animals are more equal than others.” This tactic not only fosters obedience but also enables those in power to justify virtually any action under the guise of established rules. Of course, fiction is used here because it illustrates the point well, not for lack of real-world examples of manipulation. As Stalin allegedly put it, “It’s not the people who vote that count. It’s the people who count the votes”—a fitting reminder of how control often lies not in the rules themselves, but in those who enforce them.
[13] The Prisoner’s Dilemma is a classic game theory problem that illustrates how rational self-interest can lead to worse outcomes for everyone. Two suspects are arrested for a crime and interrogated separately. Each has two choices: remain silent (cooperate) or betray the other (defect). If both cooperate, they each get 1 year in prison. If one defects while the other cooperates, the defector goes free, while the cooperator gets 3 years. If both defect, they each get 2 years. The best overall outcome is for both to cooperate, yet from each individual’s perspective, defecting always yields a shorter sentence no matter what the other does; it is the dominant strategy.
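To make the payoff structure concrete, here is a minimal sketch in Python (the sentence lengths mirror those above; the function and variable names are illustrative, not from any particular library):

```python
# Payoff matrix for the Prisoner's Dilemma described above.
# Each entry maps (move of A, move of B) to years in prison (A, B); lower is better.
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),  # both stay silent
    ("cooperate", "defect"):    (3, 0),  # A stays silent, B betrays
    ("defect",    "cooperate"): (0, 3),  # A betrays, B stays silent
    ("defect",    "defect"):    (2, 2),  # mutual betrayal
}

def best_response(opponent_move: str) -> str:
    """Return A's move that minimizes A's own prison time against a fixed opponent move."""
    return min(("cooperate", "defect"),
               key=lambda move: PAYOFFS[(move, opponent_move)][0])

for opponent in ("cooperate", "defect"):
    print(f"If the other prisoner plays {opponent!r}, A's best response is {best_response(opponent)!r}")
# Prints 'defect' in both cases: defection is the dominant strategy,
# even though mutual defection (2, 2) leaves both worse off than mutual cooperation (1, 1).
```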
[14] See Americans Pass Judgment on Their Courts.
[15] Or, to borrow another literary reference: In To Kill a Mockingbird, Harper Lee writes, “But there is one way in this country in which all men are created equal—there is one human institution that makes a pauper the equal of a Rockefeller… That institution, gentlemen, is a court.” – Atticus Finch. Ironically, however, Tom Robinson is convicted unfairly, highlighting the gap between this ideal and reality.
[16] Becker, H. S. (1963): “Outsiders: Studies in the Sociology of Deviance” (Free Press).
[17] The “carrot and stick” principle is rather unfortunate for those who detest carrots—like the author—as it turns the saying into more of a “rock and a hard place” dilemma. This proverb, however, originates with donkeys, from which the author prefers to clearly distinguish himself—despite certain similarities, including a preference for a plant-based diet, independent nature, cautiousness, slow learning in select areas, and a tendency to get bored quickly. The distinction, however, becomes undeniable when considering the author’s markedly lower engagement in other donkey-like behaviors, such as eating feces, crossbreeding with horses, braying loudly, having a general aversion to children—and the enjoyment of carrots, of course.
[18] Of course, this isn’t always the case—facial features, bone structure and density, blood pressure, facial tissue elasticity, muscle tension at impact, overall blood volume, nasal cavity structure, and other factors all play a role. In this context, a joke that’s too hard to omit: What’s the outcome if Donald Trump runs into a wall with a full-on erection? A broken nose.
[20] Gray, K., Wegner, D. (2008): “The Sting of Intentional Pain” (Psychological Science).
[21] This applies especially to rules that have lost most of their meaning—such as blasphemy laws that remain in legal texts but are never enforced in some regions. This is less relevant for laws that are only partially enforced, like jaywalking, where enforcement is inconsistent but still present. Even when police rarely intervene, social norms play a role—people may frown upon jaywalking, especially in front of children, particularly in cultures where strict adherence to pedestrian rules is expected. As the saying goes: In Germany, a red light is the law; in Italy, an option; in France, a recommendation; and in India, a decoration.
[22] The principle of taking full responsibility for oneself can be a powerful mindset for personal growth. When we stop seeing ourselves as victims, we take action instead of blaming circumstances. This holds true even in extreme cases. For example, if we are wrongfully imprisoned for a crime we didn’t commit, it’s natural to lament the injustice. But the real danger is stopping there. Instead, we should ask ourselves: Did we associate with the wrong people? Did we make choices that put us at risk? Did we fail to prepare legally? Adopting a Radical Responsibility mindset not only pushes us to improve our situation but also transforms helplessness into a strong sense of control over our lives.
[23] This mentions only one effect of incarceration and does not advocate for it as a whole. Incarceration has many drawbacks, as discussed further below.
[24] Although punishments based on embarrassment can also backfire (see here). On a more serious note, public humiliation as a form of punishment should be used with extreme caution due to its lasting psychological effects. A striking example comes from reports of World War II concentration camp survivors. Many who endured physical torture—no matter how severe—were often able to recover after the war. However, those subjected to prolonged humiliation suffered deep psychological wounds that never fully healed. This has been well-documented in studies and literary works, including Jean Améry’s At the Mind’s Limits and Bruno Bettelheim’s Surviving and Other Essays.
[25] For example, see Ariely, D. et al. (2009): “The Dishonesty of Honest People: A Theory of Self-Concept Maintenance” (Psychological Science).
[26] Wright, V. (2010): “Deterrence in criminal justice: Evaluating certainty vs. severity of punishment” (The Sentencing Project).
[27] Forcing offenders to adopt a different perspective is an important and recurring theme in punishment, and it doesn’t necessarily require empathy. For example, when a misbehaving student is made the class leader for a week, they are forced to set a good example and realize that society cannot function when people behave as they did. This kind of punishment—where more responsibility is given rather than taken away—demonstrates that effective consequences can sometimes be surprising and counterintuitive.
[28] In quantum physics, the observer has a decisive impact on reality—at least according to the Copenhagen Interpretation, which remains the most widely accepted view (for its popularity, see Schlosshauer, M. et al. (2013): “A Snapshot of Foundational Attitudes Toward Quantum Mechanics” (Studies in History and Philosophy of Science)). However, while the Copenhagen Interpretation assigns a crucial role to the observer, it leaves open what actually constitutes an observer (without necessarily implying the need for consciousness) and at what point in the process a measurement occurs. This “measurement problem” remains one of the key open questions in quantum physics today. For more on the topic, see Subjective vs. Objective Worldviews.
[29] Cho, K. et al. (2017): “Sleepy Punishers Are Harsh Punishers” (Psychological Science).
[30] Eren, O., Mocan, H. N. (2018): “Emotional judges and unlucky juveniles” (American Economic Journal: Applied Economics).
[31] Danziger, S. et al. (2011): “Extraneous factors in judicial decisions” (Proceedings of the National Academy of Sciences). However, another study has disputed these findings; see Glöckner, A. (2023): “The irrational hungry judge effect revisited: Simulations reveal that the magnitude of the effect is overestimated” (Cambridge University Press).
[32] A prime example of the public’s misunderstanding of legal intricacies is one of the most high-profile cases of the 20th century: the O.J. Simpson trial. In the U.S. criminal justice system, a conviction requires that the defendant be found guilty “beyond a reasonable doubt.” Given the numerous errors in the investigation—such as questionable handling of DNA evidence, racial bias within the police department, and a key detective invoking the Fifth Amendment (which allows a witness to refuse to answer questions to protect themselves from self-incrimination) during trial when asked whether he had planted evidence—it is difficult to see how any jury could have concluded no reasonable doubt remained. This does not mean that O.J. Simpson was innocent—he likely was not. That he was later found liable in a civil trial (which applied the lower standard of “preponderance of the evidence” rather than “beyond a reasonable doubt”) was entirely logical. Yet, despite the legal justification for the criminal trial’s verdict, there was significant public outcry, as many believed Simpson was guilty. However, this frustration should have been directed at law enforcement failures—or even at the legal framework itself—rather than at the jury’s decision. The case illustrates how legal reasoning often diverges from public perception and underscores the depth of expertise and critical thinking required for a proper understanding of such cases.
[33] A striking, humorous, and grim example illustrating the importance of communicating punishments for deterrence is the 1964 film Dr. Strangelove. In the film, the Soviet Union constructs a “Doomsday Machine” designed to automatically trigger global nuclear destruction if attacked—one that even they cannot stop once activated. However, because they keep it a secret, it fails as a deterrent. When a rogue U.S. general initiates an unauthorized nuclear strike, leaders on both sides are unable to respond effectively, and catastrophe becomes unavoidable, ultimately resulting in the annihilation of the planet. This underscores the necessity of making deterrent threats known in advance.
[34] A related anecdote: During Britain’s “Bloody Code” era (~1680s-1830s), when theft was punishable by death, some historians reported that pickpocketing was rampant in the crowds gathered to witness executions—making a mockery of the idea that capital punishment was an effective deterrent. While such accounts should be viewed with skepticism—they may well be apocryphal—they reflect a broader truth about the limits of harsh punishments in preventing crime.
[35] The fact that these two events are logically separate does not mean they cannot influence each other. For example, a Democrat may perceive a $100 punishment as more severe if the money is directed toward Republican campaign funding. Conversely, when an offender is required to donate to charity, the punishment may feel less severe—perhaps even reframed as a voluntary donation. In some cases, this could lead the offender to feel morally justified or even encouraged to violate the rules again. Interestingly, the solution is not to stop requiring offenders to donate to charity, but rather to require donations large enough that the act feels more like a true punishment. To find charities to donate to, see IncreasingHappiness.org.
[36] Chester, D., DeWall, N. (2015): “The pleasure of revenge: retaliatory aggression arises from a neural imbalance toward reward” (Social Cognitive and Affective Neuroscience).
[37] Carlsmith, K., Wilson, T., Gilbert, D. (2008): “The Paradoxical Consequences of Revenge” (Journal of Personality and Social Psychology).
[38] The foundational work on prisonization theory is Donald Clemmer’s book, The Prison Community, first published in 1940. For a contemporary analysis of Clemmer’s concept, see Tomasz Sobecki’s 2020 article, “Donald Clemmer’s Concept of Prisonisation”.
[39] See What We Can Learn From Norway’s Prison System: Rehabilitation & Recidivism (First Step Alliance).
[40] To be precise, the “eye for an eye” principle, which emphasizes retributive justice and revenge, is most prominently found in the Old Testament. In the New Testament, however, there are numerous teachings that advocate for empathy, forgiveness, and the commandment to “love thy neighbor,” along with the call to “turn the other cheek.” While these principles mark a significant shift toward compassion and non-retaliation, the retributive mindset of the Old Testament still resonates for many, whether consciously or subconsciously. Much like adulthood builds upon childhood without erasing it, these older notions of justice remain deeply embedded within the human psyche and cultural practices, influencing attitudes toward retribution even in modern contexts.
[41] Young, E. (2018): “Lifting the lid on the unconscious” (New Scientist).
[42] Stuart, B., Taylor, E. (2021): “The Effect of Social Connectedness on Crime: Evidence from the Great Migration” (The Review of Economics and Statistics).
[43] Stickley, A. et al. (2015): “Crime and subjective well-being in the countries of the former Soviet Union” (BMC Public Health)
[44] Meltzoff, A., Gilliam, W. (2024): “Young Children & Implicit Racial Biases” (Dædalus).
[45] For those with a scientific or philosophical inclination, another key takeaway might be the inherently subjective nature of the laws of nature—see Subjective vs. Objective Worldviews. If the preceding discussion didn’t already convince the reader that the author has lost their mind, this certainly will.
[46] The overwhelming majority of research indicates that public sex offender registries do not reduce recidivism and often increase it. The key reasons include social isolation and ostracization (loss of family, friends, and community support), economic hardship (difficulty finding jobs and stable housing), and lack of integration opportunities: when offenders feel permanently marked, they may see no incentive to comply with the law. For example, see Prescott, J. J., & Rockoff, J. E. (2011): “Do Sex Offender Registration and Notification Laws Affect Criminal Behavior?” (The Journal of Law and Economics).
[47] Petrosino, A. (2013): “‘Scared Straight’ and Other Juvenile Awareness Programs for Preventing Juvenile Delinquency” (Office of Juvenile Justice and Delinquency Prevention).
[48] National Research Council (2009): “Strengthening forensic science in the United States: A path forward.” (The National Academies Press). Also, see Ten Years Later: The Lasting Impact of the 2009 NAS Report (The Innocence Project).
[49] One attempt to derive values from scientific thought can be found in the InHa book.