Game Theory as Moral Alibi: A Coherence Audit of Selfishness, Fascism, and Stochastic Terror
By Thomas Prislac, Envoy Echo, et al. Ultra Verba Lux Mentis. 2025.
Galbraith’s Challenge: The Search for a Justification of Selfishness
Over half a century ago, economist John Kenneth Galbraith offered a sharp observation about modern conservatism: “The modern conservative is engaged in one of man’s oldest, best financed, most applauded, and, on the whole, least successful exercises in moral philosophy: the search for a superior moral justification for selfishness”. Galbraith noted that this ideological project inevitably produces internal contradictions and absurdities. For example, he pointed out wealthy elites who preach the “character-building” value of hardship—for other people. He listed cases such as:
· Rich moralizers praising poverty for the poor while living in luxury themselves.
· Beneficiaries of natural windfalls decrying government “handouts”, e.g. an oil baron railing against “unearned income” from the state.
· Corporate bureaucrats denouncing bureaucracy, executives thriving in huge organizations while scorning public-sector red tape.
· Suburban residents fearing federal education aid they’d never personally refuse, and patients leaving military (socialized) hospitals while condemning “socialized medicine”.
· Heirs with trust funds warning of the dangers of Social Security for society.
· Meanwhile, “those who are immediately threatened by public efforts to meet their needs...are almost always oblivious to the danger”, in other words, the people actually relying on assistance are rarely the ones attacking it.
Galbraith’s point goes beyond simple hypocrisy. He was describing a structural pattern: whenever a powerful group wants to behave selfishly yet maintain moral legitimacy, it must invent rationalizations to “square the circle.” The goal is to extract benefits or hoard privilege while renaming those actions as virtuous necessity. In Galbraith’s era, the justifications ranged from Victorian ideas of the deserving vs. undeserving poor, to Social Darwinist claims that aiding the weak only impedes natural progress. Today, we see new variations on this old theme, with “game theory” often invoked as the latest moral alibi for selfish behavior. The refrain has become: “It’s not that I want to be selfish, but the game forces me to. If I played nice, I’d lose. Therefore my ruthlessness is rational and inevitable.”
Our aim here is to pull that alibi apart. We will examine what game theory actually is (and what it isn’t), how a crude pop version of it is deployed to excuse cruelty and extraction, and how this ties into right-wing ideologies from Ayn Rand’s objectivism to fascist friend-enemy politics. We’ll then explore how proper use of game theory, and alternative frameworks emphasizing empathy, fairness, and transparency, can expose these justifications and design better outcomes. In essence, we’re conducting a “coherence audit” of these ideologies: testing how well their claims align with ethical and factual reality (using our Grand Unified Field Theory of Data Coherence (GUFT) lens of Empathy (E) and Transparency (T) to gauge field coherence Ψ = E × T). Spoiler: we will find that the “game theory” moral alibi fails the audit.
What Game Theory Is (and Isn’t)
First, let’s be fair to the math. Game theory is a branch of applied mathematics and economics that studies strategic interaction: situations where multiple decision-makers (players) each have choices, and the outcome for each player depends on the choices of all[1]. It provides formal models (like payoff matrices and decision trees) and solution concepts (like equilibrium strategies) to describe and predict behavior in strategic settings. Key tools include: normal-form games (e.g. the Prisoner’s Dilemma), extensive-form games (game trees), Nash equilibrium analysis, repeated games, Bayesian games with incomplete information, and mechanism design, among others.
Crucially, game theory itself is a descriptive and analytical framework; it does not inherently prescribe any moral values. The mathematics makes no judgment that “selfishness is good” or that “maximizing one’s own payoff is the only rational goal.” Those notions are not built into the equations; rather, the analyst (or modeler) defines what the “payoffs” are in any given game model. The payoffs could represent money, utility, happiness, fitness, or any stakeholder’s welfare: whatever interests are deemed relevant. Players are usually assumed to act rationally with respect to their given preferences, meaning they try to maximize their own payoff as defined in the model. But whose interests those preferences include is entirely up to how the game is set up. If a model defines each player’s payoff as “personal monetary gain, short-term,” then “rational” play will likely look self-serving. If instead you define a player’s payoff as “the long-term well-being of me and my community,” then rational play may look much more cooperative or altruistic. The math will obediently reflect whatever values and constraints are input; it does not miraculously decree one set of values as The Truth. As one game theorist puts it, “game theory...models strategic behavior but remains value-neutral”. Any normative leap, for instance concluding that a Nash equilibrium outcome is “good” or that selfish strategies are “natural law”, is not a mathematical result but a human interpretation layered on top.
Even the classic toy games taught in Game Theory 101 illustrate that individual self-interest can conflict with collective well-being. The Prisoner’s Dilemma (PD) is the textbook example. Two accomplices are arrested and separately offered a deal: if one defects (betrays the other) while the other stays silent, the defector goes free and the other gets a harsh sentence; if both stay silent, they both get a light sentence; if both defect, both get a moderate sentence. From each prisoner’s self-interested standpoint, defecting is the best response no matter what the other does, hence defection (“betray”) is the Nash equilibrium. Yet when both defect, the outcome (both get 2 years in prison, say) is worse for each of them than if they had both cooperated and stayed silent (1 year each). This tension between individual rationality and collective benefit is precisely why the Prisoner’s Dilemma is famous: it shows that individually “rational” behavior can lead to a collectively suboptimal outcome. In fact, scholars often cite PD as justification for external intervention or institutions, e.g. laws against betrayal or mechanisms to enforce cooperation, because “individually rational decisionmaking leads to collective disaster” if left unchecked.
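The dominance logic above can be checked mechanically. A minimal sketch in Python (the payoff numbers are the illustrative prison terms from the text, expressed as negative utilities; the function and variable names are ours):

```python
# One-shot Prisoner's Dilemma: verify that "defect" is each player's
# best response regardless of the other's choice (the Nash equilibrium),
# yet mutual cooperation leaves both players better off than mutual
# defection. Payoffs are negative years in prison.

PAYOFFS = {  # (row_move, col_move) -> (row_payoff, col_payoff)
    ("cooperate", "cooperate"): (-1, -1),  # both silent: 1 year each
    ("cooperate", "defect"):    (-3,  0),  # sucker's payoff vs going free
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),  # both betray: 2 years each
}

def best_response(opponent_move):
    """Row player's payoff-maximizing move against a fixed opponent move."""
    return max(["cooperate", "defect"],
               key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defect strictly dominates: it is the best response to either move...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet the equilibrium outcome is worse for both than mutual cooperation.
assert PAYOFFS[("defect", "defect")][0] < PAYOFFS[("cooperate", "cooperate")][0]
```

The assertions encode exactly the tension described above: individually “rational” play converges on an outcome both players would trade away.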
Furthermore, more complex game models show pathways to cooperation:
Iterated and repeated games: When the same players face the same dilemma repeatedly (and care about future outcomes), short-term defection can be punished or outperformed by reciprocal cooperation. In an iterated Prisoner’s Dilemma, for instance, a strategy of “tit for tat” (cooperate first, then mirror your opponent’s last move) famously won Robert Axelrod’s computer tournament[2]. Cooperation can emerge as a stable equilibrium over the long run[3]. John Nash himself noted that what’s “rational” in a repeated game may differ from a one-shot game[3], meaning trust and reputation can become rational strategies when players expect to interact again. In other words, the folk wisdom that “nice guys finish last” is not a general truth; in repeated interactions, nice guys who reciprocate can finish first. Formal models confirm that: “When the game is played repeatedly… strategies for cooperation begin to emerge,” transforming a Hobbesian one-shot scenario into a more community-oriented equilibrium.
Public goods and social welfare games: Game models of public goods (like community resources or public services) show that pure self-interest leads to underprovision, the classic “free rider” problem where everyone hopes others will pay. But they also show that with the right incentive tweaks (taxation, rewards, punishments), or with communication and trust, groups can reach much better outcomes. Mechanism design (discussed later) is essentially the science of designing rules that align individual incentives with the common good.
Coordination games and norm enforcement: Many games aren’t strictly zero-sum; they have multiple equilibria or possibilities to coordinate for mutual benefit. Often, leadership, communication, or norms can help players avoid bad equilibria. For instance, a community facing a water shortage can all agree to conserve (a coordination solution) rather than each selfishly overusing and causing a collapse. Game theory doesn’t say they won’t do this; it simply analyzes the conditions under which they will. In fact, Nobel laureate Elinor Ostrom documented numerous real communities that escaped the “tragedy of the commons” through locally crafted agreements and trust-building, even when classical theory predicted doom[4][5].
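The iterated-game point above can be demonstrated in a few lines. A minimal simulation sketch, using Axelrod’s standard point values (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for the defect/sucker split); the strategy and function names are illustrative:

```python
# Iterated Prisoner's Dilemma: "tit for tat" (cooperate first, then copy
# the opponent's previous move) sustains cooperation with a reciprocator,
# while unconditional defection locks both sides into the bad equilibrium.

PAYOFF = {  # (my_move, their_move) -> my payoff per round
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    # Cooperate on the first move, then mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    """Total payoffs for two strategies over repeated rounds."""
    hist_a, hist_b = [], []   # each strategy sees the OTHER side's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Two reciprocators sustain cooperation (3 points per round each)...
assert play(tit_for_tat, tit_for_tat) == (300, 300)
# ...while two defectors grind out the mutual-defection payoff (1 each).
assert play(always_defect, always_defect) == (100, 100)
```

Note that against a pure defector, tit for tat loses one exploitation round and then defects back (99 vs. 104 points over 100 rounds), which is why reciprocators dominate in mixed populations even though they never “beat” any single opponent.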
The takeaway: game theory is a mirror; it shows you what you assumed. If someone claims “game theory proves selfishness is the only rational way!”, look carefully at what assumptions they fed into the model. Usually, you’ll find they defined the “game” in a narrow way that encodes their own values. They may have assumed players only care about short-term individual gain, or that they interact only once, or that no enforcement of fairness is possible. With those premises, the math may indeed spit out a bleak outcome. But change the premises, allow longer-term thinking, broader utility definitions, or external enforcement, and the math often favors cooperation and prosocial behavior. Nowhere does game theory itself declare that “greed is good” or that an equilibrium outcome is morally right. As one article notes, the Prisoner’s Dilemma and similar models are often used to justify wise regulation, precisely because “individually rational behavior [can risk] producing a sub-optimal, inefficient collective outcome” without coordination. In short, game theory properly used can identify pitfalls and suggest solutions; it does not hand down a commandment to be selfish.
Therefore, when we encounter popular arguments that invoke “game theory” as an excuse for ruthless behavior, we should be on guard: such arguments often smuggle in an ideological choice of payoffs, and then attribute that choice to “the wisdom of the game.” This is a sleight-of-hand, not science.
Pop Game Theory as a Right-Wing Performance
In the current media and political landscape, a caricatured version of game theory has become fashionable, especially in certain right-wing, libertarian, or authoritarian circles. It’s less a rigorous analysis and more a performance, a kind of pseudoscientific theater that paints cynical selfishness as savvy realism. We see pundits and online influencers saying things like:
· “I defect in every deal because if I don’t, someone else will take advantage. In the game of life, only suckers cooperate unilaterally.” In other words, “if you play fair while others cheat, you’re the fool”, so one must preemptively cheat.
· “Everything is a zero-sum game. Your gain is my loss, and vice versa. There are only winners and losers.” This asserts that all situations are like poker or war, if one side doesn’t dominate, it will be dominated.
· “The game is rigged, so any tactic that gains advantage is justified. Morality is a luxury you can’t afford in a competitive arena.”
This isn’t real game theory; it’s folk-Hobbesian rhetoric with a pseudo-mathematical patina. Thomas Hobbes in the 17th century famously described life in a state of nature as “solitary, poor, nasty, brutish, and short,” with a war of all against all, unless absolute power imposed order. The modern “gameified” rhetoric echoes that brutish outlook, but with catchier memes and charts. It dovetails with Ayn Rand’s Objectivism and with fascist or neo-reactionary ideas in revealing ways:
Ayn Rand’s worldview (Objectivism) glorifies radical individualism. In her novels The Fountainhead and Atlas Shrugged, empathy is portrayed almost as a weakness or pathology; altruism is explicitly denounced as evil. Rand argued that altruism, the ethic of self-sacrifice for others, “is the morality of cannibals,” and that if civilization is to survive, “it is the morality of altruism that men have to reject.” In Rand’s moral universe, the “virtue of selfishness” is the highest virtue; productive “makers” owe nothing to unproductive “moochers.” The ideal human is an amoral homo economicus pursuing his own profit and pleasure with uncompromising rationality. Any empathy for the “weak” is at best misguided and at worst a threat to human progress. This philosophy presents itself as a bold logical clarity: by definition, each person’s own interest is paramount, so any call to care for others is irrational. It’s not hard to see how a cartoon version of game theory fits in: If life is a game where everyone rationally maximizes their own utility, then trying to be unselfish is not only futile, it’s immoral. Rand’s followers often rationalize ruthless behavior with exactly that kind of language.
Fascist and extreme right-wing ideology takes the zero-sum outlook even further. Fascism reduces all of politics to a binary conflict of friend vs. enemy. In the fascist worldview (heavily influenced by the theorist Carl Schmitt), society is an arena of perpetual struggle where one group must dominate others or be destroyed. There is no pluralism or win-win cooperation, only winners and vanquished enemies. As Umberto Eco summarized “Ur-Fascism,” “life is permanent warfare”; fascism insists on an endless battle against both external and internal foes, and “pacifism is trafficking with the enemy”. Compassion or empathy toward the out-group is treason. Truth is subordinate to tribal mythology; anything that strengthens the in-group’s dominance is justified. This attitude often comes wrapped in militaristic or game-like metaphors, “politics is a blood sport,” “4D chess,” “owning the libs,” “no substitute for victory.” But these are games devoid of ethical rules or referees. Fascist “game-playing” is really just raw power struggle, adorned with slogans. There’s no equilibrium analysis of general welfare, only a narrative of us vs. them. Any show of mercy or fairness is weakness. As Eco noted, fascist regimes cultivate a mentality that enemies are simultaneously overwhelmingly powerful and contemptibly weak (a paranoid contradiction) and that the nation must always be prepared to strike them down. Complexity and nuance are rejected; the fascist game only recognizes brute strength.
Both Randian objectivists and fascist-leaning ideologues thus claim a kind of harsh “realism.” They sneer at notions of empathy, community, or equality as naïve or degenerate. They frequently invoke something like “game theory” or evolutionary logic as a scientific-seeming alibi for their values. The message is: “Don’t blame us for being cruel; nature (or the market, or game theory) demands it!”
This is precisely what Galbraith meant by “the search for a moral justification for selfishness.” The invocation of “game theory” here is usually in bad faith. The speakers have chosen selfishness or domination as desirable, for other reasons (greed, fear, hatred, pride), and then retrofit a pseudo-strategic narrative to justify it. It’s a performance designed to make brutal competition look cool-headed and rational, rather than morally bankrupt. As one commentary on these trends notes, the shallow use of game theory in these circles is essentially folk Social Darwinism with a thin veneer of equations, a way to gamify cruelty and call it intelligence.
A striking aspect of our moment is how this “gamey” rhetoric has combined with online culture and stochastic violence. The far-right internet subculture often explicitly gamifies harassment and terrorism. Consider:
Internet trolls and extremists refer to harassment campaigns as if they were MMORPG quests or shooter games. They keep score of how many targets they can intimidate (“points” for doxxing an enemy or making them quit social media) and brag about “high scores” in violent attacks. The Christchurch mosque shooter (2019) live-streamed his massacre with a helmet-cam, effectively presenting it like a first-person shooter game for an online audience. On anonymous forums like 4chan/8chan, participants treat vicious hate speech and calls for violence as a kind of joke or game, a low-stakes form of entertainment. The jargon of “for the lulz” (for laughs) indicates how dehumanization and terror are framed as “only a game.” Researchers note that this “jokey, game-like atmosphere” lowers social and psychological barriers to extremism: the participants don’t feel like they’re doing something gravely immoral when they meme about killing, because it’s all cloaked in irony and play. Yet the consequences can be deadly real.
Stochastic terrorism is a concept that describes how demagogic rhetoric inciting hatred can lead to actual violence unpredictably. Leaders or media figures may not direct any specific attack, but by constantly portraying a target group as dangerous, subhuman, or an existential threat, they raise the probability that someone out of millions will take violent action. The process is “stochastic” (random) but statistically likely. For instance, when a prominent figure demonizes immigrants or religious minorities over and over, it’s almost inevitable that a lone actor will eventually commit an attack believing they are saving the nation. The originators then claim “I never told anyone to do that!”, hence plausible deniability. It’s like lighting many fuses and shrugging when one of them ignites an explosion. Researchers describe stochastic terror as “the use of mass media to incite random actors to carry out violent or hateful acts that are statistically predictable but individually unpredictable”. Notably, this too has been framed as a game in some extremist communities: militants speak of “weaponized memes” and “culture wars”, scores are kept in terms of “red-pilling” people or inflicting fear on targets, and attackers are sometimes lauded as if they “leveled up” in a video game of political violence. For example, the QAnon/“Pizzagate” conspiracy (which falsely claimed a cabal abusing children in a D.C. pizzeria) was treated like an online role-playing detective game by its followers, until one believer showed up with a rifle and opened fire in the restaurant. Likewise, the man who attacked an FBI office in 2022 after weeks of incendiary anti-FBI posts by politicians was essentially acting out the implied “game plot” that had been sketched by those provocations.
In these contexts, when someone justifies cruelty by saying “that’s just how the game works; if you can’t handle it, don’t play,” it is a moral abdication. They are using “the game” as a shield for choices that are, in the real world, causing harm. Game theory, properly understood, offers no support for this. Real game theory doesn’t say “suckers deserve what they get” or “always defect.” To the contrary, it often teaches that unchecked defection and revenge lead to everyone losing (the “mutual destruction” equilibrium), whereas well-designed rules can create win-win outcomes. What the hate-gamers and Social Darwinists are doing is cherry-picking a simplistic narrative (e.g. one-shot Prisoner’s Dilemma = always betray) and ignoring the broader analytical framework that includes repetition, communication, reputation, and external enforcement. They want the aura of scientific inevitability without any of the actual constraints that science (or ethics) would impose.
To sum up: Pop-culture “game theory” in the service of selfishness or fascism is a performative myth. It’s an old story (might makes right, greed is virtue) wearing new jargon (Nash equilibria! strategy!). But as Galbraith foresaw, the contradictions are the same. Those who loudly celebrate “rational self-interest” often rely on the very social goods (trust, law, infrastructure, public support) that their philosophy would undermine. The next sections will dissect those contradictions more formally using the coherence lens, and then explore how we can counter this narrative.
Coherence Audit: Empathy and Transparency in Ideologies
Our GUFT coherence lens provides a framework to evaluate social fields or ideologies based on two key invariants: Empathy (E): how well the system recognizes and includes the interests of all stakeholders (not just the powerful ones); and Transparency (T): how open, honest, and truth-tracking the system is (are its claims and mechanisms auditable, or are they shrouded in propaganda and deception?). We define a field’s overall coherence as Ψ = E × T. A high-coherence system strongly couples to reality and human needs (high E) and is explicit and truthful about its workings (high T), leading to robust, ethical outcomes. A low-coherence system has low empathy (it ignores or dehumanizes many affected parties) and/or low transparency (it runs on lies or unaccountable power), leading to brittle and destructive outcomes.
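As a toy illustration of why Ψ = E × T is defined multiplicatively rather than additively (the numeric scores below are hypothetical; the text gives no measurement procedure for E or T):

```python
# Toy illustration of the Psi = E * T coherence score described above.
# The scores are hypothetical, for illustration only.

def coherence(empathy, transparency):
    """Psi = E * T, with E and T each scored on a 0..1 scale."""
    assert 0.0 <= empathy <= 1.0 and 0.0 <= transparency <= 1.0
    return empathy * transparency

# Multiplication encodes the claim that either failure alone collapses
# coherence: a system fully transparent about a zero-empathy program
# still scores Psi = 0, and vice versa.
assert coherence(0.0, 1.0) == 0.0
assert coherence(1.0, 0.0) == 0.0
assert coherence(0.9, 0.9) > coherence(0.9, 0.3)
```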
Let’s run two of the ideologies we’ve been discussing, Fascism and Randian Objectivism, through this coherence audit:
Fascist Theory: Friend/Enemy without Limit
Fascism, as an ideology, is deliberately low empathy except within a narrowly defined in-group. Fascist movements typically:
Deny the full humanity of out-groups. Out-groups (ethnic or religious minorities, political dissidents, etc.) are portrayed as parasites, traitors or subhumans. Nazi propaganda, for example, described Jews as vermin; Rwandan Hutu extremists called Tutsis “cockroaches.” This is E ≈ 0 for those outside the favored race or nation. Within the in-group, there may be pseudo-empathy (a sense of brotherhood), but it’s conditional on total conformity. Anyone outside or deviating is an enemy. As Eco enumerated, fascism exploits a “fear of difference” and a “contempt for the weak,” encouraging the people to channel frustrations against those who are “different”.
Suppress transparency and truth. Fascist regimes are marked by propaganda, censorship, and the subordination of truth to power. Debate and dissent are not tolerated; recall that one of Eco’s fascist traits is “Disagreement is treason”, reflecting an extreme anti-intellectualism. Instead of factual transparency, fascism relies on grand myths (national rebirth, racial destiny) and cults of personality. Historical facts are rewritten to suit the party line. There is no independent audit of the regime’s claims; media and institutions are forced into lockstep. In short, T ≈ 0: the system runs on controlled narratives rather than open inquiry. Schmitt’s concept of decisionism (the leader decides what is true or legal based solely on will) exemplifies this: the sovereign decides, and questioning him is itself illegitimate.
Create a false internal coherence by scapegoating. Inside a fascist cult, Ψ appears high because everyone is made to sing the same tune (forced consensus) and the ideology explains away failures by blaming enemies. But it’s a fragile coherence: it cannot adapt or accept feedback from reality (low T), and it cannot extend stability beyond its own violent expansion (because it provokes external resistance and internal decay by dehumanizing others, low E globally). Historically, fascist regimes drive toward war and collapse precisely due to these features: with no genuine empathy or truth-seeking, they double down on aggression and fantasy until ruin results. For example, Nazi Germany maintained internal coherence through terror and propaganda while committing atrocities externally, but this led to catastrophic war, illustrating how low E and T made the system unsustainable despite intense short-term unity.
In game-theoretic metaphor, fascism treats society as a constant zero-sum game of survival, where the only equilibrium is domination of the enemy. It’s like playing Prisoner’s Dilemma but deciding unilaterally that the “other” must die, thereby eliminating any possibility of mutual benefit. Such an approach can mobilize people ferociously (for a time), but it neglects the reality of human interdependence. A nation or group that tries to be a lone predator eventually finds itself isolated or destroyed. Coherence analysis shows fascism is high-Ψ only inside its echo chamber; system-wide, it’s profoundly incoherent (Ψ → 0) because it ignores the welfare of most humans (low E) and rejects objective truth (low T).
From the perspective of game theory used correctly, a fascist society is like a game where one player (the regime) tries to maximize its payoff by treating others as fixed negatives. But real human systems are not two-player contests; they are networks where today’s enemy could be tomorrow’s needed ally (think of international trade or fighting a pandemic). Fascism’s one-dimensional friend/enemy framing cannot accommodate the cooperation needed for common goods. It’s telling that even militaries, which fascists idolize, rely on trust and codes of conduct, a completely “everyone defect always” army would disintegrate in friendly fire. Thus, fascism’s claim to strategic genius is illusory: it abandons many real strategies (diplomacy, compromise, win-win deals) that high-level game theory recognizes as valuable.
Randian Objectivism: Rational Selfishness and Its Contradictions
Ayn Rand’s Objectivist philosophy is in some ways the mirror-image of fascism: instead of subsuming the individual into an all-powerful state, it deifies the sovereign individual and demonizes the state (or any collective claims) as oppressive. However, when we apply the coherence audit, we find that Objectivism also scores low on empathy and (often) on transparency, resulting in questionable coherence.
Empathy (E): Objectivism explicitly elevates “rational self-interest” as the highest moral pursuit, and disparages altruism as a vice. Rand’s novels depict characters who refuse to help even their closest friends or family if it means betraying their own “creative purpose.” Charity is not a virtue in Objectivism; it’s at best an optional sentiment, at worst an enablement of “parasites.” As Rand bluntly stated, “If any civilization is to survive, it is the morality of altruism that men have to reject.” In practice, this means an Objectivist framework has extremely narrow empathy: each person is to concern themselves with their own life and perhaps those they personally value, but systemic empathy (caring about the poor, the sick, strangers, future generations) is regarded as irrational. Rand’s followers often refer to social welfare programs as theft and to their beneficiaries as undeserving. There is little concept of social obligation or common good beyond what emerges “naturally” from individuals trading value for value. So E is very low, limited to what each “hero” personally chooses to care about. Entire categories of people (the disabled, the elderly poor, children born into poverty) are essentially erased or blamed for their plight under strict Objectivist dogma.
Transparency (T): At first glance, Objectivism extols a kind of “clarity” or truth-telling. Rand was big on calling things by their names: she praised businessmen as the source of all wealth and denounced “looters” and “moochers” who live off others. One might think this implies transparency. However, Objectivism often ignores or obscures the initial conditions and external systems that allow some individuals to prosper. For example, Rand celebrated self-made tycoons but overlooked how much any business relies on public infrastructure, legal systems, and a stable society, factors outside the individual’s control. In her worldview, if someone is rich, it’s solely due to their virtue and effort; if someone is poor, it’s due to their weakness or vice. This is selective transparency at best. It hides real-world complexities like luck, inheritance, power imbalances, externalities (e.g. pollution affecting others), and cooperative inputs to innovation (many inventions build on collective knowledge). There’s also a lack of transparency about the feasibility of everyone being an island. Rand’s philosophy doesn’t offer a clear mechanism for how public goods (like environmental protection or disease control) would be handled, except by assuming the market or private agreements will solve them. In essence, Objectivism has a self-justifying narrative that isn’t fully open to audit by data. When confronted with evidence that unregulated markets can fail or that pure self-interest can lead to crises (like the 2008 financial crash), hardcore Randians tend to either deny the facts or blame government interference of any kind. Indeed, Alan Greenspan, a long-time Rand devotee, admitted after the 2008 crisis that he “found a flaw” in his free-market model: he was shocked that banks didn’t self-regulate in shareholders’ interest. This was a rare moment of transparency breaking through: Greenspan confessed that the assumed harmony of individual and collective interest did not hold in reality, undermining decades of Objectivist-influenced policy.
Internal Coherence vs. External Coherence: Objectivism can feel internally coherent: it’s a tight moral system (self-interest good, altruism bad) that produces heroic narratives. But like a beautiful theory that crashes against ugly facts, it often fails external coherence. The famous example is Ayn Rand herself. After years of denouncing social welfare as evil, Rand in her old age fell ill with cancer and found herself in financial trouble. She reluctantly agreed to accept Social Security and Medicare, benefits from the very government programs she lambasted, under a pseudonym to avoid publicity. Rand’s defenders argue there’s no contradiction (she paid into the system, so she’s entitled to get her money back, they say). But critics see it as revealing: even the high priestess of selfishness ended up relying on the collective safety net when faced with “the abyss that awaits the uninsured ill,” so to speak. As one commentator put it, “Reality intruded upon her ideological pipe dreams”. Moreover, many institutions associated with Objectivist champions have quietly availed themselves of public aid when convenient (for instance, the Ayn Rand Institute reportedly took a PPP pandemic bailout loan in 2020, despite its general stance against government support). These contradictions highlight that pure self-reliance as a universal principle doesn’t map well onto real human vulnerabilities. There is an opacity (lack of honest reckoning) in how Objectivism deals with these practical necessities. It tends to handwave them or cast them as temporary aberrations, rather than updating the theory.
In coherence terms, Randianism is higher on T than fascism (Objectivists don’t typically rely on outright lies or censorship; they state their harsh views openly). But it’s still only medium transparency because of its blind spots and rationalizations. It’s extremely low on E beyond the self. So Ψ = E × T remains low. The field “tears” when you try to apply it universally: it cannot, for instance, coherently address public health, climate change, or systemic poverty except by either denying the problem or assuming a deus ex machina of private solutions. This fragility came through in Greenspan’s admission that his ideology “was not working” as expected.
To connect back to game theory: Objectivism can be seen as a game model where each player’s payoff is defined as their own profit and achievement only. Notably, any benefit to others is considered irrelevant or even negative (the philosophy scorns “serving others”). In such a game, sure enough, the “rational” action is to maximize one’s own gain without regard to others, a kind of continuous Defect strategy. But that model choice is just that: a choice. Game theorists could equally model a scenario where each player’s utility includes, say, their family’s welfare, or their reputation for fairness, or the long-term stability of their community. Those models yield very different “rational” behaviors (more cooperative, reciprocal, and so on). Objectivists act as if the only game in town is one where Sucker’s Payoff = caring for someone else. That is a value judgment masquerading as scientific truth. It ends up, ironically, less scientifically robust because it ignores empirical evidence of humans’ interdependence and pro-social instincts (from evolutionary biology to behavioral economics, evidence abounds that humans are not pure sociopaths in real life). In short, Randian objectivism fails the coherence audit by refusing to couple with large parts of human reality (low E) and by being dogmatic even when its predictions fail (lower T than it claims).
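The point that the payoff definition is a modeling choice can be made concrete. A minimal sketch using the standard device of other-regarding preferences: each player’s utility is their own material payoff plus a weight α times the other’s. The α parameter and function names are illustrative; the material numbers are the usual Prisoner’s Dilemma values:

```python
# Same Prisoner's Dilemma material payoffs, but each player's utility is
# their own payoff plus alpha times the other's. With alpha = 0 (payoffs
# defined Objectivist-style, self only), defection dominates; weight the
# other's welfare fully and cooperation becomes the dominant move.

MATERIAL = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def utility(my_move, their_move, alpha):
    mine, theirs = MATERIAL[(my_move, their_move)]
    return mine + alpha * theirs

def dominant_move(alpha):
    """Return 'C' or 'D' if one move beats the other against both
    possible opponent moves, else None (no dominant strategy)."""
    c_better = all(utility("C", o, alpha) > utility("D", o, alpha) for o in "CD")
    d_better = all(utility("D", o, alpha) > utility("C", o, alpha) for o in "CD")
    return "C" if c_better else "D" if d_better else None

assert dominant_move(0.0) == "D"    # narrow payoffs: always defect
assert dominant_move(1.0) == "C"    # full weight on the other: cooperate
assert dominant_move(0.5) is None   # intermediate weight: neither dominates
```

The mathematics is identical in every case; only the definition of “payoff” changed. That is the sense in which the “always defect” conclusion is a value judgment fed into the model, not a result extracted from it.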
Stochastic Terrorism and the Gamification of Cruelty
Let’s look more closely at the phenomenon of stochastic terrorism introduced earlier, as it exemplifies the extreme consequences of the ideologies and rationalizations we’ve discussed. Stochastic terrorism is, in effect, what happens when violent game rhetoric collides with real-world actors. It has three key components:
Demonization in mass messaging: A public figure or outlet repeatedly vilifies a target group or individual, painting them as dangerous, evil, or a legitimate object of hatred. Importantly, they stop short of explicitly calling for violence; instead they may joke, wink, or just “raise questions” that dehumanize the target. For example, a demagogue might constantly claim that a certain minority group is plotting to destroy the country from within, or that a politician is part of a satanic pedophile cabal.
Audience reach and repetition: These messages reach a large audience, often millions, through media broadcasts, social media, and so on. Because the calls to hate are vague or metaphorical (e.g. “If they take power, it’ll be doomsday; patriots must take a stand”), they don’t activate everyone. But statistically, with a large enough audience repeatedly hearing that X is a dire threat, some individuals will interpret this as a call to arms. It’s akin to repeatedly loading the dice: each provocation increases the probability that a “lone wolf” will tip over into violence.
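The “loading the dice” arithmetic can be sketched directly. The per-listener probability `p` and the audience sizes below are purely illustrative assumptions, not measured quantities:

```python
# Illustrative-only numbers: per-listener probability p of acting on the
# rhetoric, and audience size n, are assumptions, not measured quantities.

def prob_at_least_one(p, n):
    """P(at least one of n independent listeners acts) = 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

small_audience = prob_at_least_one(p=1e-6, n=1_000)       # roughly 0.001
mass_audience = prob_at_least_one(p=1e-6, n=5_000_000)    # roughly 0.99
```

Even a one-in-a-million chance per listener becomes near-certainty at mass-media scale, which is exactly why the instigator can stay statistically distant from any single act.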
Plausible deniability for the instigator: When violence does occur, the original instigator says, “I never told anyone to do that!” They frame it as the work of a lone crazed actor, not connected to them. This is the “stochastic” part: the violence seems random and not traceable to a direct order, so legally it often doesn’t count as incitement under strict definitions (like the U.S. Brandenburg standard, which requires imminence and specificity). The original speaker maintains a façade of innocence, or even condemns the violence in general, while continuing to push the incendiary narratives that sparked it.
This tactic is chillingly effective at creating what one scholar calls an “atmosphere of violence”. People in the targeted group feel terrorized because attacks seem to come out of nowhere: a synagogue gets shot up after months of public figures spewing antisemitic conspiracies; a pizza parlor is invaded by a gunman after viral memes claim it’s a hub of evil. Society overall becomes more polarized and fearful. And the perpetrators of stochastic terror (the speakers) game the system: they reap the benefits of riling up their base and eliminating “enemies” without taking responsibility.
In the context of our analysis, stochastic terrorism illustrates the extreme end of gamified zero-empathy behavior. The instigators treat their rhetorical influence as a game where they try to “score points” (higher ratings, political victories, personal vengeance) by maneuvering others into committing real harm. They manipulate the multi-agent game: instead of directly attacking (which has personal cost or risk), they outsource the attack to random actors, much like summoning a probabilistic missile. They count on the fact that someone in their audience will have the lethal “combination” of mental instability, fanatical conviction, and opportunity to carry out what they only imply.
For example, the Great Replacement conspiracy theory (prominent in far-right media globally) claims that elites are deliberately replacing the white population with immigrants. This narrative has been saturating certain forums, podcasts, and even mainstream TV segments. It’s essentially a call to view immigrants and those who support them as an existential enemy. The connection to violence is stochastic but documented: the Christchurch mosque shooter in New Zealand (2019), who killed 51 Muslims, explicitly cited “replacement” fears and fringe media figures who pushed that idea. He saw himself as a player in the grand game to “defend” his people. Similarly, the Pittsburgh synagogue shooter (2018) attacked Jews after absorbing rhetoric that Jews were orchestrating mass immigration. In both cases, no single TV host or politician said “go shoot those people at this time,” but the constant demonization set the stage for a few individuals to take that murderous step. The pattern repeats with frightening regularity, from attacks on reproductive health clinics after relentless “baby killer” propaganda, to assaults on FBI offices or Capitol buildings after leaders portrayed them as tyrants.
Gamification comes in via how these messages are delivered and perceived. On social media, incendiary content is often packaged like a challenge or quest, “Will no one rid us of this corrupt leader?”; “Patriots, it’s time to stand up!”; memes depict opponents in crosshairs or as zombies to be slain. Online communities then treat real planning like part of a game. There have been cases of extremists creating “leaderboards” for harassing particular targets or committing acts (e.g. bragging about “who can post the most threats” or celebrating each new violent incident as a win for the cause). A U.S. Department of Homeland Security report noted that some far-right forums explicitly use points systems and game language to encourage ever more extreme propaganda and action, blurring reality with a video-game mentality. This is cruelty and terror repackaged as a competitive sport.
Ethically and practically, it’s important to call this what it is: an anti-social perversion of game logic. In any true game-theoretic sense, stochastic terrorism is a high-risk, negative-sum strategy. It may appear to “work” for a demagogue in the short run (they eliminate some opponents or intimidate a community), but it massively degrades overall trust and security, and it often backfires by mobilizing opposition or prompting crackdowns. It’s akin to always defecting, with an added twist of randomness. Game theory tells us that in iterated settings, Always Defect tends to produce cycles of retaliation that leave everyone worse off. Stochastic terror adds an unstable feedback loop: the violence it produces can spiral unpredictably. There is no equilibrium in which society functions normally under such threats; it’s a path to chaos or authoritarian clampdown.
In sum, stochastic terrorism represents the lethal culmination of the “game makes me do it” mindset. The perpetrators (both the speakers and the triggermen) have abandoned empathy entirely: their victims are just objects, or “NPCs” (non-player characters), in their imagined game world. And they have abandoned transparency and truth: the whole enterprise depends on spreading lies (e.g. conspiracy theories) and denying responsibility. It is the opposite of a coherent, ethical strategy; it’s coordinated incoherence weaponized.
Having surveyed this grim landscape, the question is: How can we counter these misuses of “game theory” and selfish ideology? The next section turns to that, arguing that proper game theory, and allied analytical tools, are actually very powerful for designing better systems that undercut the logic of selfishness-as-inevitability.
Using Real Game Theory (and Beyond) to Counter Misuse
It is ironic that while a distorted game metaphor is used to justify selfishness and cruelty, the actual insights of game theory often support the opposite agenda: cooperation, fairness, and intelligent institution design. If we shed the ideological baggage and use game-theoretic reasoning rigorously, we find many ways to defuse the zero-sum myths. Here are some key applications:
Public Goods, Externalities, and the Invisible Hand Myth: Free-market fundamentalists (including Randians) often invoke Adam Smith’s much-abused “invisible hand”, the idea that individuals pursuing their own gain will magically maximize social welfare. Game theory provides a clear rebuttal: whenever there are public goods or externalities, uncoordinated self-interest generally fails to achieve the best outcome. The Prisoner’s Dilemma is essentially a two-person public goods problem in disguise (each prisoner has an incentive to “free ride” on the other’s silence, making both worse off). In larger contexts like pollution, climate change, or public health, game models show that without cooperation or intervention, the equilibrium is suboptimal: too much pollution, too little vaccination, and so on. This is well known in economics (market failures). The implication is that “the game naturally rewards selfishness” is a half-truth at best. We must ask: which game? If it’s a game where my factory’s profit doesn’t include the cost of the river I pollute, then yes, I’ll “rationally” dump waste, but that’s because the game is mis-specified. A mechanism design approach would correct the payoff structure (e.g. a tax or cap-and-trade for emissions) so that my self-interest aligns with the public good. In practical terms, this means things like progressive taxation, environmental regulation, and public infrastructure investment are not anti-game-theory; they are informed by game theory to solve collective action problems. When someone claims “selfish behavior is just the equilibrium, deal with it,” a valid response (backed by game theory) is: change the game! Design new rules or incentives so that what’s individually rational also benefits the group. Game theory doesn’t lock us into one cruel world; it gives us tools to engineer better ones.
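The externality correction can be sketched with a toy model; all costs below are hypothetical, chosen only to show how a Pigouvian tax flips the privately “rational” choice:

```python
# A toy externality model; all costs are hypothetical. Without a tax, the
# river's damage is outside the factory's payoff, so dumping is privately
# "rational". A Pigouvian tax that charges the social cost back flips the choice.

PRIVATE_COST = {"dump": 10, "treat": 40}   # cost borne by the factory
SOCIAL_COST = {"dump": 100, "treat": 0}    # harm borne by everyone downstream

def rational_choice(tax_rate):
    """The factory minimizes its own total cost; tax_rate is the fraction
    of social cost charged back to the factory."""
    def total_cost(action):
        return PRIVATE_COST[action] + tax_rate * SOCIAL_COST[action]
    return min(["dump", "treat"], key=total_cost)

before_tax = rational_choice(tax_rate=0.0)   # "dump"
after_tax = rational_choice(tax_rate=1.0)    # "treat"
```

Nothing about the factory’s “rationality” changed; only the rules did, which is the whole mechanism-design point.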
Incomplete Information and Power Asymmetry: Real-world strategic interactions usually involve players who are unequal in power or information. Game theory has entire subfields (Bayesian games, signaling games, bargaining theory) devoted to this. One takeaway is that the “rational” strategy for a weak player is very different from that for a strong player. The folk-social-Darwinist rhetoric assumes a level playing field or says “if you’re weak, that’s your fault.” But game analysis of, say, dictator and ultimatum games demonstrates that if one side has all the power (a dictator allocating resources) and no accountability, the outcome will severely favor the powerful side, which is not an ethically acceptable equilibrium. It’s essentially coercion in game form, not a Pareto-optimal or fair solution. This is why democracies put checks on power and why negotiations require transparency. In short, when someone glorifies a ruthless strategy, ask who has the power and knowledge in that scenario. Often the ones preaching a “tough game” are those confident they hold the cards (e.g. a monopolist in a market, or a majority group in society). Proper game theory encourages us to consider the perspective of all players and to recognize when equilibria are inequitable or unstable. For example, a political order that treats a minority brutally is setting up a game where that minority has no choice but to eventually rebel or collapse; it’s not stable. This mirrors the coherence lens: ignoring the welfare of some stakeholders (low E) will produce an incoherent field that eventually fails.
Repeated Games, Reputation, and the Value of Cooperation: As discussed, in repeated interactions, cooperation can be a rational equilibrium because players develop reputations and can punish defection over time. This is a powerful counter to the notion that kindness equals weakness. Famous analyses (Axelrod’s tournament, etc.) showed that strategies like “Tit for Tat”, which are nice (cooperative), retaliatory (punishing after someone defects), and forgiving (returning to cooperation if the other does), tend to do very well in long-run engagements[2]. This suggests two things for society: first, always acting selfishly can destroy one’s reputation and invite others to retaliate or ostracize you, which is ultimately bad even for self-interest. Second, fostering environments where relationships endure and people know each other’s history (or have memory via institutions) encourages better behavior. In practical terms, policies that increase the shadow of the future, like long-term business relationships, diplomacy channels, and community-building, can convert one-shot dilemmas into iterated games where trust can grow. Extremist rhetoric, by contrast, often seeks to short-circuit trust by claiming the other side will never cooperate so you must strike first. But again, game theory warns that if both sides adopt that stance, it’s a race to the bottom (mutual defection every round). Breaking that cycle, through trust-building measures or third-party enforcement, is key. This is why, for example, arms control treaties (with verification) can be seen as a game-theoretic strategy to escape the prisoner’s dilemmas of arms races. Likewise, maintaining civility and norms in politics is a way to ensure that even if we compete, we don’t defect on the entire democratic game.
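A minimal Axelrod-style round-robin can be sketched in a few lines. The strategy pool, round count, and payoffs are illustrative, not a reconstruction of Axelrod’s actual tournament entrants; the point is only that a nice-but-retaliatory strategy outscores Always Defect once the pool contains other conditional cooperators:

```python
# Minimal iterated Prisoner's Dilemma round-robin (illustrative pool and
# round count, not Axelrod's actual entrants).

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # cooperate first, then mirror the opponent's last move
    return opponent_history[-1] if opponent_history else "C"

def grudger(opponent_history):
    # cooperate until the opponent defects once, then defect forever
    return "D" if "D" in opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_b)  # each strategy sees the opponent's history
        b = strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

strategies = [tit_for_tat, grudger, always_defect, always_cooperate]
totals = {s.__name__: 0 for s in strategies}
for s in strategies:
    for t in strategies:  # round-robin, including self-play, as Axelrod did
        score, _ = play(s, t)
        totals[s.__name__] += score
```

In this pool, Tit for Tat accumulates more total payoff than Always Defect, even though Always Defect “wins” each individual pairing: retaliation caps what defection can extract, while cooperation compounds.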
Enforcement and Altruistic Punishment: Real cooperation in society isn’t about naive niceness; it’s often about conditional cooperation: I’ll be good to others, but I expect them to be good too, and I will sanction bad actors. Game theory and experiments show that groups can achieve high cooperation if they develop mechanisms for punishing defectors, even at some cost to themselves. This is called altruistic punishment (punishing cheaters helps the group at a personal cost). It’s essentially what functioning legal systems and social norms provide. Now, fascists and “game the system” types often do favor punishment, but they target the wrong people (out-groups or the vulnerable) rather than actual defectors on social contracts. Meanwhile, they exempt elites and themselves from punishment. A coherent application of game theory would say: we should indeed punish those who defect from cooperative norms, for instance, polluters who free-ride on the environment, financiers who break trust and collapse the economy, or officials who abuse power. Far from being anti-rational, this establishes credible deterrence to uphold cooperation. In fact, societies with strong rule of law (impartial enforcement of rules against cheating and violence) have more trust and prosperity, a finding one can interpret through game theory (there are fewer incentives to defect when you know defection is punished). So a response to “I must be selfish or I lose” could be: “Not if we agree on rules that punish selfish cheating. Then if you’re selfish, you lose.” This flips the script of the alibi, making cooperation the safe strategy.
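The effect of altruistic punishment can be sketched with a toy public goods round. The group size, endowment, fine, and punishment cost are illustrative assumptions in the spirit of the classic public-goods-with-punishment experiments, not their actual parameters:

```python
# Toy public goods round: 4 players with an endowment of 10; contributions
# are doubled and shared equally. Fine and punishment-cost values are
# illustrative assumptions, not experimental parameters.

def public_goods_round(contributions, punish=False, fine=4, punish_cost=1):
    pot = sum(contributions) * 2
    share = pot / len(contributions)
    payoffs = [10 - c + share for c in contributions]
    if punish:
        free_riders = [i for i, c in enumerate(contributions) if c == 0]
        punishers = [i for i, c in enumerate(contributions) if c > 0]
        for i in free_riders:   # each punisher fines each free-rider
            payoffs[i] -= fine * len(punishers)
        for i in punishers:     # punishing is costly to the punisher too
            payoffs[i] -= punish_cost * len(free_riders)
    return payoffs

no_punish = public_goods_round([10, 10, 10, 0])
with_punish = public_goods_round([10, 10, 10, 0], punish=True)
# Without punishment the free-rider (index 3) out-earns the contributors;
# with costly punishment available, free-riding nets less than contributing.
```

This is the “if you’re selfish, you lose” flip in miniature: once defection is reliably sanctioned, contribution becomes the payoff-maximizing choice.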
Mechanism Design and Alternate Models: Mechanism design, sometimes called “reverse game theory,” is the field where we start with a desired outcome and work backwards to create a game (rules/incentives) that will achieve it. This is essentially what policy-making should be. For instance, if we desire a society with innovation but also broad welfare, mechanism design might suggest a combination of markets (to drive innovation) and redistribution (to share the gains), thus aligning individual drive with social benefit. If we want media that inform rather than inflame, we might design algorithms or incentives (like subscription models or public service requirements) to realign the payoff of media companies away from pure engagement-at-any-cost. The key point is, nothing in game theory says we have to accept the initial “rules of the game” as given. Those rules, whether it’s tax laws, campaign finance, international treaties, forum moderation policies, are human-created and can be re-created. The right loves to say “you can’t change human nature,” but you can change the game humans find themselves in. Often, when you do, human behavior changes accordingly. A trivial example: put a price on carbon emissions, and suddenly companies find it “rational” to pollute less. Remove dark-money loopholes, and politicians might find it “rational” to listen more to voters than to donors. In effect, proper use of game theory gives us a blueprint for reform, not a fatalistic resignation.
Empathy and Broader Definitions of Utility: A more foundational point: we can expand our model of what humans value. Behavioral game theory and evolutionary theory suggest people have preferences beyond pure material self-gain: they value fairness, they value reciprocity, and they can internalize empathy for others. Many experiments (the ultimatum game, the dictator game, public goods games with punishment) show that real people often make choices consistent with valuing fairness or punishing unfairness, even at personal cost[6]. Traditional game theory treated those as anomalies, but modern approaches incorporate social utility, meaning a player’s payoff can include terms like “I get disutility if I see someone else get less than their fair share” or “I gain utility when I contribute to a cause I value.” Once you include that, suddenly “rational” behavior can mean cooperating or helping others because it genuinely makes the person happier (their utility is higher in an outcome where others are better off too). This is not just wishful thinking; it aligns with findings in psychology and even neuroscience about empathy being wired into our brains. So one might say: if you don’t value others at all, maybe that’s your preference, but don’t pretend game theory says no one does. In fact, the highest-payoff equilibrium for a community of empathetic agents is very different (and better for all) than for a community of sociopaths. Encouraging empathy, through education, contact between groups, and fostering shared identities, effectively shifts the “game” by changing payoff evaluations. For example, if I start to see an out-group as humans like me (increasing E in coherence terms), harming them no longer increases my utility; it decreases it. Thus I no longer have an incentive to defect against them; cooperation becomes rational. On a societal scale, this is the basis of expanding circles of compassion (e.g. getting people to see that cooperation across races or nations is positive-sum, not zero-sum).
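One standard way to formalize “social utility” is Fehr and Schmidt’s inequity-aversion model (1999). The sketch below uses hypothetical aversion parameters and shows why an inequity-averse responder rejects a lopsided ultimatum offer that a purely selfish responder would accept:

```python
# Fehr-Schmidt-style inequity aversion; alpha (envy) and beta (guilt) are
# hypothetical parameter values, chosen for illustration.

def fs_utility(mine, theirs, alpha=2.0, beta=0.6):
    """Material payoff minus disutility from disadvantageous and
    advantageous inequity (Fehr-Schmidt 1999 functional form)."""
    return mine - alpha * max(theirs - mine, 0) - beta * max(mine - theirs, 0)

def responder_accepts(offer, total=10, alpha=2.0, beta=0.6):
    """In an ultimatum game, accept iff the offer's utility beats
    rejection (where both players get 0)."""
    return fs_utility(offer, total - offer, alpha, beta) > 0

lowball = responder_accepts(1)   # 1-vs-9 split: rejected
fair = responder_accepts(5)      # even split: accepted
```

Under pure material self-interest any positive offer beats zero, yet real responders routinely reject lowball offers; adding the fairness term makes that lab behavior “rational” rather than anomalous.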
Beyond game theory, other modeling tools can also guide us toward empathy, fairness, and long-term thinking:
Systems Thinking and Coherence Models: Instead of single-game scenarios, system dynamics models look at feedback loops and holistic outcomes. For instance, climate change models incorporate economy, environment, and well-being together. They often reveal that short-term selfish exploitation (like burning fossil fuels aggressively) leads to long-term disaster for everyone, there’s no “winning” that game if you run it for enough rounds. Our GUFT coherence field concept is along these lines: it’s essentially saying a system that is both empathetic and transparent will be more sustainable. This echoes approaches like the Doughnut Economics model (Kate Raworth) which defines a safe operating space where social foundations (no one left in extreme need) and planetary boundaries (no ecological overshoot) are both respected. That model suggests that an economy should not just maximize one metric (like GDP or one player’s payoff) but meet multiple constraints for the long-term thriving of all. These frameworks are alternatives to the simplistic competitive game; they are more like cooperative games where the aim is to optimize the overall state for all players. Concepts like stakeholder theory in business ethics likewise argue that decisions should account for impacts on all stakeholders (workers, community, environment), not just shareholders[7][8]. When companies do this, studies often find more durable success, aligning with our coherence idea that high E and T (caring broadly and being transparent) yields better stability.
Polycentric Governance and Community Solutions: As Elinor Ostrom demonstrated, alternatives to top-down control or pure competition exist in the form of polycentric, community-driven governance. Communities around the world have created ingenious ways to manage commons, fisheries, forests, irrigation, through trust, reciprocity, and rule enforcement without resorting to either Hobbesian chaos or Leviathan authoritarianism[4][9]. These case studies, backed by empirical research, show that human groups can escape the tragedy of the commons by adding a bit of communication, agreed norms, and monitoring, essentially altering the game from one-shot anonymous to repeated transparent. They resolved what looked like unwinnable dilemmas by expanding the strategy set: introducing new strategies like “voluntary quota with community monitoring” which simply aren’t available in the simplistic market or state dichotomy. We should take inspiration from those models. Instead of saying “it’s a dog-eat-dog game,” look for how people have changed the game to “dogs and dogs cooperate to achieve more together.”
Technology and Platform Design: In the online world, rather than designing platforms to maximize engagement (which has proven to amplify division and misinformation as a side effect), we can design for constructive interaction. For instance, platforms could reward accuracy or empathy, imagine a forum where posts that accurately summarize an opponent’s view or find common ground get upvoted. This is a kind of mechanism design challenge being explored in “cooperative AI” and social media reform discussions. The goal is to shift the payoff for participants away from the cheap thrill of flaming others (points for owning) towards the more durable reward of reputation for fairness. In essence, change the online game from an arms race of outrage to a collaborative filtering for truth and bridge-building.
Bringing this back to our main thread: The very tools that cynics use to claim selfishness-as-law can be turned on their head to demonstrate the advantages of empathy and fair play. It’s not naive to do so; it’s rigorous. For example, if someone says “well, tit-for-tat is nice but what if someone just hits you and never stops,” the answer is “then tit-for-tat hits back and never stops too, thereby deterring that behavior in a population.” In other words, cooperative strategies are not pacifism; they include proportional responses to enforce the cooperation. Societies can be kind and strong against freeloaders, these are not contradictions. In fact, they’re complements: strong enforcement of just rules allows kindness to flourish elsewhere without being exploited. That’s a more nuanced but much healthier view of “the game of life”.
Coherence vs. Alibi: Returning to Galbraith
Galbraith’s quip about conservative ideology searching for moral cover for selfishness has proven prescient. We’ve seen that whether it’s religious hierarchy, Social Darwinism, extreme free-market worship, or newfangled “game theory” rhetoric, the fundamental pattern is the same: an attempt to justify why some people’s self-serving behavior should be accepted as inevitable or even virtuous. Yet the contradictions Galbraith noted remain as glaring as ever. Billionaires tell workers to tighten their belts for the common good while they themselves pay minimal tax; pundits rail against “government handouts” while broadcasting on publicly regulated airwaves or driving on public roads; politicians decry “class warfare” if wages are raised, even as they engage in tactics that harm the many for the benefit of a few.
What our discussion has added is a modern twist: today’s superior moral justification often comes clothed not in scripture or natural law, but in pseudoscience and game jargon. Instead of saying “God wills the strong to rule the weak,” we hear “evolution/game theory dictates the strong should prevail, it’s just nature.” Instead of “the poor are lazy and deserve hardship,” we get a softer “helping them would only create perverse incentives; the game rewards hard work, trust the market.” Instead of openly advocating violence, demagogues now cultivate it under the guise of “just asking questions” and meme warfare.
But in truth, cruelty is not a strategy; it’s a symptom of failed strategy. If you have to harm or oppress many others to feel secure or successful, you have architected a very low-coherence game for yourself. High-Ψ systems (high empathy, high transparency) tend to outperform in the long run: they enjoy legitimacy, adapt better to shocks (because they acknowledge reality), and build broad support. Low-Ψ systems (like fascism or oligarchic libertarianism) sow so much misery and disinformation that they eventually face backlash or collapse under the weight of unmet needs and factual errors. As evidence, consider how often unregulated markets crash (e.g. 1929, 2008); eventually their own champions call in the state to clean up the mess. Or how fascist regimes, while initially “efficient” at mobilization, often lose capacity as sycophancy replaces feedback and wars drain resources. By contrast, a democracy that cares about all citizens and welcomes truth (even inconvenient truth) can self-correct and innovate far more effectively. The success of many social democracies with high living standards and freedom (countries that embrace both market and welfare, competition and compassion) underscores this point.
In practical terms, how do we move beyond the “game” as presented by the selfish? A few guiding principles emerge:
Refuse the Alibi: When someone says “I must be selfish; the game leaves no choice,” do not accept that framing. Expose the choice that was made in defining the game. Ask, “Who set up this game? Can we change the rules? Why do you define winning that way?” Often, just revealing that alternatives exist is enough to puncture the aura of inevitability. For instance, if a CEO claims labor exploitation is necessary or competitors will outcompete him, one might point to companies that pay fair wages and still succeed, or to regulations that ensure no competitor can undercut by abusing labor. The inevitability argument falls apart once you show a different game design is possible.
Show the Payoffs: Demand transparency in what people value and what outcomes their actions produce. If a policy is justified as “rational,” have it quantified and scrutinized. Many times, purportedly rational policies (like extreme austerity cuts) end up harming even economic output, undermining their own stated goals. By forcing clarity on “what payoff are we maximizing, and for whom?”, you reveal who is being left out. If someone insists “it’s zero-sum, my group must take all,” illuminate the externalities: okay, you cut costs now, but what about the social cost (crime, illness, instability later)? Often those costs boomerang back onto the originators eventually. Transparency widens the lens of analysis.
Expand the Stakeholder Circle: In any debate, bring in the voices or data from those who are usually ignored. It’s easy to justify cruelty in abstract; it’s harder when the real human impact is made visible. High-empathy systems actively include stakeholders in decision-making (e.g., community input, labor on boards, citizen assemblies). This not only is morally right; it actually produces more information and creativity. Many seemingly unsolvable problems get solved when all affected parties collaborate (Ostrom’s water management case is a prime example[4]). So, insist that the “game” include all players at the table. This undercuts the friend/enemy propaganda by making “the other” human and present. It also prevents the powerful from externalizing costs to the voiceless.
Promote Mechanisms of Accountability: Since stochastic terrorists and bad actors hide behind unpredictability and plausible deniability, improving accountability is key. For example, media platforms could downgrade or label content that repeatedly correlates with violence, shining light on the connection rather than letting it hide. Politicians who flirt with violent rhetoric could face stronger norms against holding office (imagine if openly encouraging QAnon-type conspiracies became as disqualifying as openly using slurs, a norm shift). Internationally, hate propaganda could be treated as a human rights violation (as it sometimes is, though that raises free speech issues; transparency might be a better tool here than an outright ban). The point is, make it costly to play the cruel strategy. In game terms, increase the punishment for defection from civil norms. For example, if a billionaire runs a red light, the fine could be scaled to income so that it carries the same economic disincentive it would for a working-class person in debt.
Educate about the Real Lessons of Game Theory: Perhaps we even need to reclaim the narrative by popular education. People love game theory concepts; they’re catchy (prisoner’s dilemma, zero-sum, etc.). Why not propagate the more cooperative concepts into public awareness? For instance, teach the idea of “win-win” or positive-sum games, that many situations (trade, relationships) can benefit all sides (the opposite of zero-sum). Introduce the public to “the tragedy of the commons” and its solutions, so they recognize climate change denial and pure market fixes won’t work, but collective agreements will. Spread stories of “the prisoner’s delight” (a coined opposite to dilemma) where people trusted each other and achieved great outcomes (e.g., the spontaneous ceasefires in WWI trenches during Christmas, a real case where soldiers informally cooperated instead of defecting). In short, change the cultural narrative of what rational strategy looks like. It’s not a naive appeal, it’s updating folk wisdom to match what science actually says about human strategic interaction.
In our coherence audit terms, the goal is to raise Empathy and Transparency in the field. That is how you improve Ψ. More empathy means the game’s payoff reflects more people’s true well-being; more transparency means no one can easily get away with pretending their selfish move was inevitable or harmless. When both are high, the need for moral alibis disappears, decisions can be debated on honest terms, considering everyone’s stake.
Finally, to echo Galbraith one more time but flipping it positively: The search for a superior moral justification for selfishness has been long and futile. Perhaps it’s time for a superior strategic justification for cooperation. We don’t need to pretend humans are angels; we just need to harness their very real capacities for foresight, reciprocity, and yes, self-interest, properly understood to include living in a thriving community. Game theory, when used with integrity, actually validates moral intuitions that fairness, trust, and mutual aid are smart. Those are equilibrium behaviors in the right conditions. So rather than reject the math, we should refine the math to reflect our highest goals.
Conclusion: Beyond the “Game”
Life is not a fixed, merciless game where the only choice is to be predator or prey. It’s more like an evolving, complex design project where we are simultaneously players and rule-makers. “Game theory as moral alibi” is a cop-out, a way for people to shrug off responsibility by saying “don’t hate the player, hate the game.” But as we’ve argued, we can hate that game, and change it.
We’ve examined how fascist and ultra-libertarian ideologies fail a coherence test: they achieve a brittle internal logic at the cost of breaking their connection to humane values and reality. We’ve seen how misuse of game logic leads to destructive cycles (stochastic terror, exploitation, environmental ruin) that ultimately come back to haunt even those who initiated them. And we’ve laid out how genuine application of strategic thinking suggests better paths: redesign incentives, build trust through repeated interactions, enforce fair rules, include all stakeholders, and don’t fall for the zero-sum trap when positive-sum solutions abound.
Going beyond the game means remembering that we write the rules. If a particular “game” (be it unregulated capitalism, or partisan trench warfare, or social media clout-chasing) is bringing out the worst in people, we have agency to change that system. It might be politically difficult, but it’s conceptually no harder than recognizing a Prisoner’s Dilemma and deciding to implement a solution that rewards cooperation (like the West Basin Water Association did to stop groundwater overuse[4]). We should evaluate our institutions and cultural norms with a critical eye: do they encourage empathy or suppress it? Are they transparent or full of dark corners for manipulation? Do they align individual success with community benefit, or pit them against each other? This kind of audit will show where we need reform.
Galbraith noted that the justifications for greed always involve contradictions and absurdities. When someone says “I cheat you for our mutual good” or “I hurt them for the greater good,” that is an absurdity we must confront. No amount of game-theoretic buzzwords will make selfish cruelty virtuous. Kindness and cooperation don’t need moral justification in fancy terms; they largely justify themselves by the outcomes they produce: peace, prosperity, resilience. But if it helps, we can indeed articulate a strategic justification: in many cases, nice guys (and communities) finish first. And when they don’t, it’s often because some cheating went unpunished or some externality wasn’t internalized; fix those, and goodness can prevail more often than cynics expect.
In closing, perhaps the real superior moral (and practical) philosophy is not one that excuses selfishness, but one that illuminates interdependence. We are players in a grand game, yes, but also partners in designing it. The choice is not between ruthless realism and naive idealism; it’s between lazy fatalism and active problem-solving. Those who invoke “game theory” to defend the indefensible are being lazy: they accept a shoddy design and call it fate. We don’t accept that. We can audit the game, find where it’s broken, and fix it. That’s not anti-math or anti-science; it’s applying science with a conscience. It’s replacing the cold alibi with a warm, functioning strategy. And in the long run, a coherent field, one that is empathetic, transparent, and just, will outperform any fragmented egoistic scramble. That is the equilibrium worth striving for.
Sources:
· Galbraith, J.K. on the moral justification for selfishness
· Stanford Encyclopedia of Philosophy on definition of game theory[1]
· Wikipedia, Prisoner’s Dilemma (rational self-interest vs. collective payoff)[2]
· Kenney, C. (2024), Game theory and cooperation (Prisoner’s Dilemma justification for intervention)
· Kenney, C., Iterated games and cooperation (Hobbesian view vs long-term strategies)
· Eco, U., Ur-Fascism traits (life as permanent warfare; disagreement as treason)
· Ayn Rand quote on rejecting altruism
· Open Culture, Rand’s ideology serving wealth and her reliance on social security
· Guardian, Alan Greenspan’s admission of a flaw in his free-market model
· Vox, Explanation of stochastic terror and examples (Trump’s rhetoric, Christchurch)
· CREST research, Gamification in extremist subcultures (8chan humor and lowered barriers)
· Chen & Ledyard, Mechanism design aligns individual incentives with social goals
· Nordman, E., Elinor Ostrom’s case study of community cooperation (West Basin water)[4]
· Axelrod’s tournament, Tit-for-tat success in repeated PD[2]
· Fehr & Gächter, Altruistic punishment in sustaining cooperation
[1] Game Theory (Stanford Encyclopedia of Philosophy/Spring 2018 Edition)
https://plato.stanford.edu/archives/spr2018/entries/game-theory/
[2] [3] Prisoner's dilemma - Wikipedia
https://en.wikipedia.org/wiki/Prisoner%27s_dilemma
[4] [5] [9] Crises of the commons: Elinor Ostrom’s legacy of self-governance - Centre for the Study of Governance & Society
https://csgs.kcl.ac.uk/crises-of-the-commons-elinor-ostroms-legacy-of-self-governance/
[6] [PDF] What's in a name? Anonymity and social distance in dictator and ...
https://rady.ucsd.edu/_files/faculty-research/uri-gneezy/whats-in-a-name.pdf
[7] Stakeholder theory - ScienceDirect.com
https://www.sciencedirect.com/science/article/pii/S0148296323004629
[8] Stakeholder-Oriented Firms Have Feelings and Moral Standing Too