March 25, 2026
The Tale of Ayn and Gene
This piece argues that selfishness is humanity’s default setting—not necessarily out of malice, but as a built-in survival architecture. Setting biological and philosophical lenses side by side, it shows that two very different ways of studying human nature arrive at the same conclusion: left untrained, people tend to orient toward self-interest. It then pushes further, arguing that our selfishness is intensified by the fact that human perception is narrow, local, and short-term. We protect what we can immediately feel and see, while future consequences, distant harms, and unseen interdependence remain faint. In that frame, generosity is not a natural reflex but an intentional achievement—something that must be taught, practiced, and designed into life. At its core, the essay asks: if selfishness and limited perception are the baseline conditions of being human, what does it take to become generous on purpose and behave as if we can truly see beyond ourselves?
“Let us try to teach generosity and altruism, because we are born selfish.” — Richard Dawkins
There are two ways to learn the same truth, and they do not always resemble each other.
Sometimes you learn it the way a chemist learns it: by isolating a compound, controlling the variables, and watching the numbers converge with grim consistency. Other times you learn it the way a philosopher learns it: by taking the human animal seriously as a moral agent, then pressing on the logic until it either holds or snaps.
Different instruments. Different detection principles. Same reading.
Because there is a particular measurement—call it a signal, call it a bias, call it a baseline condition—that keeps showing up whenever anyone tries to model what humans do when the stakes are real. Whether you run the experiment in a laboratory of cells and populations, or in the laboratory of language and ethics, you keep encountering the same stubborn phenomenon:
Left untrained, left unexamined, left to default settings, humans orient toward the self.
Not always with malice. Not always with conscious intent. Often with the quiet inevitability of a system doing what it was built to do. A hand closes around resources. Attention collapses inward. Justifications bloom afterward like algae on a warm pond. We protect what we believe is ours—our body, our status, our tribe, our narrative, our future—and we do it with a talent that feels less like choice than architecture.
Here is the part that matters: you can read that result through two completely different detectors and still get the same trace on the screen.
One detector speaks the language of selection, replication, and inherited strategy. It does not care about meaning. It does not issue verdicts. It simply tracks what persists and why. It treats “selfishness” as a descriptive property of systems that endure under competition: patterns that conserve themselves tend to remain in the population.
The other detector speaks the language of values, rights, agency, and obligation. It does care about meaning. It does issue verdicts. It treats “selfishness” as a moral word—something to condemn, redeem, refine, or defend—because it is answering a different question: not what happens, but what should happen, given that we must live together?
When those two instruments agree, you should not treat it as coincidence. You should treat it as an empirical event.
Agreement across disciplines is rare when the detection principles are this different. When a physicist and a poet both point to the same star, you do not argue about which telescope is more correct. You ask what kind of star forces itself into view no matter what lens you use.
This is not yet a brief for the biological argument or the philosophical one. It is simply a measured statement of baseline conditions:
We are born with strong self-oriented incentives, and if we want anything beyond those incentives—generosity, restraint, fairness, mercy, reciprocal care—we will have to train for it. Teach it. Rehearse it. Institutionalize it. Engineer it into our lives the way you engineer safety factors into a bridge.
That is not cynicism. It is design realism.
It also clarifies the real axis of this chapter. The question is not whether humans are “good” or “bad.” That framing is a child’s moral cartoon. The real question is structural:
What is the default behavior of a human system under pressure, and what interventions reliably change that behavior?
Different disciplines answer that question in different dialects, but they keep circling the same gravitational mass. One says: you are built on strategies that were rewarded for persisting. The other says: you are a moral agent who must choose how to live, and you cannot outsource that choice to the crowd or to your instincts.
At this point, the two detectors have produced the same reading, but they interpret it in different units.
Biology says: the baseline bias makes sense given the constraints of survival and reproduction.
Philosophy says: the baseline bias creates predictable failures in human relations unless it is disciplined by reason and principle.
And this is where the gonzo tension lives—right in the seam between “explains” and “excuses.”
Because once you can measure the baseline, you can do something dangerous with it: you can treat it as destiny. You can turn description into permission. You can say “this is natural” as if “natural” means “good,” or as if “natural” means “inevitable.” That move has wrecked more moral systems than any single ideology ever did.
But the reverse mistake is just as common: pretending the baseline does not exist. Building moral language on top of a fantasy model of human nature, then acting shocked when the results fail to replicate outside the sermon.
The adult position is harder: measure the baseline honestly without turning it into a religion.
Ayn and Gene are not the point yet. They are simply the names we will later hang on two instruments. One instrument sees human nature through moral sovereignty and rational self-interest. The other sees it through replicators and inherited strategy. These instruments disagree about many things—metaphysics, meaning, what counts as purpose—but they converge on something that is hard to ignore:
Self-interest is not a cultural glitch. It is a native feature.
That does not make it sacred. It does not make it shameful. It makes it present. It makes it measurable. It makes it the starting condition you must account for if you intend to design anything—an ethic, a society, a relationship, a life—that does not collapse into exploitation, resentment, or coercion.
So we begin here: not with names, not with doctrines, but with a shared reading from two different machines.
And once we trust the reading, we can start asking the question that actually matters:
If selfishness is the baseline, what kind of creature does it take to become generous on purpose?
Here is the bigger gut-punch: we protect our interests first individually, and then collectively when it is convenient. The “we” expands and contracts like a lung. Me becomes mine, mine becomes us, and us becomes righteous the moment it needs a flag. What changes is not the underlying impulse. What changes is the scope of who counts as “self.”
And all of that is happening while we negotiate with reality using an absurdly limited dataset.
The old tree-in-the-woods problem is not a cute parlor trick. It is a reminder that our experience of the world is not the world itself. “Sound” is not a property of the tree; it is a relationship between pressure waves, an ear, and a nervous system that knows what to do with them. No ear, no experience. The forest can still move air, sure—but sound, as we live it, is a translation. A decoding. A private reconstruction.
Light is even more damning. The electromagnetic spectrum is a continent; humans see a hairline sliver and call it reality. We evolved to detect what was useful—what helped an ape find fruit, avoid cliffs, recognize faces, spot predators—not what was true in any cosmic sense. The rest of the spectrum—radio, infrared, ultraviolet, X-ray, gamma—exists all around you as a matter of physics, but it does not exist for you as experience.
And if we somehow opened the sensory firehose—if we were forced to continuously perceive everything outside the visible band, plus the subtle electromagnetic noise our devices are vomiting into space, plus the thermal signatures of every body in the room, plus the radiation history of every rock—we would likely go mad. Not because the universe is hostile, necessarily, but because our brains are not built to carry that much unfiltered signal. Now bolt on top of that the other invisible noise we generate ourselves: mass communication and endless informational dumps.
That is kind of the point.
We are blind, and we mistake our blindness for completeness.
Now fold that back into selfishness as baseline, and the picture gets sharper and uglier: a creature with limited perception will default to protecting what it can perceive. Your nervous system can feel hunger now, threat now, embarrassment now, pleasure now. It can imagine tomorrow, but it imagines it using hardware designed for immediate survival. The long-term, the invisible, the distributed consequences—those are faint signals. They do not scream. They do not bleed. They do not trigger adrenaline the way an insult or a rival does.
So the baseline is not merely selfish. The baseline is local. Near-term. Self-referential. It is a strategy that makes sense for an animal trying to survive in a world it can only partially sense.
Generosity, then, is not merely being nice. It is a technological achievement of the mind. It is what happens when a creature learns to act on information it cannot directly feel—when it can treat the unseen as real enough to matter: love, friendship, camaraderie, joy.
To become generous on purpose, the creature has to do at least three things that do not come for free.
First, it has to model beyond its senses. It has to accept that the world continues outside the bandwidth of immediate experience—other minds, future costs, distant harms, silent benefits.
Second, it has to override the default weighting of time. It has to give the future a vote. It has to make delayed outcomes emotionally legible enough to compete with present impulses.
Third, it has to expand the boundary of self without turning that expansion into a lie. Not by pretending everyone is one happy organism, but by recognizing interdependence as a systems fact: poison the commons, and you are eventually the one drinking downstream.
This is where the two instruments begin to harmonize again. One discipline says: minds evolved under constraints, so of course they privilege local information and immediate payoffs. The other says: moral maturity is the deliberate correction of those defaults, guided by reason and principle rather than impulse and convenience.
Same reading. Different units.
And the punchline—the one that matters for the rest of this chapter—is this: our ethics are not installed on top of perfect perception. They are installed on top of blindness. We are trying to build a reliable moral system inside a creature whose sensors reveal only a narrow strip of the real.
So the question sharpens:
If we are blind by design, what does it take to behave as if we can see?