Part Two of a Three-part Series
Between Part 1 and this sentence, I finished writing a twenty-five-page research paper. It is the kind of document academic institutions recognize as serious. It is not my natural register. In fact, writing it was so frustrating at first that it inspired another piece, How do you spell the sound of a scream?, which let me say what I actually felt, so I could go back and find the formal language for it.
I am a gemologist, an illustrator, and a father in long-term recovery, with no degree in any of the fields the paper draws from. On the title page, where the institution goes, I wrote: Independent Researcher.
I submitted it to PsyArXiv, a preprint repository where researchers post papers publicly before formal peer review. It’s the front door of academic publishing. You put your work up; other researchers can read it, cite it, respond to it. The barrier to entry is supposed to be minimal — the whole point is making research accessible before the slow machinery of journal review decides its fate. They rejected it. The reason:
“For submissions on this topic, we only accept submissions from authors who have previously published in this area. As we found no record of previous relevant peer-reviewed publications by the author(s) in a brief search, we cannot accept the work into the repository. This moderation decision is final. Please do not resubmit.”
I uploaded it elsewhere and emailed it directly to five researchers whose work I’d built on. One of them — a leading figure in the field, whose publications appear throughout Part 1 of this essay — responded in less than twenty-four hours. He wrote:
“Many thanks for sharing your paper with me. I just skimmed through it quickly to get a sense of what it’s about, and I was very impressed. You seem to be integrating many prominent lines of theoretical and empirical work to advance some novel ideas.”
He said he would read through the full paper in detail and referred me to his colleague who is writing a book on the subject, because he thought my work would be relevant to it.
(Here is the link to that paper for those interested: The Inverse Constraint Hypothesis: Language’s Conditioning Power as a Function of Domain Sensory Verifiability)
Same paper. Same week. One system checked the author. The other checked the work.
So when I tell you that the register you speak in determines whether you are heard — I have lived on both sides of that wall in the same week, with the same document, carrying the same truth.
That is the foundation we are about to dig into.
• • •
The Prediction Engine
In Part 1, we established that language starts in the body. Sound carries meaning before words exist. The first word in human life is built from the body’s first sound during its first act of care. And when children with no linguistic input meet each other, full language emerges spontaneously from embodied demonstration.
That was the first floor.
The foundation is this: most of what we call consciousness is our brain’s best guess at reality, generated from all prior experience and current context, projected over the incoming world before the incoming world arrives.
Our brains are prediction machines. They build a model of what they expect to encounter and run that model forward. The sensory data — light hitting the retina, sound vibrating the eardrum, pressure against skin — functions primarily as error correction. It updates the model where the model got it wrong. What we consciously experience is substantially the prediction, adjusted at the margins by sensation.
This is the dominant framework in neuroscience for how perception works. It’s called predictive processing, formalized by Karl Friston in 2010, building on decades of earlier work. The brain generates expectations from the inside out — from memory, context, and pattern — faster than raw sensory data travels from the outside in. Neuroscience calls these inside-out expectations “top-down” processing and the raw sensory stream “bottom-up” processing. The terms are literal: predictions flow down from higher brain regions; sensory data flows up from the eyes, ears, skin.
Our experience of the world is where those two streams meet. And the prediction usually wins.
Think about what that means.
When we walk into a room we’ve been in before, the brain predicts walls, furniture, light, floor. If something has changed — a new painting, a broken window — that difference registers as a prediction error, and we notice it. Everything else runs on the model. We perceive what we expect, adjusted where reality insists.
This is efficient. It has to be. Processing the full sensory stream from scratch every moment would overwhelm any system. So the brain takes shortcuts. It builds models from experience and runs them forward, using sensation as a check. The better the models, the fewer errors, the smoother the experience.
Here is what really matters about this.
The system only updates when the prediction fails. When what we expected and what arrived don’t match, the brain generates a prediction error — a signal that says: the model is wrong here, fix it. If no error arrives, the model runs. Unchecked.
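The update logic described above can be sketched as a toy loop. This is an illustration only, not a claim about how neurons implement it; the quantities and the learning rate are invented for the sketch. The point it makes is the one in the paragraph: experience is mostly the prediction, nudged by weighted prediction error, and if the error signal is switched off, the model never moves.

```python
# Toy sketch of predictive updating. The model's estimate moves toward
# sensation only in proportion to the prediction error it receives.

def update(prediction, sensation, learning_rate=0.2):
    """Nudge the model's estimate toward the sensory data,
    weighted by the prediction error."""
    error = sensation - prediction             # bottom-up correction signal
    return prediction + learning_rate * error  # top-down model, adjusted at the margins

model = 10.0    # prior expectation (say, the room temperature we predict)
reality = 18.0  # what the senses actually report

for _ in range(20):
    model = update(model, reality)
# With error correction on, the model converges toward reality.

frozen = 10.0
for _ in range(20):
    frozen = update(frozen, reality, learning_rate=0.0)
# With the error signal ignored, the model never updates. It just runs.
```

The second loop is the "zero introspection" case: the data still arrives, but the system assigns it no weight, so the model and the world can drift arbitrarily far apart.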
Bring back Marc Andreessen from Part 1. He declared his goal: zero introspection. Move forward. Go build things. Just go.
In predictive processing terms, he is describing a system that has turned off its own error-correction function. The model runs. It generates predictions about the world — about what’s valuable, what people need, what progress looks like — and because the system never pauses to check those predictions against incoming data, the model never updates.
It can be any system — a person, a company, an institution, a culture — that stops checking its predictions against reality.
The model becomes the reality.
This is how we end up convinced we are helping our spouse or child while they feel undermined, unseen, or erased. This is how we cause harm without intending it. This is how people commit atrocities while believing they are doing God’s work. This is how technological advancements meant to improve humanity’s quality of life actively destroy the environment and conditions that support human thriving.
The prediction was never checked. The model ran. The bodies absorbing the consequences were outside the model’s field of vision.
• • •
The prediction engine runs on different fuel depending on how much physical reality is available to check it against.
Stand inside a house. Sunlight on hardwood floors. Our feet register the solidity of the ground, the temperature of the wood. Walls return sound differently than open air; the auditory system models room volume before we consciously estimate it. We smell paint, dust, coffee from the kitchen. A hand on the doorframe registers texture, temperature, the slight give of aged wood. Proprioception — the body’s sense of itself in space — places us relative to walls, ceiling, staircase.
The brain’s predictive model generates “house,” and that prediction is being checked by sensory data arriving through every available channel simultaneously. Each channel generates its own prediction errors. If the model is wrong — if the structure is unsound, if the floor is about to give — the sensory systems will escalate errors until the model updates. We will feel it before we name it.
This is a high-constraint environment. The senses keep the model honest.
Now: a person evaluates a mortgage-backed security.
Somewhere inside that instrument is the house. Or rather, thousands of houses. Each one has hardwood floors, paint, coffee in the kitchen. People live in them. The person holding the MBS has sensory access to none of it. There is no photon, no sound, no smell, no temperature, no texture. The “object” being evaluated has no physical form that any sensory organ can detect. What reaches the brain is pixels forming letters and numbers on a screen.
A mortgage-backed security is a legal construct in which thousands of individual mortgages have been pooled, sliced into risk tranches, and sold as bonds. Each underlying mortgage is itself an abstraction — a legal claim on a stream of future payments from a person who lives in a physical house. The house is concrete. The person is concrete. The holder of the security has zero sensory access to either.
The brain’s predictive model for “what this MBS is worth” is constructed entirely from linguistic and numerical inputs. A rating agency’s label: AAA. Yield figures. Prospectus language. Verbal assurances. Other people’s stated confidence. The system that generates the prediction is the system that generates the evidence for the prediction. A closed loop.
This is a low-constraint environment.
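The contrast between the house and the security can also be put in toy form. A minimal sketch, with all numbers invented: in the high-constraint case, a belief is checked against independent sensory channels that all report reality; in the low-constraint case, each evaluator's only "evidence" is the other evaluators' beliefs. The first converges on the true value. The second converges on a shared consensus that reality never touched.

```python
# Toy contrast: high-constraint vs low-constraint evaluation.
# All values are invented for illustration.

def step(belief, inputs, weight=0.3):
    """Nudge a belief toward the average of its input signals."""
    target = sum(inputs) / len(inputs)
    return belief + weight * (target - belief)

true_value = 40.0  # what the thing is actually worth

# High constraint: several independent sensory channels report reality.
high = 100.0
for _ in range(30):
    high = step(high, [true_value, true_value, true_value])
# The belief is dragged toward the true value.

# Low constraint: each agent's only inputs are the other agents' beliefs.
agents = [100.0, 95.0, 105.0]
for _ in range(30):
    agents = [step(agents[i], agents[:i] + agents[i + 1:])
              for i in range(len(agents))]
# The beliefs converge on each other — around 100 — nowhere near 40.
```

Nothing in the second loop is irrational step by step; each agent updates on its evidence. The failure is structural: the evidence is other predictions, so the loop is closed and the error signal from reality has no channel in.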
We already know what happens when low-constraint environments run unchecked, at scale. Eventually reality intercedes. It escalates. It forces the model to update. This is what we saw in 2008.