The Open Condition
AI will do what you did yesterday. AI can't do what you'll do tomorrow.

Many of the major AI predictions share the same assumption, and that assumption is wrong.
The dominant narrative runs on a single track: AI gets smarter, automates more, and progressively closes the gap between what's known and what isn't. The optimists say new jobs will emerge, or that we'll all be taken care of through mechanisms like UBI. The pessimists say we'll be kept around as curiosities, professional humans generating authentic despair for machines to study. Both camps accept the same premise: that closing the gap is possible. Both camps are wrong. Not in degree. In kind.
What Reality Actually Does
Reality is chaotic. Not metaphorically. Not as a temporary condition we'll engineer our way past. Chaotic in the technical sense: genuinely novel conditions emerge continuously, unpredictably, and without consulting our models first.
Physics doesn't negotiate with your optimization function. Entropy doesn't care about your roadmap.
This matters because it isn't a gap that closes with more compute. The universe is open. It generates information that didn't previously exist, not as recombination of known patterns, but as something structurally new. Every problem solved reveals adjacent problems nobody anticipated. Every closed question opens three more. The frontier doesn't recede because we fail to reach it. It recedes because reaching it produces more frontier.
Now, when you see completely independent intellectual traditions converging on the same structural insight, pay attention. Prigogine saw it in thermodynamics: novel order emerging from far-from-equilibrium conditions through irreversible processes. Polanyi saw it in the limits of explicit knowledge, the recognition that we know more than we can tell. Indigenous epistemologies have long held that understanding emerges from sustained, embodied relationship with place and cannot be extracted from context without being destroyed. Snowden built it into organizational strategy through Cynefin: complex systems can't be analyzed into predictability, only probed through direct engagement. Heidegger saw it in how technology reveals certain truths while concealing others.
These aren't converging because they're fashionable. They're converging because they're describing a boundary condition. Reality is open. That's not a perspective you adopt. It's the operating environment you're standing in whether you acknowledge it or not.
Three Layers of Time
AI processes what's known. It does this extraordinarily well: finding patterns, optimizing, scaling, accelerating at speeds and volumes no human matches. But everything it processes had to come from somewhere. Someone, some human, some point of contact with physical reality, had to encounter it first.
Think of it as three temporal layers.
Yesterday is what's been encoded. The problems we've solved, the patterns we've mapped, the knowledge articulated clearly enough to become data. This is AI's domain, and it is genuinely brilliant at it. AI will do the work you did yesterday, beautifully, efficiently, at scale.
This isn't a claim about current limitations. It's a description of the architecture. Generative AI works by learning statistical relationships across massive datasets, then producing outputs that are probabilistically consistent with those patterns. When it writes, it's predicting the next token based on the distribution of all the text it's been trained on. When it generates an image, it's navigating a learned latent space of visual patterns derived from millions of existing images. When it reasons, it's pattern-matching against the structure of arguments it's already seen. The outputs can be surprising, even genuinely useful in ways nobody anticipated. But the surprise comes from recombination at a scale and speed humans can't match, not from contact with anything new. The "generation" in generative AI is sophisticated interpolation. Interpolation within the space of what has already been encountered, encoded, and fed in as training data. That space is vast. It is not open.
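If "interpolation" feels abstract, here is a deliberately tiny sketch of the same structural property: a toy bigram model in Python rather than a transformer, with an invented corpus and made-up names. It is not how any production system works in detail; it only illustrates that generation redistributes probability over what the training data already contains. Hand it a context it never encountered, and it has nothing to say.

```python
from collections import Counter, defaultdict
import random

# Toy illustration only: a bigram "model" trained on an invented corpus.
corpus = "the machine hums the machine stalls the operator listens".split()

# Learn which token follows which in the data it has seen.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample the next token from the learned distribution."""
    options = transitions.get(prev)
    if not options:
        return "<unknown>"  # never-encountered context: no prediction to make
    tokens, counts = zip(*options.items())
    return random.choices(tokens, weights=counts)[0]

print([next_token("the") for _ in range(5)])   # recombines what it was fed
print(next_token("tomorrow"))                  # prints "<unknown>"
```

A real model does this with billions of parameters over trillions of tokens, which is exactly why the outputs can feel uncanny. But scale changes how much of yesterday it can recombine, not whether it can step outside it.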
Tomorrow is what hasn't been encountered yet. Novel conditions reality is generating right now that don't exist as information anywhere. No model can process what isn't yet data. No system can encode what hasn't been sensed. Tomorrow's work is structurally upstream of any technology, no matter how powerful.
Today is where it actually matters.
Today is the live interface. The present tense where humans stand at the edge, in contact with reality, making sense of what's emerging. Today is where humans and AI are already working together, right now, in the rooms where actual decisions get made.
I've been in those rooms. A field engineer hears something in a machine that no vibration sensor has flagged yet. She can't articulate it. She just knows the sound is wrong. An analyst stares at a quarterly projection that checks out numerically but doesn't match what the sales team told him last Tuesday, and he sits with that dissonance instead of clicking approve. A product manager reads an AI-generated competitive analysis, finds nothing factually wrong with it, and still feels the absence of something the model couldn't see because it hasn't happened yet. She overrides the recommendation, and three months later that instinct is the reason the launch didn't walk into a market that had shifted underneath the data.
AI will do what you did yesterday. It can't do what you'll do tomorrow. Today is the part that needs you.
Today is also where the danger is most acute.
What Control Actually Costs
What I see in organizations right now is a dangerous confusion. Leaders look at AI and see control. More prediction. More optimization. More certainty. The pitch is seductive: enough data, enough compute, and we close the gap. We tame the chaos.
But in the incessant pursuit of control through technology, something else happens. We don't eliminate uncertainty. We eliminate our ability to see it.
We've been here before, though the parallel needs to be drawn with precision. The 2008 financial crisis was this pattern at systemic scale. An entire industry adopted the same risk models, the same optimization logic, the same confidence in its ability to encode reality into manageable abstractions. The models were sophisticated. The dashboards were beautiful. And they were all wrong, in the same direction, at the same time. The system didn't self-correct gently. It broke catastrophically. The people who paid the highest price were the ones furthest from the decisions.
Now, the opposite error is just as fatal: overstating the parallel. AI adoption is more heterogeneous than pre-crisis finance. Different organizations are using different models, for different purposes, at different depths of integration. There's no single equivalent of the VaR model creating a perfect monoculture across the ecosystem. The degree of correlation is lower. That matters.
But the structural signature holds where it counts. The logic driving adoption is the same logic that drove the risk models: local optimization without systemic awareness. Every individual decision to automate, to cut the role, to trust the dashboard is locally rational. The danger isn't in any single move. It's in the composition. Not identical bets, but directionally aligned ones: the broad, simultaneous shift toward encoding over encounter, across organizations that are all reading the same case studies and hiring the same consultants. You don't need a monoculture to get correlated blindness. You just need enough firms moving in the same direction that the ecosystem's aggregate sensing capacity degrades faster than any individual firm notices.
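A back-of-the-envelope sketch makes the arithmetic of that degradation visible. The numbers below are invented and the independence assumption is generous, so treat it as a direction-of-effect illustration, not a forecast: when every firm trims its perimeter the same way, the chance that nobody anywhere notices an emerging shift jumps by orders of magnitude.

```python
# Invented numbers, purely illustrative: assume each retained human "sensor"
# independently has a 5% chance of noticing a given emerging shift.
def ecosystem_miss_probability(firms: int, sensors_per_firm: int,
                               p_notice: float = 0.05) -> float:
    """Probability that no sensor at any firm notices the novel signal."""
    return (1 - p_notice) ** (firms * sensors_per_firm)

# Varied, well-staffed perimeters vs. a sector-wide, aligned cut.
print(ecosystem_miss_probability(firms=20, sensors_per_firm=10))  # ~0.00003
print(ecosystem_miss_probability(firms=20, sensors_per_firm=2))   # ~0.13
```

And that is the optimistic version: real sensors reading the same dashboards and the same case studies are correlated, which pushes the aggregate miss rate higher still.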
So what does this look like on the ground?
Every role you cut because AI handles the output is a sensor removed from the perimeter. Every position you don't fill because the model is "good enough" is a point of contact with reality you've severed. Every layer of human judgment you replace with automated confidence is a feedback loop you've broken.
The chaos doesn't stop because you stopped looking at it. It accumulates. Silently. Behind the dashboard. In the gap between what your model predicts and what's actually happening. And by the time the mismatch surfaces, the people who could have seen it coming are gone.
You didn't optimize your organization. You blinded it.
Every Soldier Is a Sensor
The military learned this the hard way. Centralized intelligence architectures failed in complex environments because ground truth changed faster than command structures could process it. The doctrine that emerged wasn't about turning infantry into analysts. It was about recognizing that the people on the ground, in contact with reality, were already generating the intelligence the system needed. Their position was their value.
Organizations are making the inverse mistake. They're pulling sensors off the perimeter to save on headcount, then wondering why they can't anticipate what's coming.
The sensor isn't defined by seniority. It's defined by two things: proximity to ground truth and the judgment to interpret what you're seeing. A mid-career engineer who's been on the floor for twelve years and notices a pattern break nobody else caught, that's a sensor. A customer success manager who reads between the lines of a client's email and flags a churn risk six months before it shows in the data, that's a sensor. The value isn't the title. It's the position.
But here's why entry level matters, and why cutting junior roles is one of the most structurally damaging moves an organization can make right now. Sensors aren't born with twelve years of pattern recognition. They're built. Every veteran who reads a room, who senses that something is off before they can explain why, developed that capacity by spending years at the edge, in the mess, making mistakes with real consequences. Your junior hires are how you grow sensors. They encounter friction, notice anomalies, ask questions the model doesn't know to ask. Not because they're smarter than the AI. Because they're there, doing the unglamorous work of learning what reality actually feels like when it pushes back.
Cut your junior pipeline and you haven't just lost cheap labor you can replace with a model. You've severed the developmental pathway that produces the very judgment your organization depends on. Five years from now, you won't have mid-career people who can sense what's changing, because you never put anyone in position to learn how. The pipeline doesn't start at the point where it becomes valuable. It starts at the point where it looks expendable.
That's the trap.
But I Want to Be Careful Here
This argument needs to be honest about where it gets complicated.
AI is already moving into physical environments. Robotics, autonomous vehicles, sensor networks that interact with the world directly. An AI system embedded in a body that navigates unstructured terrain, encounters friction, and adapts in real time starts to look less like a pure encoding layer and more like something meeting reality on its own terms. That's a fair challenge, and I'm not going to wave it away.
But there's a distinction that holds: sensing is not the same as encountering. A sensor measures. An encounter involves interpretation within a web of meaning: context, judgment, the ability to recognize that something doesn't fit even when the pattern it breaks hasn't been defined yet. A temperature sensor detects heat. A human standing in a room senses that something is wrong before they can articulate why. The former generates data. The latter generates understanding. As AI systems become more physically embedded, that line will blur. But the line exists, and it's load-bearing.
There's also the trust problem, and this one cuts closer to the bone. Human attention and trust are genuinely scarce, but they're not unmanipulable. People already form attachments to AI systems. They already trust AI-generated recommendations in ways that bypass the very judgment the open condition demands. If trust can be simulated convincingly enough to capture behavior, then the scarcity itself doesn't protect us the way we'd like. It just relocates the advantage to whoever builds the most convincing simulation. That's not a future risk. That's today.
These complications don't break the argument. They sharpen it. The question isn't whether AI will eventually sense reality in increasingly sophisticated ways. It may. The question is whether we'll maintain the human judgment needed to know when it's sensing accurately and when the map has diverged from the territory. That judgment doesn't come from technology. It comes from people who've spent time in contact with the thing itself.
The Pipeline Runs One Direction
I build with AI every day. The technology is extraordinary. This is not the anti-AI argument.
But we need to be precise about what it is and what it isn't.
AI is the most powerful encoding layer humanity has ever created. It takes what's been sensed, experienced, and recorded, and it makes it operational at scale. That's transformative. That's worth building on, building with, building toward.
What it's not is a replacement for the sensing itself.
The sequence matters. Humans encounter reality. Reality generates novel information through that encounter. AI encodes and scales the result. That's not a race between humans and machines. It's a pipeline. And it only works in one direction.
The moment you treat AI as if it can run ahead of human encounter, as if it can generate its own ground truth, you've broken the pipeline. You're running the encoding layer on increasingly stale representations of a reality that's already moved on.
You're moving fast and seeing less.
The Real Risk
I'm not worried about AI taking all the jobs. I'm worried about something more specific and more dangerous: that we'll dismantle our own capacity to sense what's real in pursuit of a control that isn't possible.
The work will always exist. Reality guarantees it. Every solved problem reveals adjacent unsolved ones. Every closure generates new openings. The frontier is structurally inexhaustible, not because of human creativity or resilience, though those matter, but because the universe is genuinely open and will not stop producing conditions we haven't seen before.
The question was never whether humans will have work. Of course we will. The question is whether we build the systems and structures that keep humans connected to the frontier, or whether we optimize them out and discover, too late, that we've lost the ability to see what's coming.
AI will do yesterday's work. Beautifully. At scale.
Today, right now, someone is standing at the edge of what's known, making sense of what's emerging, holding the ambiguity, using judgment no model can replicate because the situation has never existed before.
Tomorrow will ask for the same thing. It always does.