Trading in the Zone with Claude Code
Mark Douglas gave us the psychology. AI coding agents give us the infrastructure. Here is how I built the system he described.
Mark Douglas wrote Trading in the Zone in 2001. He laid out exactly what a trader needs to do: accept uncertainty, think in probabilities, execute without deviation. Simple ideas. Almost obvious once you hear them.
And almost impossible to do consistently with willpower alone.
Think about that for a second. A guy solved the central problem of trading psychology twenty-five years ago. Explained it clearly. Published it in a book anyone can buy for fifteen dollars. And the vast majority of traders who read it still can’t do what it says. That is a remarkable fact. It suggests the problem is not information.
I know because I tried for years. I read the book. I understood the concepts. I agreed with every word. And then I sat down at the screen and did the exact opposite. Not because I forgot the lessons. Because the lessons require a kind of mechanical consistency that a human brain, running on sleep and emotion and ego, was never designed to sustain.
The concepts were right. The execution environment was wrong. The execution environment was me.
That is what changed. Not the psychology. The tools.
AI coding agents did not give me better ideas about trading. They gave me the ability to build the system Douglas described and actually run it. Every day. Without dropping steps. Without negotiating with myself. Without the process degrading because I had a bad night or a good streak.
The entire value proposition, if you want to be honest about it, is that I am the weakest link in my own operation and I built a system to route around myself.
Before AI: The Spreadsheet Era
I was doing this work semi-manually for years. Some of it was automated, but most of it still ran through me. Every Sunday night I would pull data, mark up charts, cross-reference earnings calendars, build signal composites, and write analysis. Even the parts that were partially automated still needed me to assemble, interpret, and quality-check. The process took hours. Sometimes an entire day.
And it was fragile. Not because the process was bad. Because it depended entirely on me. Which is another way of saying: the quality of my risk management was a function of how well I slept. That is not a robust system.
If I was sharp, the work was excellent. If I was tired, things slipped. A date wrong on the earnings calendar. A level marked from last week instead of this week. A signal I rationalized away because the price action felt good and I didn’t want to see something negative.
Douglas talked about this. He said the market doesn’t generate painful information. Your mind does.
But what he didn’t talk about, because it didn’t exist yet, is that your process can generate painful information too. When your process is manual, it runs through the same brain that has to trade. And that brain has opinions. It has fatigue. It has bias. It filters. The tool you built to keep yourself honest is subject to the same dishonesty you built it to prevent. That is the structural problem nobody solved until now.
The process was supposed to keep me honest. But the process ran through me. So it was only as honest as I was on any given Sunday night.
That was the real leak. Not bad analysis. Inconsistent process.
Layer One: The Research Pipeline
The first thing I built with AI coding agents was the research layer. Not trading decisions. Just the preparation.
Every week, the system pulls fresh data. Breadth readings across sectors. New highs versus new lows. Volatility regime. Earnings dates. Economic releases. Options positioning. It assembles all of this into a structured pre-game before I look at a single chart.
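The structure is the point: a fixed list of pulls that always runs, in the same order, whether or not I feel like it. Here is a minimal sketch of that idea; the step names and placeholder data are illustrative, not the actual pipeline:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical pre-game pipeline: each step is a named data pull.
# Every registered pull runs, in order, every week -- no skipping.
@dataclass
class PreGame:
    steps: list[tuple[str, Callable[[], object]]] = field(default_factory=list)

    def add(self, name: str, fn: Callable[[], object]) -> "PreGame":
        self.steps.append((name, fn))
        return self

    def run(self) -> dict:
        # Assemble the full structured pre-game before any chart is opened.
        return {name: fn() for name, fn in self.steps}

# Placeholder pulls standing in for real data sources.
pipeline = (
    PreGame()
    .add("breadth", lambda: {"sectors_above_50dma": 6, "total_sectors": 11})
    .add("highs_lows", lambda: {"new_highs": 120, "new_lows": 85})
    .add("vol_regime", lambda: "elevated")
)
report = pipeline.run()
```

The design choice that matters is that the steps are data, not habits: the list cannot be trimmed by a tired brain on a Sunday night.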
This sounds simple. It is not. Before this existed, the work was mostly tedious and manual, and the quality depended on how much time I had and how disciplined I felt. Some weeks the pre-game was thorough. Some weeks I skipped half of it because I “already knew” what the market was doing.
The system does not skip. It does not already know. It pulls the same data, in the same order, every single time. And it surfaces what the data says, not what I want it to say.
Here is what that looks like in practice. One week, the index was grinding higher and everything felt bullish. If I had been running the pre-game manually, I probably would have built a bullish case and moved on. But the system pulled the sector breadth data and surfaced something I did not want to see: participation was narrowing. Fewer sectors were confirming the move. The rally was thinning out underneath a rising price.
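That narrowing check is mechanically simple, which is exactly why it works when you do not want it to. A toy version, with made-up sector flags and an arbitrary threshold:

```python
def participation(sector_flags: list[bool]) -> float:
    """Fraction of sectors confirming the move (e.g. above their 50-day average)."""
    return sum(sector_flags) / len(sector_flags)

def narrowing(this_week: list[bool], last_week: list[bool],
              threshold: float = 0.10) -> bool:
    """Flag when participation drops meaningfully week over week."""
    return participation(last_week) - participation(this_week) >= threshold

last_week = [True] * 9 + [False] * 2   # 9 of 11 sectors confirming
this_week = [True] * 6 + [False] * 5   # only 6 of 11 confirming
flag = narrowing(this_week, last_week)  # rally thinning underneath
```

The system raises the flag whether or not the price action feels good.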
That is the kind of finding you ignore when you are manually building a case that confirms what you already believe. The system does not build cases. It builds context. And sometimes the context says something you do not want to hear.
This is what Douglas meant by accepting uncertainty. You cannot accept what you cannot see. And you cannot see clearly when your process filters out the things that make you uncomfortable.
The research layer removed the filter.
Layer Two: Execution and Accountability
The research layer tells you what to look for. The execution layer holds you accountable for what you actually do.
This is where most traders fall apart, and honestly where trading as an activity gets a little absurd. You spend all weekend building a plan. You define every rule. And then you sit down in front of the screen and the rules become suggestions. You are, in effect, paying yourself to generate advice that you then ignore. It would be funny if it weren’t expensive.
The second layer I built tracks execution against the plan. Every trade gets logged against the criteria that were supposed to trigger it. Did the setup meet all conditions? Was the size correct? Was the stop where it was supposed to be? Did I follow the exit rules or override them?
At the end of a sample, I see a number: my edge capture rate. The percentage of my theoretical edge I actually kept. Not what the system would have made. What I made, given my actual behavior.
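The arithmetic behind that number is not complicated; the hard part is logging honestly enough to compute it. A minimal sketch, assuming trades are scored in R-multiples (the field names and sample trades here are illustrative):

```python
from dataclasses import dataclass

# Hypothetical trade log entry: what the plan would have captured
# versus what was actually captured, given real behavior.
@dataclass
class TradeRecord:
    planned_r: float    # R-multiple the rules would have produced
    realized_r: float   # R-multiple actually kept
    followed_plan: bool

def edge_capture(trades: list[TradeRecord]) -> float:
    """Share of theoretical edge actually kept across the sample."""
    planned = sum(t.planned_r for t in trades)
    realized = sum(t.realized_r for t in trades)
    return realized / planned if planned else 0.0

trades = [
    TradeRecord(2.0, 2.0, True),     # executed as planned
    TradeRecord(1.5, 0.4, False),    # exited early, leaked edge
    TradeRecord(-1.0, -1.3, False),  # moved the stop, lost more than planned
]
rate = edge_capture(trades)  # 1.1 / 2.5 = 0.44 -> kept 44% of the edge
```

This is exactly the variable separation the next paragraph describes: P&L blends edge and execution, while this ratio isolates the execution leak.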
That number is brutal. And it is the most valuable number in my trading.
Most traders have no idea how much edge they leak. They look at P&L and think that’s the whole story. But P&L is a blended number. It combines the quality of your edge with the quality of your execution. A great system with sloppy execution looks identical, in your account balance, to a mediocre system with perfect execution. You can’t tell the difference without separating the variables. And most traders never do.
Douglas described this as the mechanical stage. The stage where you execute every trade according to your rules, without deviation, across a large enough sample to see the distribution. He said most traders never get through this stage because they can’t stop evaluating individual outcomes. They win two trades and feel invincible. They lose two and start rewriting the rules.
The execution layer makes the mechanical stage survivable. It logs everything. It shows you the pattern. It proves, in cold numbers, that three losses in a row did not break the system. That your edge is intact even when it doesn’t feel like it. That is how you build the probabilistic belief Douglas said was essential. Not by reading about it. By watching it happen in your own data.
Layer Three: Risk and Regime
The third layer is the one that changes how you sleep at night.
Markets shift regimes. What works in a trending, low-volatility environment does not work in a choppy, high-volatility one. This is obvious. Everyone knows this. And yet most traders adjust for it intuitively, which means they adjust late, emotionally, and inconsistently. They reduce risk after the drawdown, not before it. They increase size after the winning streak, right before it ends. The timing is almost perfectly wrong, because the timing is driven by feelings, and feelings are a lagging indicator.
The third layer I built classifies the current regime based on a combination of factors: trend, macro conditions, and breadth. It doesn’t predict where the market is going. It describes the environment the market is in right now. And based on that classification, it adjusts the parameters.
In a favorable regime, the system runs with full exposure. In a deteriorating regime, it tightens. Position sizes get smaller. Setups need stronger confirmation. The threshold for taking risk goes up.
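The mapping from regime to parameters can be sketched crudely; a real version would use continuous inputs and more factors, and everything here (thresholds, multipliers, labels) is illustrative:

```python
def classify_regime(trend_up: bool, macro_supportive: bool,
                    breadth_ok: bool) -> str:
    """Toy three-factor regime score built from trend, macro, and breadth."""
    score = sum([trend_up, macro_supportive, breadth_ok])
    if score == 3:
        return "favorable"
    if score == 2:
        return "neutral"
    return "deteriorating"

# Parameters adjust with the environment, not with feelings.
SIZE_MULT = {"favorable": 1.0, "neutral": 0.6, "deteriorating": 0.3}
CONFIRMATIONS_REQUIRED = {"favorable": 1, "neutral": 2, "deteriorating": 3}

regime = classify_regime(trend_up=True, macro_supportive=False, breadth_ok=True)
size = 100 * SIZE_MULT[regime]  # neutral regime -> 60% of full size
```

Note that the classifier never forecasts. It only describes the present, and the parameter tables do the adjusting.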
Four winners in a row. Every instinct said press harder. Size up. This is the hot streak, ride it. But the regime layer had quietly shifted the classification. Breadth was deteriorating, macro conditions were tightening, and the model moved from favorable to neutral. It cut my position sizing before I had a chance to do something stupid with my confidence. Two weeks later the market pulled back hard. The system did not predict that. It just kept me from being overexposed when conditions no longer supported aggression.
Douglas talked about trading without fear. But he never said ignore risk. He said accept it. Accepting risk means knowing your worst case before you enter. It means sizing so that the worst case is physically acceptable. The regime layer extends that idea. It means your worst case adjusts to the environment, automatically, before you have a chance to talk yourself into something you shouldn’t be doing.
The Framework Is the Asset
These three layers, research, execution, and risk, are not three separate tools. They are one system. Each layer feeds the next. The research surfaces what to look for. The execution tracks whether you followed through. The risk layer adjusts how aggressively you can play based on the environment.
And here is the part that matters for the future: this framework is not locked to any single broker, platform, or tool. It is a process. A process is portable in a way that a gut feeling is not. It can plug into anything. Today it runs with the tools available now. Tomorrow, when the agents are faster and more autonomous, it plugs into those too. The traders who have a codified framework will upgrade overnight. The traders who are still running on feel will be starting from scratch, which is a strange place to be when the tools are moving this fast.
The framework is the asset. The process is the moat.
Right now, I set the levers. I decide how much autonomy the system gets. On research, it runs almost fully on its own. I review the output, but I trust the data pull. On execution, it logs and flags, but I still press the button. On risk, the regime classification runs automatically, but I review the adjustments before they take effect.
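Those levers are easiest to keep honest when they are written down as configuration rather than remembered as intentions. A hypothetical sketch of the current settings (keys and values are illustrative):

```python
# Hypothetical autonomy levers: how much each layer runs without me.
AUTONOMY = {
    "research":  {"auto_run": True,  "human_review": "output only"},
    "execution": {"auto_run": False, "human_review": "every order"},   # I still press the button
    "risk":      {"auto_run": True,  "human_review": "before adjustments apply"},
}

def requires_human(layer: str) -> bool:
    """A layer needs me in the loop if it does not run fully on its own."""
    return not AUTONOMY[layer]["auto_run"]
```

Moving a lever then becomes an explicit, reviewable change instead of a mood.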
The system is also built to learn. It has memory. It remembers what worked, what didn’t, what I overrode and why. Over time it does more of the things that led to good outcomes and less of the things that didn’t. That is still a work in progress. But the technology is changing so fast right now that what felt ambitious six months ago feels like a starting point today.
Those levers will move over time. As the tools get smarter, more of this will run without me. Not to replace the trader. To free the trader for the one thing that still cannot be automated: reading the game behind the game. Understanding market structure. Finding asymmetric opportunities. Knowing who is trapped and when the crowd is leaning too far. That second-level thinking is where outsized returns come from. And you cannot do it when you are buried in process.
The Goal Was Never More Screen Time
People ask me how many hours a day I spend watching the market now. The answer surprises them. Less than before. Significantly less.
The whole point of building this was not to trade more. It was to trade better with less of me involved. To get to the point where the system runs the pre-game, flags the setups, sizes the positions, classifies the regime, and all I have to do is decide whether to act. That’s it. One decision, with all the context already assembled, instead of eight hours of assembly followed by a tired decision.
Douglas wrote about getting to the intuitive stage. The stage where you have ground through the mechanical work so thoroughly that discipline is no longer something you force. It is just who you are. That stage requires thousands of repetitions. It requires a foundation so solid that you never have to think about process while you are thinking about markets.
AI coding agents do not get you to the intuitive stage. They get you through the mechanical one faster. They hold the process steady while you build the reps. They log everything so you can see your own patterns. They remove the tedious work that used to consume all the bandwidth that should have gone to deeper thinking.
The future will not give us more signals. It will give us more time.
We built tools that can process more market data in a minute than a human can process in a week, and the correct use of those tools is to spend less time looking at markets. The whole point of the technology is to need less of it. The best outcome is that the system runs, you check it once, and you go do something else. The optimal amount of screen time, once the framework is right, is close to zero.
That is what trading in the zone actually looks like. Not intensity. Clarity. Not more effort. Less friction. A system that runs whether you are watching or not, built on a framework that adapts to whatever comes next.
The process is the edge. The framework is the future. And the screen time, if you do this right, is optional.
LobWedge Research helps traders identify asymmetric opportunities through market structure, positioning analysis, and process-driven frameworks. We publish weekly analysis built on the only edge that compounds: discipline. And we are building the infrastructure for what comes next.
This is research and education, not investment advice.


