
A high-pass filter (HPF) is a circuit or software algorithm that allows signals with frequencies above a certain cutoff to pass while reducing those below it. It’s a simple concept from signal processing, but it’s the perfect metaphor for what AI is doing to individuals and organizations right now.
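If you want the literal version before the metaphor, here’s a minimal sketch of a first-order high-pass filter in plain Python (the cutoff, sample rate, and test signal are made up for illustration): content below the cutoff is attenuated, content above it passes through largely intact.
```python
import math

def high_pass(samples, cutoff_hz, sample_rate_hz):
    """First-order high-pass filter: attenuates content below cutoff_hz."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate_hz
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        # y[i] = alpha * (y[i-1] + x[i] - x[i-1])
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# A 2 Hz signal riding on a large 0.1 Hz drift: the filter keeps the former, drops the latter.
rate = 100
t = [i / rate for i in range(rate * 10)]
x = [math.sin(2 * math.pi * 2 * ti) + 5 * math.sin(2 * math.pi * 0.1 * ti) for ti in t]
y = high_pass(x, cutoff_hz=1.0, sample_rate_hz=rate)
```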
AI is a high-pass filter for individuals and organizations. Those who pass the filter are leaving the rest far behind. That’s not a brag. It’s a problem I worry deeply about. The gap isn’t just widening; it’s becoming a chasm.
As an individual developer, if you have strong engineering, design, workflow, and leadership skills, and solid discipline in applying them, AI amplifies them and accelerates not only your work but also how quickly you learn new things, including new ways to use AI to amplify that learning loop.
Here’s what that looks like in practice: I’m prototyping faster, exploring architectural alternatives in hours instead of days, and testing ideas in minutes or hours that would have taken weeks to validate. I have very low emotional investment and can throw away the bad ones much more easily. I’m also discovering new problems AI brings to the entire process and how to solve them. AI isn’t doing the thinking. I am. AI is accelerating my ability to execute on ideas I already know how to evaluate, and the bigger the problem, the more amplified the feedback is.
The acceleration compounds. Better execution leads to faster feedback. Faster feedback leads to better learning. Better learning leads to more sophisticated use of AI. It’s a virtuous cycle if you already have the fundamentals and focus on improvement rather than just doing things.
If you don’t? AI will confidently generate plausible-looking garbage, and you won’t know the difference. The high-pass filter blocks you from seeing gains, not because you lack AI access, but because your signal sits below the cutoff.
To be honest, I’m looking for new ways to cope with the speed of learning. The rate of learning and experimentation has accelerated beyond what I thought possible even six months ago. Now I’m spending minutes building AI-augmented applications to help me manage my own human context window. The first step is a CLI tool for myself; you can find it on GitHub.
The same dynamics apply at the organizational level, but the effects are even more dramatic.
If your organization is designed to optimize the flow of value (you know, that DevOps thing) instead of maximizing utilization, AI will amplify your value delivery. You’ll ship faster, learn faster, and adapt faster. Your competitors won’t be able to keep up.
If not, AI will have zero value unless you first use it to help fix the organization.
Think about it: if your software delivery is already a malformed supply chain with handoffs, approvals, testing bottlenecks, and deployment theater, adding AI just means you’ll pile up information inventory at those bottlenecks faster. You’ll write code faster, only to wait longer for it to go through your dysfunctional process. The problems will compound.
Organizations with high-performing continuous delivery practices (small batch sizes, trunk-based development, automated testing, operational responsibility) will see AI multiply their throughput. They have the infrastructure to handle increased velocity.
Organizations optimizing for resource utilization, feature factories, and process compliance? AI exposes their dysfunction faster than continuous delivery ever did. Just like CD, AI is a diagnostic tool that shows you exactly where your supply chain breaks; it just does it much faster and with no soft landings. Jez Humble famously said, “If it hurts, do it more!” AI makes it much easier to hurt.
Here’s the counterargument I hear constantly: multiple studies show that AI coding assistants produce minimal or even negative returns. GitHub Copilot studies show modest productivity gains at best. Some research suggests AI-generated code has more bugs. Developer surveys report frustration and reduced code quality. Academic papers conclude that the benefits are oversold.
Fair enough. Let’s look at what these studies actually measure and how they measure it.
The typical study takes a group of developers, gives them access to AI coding tools, measures some metrics before and after, and reports the results. They find things like:
Code completion speeds up by 10-20%
More code gets written per day
Some developers report feeling more productive
But bug rates stay the same or increase slightly
Time to complete tasks shows marginal improvement
Many developers stop using the tools after the study ends
Industry consultants and skeptics point to these results and conclude: “See? AI isn’t the revolution everyone claims. It’s marginal at best, harmful at worst.”
None of these studies surprises me. In fact, they confirm exactly what I’d expect from a high-pass filter. The problem isn’t AI; it’s what the studies measure and who they measure.
But there’s a deeper issue here, one that goes beyond AI: these studies are making the same mistake every failed platform transformation makes. They’re measuring what happens when you give people new capabilities without changing how they work to leverage the platform.
I’ve seen this pattern repeatedly:
Organizations adopt cloud platforms but lift-and-shift legacy architecture: they get a bigger hosting bill, not agility
Teams implement CI/CD tooling but keep long-lived feature branches and manual testing: they get faster builds of the wrong thing with more defects
Companies buy DevOps platforms but maintain silos between dev and ops: they get expensive dashboards showing the same dysfunction
Enterprises roll out Agile frameworks but keep annual planning and project-based funding: they get expensive ceremonies with no improvement in delivery
The pattern is always the same: new platform capabilities require new ways of working. When you measure platform adoption without workflow change, you get marginal results at best. The studies showing minimal AI impact aren’t measuring AI’s potential; they’re measuring organizational inability to leverage it.
Let’s look at the specific flaws in how these studies are designed.
Every study I’ve seen focuses on utilization and output rather than business outcomes. They measure lines of code written, pull requests created, tickets closed, code completion speed, and developer “perceived productivity.”
These are vanity metrics that tell you nothing about whether you’re delivering value faster or building better products. Of course, those numbers go up; AI helps you write code faster. But if that code sits in a queue for weeks waiting for approval, or if it fails in production because your testing strategy is garbage, what did you actually gain?
The studies don’t measure time from idea to production, lead time for changes, mean time to repair, customer satisfaction, or business value delivered. They’re optimizing for resource utilization in a system where utilization is the wrong metric.
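For contrast, here’s a rough sketch of what measuring outcomes instead of output could look like: median lead time and change failure rate computed from delivery records. The records and field names are hypothetical; the point is that none of these numbers is “lines of code written.”
```python
from datetime import datetime
from statistics import median

# Hypothetical delivery records: when a change was committed, when it reached
# production, and whether it caused a failure. Swap in your own data source.
changes = [
    {"committed": "2025-06-02T09:15", "deployed": "2025-06-02T14:40", "failed": False},
    {"committed": "2025-06-03T11:00", "deployed": "2025-06-10T16:20", "failed": True},
    {"committed": "2025-06-04T08:30", "deployed": "2025-06-04T09:05", "failed": False},
]

def hours_between(start, end):
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

lead_times = [hours_between(c["committed"], c["deployed"]) for c in changes]
print(f"Median lead time: {median(lead_times):.1f} hours")
print(f"Change failure rate: {sum(c['failed'] for c in changes) / len(changes):.0%}")
```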
None of the studies I’ve seen controls for work habits and engineering discipline. They don’t separate developers who practice TDD from those who don’t, developers who understand modular design from those who don’t, teams with continuous integration from those with feature branches, or organizations optimizing for flow from those optimizing for utilization.
They measure everyone together, average the results, and report that number. They’re mixing signal and noise, then being surprised when the result is meh.
A developer with strong fundamentals in modern software engineering practices (XP, BDD, CD) uses AI completely differently from one without them. They use AI to identify flaws in requirements and develop better methods for validating pipeline artifacts. They give agents tests to pass, not code to write. A developer without those fundamentals uses AI to generate code and then fights to make it code they like.
The studies treat these two populations as the same. That’s like measuring the impact of power tools by averaging the results of master carpenters and people who’ve never held a hammer.
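To make “give agents tests to pass” concrete, here’s a sketch of the kind of executable specification I mean, in pytest style. The parse_money function and the money module are hypothetical; the developer writes the behavior, and the agent’s job is to produce an implementation that satisfies it.
```python
# spec_money.py -- the developer writes the behavior; the agent's job is to
# produce an implementation of parse_money() that makes these pass.
import pytest
from money import parse_money  # hypothetical module the agent will implement

def test_parses_plain_dollars():
    assert parse_money("$12.50") == 1250  # value in cents

def test_ignores_thousands_separators():
    assert parse_money("$1,234.00") == 123400

def test_rejects_garbage_instead_of_guessing():
    with pytest.raises(ValueError):
        parse_money("twelve-ish dollars")
```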
The studies also ignore organizational context. AI in a dysfunctional organization is like adding a bigger engine to a car with a broken transmission. The studies measure “RPMs” while ignoring that the car can’t move.
This is the killer: the studies report the average across all participants. The average is exactly what a high-pass filter removes.
If you ran the same studies but segmented by engineering maturity, you’d see dramatically different results:
High-performers (ATDD, CD, operational responsibility): massive gains
Mid-performers (some good practices, some dysfunction): modest gains
Low-performers (no testing discipline, broken delivery): zero or negative gains
Average those together and you get “marginal improvement.” But that average hides what’s actually happening: AI is amplifying the differences between high and low performers. The gap is widening, not staying constant.
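The arithmetic is easy to demonstrate. With made-up but plausible numbers, the segmented view and the blended average tell opposite stories:
```python
from statistics import mean

# Hypothetical per-developer productivity change after AI adoption,
# segmented by engineering maturity.
segments = {
    "high performers": [0.50, 0.70],                 # strong fundamentals, changed workflow
    "mid performers":  [0.05, 0.10, 0.10, 0.15],     # some good practices, some dysfunction
    "low performers":  [-0.10, -0.05, 0.00, -0.05],  # AI as a faster autocomplete
}

for name, gains in segments.items():
    print(f"{name:>15}: {mean(gains):+.0%}")          # +60%, +10%, -5%

blended = mean(g for gains in segments.values() for g in gains)
print(f"{'blended average':>15}: {blended:+.0%}")     # +14%: the 'marginal improvement' headline
```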
That’s not a failure of AI. That’s the filter working exactly as expected.
This is the most critical flaw. None of the studies I’ve reviewed ask whether teams changed their development workflow to leverage AI capabilities, whether developers learned new ways of working that amplify AI’s strengths, or whether companies addressed the bottlenecks that AI would expose.
Instead, they give developers AI tools and measure what happens when everything else stays the same. The high performers aren’t just using AI; they’re changing how they work to leverage it:
Using AI to generate comprehensive test suites, not just production code
Prototyping multiple architectural approaches before committing
Exploring edge cases and failure modes faster
Accelerating the feedback loop between idea and validated learning
Treating AI as a force multiplier for existing engineering discipline
The studies measure everyone else: developers using AI like a faster autocomplete while keeping broken workflows intact. Of course, the results are marginal!
The studies also miss timing effects. High performers adopt AI, get immediate gains, and compound those gains through accelerated learning. They’re already six months ahead of the research curve by the time the study publishes, with the possibility of multiple model upgrades in between.
Low performers try AI, get marginal results, blame the tool, and go back to their old ways. The studies capture this initial disappointment but miss the divergence that happens next.
The research will eventually catch up. Someone will run studies that control for engineering maturity, measure business outcomes, and segment results by performance level. When they do, they’ll confirm what high performers already know: AI is a massive multiplier for engineering excellence and organizational capability.
What I worry about is people waiting for those studies before mastering the tools. No matter what role they hold today, they are tomorrow’s junior developers, starting over with skills that are no longer the ones in demand. It’ll be like trying to get a job as a web developer whose expertise is table layouts and iframes, from the era when JavaScript was only for form logic.
So what do you do if you want to pass the filter?
For individuals: build the fundamentals. Learn testing. Learn how to describe “what” rather than dictating “how.” Learn Behavior Driven Development. Learn how to design fitness functions in the pipeline rather than inspecting for quality. Learn your business domain. AI will amplify those skills. Without them, AI just makes you confidently wrong faster.
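To be clear about what I mean by a fitness function: an automated check that runs in the pipeline and fails the build when an architectural or quality property drifts, instead of someone inspecting for it after the fact. A minimal sketch; the layering rule and the package names are hypothetical.
```python
# fitness_layering.py -- run in CI; exits non-zero if the dependency rule is violated.
# Hypothetical rule: nothing in the 'domain' package may import from 'infrastructure'.
import ast
import pathlib
import sys

FORBIDDEN_PREFIX = "infrastructure"

def violations(package_dir):
    found = []
    for path in pathlib.Path(package_dir).rglob("*.py"):
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            else:
                continue
            for name in names:
                if name.startswith(FORBIDDEN_PREFIX):
                    found.append(f"{path}: imports {name}")
    return found

if __name__ == "__main__":
    problems = violations("domain")
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)
```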
For organizations: engineer your software supply chain for flow. Ask yourself: “Why can’t we deliver today’s work today?” Every answer points to a bottleneck in your supply chain. Fix those.
The specific practices matter: trunk-based development, small batch sizes, automated quality gates, operational responsibility, and continuous integration. These aren’t optional. They’re the prerequisites for AI to deliver value. Without them, you’re just generating waste faster.
Here’s the good news: continuous delivery has always been a forcing function that exposes dysfunction. If you’ve been doing CD properly, if you can confidently use your standard change process in an emergency, your organization is probably well-positioned to leverage AI. If you haven’t, AI will shine on your problems like a supernova, making them impossible to ignore.
That’s an opportunity. Use AI to help map your value stream. Use it to identify bottlenecks. Use it to prototype solutions. But understand that AI won’t fix your organizational structure, your testing strategy, or your deployment process. That’s still on you.
You can either improve your signal to get through the filter or join the average. It’s a choice.
People with mature continuous delivery practices are delivering value at a pace that makes others look frozen in time. Individual developers with strong fundamentals are solving problems in hours that used to take days. This isn’t about being “left behind” in some theoretical future. It’s happening now, and the chasm will continue to grow.
As for me, I choose to make it through the filter. I don’t want to be lonely on the other side.
What are you going to do?