How the Jevons Paradox is Manifesting in AI-driven Software Development

William Stanley Jevons was an English economist who, in 1865, posed an interesting question about coal consumption: when the efficiency of steam engines improved, why did the demand for coal soar? In other words, when the amount of coal required for each task dropped, why didn’t total coal usage drop with it?

Jevons posited that greater efficiency made the steam engine more cost-effective, opening up new applications where steam power now made economic sense. The demand for steam engines exploded, and with it the demand for coal.

The Jevons Paradox has been applied to software development tools for decades now. As the efficiency of software development (and computing equipment) has steadily improved, the demand for applications, services, and data processing has soared. At this point, the vast majority of paper-driven processes (at least in first world countries) have been replaced with digital alternatives.

Now AI-driven software development (e.g. “vibe coding”, conversational programming, and so on) is rapidly changing the formula yet again. Only this time, many are predicting the demise of the software engineer, since application demand can be met easily by AI agents.

I argue something different, however: the demand for technical people who understand how systems of applications work will explode. There may be fewer jobs for traditional code jockeys, but they will be replaced by jobs that manage outcomes driven by many applications working together.

The Scale of Systems is Changing

To start with, let’s all acknowledge that AI is well on its way to dramatically reducing the cost of producing code. I’m not saying it is making individual coders more productive, or that we need fewer of them. It is just that we are generating more code, faster, in an attempt to solve more problems than ever before.

As we work out the “bugs” in using AI to code—finding the right processes, prompting patterns, feedback loops, and so on—we are going to find ourselves creating code for reasons that we cannot even fathom right now. And it will be cheap to do so. Even people with no background in a specific technical or business space will get AI to build requirements, tests, and code that works—a much more complete application than older “no code” approaches—in a matter of days, maybe hours.

There are two knock-on effects of this brave new world. The first is that there will be many more failures that get swept under the rug, as individuals and small businesses take a stab at something wild and unexpected (aka “differentiated”) and, as has been the case since business began, usually miss. The second is that whatever succeeds will likely depend increasingly on the behavior of other AI-built and/or AI-driven applications that themselves succeeded in separate experiments. This pattern of growing interdependence among independent, autonomously built and operated “agents” has a name: a complex adaptive system.

I have a whole video about complex adaptive systems and how interdependence can result in unexpected outcomes. (Accompanying slides here.) This will be no different. Each actor will attempt to build agents that drive the outcomes they want, but so will everyone else. The combination of competition, cooperation, new creations, removed (often “failed”) components, and unexpected interactions will result in what systems thinkers call emergent behaviors, many of which are not necessarily desirable. (For example, many stock market “crashes” were sudden emergent systemic behaviors, not the action of a single individual or organization.)
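
To make “emergent behavior” concrete, here is a minimal sketch in Python of Granovetter’s classic threshold model, a standard toy model of cascades (the crowd and thresholds are purely illustrative, not data from any real market). Each agent acts once enough others have acted, and a barely visible change to a single agent flips the outcome for the entire system:

```python
def cascade_size(thresholds: list[int]) -> int:
    """Each agent acts (e.g. sells) once the number of agents
    already acting reaches its personal threshold. Returns how
    many agents are acting when the cascade stabilizes."""
    acting = 0
    while True:
        now_acting = sum(1 for t in thresholds if t <= acting)
        if now_acting == acting:
            return acting
        acting = now_acting

crowd = list(range(100))       # thresholds 0, 1, 2, ..., 99
print(cascade_size(crowd))     # 100 -- a full "crash": everyone acts

crowd[1] = 2                   # one agent becomes slightly more cautious
print(cascade_size(crowd))     # 1 -- the cascade never gets going
```

No agent in the first run decides to cause a system-wide event; the “crash” emerges entirely from how the agents’ behaviors feed one another.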

How to Manage Complex Systems

So, what does an organization (or even an individual) do to get the best results in such a constantly changing environment? In my experience, the rule for driving success in complex systems is simple to state, but hard to execute:

You must see as much of the system as you can, and act through how you build, instruct, and connect the agents you can control.

Let’s break that down. First, understanding complex systems means understanding relationships: how the pieces fit together, what they are saying to each other, and how they behave when they receive different inputs. This is what the world of “observability” in computing is attempting today: gather as much information as you can about the elements of the system you can “see” (including elements that you may depend on but do not control).
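
As a minimal sketch of what that looks like for an agent you control (the agent names and event fields here are hypothetical, and a real system would ship these events to a telemetry pipeline such as OpenTelemetry rather than print them), each agent can emit a structured record of every signal it receives or sends, so the relationships in the system can be reconstructed later:

```python
import json
import time
import uuid

def emit_event(agent_id: str, kind: str, peer: str, payload_summary: str) -> None:
    """Emit one structured observability event as a JSON line.
    Printing keeps the sketch self-contained; a real system would
    send this to a log aggregator or telemetry pipeline."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent": agent_id,        # which agent observed this
        "kind": kind,             # "received" or "sent"
        "peer": peer,             # the other party in the interaction
        "payload_summary": payload_summary,
    }
    print(json.dumps(event))

# Record both sides of one interaction, so the relationship between
# the two agents can be rebuilt later from the event stream.
emit_event("pricing-agent", "received", "inventory-agent", "stock level update")
emit_event("pricing-agent", "sent", "storefront-agent", "price change")
```

The format matters far less than the habit: every interaction between agents should leave a record, because the relationships are what you are trying to see.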

Next, remember that you can’t control the system as a whole, only how the agents you build and operate (or at least can configure) interact within it. You may change how an agent responds to a certain input, you may change which agents your agents send signals to or receive signals from, or you may insert a new agent into a flow of activity to mitigate undesirable signals, among any number of other actions you can take where you have some control. That last move, inserting a mitigating agent, is sketched below.
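
As a minimal illustration (the agent names and the “magnitude” field are hypothetical), a new mediating agent is spliced between two existing agents to clamp destabilizing signals before they propagate:

```python
from typing import Callable

Signal = dict  # a message passed between agents

def downstream_agent(signal: Signal) -> None:
    print(f"downstream acting on: {signal}")

def make_mitigating_agent(
    forward: Callable[[Signal], None],
    max_magnitude: float,
) -> Callable[[Signal], None]:
    """Wrap a downstream agent with a new agent that dampens
    undesirable signals before they propagate."""
    def mediator(signal: Signal) -> None:
        if abs(signal.get("magnitude", 0.0)) > max_magnitude:
            # Clamp rather than forward a destabilizing spike.
            signal = {**signal, "magnitude": max_magnitude, "clamped": True}
        forward(signal)
    return mediator

# Rewire the flow: upstream now talks to the mediator instead of the
# downstream agent directly. Nothing else in the system changes.
send = make_mitigating_agent(downstream_agent, max_magnitude=10.0)
send({"source": "pricing-agent", "magnitude": 3.0})
send({"source": "pricing-agent", "magnitude": 250.0})  # gets clamped
```

Notice that neither existing agent changes; you alter the system’s behavior purely by rewiring who talks to whom, which is often the only lever you actually have.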

The Gardening Approach To Systems Management

The best analogy I know for the way a developer or operator participates in a complex systems environment like this is a garden. You, as a gardener, may plan a certain layout for your garden, and you can obviously see and judge the garden’s appearance (and production) at any time. However, you can’t “plant a garden” in a single act. You build that garden one plant at a time.

Furthermore, you don’t control all of the entities that may affect the outcome of your gardening. Critters love certain of the things you planted, and will try to find a way to feed on them. Microbes in the soil, or even in the air, might make your plantings sick or yield less than you hoped for. The weather can turn against you without warning.

So, what you do as a gardener is plant, observe, and take action as necessary to guide everything toward an outcome you are happy with. New agent types can be introduced (pesticides, fences, you yourself through weeding and pruning), bad or disappointing plants or other agents can be removed, and new plants can be added (either as replacements or to alter the outcome itself).

Furthermore, this is a never-ending process. No one is ever “done” with a garden; they just “take a break” from change for a while until the urge to alter or replace the existing garden takes hold. A gardener is not an architect, but rather a curator of sorts.

Back to Jevons…

So, what does all of this have to do with the Jevons Paradox? Well, as we see the number of applications, services, and agents explode in the coming decades, we will see the number of complex adaptive systems grow with it. (Actually, thanks to the Internet, it could be argued that almost everything is part of one huge complex adaptive system, but that’s a story for another time.) And as the number of systems grows, the number of “gardeners”—agentic engineers, software designers, operators, and other “outcome managers”—will explode with it.

I firmly believe that many, many people will find themselves with jobs that look a lot more like a combination of product manager, engineering manager, and “agentic psychologist” over the coming decade. Less coding, more guiding outcomes. No single executive can keep track of all of the different interactions, dependencies, and behaviors on which their business relies. They will need people they trust to do so.

Why not just use more AI? Because we are already seeing the limits of what current LLMs can achieve, and academia is showing us mathematical proof that these are hard limits. It is possible that new neural network architectures will get good at managing specific types of interactions, but I find it very hard to believe that entire complex systems will be managed at scale by AI in the next 30 to 40 years, partly because we, as humans, don’t always know what we want from these systems.

I think the best advice for a student wondering what to study to build a great future career is this: it is important to know how computers work and the basics of coding, but systems engineering (in the complex systems sense, not the hardware sense) is where it’s at. Perhaps a new discipline will combine software architecture, complex systems science, and business operations to enable practitioners to engineer great outcomes.

“Outcome Engineering”. I like the sound of that.

I write to learn, so please do not hesitate to comment with your observations or objections. We are in an era of rapid, sometimes scary change, and we need to explore the consequences continuously and critically. I welcome any and all constructive conversations to this end. Let me know what you think.

Update: Simon Wardley posted on LinkedIn about this same topic after I wrote this post, but before it was published. As usual, he is very articulate and makes some great points of his own. Very much worth a read.
