Intelligence Delivery versus Intelligence Consumption: Part 1

Why AI orchestration is the same as—and different from—application orchestration

One of the key observations I’ve made in over 20 years of building and selling distributed systems platforms is that the application platform market is segmented by the way compute capacity is delivered to software. The integration point between the tools and platforms that deliver systems capacity (compute, network, and storage) and software that consumes that capacity (applications, services, and data analytics) is core to how platforms are acquired and operated.

[Image: capacity_diagram]

I believe a similar evolution will hold true for AI as well, but unfortunately the market is still a bit confused about what, exactly, we are “consuming” and “delivering.” In this three-part blog series, I’ll break down my argument for why I believe the thing we are consuming and delivering is “intelligence.” I will then explain how Kamiwaza provides today’s enterprises with the best interface to intelligence.

But first, what do I mean by “consuming” and “delivering”? Let’s start by exploring the enterprise application and services world most of us are familiar with.

The Application Analog

A great example of the divide between capacity delivery and capacity consumption is how Kubernetes and CloudFoundry/Korifi have evolved from the needs of their respective markets.

Capacity Delivery: Kubernetes

Kubernetes grew out of Google’s experience managing capacity in its vast data centers. As Docker successfully defined a standard for container definition and packaging, enterprises quickly ran into challenges efficiently mapping those containers to available compute, network, and storage resources. Google had long run its own orchestration for container-based applications internally, and in 2014 it open sourced Kubernetes, a new system built on that experience. After a brief battle among a few different commercial and open source container orchestration options, Kubernetes emerged as the de facto standard for Docker and OCI containers in the late 2010s.

The Kubernetes API provides a standard way to pass containers and their configuration to the compute hardware, network connectivity, and data access required to run them. Kubernetes not only automatically finds and configures that capacity, but also manages it for availability, performance, and so on. Kubernetes does not care what is in the container. It simply finds room to run it, copies the necessary bits to the hardware, does some configuration, and says “go.”
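To make that interface concrete, here is a minimal sketch using the official Kubernetes Python client. The image, labels, and replica count are illustrative placeholders, not anything Kubernetes itself prescribes:

```python
# Minimal sketch: handing a container and its configuration to Kubernetes.
# The image, names, and replica count are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from the local kubeconfig

# Describe WHAT to run: a container image plus its configuration.
container = client.V1Container(
    name="web",
    image="nginx:1.25",  # Kubernetes doesn't care what's inside
    ports=[client.V1ContainerPort(container_port=80)],
)

# Wrap it in a Deployment stating the desired capacity (three replicas).
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# Kubernetes decides WHERE it runs: it finds the hardware, schedules the
# pods, wires up networking, and keeps three copies alive.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Note what the caller never specifies: which machine the containers land on. Delivering that capacity is entirely Kubernetes’s job.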

Kubernetes does not include a development environment, a testing environment, or even an application packaging system natively. That’s not its purpose. It is simply meant to take Docker and OCI containers and run them. So, what can you use to build those containers and the software they contain?

Capacity Consumption: CloudFoundry and Korifi

CloudFoundry, an open-source platform, manages applications with little awareness of the specific hardware hosting them. From its inception, CloudFoundry was built to take application code and its dependencies, package them into containers, and deploy those containers into a container management environment.

Granted, for much of its life that container management environment was unique to CloudFoundry, but it was always carefully defined as something that could be replaced if the community desired to do so. (Which, in fact, it did at least three times.)

More recently, the CloudFoundry community built Korifi, a Kubernetes-centric development platform that brings many of the lessons learned in CloudFoundry to a Kubernetes consumption model. Korifi abstracts and forms opinions about the structure, packaging, and deployment of distributed applications in exchange for greatly simplifying how those applications get built and deployed. The community offloaded the container orchestration function entirely to Kubernetes, which enterprise IT has selected as its container orchestration platform of choice.

Why Enterprise IT Markets Reflect This Division

Now, if you work your way through the history of distributed systems development and look at who the biggest winners have been over the years, you’ll notice that the market divided itself pretty cleanly between “capacity delivery” and “capacity consumption.”

Over time, the interfaces and technologies involved have changed (the move from bare metal to virtualization to containerization, for example), and what exactly counts as “capacity” has shifted. (I would argue shared data stores like Oracle or MongoDB are now “capacity” services, not application-specific components.) But you’ll also see that the products that enable enterprises to simplify the delivery of capacity are, by and large, sold separately from the products built to enable the consumption of that capacity.

Why is that?

Well, a big factor is that as enterprise IT grew and needed to scale, the organizations that delivered capacity were separated, from both budget and management perspectives, from the organizations that built or bought the things that consumed that capacity. You can’t sell developer tools to the “rack and stack” ops teams, and you can’t sell network automation to application development teams.

But I hinted at another factor earlier. Computers, switches, disks, and all the related gear that makes hardware useful don’t care what applications are using them. They generally don’t care what programming language is in use, what data sits in the payload of a network packet, or what gets written to disk. They just do their job, as long as the workloads they serve meet some basic standardized criteria.

And, as we’ve seen, evolution accelerates when those delivering capacity can build automation independently of the creators of the applications, services, and data that consume it.

How Does This Apply to AI?

In the next post in this series, I’ll break down how AI is experiencing a similar evolution, and how the “capacity” involved differs from containers and compute. Applications (and agents) seeking value from AI environments are not primarily seeking compute, network, or storage. Rather, they are seeking an environment where they can provide a prompt and get an intelligent response that is useful for their purpose.

Thus, in AI platforms, “capacity” is the intelligence itself: the models, data, and tools required to generate smart responses. More about this in part 2 of the series.
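As a preview, here is a minimal sketch of what consuming intelligence looks like from the application’s point of view, assuming an OpenAI-compatible endpoint; the base URL, model name, and prompt are placeholder assumptions, not any specific vendor’s API:

```python
# Minimal sketch of intelligence consumption: prompt in, useful response out.
# Assumes an OpenAI-compatible endpoint; URL, key, and model are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.example.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="some-model",  # which model serves this is the platform's concern
    messages=[{"role": "user", "content": "Summarize our Q3 support tickets."}],
)

# The application never asks for GPUs, networks, or storage, only for an
# intelligent answer it can use.
print(response.choices[0].message.content)
```

Everything underneath that call, including model selection, data access, and tool invocation, is the delivery side of the equation.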

(By the way, I talked about this briefly in a recent article I wrote for InfoWorld, Rethinking operations in an agentic AI world. Spoiler alert: the impact that this division will have on both agent development and agent system operations is profound.)

As always, I write to learn, so if you have comments or questions, feel free to post them in the comments section below. I look forward to hearing from you.
