Logistics in the Age of Cyber Warfare

It’s an old military truism that amateurs worry about tactics while professionals worry about logistics. The saying is usually attributed to remarks made by Gen. Robert H. Barrow, Commandant of the Marine Corps, in 1980: “Amateurs talk about tactics, but professionals study logistics.”

The point is that supplying a military force is the hard part of operations and takes significant effort and expertise, while nearly anyone can learn the basics of tactics. The best tactics are of little use to an army that does not have the food, fuel, or ammunition to carry out the plan. The Allies won WWII because they could supply more aircraft, tanks, ammunition, and fuel to their forces, and keep them fed.

Logistics is the art and science of overcoming terrain. Moving people and materiel means traversing the rivers, swamps, oceans, and mountains between the source of supply and the point where they are needed. If an army is unable to traverse the terrain, then resupplying its forces is impossible.

In the realm of cyber warfare, many assume that logistics is not a significant part of the effort, but that is a mistaken view. Connectivity and bandwidth are the logistics of the cyber domain. To engage in cyber offense or defense, one must get to the portion of the network where the fight is taking place. The best cyber warriors in the world are of no use if they cannot connect to the network they are tasked with attacking or defending. Connectivity to the area of conflict, and sufficient bandwidth to deliver the intended payload, are essential to succeeding in cyber war.

By the same token, denying one’s adversary connectivity to resources that one wishes to attack or defend is critical to success. In physical warfare, interdicting an enemy’s supply lines is a key objective in most campaigns. If the enemy cannot resupply their forces, they will eventually be worn down and defeated just as the Germans were at Stalingrad in the winter of 1942-43. The German army was an excellent fighting force, but without food, fuel, and munitions they were doomed.

So when considering cyber warfare, remember that connectivity and bandwidth are not incidental considerations–they are what the professionals are worried about.

What is a Task?

In order to evolve from a view of architecture focused on systems to one focused on capabilities, we need to define what a “capability” is. And as I’ve mentioned, that means defining what a “task” is. After all, if a capability is the ability to complete a task, then it follows that we cannot define “capability” until we define “task.”

A task is a representation of a change in the state of the world. Put another way, a task is a potential change in some set of values that measure the state of the world. To complete a task is to change those values; that is, to change the state of the world. For example, if the task is to sharpen a pencil, then the current state of the world (i.e., the pencil is not sharp) can be represented by the angle formed by the point of the pencil, say 45 degrees. We can define the task “sharpen the pencil” as a change in that angle to some value less than 45 degrees. If we change the angle to 40 degrees, then we have completed the task of sharpening the pencil.

Stated more generally, we can define a task as a change in some set of values measured at time t1 and time t2. This gives us the foundation for formally defining a capability, but since a capability is defined as the ability to complete a task under specified conditions, it follows that we must also define what we mean by “specified conditions.” At least some of those conditions are restrictions on the task itself. Possible conditions include:

  • Thresholds for each value as measured at t1 and t2
  • The time elapsed between t1 and t2
  • The rate of change in any value over the interval t1 to t2

There are many other possibilities for “specified conditions” which I will explore in more detail as time permits.
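
To make this concrete, here is a minimal Python sketch of a task defined this way. The names (TaskSpec, target_t2, max_elapsed, max_rate) are my own illustrative choices rather than part of any formal model, and the sketch assumes, as in the pencil example, that completing the task means driving a single measured value below a threshold while satisfying the elapsed-time and rate-of-change conditions listed above.

```python
from dataclasses import dataclass

@dataclass
class TaskSpec:
    """A task: a required change in one measured value between times t1 and t2."""
    variable: str        # the measurement that defines this slice of the world's state
    target_t2: float     # threshold the value must cross by t2 (here: fall below it)
    max_elapsed: float   # specified condition: seconds allowed between t1 and t2
    max_rate: float      # specified condition: allowed change per second over the interval

    def is_complete(self, value_t1: float, value_t2: float, elapsed: float) -> bool:
        rate = abs(value_t2 - value_t1) / elapsed if elapsed > 0 else float("inf")
        return (
            value_t2 < self.target_t2        # the state change itself
            and elapsed <= self.max_elapsed  # elapsed-time condition
            and rate <= self.max_rate        # rate-of-change condition
        )

# The pencil example: "sharpen" means the point angle ends up below 45 degrees.
sharpen = TaskSpec("point_angle_degrees", target_t2=45.0, max_elapsed=60.0, max_rate=5.0)
print(sharpen.is_complete(value_t1=45.0, value_t2=40.0, elapsed=10.0))  # True
```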

Is “System” an Obsolete Concept?

I’ll cut to the chase: the short answer is “no.” But that comes with a really big caveat. In reality, it’s more like “not entirely.”

We began developing software systems because in the beginning (around the late 1940s), there was no software. If you wanted to make a computer do something, you had to create the whole system: data input, storage, processing, output—all of it had to be created from the ground up. You couldn’t just create a new data analysis algorithm because there was nothing to install it into, no way of running it unless you built the whole system needed to get data to the algorithm and render the results to the user.

The world of software has changed since that time. Data input is readily available—every computer has a keyboard, every database has a query interface. Data storage is handled either by an off-the-shelf database product or by disk read/write functions provided by the operating system. Many, if not most, of these functions are available as services that can be invoked across the network. Data storage services like Amazon S3 make data storage just another external component that can be called as needed with no thought to how it is designed and implemented.

The proliferation of publicly-available services has led to the emergence of mashups—a new capability created by stringing together available services, adding a little glueware here and there, and presto! A new capability emerges, with 90% or more of that capability existing entirely outside the developer’s control. Thought of another way, we have a “system” that isn’t designed the way we traditionally think of a system: as a coherent set of components developed and assembled to perform a pre-defined set of functions. Traditional system architecture and engineering focused on developing the desired capability starting from the requirements and progressively building out the necessary infrastructure until the resulting assemblage could fulfill all those requirements. The result is a “system” that can be picked up and moved from one location to another with some minimal set of dependencies (e.g., a particular operating system as the installation location).
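
As a rough sketch of what such a mashup can look like in practice (the endpoint URL and bucket name are hypothetical; the only services assumed are a generic HTTP API reached via requests and Amazon S3 reached via boto3), notice how little of the “system” the developer actually writes:

```python
import requests   # assumed: a generic public HTTP API as the first service
import boto3      # assumed: Amazon S3 as the external storage service

# Hypothetical names, purely for illustration.
SOURCE_URL = "https://api.example.com/v1/readings"
BUCKET = "example-mashup-bucket"

def mashup() -> None:
    # Service 1: pull data from someone else's API.
    response = requests.get(SOURCE_URL, timeout=10)
    response.raise_for_status()

    # Glueware: the only logic we actually own is this thin transformation.
    payload = response.text.upper()

    # Service 2: persist the result in S3 with no thought to how storage is implemented.
    boto3.client("s3").put_object(
        Bucket=BUCKET, Key="latest-readings.txt", Body=payload.encode("utf-8")
    )

if __name__ == "__main__":
    mashup()
```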

In a cloud environment, none of this makes sense. Most of the functions we think of as the infrastructure of a system (authentication, logging, data storage, etc.) are available as services. These are the things that often consumed the lion’s share of the engineering when developing a new system. Even user interface components are readily available as pre-packaged libraries, or even as third-party tools that will render any properly formatted data. In such an environment, are we even developing “systems” anymore? Yes, we are developing systems in the sense that we are defining the ways in which many different elements will be strung together to perform some function. But if the majority of those elements are outside our control—black boxes with interfaces we invoke as needed—are we really developing the whole system, or are we just developing some marginal increase in functionality in the context of a much larger environment? And does that marginal increase in functionality really require the heavyweight processes and frameworks that accompany traditional systems engineering?

No, we probably don’t need all those heavyweight processes and frameworks for much new capability development today. We need to find a new way to define and quantify how to make new capabilities available within a larger ecosystem, one that is lightweight and focused on the marginal changes that are needed instead of documenting the vast corpus of existing capability.

That said, there is still a place for the traditional concept of a system. Applications where a capability must operate by itself will still need those traditional processes and frameworks. Satellites, aircraft control systems, and similar applications that must continue functioning even in the absence of any supporting elements will require a full-blown design process that fully thinks through all the elements needed to keep that system up and running. But that’s not the majority of software development these days.

No, “system” is not an obsolete concept. But its usefulness is increasingly limited these days.

What is a Capability?

If we are going to define a “capability-focused architecture,” it follows that we must define what, exactly, a “capability” is. The US Department of Defense defines a capability as “the ability to complete a task or execute a course of action under specified conditions and level of performance.” This is a more specific definition than that provided by most dictionaries, and in the arena of solution architecture, specificity is important.

Given that definition, how do we specify what a “capability” is in terms that are meaningful enough to support a formal architecture model? To do that, we have to break down what a capability really is.

To start, let us take it as axiomatic that a capability may be composed of other, lower-level capabilities. For instance, the capability to drive a car is composed of the capabilities to start the car, to keep the car in the selected lane, to accelerate, to slow down, etc. So any definition of “capability” must be recursive; there will always be some lower level of capability that could be decomposed. Given this, our primary task is to define how to describe the lowest level of capability to which we choose to decompose.

First, we must define what we mean by “task” or “course of action.” For the sake of simplicity, let us confine ourselves to defining a “task” at this time. A task can be thought of as effecting a change in the state of the world. That is, there is some characteristic of the world that we can measure, and a change in the value of that measurement by some specified amount indicates the task has been completed.

Second, we must define “specified conditions.” This is another way of saying “the current state of the world.” That is, the specified conditions are in reality a set of measurements of the characteristics that we care about. Completing a task means changing one or more of those values by some specified amount. “Specified conditions” means the initial measurements of those values that define the state of the world (obviously, limiting our definition of “world” to those characteristics that are meaningful to the problem at hand).

Finally, we must define “level of performance.” Level of performance has two aspects. The first is the acceptable rate of change of the characteristic of the state of the world that defines completion of the task (e.g., complete the task faster than some established time). The second is the acceptable amount of change to all the other characteristics of the state of the world (e.g., change characteristic X by amount Y while characteristic Z remains constant).
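
Pulling these three pieces together, here is one possible Python sketch of a capability as the ability to complete a task under specified conditions and level of performance. The names (Capability, required_change, max_side_effect, demonstrates) and the car example are my own illustrative choices, not an established model: the specified conditions are the initial measurements, the task is a required change in some of them, and the level of performance bounds both the elapsed time and the drift in everything else.

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    """The ability to complete a task under specified conditions and level of performance."""
    name: str
    conditions: dict[str, float]        # specified conditions: measurements at t1
    required_change: dict[str, float]   # the task: required change per characteristic
    max_seconds: float                  # performance, aspect 1: time allowed to complete
    max_side_effect: float              # performance, aspect 2: allowed drift in everything else
    subcapabilities: list["Capability"] = field(default_factory=list)  # capabilities are recursive

def demonstrates(cap: Capability, t2: dict[str, float], elapsed: float) -> bool:
    """Did the observed state at t2 demonstrate the capability, starting from cap.conditions?"""
    t1 = cap.conditions
    if elapsed > cap.max_seconds:                  # level of performance, aspect 1
        return False
    for var, delta in cap.required_change.items():
        if abs(t2[var] - t1[var]) < abs(delta):    # the task itself was not completed
            return False                           # (direction ignored here for brevity)
    for var in t1:
        if var not in cap.required_change and abs(t2[var] - t1[var]) > cap.max_side_effect:
            return False                           # level of performance, aspect 2
    return True

# Example: the "slow the car" sub-capability of driving, starting from 60 mph, in lane.
slow_down = Capability(
    name="slow the car",
    conditions={"speed_mph": 60.0, "lane_offset_ft": 0.0},
    required_change={"speed_mph": -20.0},
    max_seconds=5.0,
    max_side_effect=1.0,
)
print(demonstrates(slow_down, t2={"speed_mph": 38.0, "lane_offset_ft": 0.5}, elapsed=4.0))  # True
```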

Defining a capability in this fashion provides the foundation, but by no means the full structure, for a capability-focused architecture. In future posts, I will try to flesh this idea out in greater detail.