
Fractal Architecture For the Adaptive Complex Enterprise

An evolving process structure executes an effective sequence of decisions while providing real-time response to incoming requests and increasing business visibility across all requests.
  1. Introduction
  2. Characterization of the S-R Business
  3. ACE Patterns
  4. Fractal Structure-Enabled Performance Traceability
  5. Benefits of ACE Integration
  6. References
  7. Author
  8. Footnotes
  9. Figures

Today’s businesses must continuously adapt to external conditions in accelerated time frames. This requires businesses to shift from strategies that eliminate variation to those that embrace variation and changing conditions. Industrial-age Make-Sell businesses accomplished objectives by steadily eliminating variation. In contrast, today’s sense-and-respond (S-R) businesses must often execute by embracing variation and learn to perform given widely varying circumstances [4]. Specifically, the S-R business must adapt to external conditions by managing along the chain: “Sense changing opportunities→Request variation→Product variation→Response process variation→Resource variation and Information variation→Efficient delivery and feedback→Business growth and survival.” (Here, the implication symbol “→” reflects cause and effect.) Recognizing that IT holds the promise of enabling the S-R enterprise, businesses often accompany transformation with enterprise integration projects, though with limited success. Project experiences spanning the past 10 years suggest that IT must first meet key challenges.

In this article I show that for IT to overcome the limitations, the deeper issue of modeling the very process of change itself must be addressed. This can be accomplished with a complete fractal architecture that provides a single shared complete business-IT framework for the adaptive business. Such an architecture overcomes some typical limitations, as described here.

Business reengineering implementations of the previous decades began with models of various flavors. Often portions of the developed models actually made their way into improved manual practices and application interfaces to support the prevalent industrial-type businesses. Many of the resulting applications with inflexible processes are now also known as silo applications. Equally often, other models became obsolete and were never put to use. What are the underlying reasons?

One important reason is that the very processes of change were not captured. Linear models portray only a snapshot view of the world. Since such models themselves do not change over time, they become obsolete as events unfold, and even small variations encountered in execution cause significant ripple effects. For example, consider the effect of a simple event like changing the delivery date or the delay of a shipment. This often has a great impact on ongoing operations. It is not surprising that today’s increasingly demand-driven ecosystem causes significant variation in business processes, making static models and representations rapidly obsolete given the effort required to maintain them manually. By using nonlinear model representations of the real world that can change with external variation, the very process of adaptation can be electronically supported.

Another reason is that what cannot be measured cannot easily be improved upon. Adaptation requires a framework for performance feedback. Quantitative and qualitative feedback about ongoing processes helps improve decision making by the business agents. Such feedback about ongoing processes has been called reflective [11] and is key to the learning, adaptive organization. The need for ongoing real-time adaptation challenges today’s approaches that mine business intelligence with after-the-fact approaches. Visibility is also hampered by the inertia created by a plethora of poorly related prescriptions; disjoint inter- and intra-business unit models; enterprise systems; and methods discovered (and often rediscovered) across business, systems engineering, and computer science disciplines. There is a fundamental need for a self-consistent model enabling scaling from smallest to largest, simplest to arbitrarily complex, as well as supporting nondeterministic behaviors. Such a framework lends itself to measurement and interdisciplinary analysis for more rapid adaptation. The two requirements are addressed by the adaptive complex enterprise (ACE) architecture, which is integrative, dynamic, and measurement-centered in nature.


Characterization of the S-R Business

The primary response or transaction loop of the S-R business unit begins by sensing the ecosystem as shown in Figure 1. The sensing culminates in a request object (for example, an order, proposal request, or research requirements from the customer) that initiates the response processes within the business unit. The execution of the response commandeers the resources of the underlying infrastructure (shown in the lower half of Figure 1) to play roles that transform inputs and culminate in a deliverable object that satisfies the customer within the ecosystem. Within the business dimension, the transaction loop includes the prototypical requirements and planning, execution, and delivery (RED) steps that incrementally add value to inputs.

A business unit handles different request types (for example, product change, process change, returns, request for quote, order, corrective actions, and new requirements) from external and internal customers. Each type of request has its own primary transaction loop and RED steps. In addition to the necessary inputs, the steps also identify roles such as customer, provider, sales, and logistics that must be assigned resources for execution. Active resources (such as humans or automated software) fill RED roles and become the agents that make decisions and execute these steps under some external policies and constraints to produce outputs. The final output is called a deliverable.

When each request arrives, needed resources from the underlying infrastructure are assigned to roles to execute the RED steps. The resulting agents complete steps by applying transformations to inputs and producing outputs (a deliverable being the final output). The transformation tools and inputs are considered passive resources. Every incoming request causes interactions between the RED and business information objects (such as requests, other REDs, roles, resources, and deliverables). The interactions are reflected by terms such as ‘receive’, ‘initiate’, ‘assign’, ‘use of’, and ‘applied to’, and are also the basis of aforementioned variation during execution.

The response transaction loops in the S-R business unit are characterized as Request→RED→Role↓↑→Deliverable (where cause-and-effect interactions are represented by →, and the ‘assignment’ and ‘use’ of resources are represented as ‘↓’ and ‘↑’, respectively). The ACE patterns introduced next are based on this characterization.
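The characterization above can be made concrete with a small data-model sketch. This is purely illustrative: the class and field names are assumptions introduced here, not an API from the article, and Role↓ (assignment) and Role↑ (recorded use) are modeled as plain fields.

```python
# Illustrative sketch of the Request -> RED -> Role (assign/use) -> Deliverable
# transaction loop. All names are hypothetical, chosen only to mirror the text.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Request:
    request_type: str               # e.g. "order", "return", "request-for-quote"
    payload: dict = field(default_factory=dict)

@dataclass
class Role:
    name: str                       # e.g. "customer", "provider", "logistics"
    resource: Optional[str] = None  # active resource assigned (Role-down)

@dataclass
class Deliverable:
    description: str
    accepted: bool = False          # customer acceptance closes the loop

@dataclass
class RED:
    request: Request
    roles: List[Role] = field(default_factory=list)
    steps: tuple = ("R", "E", "D")  # requirements/planning, execution, delivery
    deliverable: Optional[Deliverable] = None

# One transaction loop: a sensed request initiates a RED, roles are filled
# from the infrastructure, and the loop culminates in a deliverable.
req = Request("return", {"item": "laptop"})
red = RED(req, roles=[Role("customer"), Role("provider", resource="product engineer")])
red.deliverable = Deliverable("replacement laptop", accepted=True)
```

The deliverable’s acceptance flag captures the point at which the customer closes the loop within the ecosystem.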


ACE Patterns

The ACE patterns generate a nested executable structure based on the dynamically determined needs of each incoming request. The patterns enable business agents to address variation through emergent process structures and information. The structure also engages both organizations and systems in specific roles that meet commitments. The emergent processes are lean and efficient as they are directed by the needs of each specific customer request. Each pattern unit of structure is instrumented to measure and trace its own behavior. With this arrangement, the structure provides ongoing operational feedback for adaptation by agents. The structure is implementable with the latest middleware and workflow technologies that access information from existing systems, as shown here.

The patterns introduced here are locally defined and related to chaos and complexity theory¹ [10], and are mined from different sources.² Here, patterns and fractals are introduced intuitively rather than rigorously (as in [3]); related developments in model-driven architectures by the Object Management Group (OMG), hierarchical architectures for intelligent systems [1], and concepts for automating systems integration have also been influential. The four related patterns are:

  • Triage to handle request variation;
  • RED fractal to handle processing variation;
  • Agent assistance to handle resource variation; and
  • Infrastructure use to handle variation in the underlying information systems and components.

1: Triage pattern for handling request variation. The triage pattern logs and characterizes the incoming request, maps the request to an existing scheme of Request types→REDs…→Deliverables characterizing the initial processing and role requirements, and makes the initial assignment of resources from the underlying infrastructure. A familiar example of the triage pattern is hospital emergency handling. During admittance the patient’s request is triaged. An example execution is: Request: heart condition → R: Diagnosis and plan by cardiologist, E: Operation, D: Therapy → Deliverable: Rehabilitation. Appropriate initial resources, in this case a cardiologist and heart and lung equipment, are then assigned.
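A minimal sketch of this triage step follows. The routing table, its entries, and the escalation behavior for unknown request types are all hypothetical examples, not part of the article’s specification:

```python
# Hypothetical triage routing: log and characterize the incoming request,
# then map it to a RED template, role needs, and a deliverable type.
TRIAGE_TABLE = {
    "heart condition": {
        "red": ["R: diagnosis and plan", "E: operation", "D: therapy"],
        "roles": {"provider": "cardiologist"},
        "deliverable": "rehabilitation",
    },
    "order": {
        "red": ["R: quote", "E: fulfil", "D: ship"],
        "roles": {"provider": "sales"},
        "deliverable": "shipped goods",
    },
}

def triage(request_type, log):
    """Log the request and map it to an existing scheme of REDs and roles."""
    log.append(request_type)            # every incoming request is logged
    plan = TRIAGE_TABLE.get(request_type)
    if plan is None:                    # unknown type: characterize it first
        return {"red": ["R: characterize new request type"],
                "roles": {}, "deliverable": None}
    return plan

request_log = []
plan = triage("heart condition", request_log)
```

Internally generated requests would pass through the same function, consistent with the unified treatment described next.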

In S-R businesses, triage starts within the customer service center and is traditionally applied to external requests only. Consistent with our unifying approach, we generalize this to also include requests generated internally. Even requests for new offerings are introduced in this manner.

2: RED fractal pattern for handling processing variation. The initial understanding of requirements and assignments of resources may evolve due to local discovery. For example, during diagnosis the cardiologist might find that the patient has a diabetic condition and additional specialists must be requested. As will be demonstrated, the RED fractal (or RED*) allows agents to address such processing variation while retaining full cause-and-effect visibility during execution; the basic semantics of RED are introduced here.

RED execution: During RED execution, the customer and provider roles make co-commitments and jointly agree to progress (illustrated by the red arrows in Figure 2, part A) to the next step. For example, filing the return request of a laptop purchase results in a RED execution. Here the ‘R’ step proposes a reimbursement plan based on warranty and defects. The proposal and the authorization by the customer form the co-commitment that causes the back-office ‘E’ steps to replace the product. The next co-commitment is the ‘D’ or delivery by the provider, and the acceptance and payment by the customer. Note the roles for RED execution can be specialized. For example, the provider role for the first step could be the ‘product engineer’, but later the provider role can be ‘logistics’. In general, roles are careful characterizations of infrastructure capabilities provided by the business in order to execute and complete different types of REDs successfully. Roles can even be automated. Finally, each step has a status—completed, executing, or to be started—as illustrated in Figure 2, part C.

RED*: RED* refers to the recursive (fractal) application of transaction loops, while RED refers to a single transaction loop. As illustrated by the fractal blueprint in Figure 2, part B, the transaction loops are recursively applied by the control unit and within any of the R, E, or D steps. The resulting sub-RED enlists other resources in support of the primary deliverable, and so on. This provides the emergent (not predetermined) processing structure for coordinating and tracking resources needed for discovered conditions. As shown in Figure 2, part C, the numbering of the REDs and sub-REDs identifies the transactions for execution by the control unit and other patterns.
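The recursive structure and its hierarchical numbering can be sketched in a few lines. The class name, the `spawn`/`trace` operations, and the dotted numbering scheme are assumptions made here for illustration; the article does not prescribe a concrete encoding:

```python
# Minimal sketch of the RED fractal: a transaction loop that can spawn
# sub-REDs inside any step, numbered hierarchically (1, 1.1, 1.1.1, ...)
# so the control unit can identify and trace every transaction.
class REDTransaction:
    def __init__(self, request, number="1"):
        self.request = request
        self.number = number      # hierarchical id used for traceability
        self.subs = []            # sub-REDs spawned during R, E, or D

    def spawn(self, request):
        """Triage a secondary request discovered mid-step into a sub-RED."""
        sub = REDTransaction(request, f"{self.number}.{len(self.subs) + 1}")
        self.subs.append(sub)
        return sub

    def trace(self):
        """Flatten the fractal into (number, request) pairs, depth-first."""
        rows = [(self.number, self.request)]
        for sub in self.subs:
            rows.extend(sub.trace())
        return rows

root = REDTransaction("return faulty laptops")
repair = root.spawn("supplier repair")    # discovered during examination
repair.spawn("order spare parts")         # discovered during the repair
```

The `trace` output is the emergent processing structure: it exists only because these particular requests arose, rather than being predetermined.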

Agents make decisions to complete the RED steps. Often they apply policies and local knowledge during the course of completing steps as follows:

  • Requirements and Planning: This begins after the initial triage of the customer request and with the assignment of a resource to play the provider role. The provider may continue to apply local knowledge to clarify needs. The outputs may include proposals, designs, and diagnostics. The step completes by obtaining a commitment from the customer role. Often this step is considered overhead and represents investment. The successful completion of this step adds value internally.
  • Execution: This begins after the customer authorizes execution. The provider role co-commits to use the necessary resources and inputs. Core business knowledge is typically applied by the transformations and value is added to the input raw materials. This step is straightforward at times (as in purchasing a book via the Web) and quite extensive (with many sub-REDs) in other industries like health care and manufacturing.
  • Delivery: In the ‘D’ step the deliverables are assembled and provided in the customer’s environment, as input. By assuring that the customer’s criteria for acceptance are met, the provider gets compensated. That is, the customer determines the RED’s value-add to business.
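The step progression above can be sketched as a small state machine in which each step advances only on a co-commitment by both the customer and provider roles. The class and method names, and the three status labels (taken from Figure 2, part C), are illustrative:

```python
# Sketch of RED step progression: each of R, E, D completes only when both
# the customer and provider roles co-commit, mirroring Figure 2, part A.
STEPS = ["R", "E", "D"]

class REDLoop:
    def __init__(self):
        # Status labels follow Figure 2, part C.
        self.status = {s: "to be started" for s in STEPS}
        self.current = 0
        self.status["R"] = "executing"

    def co_commit(self, customer_ok, provider_ok):
        """Both roles must agree before the loop progresses to the next step."""
        if not (customer_ok and provider_ok):
            return False
        self.status[STEPS[self.current]] = "completed"
        self.current += 1
        if self.current < len(STEPS):
            self.status[STEPS[self.current]] = "executing"
        return True

loop = REDLoop()
loop.co_commit(True, True)   # proposal authorized by the customer: R -> E
loop.co_commit(True, True)   # replacement produced and accepted: E -> D
```

A failed co-commitment leaves the loop in place, which is where a triage of a secondary request (discussed next) would typically occur.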

A new request and consequent triage can occur within any step and at any time to address local conditions. Consider the return of faulty laptops by a retailer to the distributor. After examination, the product engineer characterizes the response needed—use of supplier to fix the problems and complete a sub-RED. This decision could not be anticipated until the laptops were examined. Furthermore, the decision was based on real-time feedback—the time constraints of the request and the availability of internal resources. Thus, while executing a primary R, E, or D, an agent can triage and insert a small number of secondary REDs at the ‘frontal’ as illustrated in Figure 2, parts B through D. This dynamic (pull) nature of decision making is mirrored by the fractal expansion.

Secondary requests arise for different reasons. Often it is to create a sub-deliverable used in the primary deliverable. However, Murphy’s Law also prevails and secondary requests are often spawned due to failures and changes. In fact, the more dynamic the environment, the more secondary requests contribute to response turbulence. By making the processing of secondary requests explicit and formal, agents can now better manage variation. In some cases the sub-REDs must be completed before the parent proceeds. In other cases sub-REDs may be batched for more traditional processing. Finally, best-practice REDs can be used from a library.

3: Agent assistance pattern for resource variation and coordination. Resource variation is supported by allowing the dynamic assignment (Role↓) of active and passive resources to roles. Role↓ is guided by policies such as decreasing task variability, load balancing, leveraging knowledge, and honoring preferences. Role↓ can also be just-in-time as needed for the transaction. This late binding provides flexibility in using internal (or external) resources and implements the virtual enterprise. Role↑ is recorded as part of the RED transaction’s costs.
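A sketch of this late, policy-driven binding follows. The resource pool, the single load-balancing policy, and the way Role↑ use is recorded are all hypothetical simplifications of the policies listed above:

```python
# Hypothetical Role-down assignment: pick a qualified resource under a
# load-balancing policy, and record its use (Role-up) as transaction cost.
pool = [
    {"name": "alice", "skills": {"logistics"},          "load": 2},
    {"name": "bob",   "skills": {"logistics", "sales"}, "load": 0},
    {"name": "bot-1", "skills": {"sales"},              "load": 5},
]

def assign(role_skill, pool):
    """Late-bind the least-loaded qualified resource to the role."""
    qualified = [r for r in pool if role_skill in r["skills"]]
    if not qualified:
        return None            # would trigger a secondary request instead
    best = min(qualified, key=lambda r: r["load"])
    best["load"] += 1          # Role-up: use is recorded against the RED
    return best["name"]

who = assign("logistics", pool)
```

Because binding happens per transaction, the same role in the same RED type can be filled by internal staff, an external partner, or automated software, which is what makes the virtual enterprise practical.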

Assistance, Workflow Coordination, and Instrumentation: Monitoring the completion of steps and assignments of next steps for the agents participating in the different ongoing RED roles is accomplished by the underlying workflow management technology and control unit. Electronic workcenters—eWorkcenters—are delivered as tasks (steps with all related information) to the assigned resources in the form of To-Do lists. The tasks are accessed from a portal, thus coordinating globally distributed resources. Security is enforced and the task-related delivery of documents and records from underlying adapted applications is automated. Real-time traceability information is the feedback to agents enabling them to adapt their decisions and start sub-transactions as needed. Upon the completion of tasks, rules capture decisions, update applications, and record the simultaneous value-add automatically.

4: Infrastructure use pattern. Finally, the variation in the underlying infrastructure is managed for effective use through the use of well-defined shared services and protocols for use. Business-IT components—organizations, software systems, and machines—are all adapted/trained to provide a predictable quality of services. The components strive to support their own business use based on desirable goals and policies:

  • Each component should provide shared services that can play roles in as many different RED transaction types as possible;
  • The management and monitoring of component services is facilitated through a single point of administration;
  • Role-based access to information in underlying components is supported;
  • The use and satisfaction with each service provided is measured; and
  • Use information is saved and viewed for different purposes such as feedback for agent decision making and compliance assurance.
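The goals above can be sketched as a single registry that mediates service use. The registry class, its role-based check, and the usage log are assumptions introduced here to illustrate the pattern, not an API from the article:

```python
# Sketch of the infrastructure-use pattern: components expose shared
# services through one point of administration; access is role-based and
# every use is measured and saved for feedback and compliance.
class ServiceRegistry:
    def __init__(self):
        self.services = {}    # name -> (allowed_roles, callable)
        self.usage_log = []   # saved use information

    def register(self, name, allowed_roles, fn):
        """Single point of administration for a component's shared service."""
        self.services[name] = (set(allowed_roles), fn)

    def call(self, name, role, *args):
        allowed, fn = self.services[name]
        if role not in allowed:                   # role-based access
            raise PermissionError(f"{role} may not use {name}")
        result = fn(*args)
        self.usage_log.append((name, role))       # measured use
        return result

registry = ServiceRegistry()
registry.register("stock-level", {"provider", "logistics"}, lambda sku: 42)
level = registry.call("stock-level", "logistics", "SKU-1")
```

The usage log is exactly the kind of use information the last bullet describes: it can feed agent decision making or be replayed for compliance assurance.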


Fractal Structure-Enabled Performance Traceability

The benefits of an emergent processing structure can now be explored. Traceability is defined as the history of interactions and the measurements of performance of transaction loops, now made feasible by the instrumented RED structures. For example, we can now capture: Δ increase in # requests of type X → Δ increase in RED span time → Δ increase in $ business value. (Here Δ represents change.)

Traceability also integrates and builds upon successful frameworks for business improvement put forth by experts of the previous decades. At the core, these methods took a process-based approach to improving performance. Some suggested examples of traceability leading to processing improvement include: Δ process performance measures → Δ product quality → Δ customer satisfaction → Δ business performance → Δ economic performance [4]; Δ infrastructure performance → Δ process performance → Δ customer satisfaction → Δ business performance [7]; and eliminate wasted time → lean operations → business performance [5].

However, before we can show how traceability enables management we must precisely define related words like process, activity, step, task, and transformation that have developed different meanings in various disciplines. This is accomplished by the concept of dimension to address the different agent viewpoints within a business (see Figure 3).

Dimensions and Measurements. Within the work dimension, the term process means transformations applied to inputs within R, E, or D steps. In the operating dimension, process means RED transactions. In the business dimension, process is the aggregation of all the completed, ongoing, and future RED transactions for one or more request types. The term step includes all three meanings. With these definitions, RED* provides a single structure to address the value contributions to each dimension, as well as costs.

Work Dimension and Simultaneous Value-Add. Performance traceability naturally begins with each incoming request and its transaction contributions to different dimensions:

  • RED Product value-add is determined by the transformation performance within R, E, D steps and the customer’s satisfaction with the deliverable captured in the ecosystem dimension.
  • RED Process value-add is determined by transaction performance measured as span time or factors such as throughput and is captured and aggregated in the operating dimension.
  • RED Business value-add is determined by the compensation upon acceptance of the deliverable and is captured and aggregated in the business dimension.
  • RED Information value-add is the traceability information derived from the use of input objects, resources, and transformations within the RED and is captured in the information dimension. For example, Role↑ is determined by costs incurred (based on time used) for each resource playing a RED role.

Simultaneous value-add is defined as the contribution to each dimension at the close of a RED. As illustrated in Figure 2, part D, at the close of each transaction loop all related measurements and history are now co-related. For example, “Customer name, returning? etc.→Request type: requirements→RED: span/wait time→Role↑: what resources, how much, what information objects→Deliverable type: what quality, what lead-time, business value-add” is captured.

The capture of the quantitative and qualitative RED values as well as the aggregation of values for different dimensions is implemented by rules. The RED rules propagate information to nested REDs and also aggregate the performance of all ongoing instances of a request type for feedback and monitoring at the different dimensions as shown in Figure 3. Specifically, we can obtain the business value-add of each of the RED types in the business dimension as well as the RED throughput in the operating dimension [10]. At the same time, Role↑ costs are captured during the execution of REDs. The business margin is calculated by subtracting all aggregated infrastructure costs (referred to as secondary activities by Porter [10]; these include resources and all other investments) from the total business value-add due to all the responses delivered. Thus, it has been shown that RED* provides a single structure and traceability that unifies concepts such as value-add, activity-based costing, balanced scorecard, and process metrics.
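The aggregation and margin calculation can be illustrated with invented figures. The per-RED record shape and the cost split are assumptions made for this sketch; only the arithmetic (margin = total value-add minus aggregated infrastructure costs) follows the text:

```python
# Illustrative roll-up of per-RED records into business-dimension figures.
# Values are invented; the margin formula follows the text: total business
# value-add minus all aggregated infrastructure costs (incl. Role-up costs).
closed_reds = [
    {"type": "order",  "value_add": 500.0, "role_up_cost": 120.0},
    {"type": "order",  "value_add": 450.0, "role_up_cost": 110.0},
    {"type": "return", "value_add":  80.0, "role_up_cost":  60.0},
]

def aggregate(reds, other_infrastructure_costs):
    total_value = sum(r["value_add"] for r in reds)
    total_cost = sum(r["role_up_cost"] for r in reds) + other_infrastructure_costs
    return {
        "business_value_add": total_value,
        "infrastructure_cost": total_cost,
        "margin": total_value - total_cost,
    }

summary = aggregate(closed_reds, other_infrastructure_costs=200.0)
```

In a real deployment these records would be produced automatically by the RED rules at the close of each transaction loop, rather than entered by hand.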

End-to-end traceability information is quite difficult to collect and manage with silo-type organizations and systems. Businesses usually have different (and often multiple) enterprise systems managing objects and related processes. These systems are leveraged through the fractal architecture and infrastructure use patterns. The emergent response structures capture simultaneous value-add as agents attend to the unique needs of every request. The information dimension captures this and provides feedback meeting the objectives of agents in each of the other dimensions as shown in Figure 3.


Benefits of ACE Integration

ACE represents a shift from traditional top-down discrete planning-followed-by-execution cycles. Instead, RED* provides a top-down primary planning structure, which is executed bottom-up by agents empowered with local planning-execution-monitoring-adaptation flexibility. Traceability and feedback empower agents, yet they act as one from an executive point of view.

Fractal approaches are increasingly being applied to successfully streamline businesses. For example, Skoda (part of Volkswagen) implemented fractal production lines that caused profits for that year to increase 53% over the previous year [6], and Procter and Gamble improved response time four-to-one through supply chain coordination [8]. These implementations illustrate the overall benefits of the ACE approach.

Today’s lean business practices³ create customer value through shop-floor processes that apply only the resources needed to satisfy the specific request. ACE architectures support the same principles of creating value across other areas of the business. Consequently, efficient electronic processes arise when the resource needs are determined and met dynamically for each request. ACE eWorkcenters enable not only just-in-time allocation of resources but also the delivery of work tasks over the Internet. Finally, traceability supports most compliance and assurance requirements.⁴

It has been recognized, through workflow solution implementations over past years, that workflow processes such as engineering change, manufacturing planning, return handling, emergency handling, and even disaster recovery have much in common. However, the degree of reuse has been limited because the static models used were not flexible. By separating the concepts of requests, processes, resources, transformations, transactions, and components, we can better reuse each dynamically within the RED fractal structure and eventual solution.

ACE-type architectures have implications for emerging trends in highly distributed, embedded, and mobile systems. A fractal-based architecture has significant potential to provide the structural and analytic underpinning for distributed, highly configurable, self-describing structures to assemble themselves into useful systems.


F1 Figure 1. Characterization of the transaction loops as Request→RED→Role↓↑→Deliverable chains executed by the sense-and-respond (S-R) business unit.

F2 Figure 2. RED transaction and RED* concepts: A) RED co-commitments (illustrated with red arrows) between the customer and provider roles during a RED transaction; B) a blueprint for fractal expansion; C) RED* transaction and sub-transaction; and D) a transaction’s simultaneous value-add contributions to the different dimensions.

F3 Figure 3. The S-R business unit dimensions defining the role of agents in terms of the scope of interest. At execution the ACE structures capture simultaneous value-add and traceability information for improved decision making in each and every dimension of the business.


    1. Albus, J.A. and Meystel, A.M. Engineering of Mind: An Introduction to the Science of Intelligent Systems. Wiley Series on Intelligent Systems, 2001.

    2. Deming, W.E. Out of the Crisis. MIT Center for Advanced Engineering Study, Cambridge, MA, 1982.

    3. Gamma, E., Helm, R., Johnson, R., and Vlissides, J. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley Professional Computing Series, 1995.

    4. Haeckel, S.H. Adaptive Enterprise: Creating and Leading Sense-And-Respond Organizations. Harvard Business School Press, 1999.

    5. Jones, D.T. and Womack, J.P. Lean Thinking: Banish Waste and Create Wealth in Your Corporation. 2nd ed., Free Press, 2003.

    6. Jordan, J.A. Jr. and Michel, F.J. Next Generation Manufacturing: Methods and Techniques. Wiley, NY, 2000.

    7. Kaplan, R.S. and Norton, D.P. The Balanced Scorecard. Harvard Business School Press, Boston, MA, 1996.

    8. Mackenzie, D. The science of surprise: Can complexity theory help us understand the real consequences of a convoluted event like September 11? Discover 23, 2 (Feb. 2002).

    9. Peitgen, H., Jurgens, H., and Saupe, D. Fractals for the Classroom: Part One, Introduction to Fractals and Chaos. Springer-Verlag, 1992.

    10. Porter, M.E. Competitive Strategy: Techniques For Analyzing Industries And Competitors. Free Press, 1998.

    11. Schon, D.A. The Reflective Practitioner: How Professionals Think in Action. Basic Books, 1983.

    12. Winograd, T. and Flores, F. Understanding Computers and Cognition: A New Foundation for Design. Addison-Wesley, Reading, MA, 1987.

    ¹Complexity theory is the study of nonlinear dynamics. Fractals and complexity theory allow us to use nonlinear techniques to relate cause and effect in highly dynamic situations. Complexity theory also reassures us that complex (sophisticated) behaviors are not the consequence of elaborate theories and requirements. Benoit Mandelbrot showed fractal structures could imitate complex structures in nature. For example, when a coastline is reexamined at a magnified scale, the same pattern emerges. This is called self-similarity. This article hypothesizes that the complexity in business and information systems can be mimicked by fairly simple fractals and that the underlying IT requirements are simple (and thus more easily implementable and supportable as a product).

    ²Some sources are Speech Act Theory as proposed by Winograd and Flores [12], the Supply Chain Council reference framework developed by over 700 companies, and over 100 enterprise integration projects conducted during the previous decade.

    ³It has been shown that this systematically minimizes all forms of waste. Traditional forms of waste include wasted capital (large inventory), wasted material (scrap), wasted time (due to lack of resources), too much work in process, transit time, wasted resources (inefficiency), human effort, rework applied (due to inaccurate information), wasted energy (energy inefficiency), and wasted environmental resources (pollution).

    ⁴For example, the relationships between orders, design, work instructions, quality documents, corrective actions, and the numerous physical components must be maintained across the extended enterprise for standards like the ISO 9000 series.
