
Synthetic analysis:
How science combats complexity


In the past two or three decades, complexity not only has been a hot research topic but has caught the popular imagination.  Terms such as chaos and bifurcation have become so common that they find their way into Hollywood movies.  What is complexity?  What is the theory of complexity or the science of complexity?  I do not think there is such a thing as the theory of complexity.  Not even a rigid definition of complexity exists in the natural sciences.  There are many theories trying to address various complex systems.  What I try to do is to extract some general ideas that are implicit in these theories and, more generally, in the way that scientists face and think about complicated situations.

 

Neither reductionism nor parochialism

The April 2, 1999 issue of the journal Science features a special section on complex systems.  Its ten articles represent viewpoints from physics, chemistry, molecular biology, ecology, neuroscience, earth science, meteorology, and economics.  However, it contains no contribution from what its editors, Gallagher and Appenzeller, call "the small, elite group of scientists whose ideas provide the theoretical underpinning for much of what is reported here."  I wonder who these elites are.  What is the substantive theory that can underpin so many sciences?  Can it be very much more definite than the idea in the section headline: "Beyond Reductionism?"

Reductionism is a major point of contention in the philosophy of science.  Its chief proponent is logical positivism, which promotes “the unity of science,” a kind of imperial unity in which a set of universal principles governs all science, just as the laws of Rome governed many lands.

Positivism is in decline.  More in vogue now is a postmodern parochialism advocating “the disunity of science.”  It sees science as fragmenting into a host of incommensurate paradigms, each jealously guarding its own turf and fighting off the others by playing politics, for incommensurability precludes the possibility of rational discourse.

Neither extreme position adequately reflects the practice and contents of science.  The special issue heralding “beyond reductionism” does not subscribe to parochialism.  Instead, it brings various disciplines together, showing that they are not incommensurate.

The dream of reducing all science to a single foundation proves to be illusory.  Many scientific disciplines thrive, and their number increases as scientists tackle complex systems, which exhibit an overwhelming diversity of phenomena.  Each science posits concepts and assumptions appropriate to its topic of investigation.  Most scientists agree that explanations seek their own levels, so that theories for wholes are usually not reducible to theories about their parts, as theories about brain functions are not reducible to those about neurons, and theories about solids are not reducible to those about atoms.  Irreducibility, however, is not incommensurability.  Research into complex systems does not fragment into insular and disjoint capsules.  Contrary to disunity, it encourages cooperation across academic boundaries.  Science’s special section on complex systems reports a “building boom in multidisciplinary centers” in major universities.

Multidisciplinary centers draw researchers from the physical, biological, social, and engineering sciences, so that people with diverse expertise can work together and pick each other's brains in the struggle to understand complex phenomena.  They indicate a kind of unity in science, not an imperial but a federal unity, not unlike the states of the United States of America, each legislating its own laws but all uniting under the broad principles of the constitution.  Some general ideas that enable scientists from disparate disciplines to understand and work with each other are discussed in this talk.

It is possible to connect theories for complex systems and theories for their constituents, but the connection is far more complicated than the simplistic prescription of reductionism.  For example, to connect thermodynamics to mechanics requires a completely new theory, statistical mechanics.  Statistical mechanics does not dispense with thermodynamic concepts; instead, it enlists them to join forces with concepts in mechanics to explain the complexity of composition.  This nonreductive connection between different descriptive levels is synthetic analysis.

 

Complexity and “complexity”

What is complexity?  Formal definitions exist in computer and engineering sciences, which I will discuss shortly.  However, the formal definitions are not directly applicable to natural phenomena, which do not always fit into the straitjacket of computation.  The natural sciences offer no precise definition of complexity or degree of complexity.  The editors of Science invited ideas from contributors to the special section.  They filled a page, which I include in the handout.  Here are some excerpts [Science 284: 79 (1999)]:

“In one characterization, a complex system is one whose evolution is very sensitive to initial conditions or to small perturbations, one in which the number of independent interacting components is large, or one in which there are multiple pathways by which the system can evolve.  Analytical descriptions of such systems typically require nonlinear differential equations.  A second characterization is more informal; that is, the system is ‘complicated’ by some subjective judgment and is not amenable to exact description, analytical or otherwise.” – Whitesides and Ismagilov

 

“Complexity means that we have structure with variation. . . .  To extract physical knowledge from a complex system, one must focus on the right level of description.” – Goldenfeld and Kadanoff.

 

“Complexity arises from the large number of components, many with isoforms that have partially overlapping functions; from the connections among the components.” – Weng, Bhalla, and Iyengar

 

“Perhaps the most obvious thing to say about brain function from a ‘complex systems’ perspective is that continued reductionism and atomization will probably not, on its own, lead to fundamental understanding.” – Koch and Laurent

 

“Complexity theory indicates that large populations of units can self-organize into aggregations that generate pattern, store information, and engage in collective decision making.” – Parrish and Edelstein-Keshet

 

“Complexity in natural landform patterns is a manifestation of two key characteristics.  Natural patterns form from processes that are nonlinear, . . .  and natural patterns form in systems that are open.” – Werner

 

“A complex system is literally one in which there are multiple interactions between many different components.” – Rind

 

“Common to all studies on complexity are systems with multiple elements adapting or reacting to the pattern these elements create.” – Arthur

 

I moved Whitesides and Ismagilov’s remark to the top because it makes a distinction of interest to philosophers.  They alone point out the popular notion in which complexity indicates something that overwhelms our understanding, something inexplicable or even unspeakable, something mysterious – things that we understand seem obvious, not complex.  Undoubtedly, many things baffle the best efforts of science, and they are complex.  It is well to remember the limits of science.  Nevertheless, I have the feeling that the idea of complexity as incomprehensibility is overused in the popular and philosophical literature, so that “complexity” sometimes becomes a front for simplistic ideas, a means to evade analysis and justify muddled thinking.

Instead of extolling the mysterious as profundity, scientists struggle to extend their understanding.  The notions of complexity they offer are down to earth, referring to things comprehensible, not exactly and completely, but approximately and to some degree.  “Complex” and “complexity” intuitively describe self-organized systems that have many components and many characteristic aspects, exhibit many structures on various scales, undergo many processes at various rates, and have the capability to change abruptly and adapt to external environments.  Some of these characteristics are captured in the formal definitions of complexity.

 

Two formal definitions of complexity

There are two definitions of complexity in the information and computation sciences.  They can help us to appreciate the nonreductive strategy for studying complex systems.

The idea of complexity can be quantified in terms of information, understood as the specification of one case among a set of possibilities.  The basic unit of information is the bit.  One bit of information specifies the choice between two equally probable alternatives, for instance whether a pixel is black or white.

Now consider binary sequences in which each digit has only two possibilities, 0 or 1.  A sequence with n digits carries n bits of information.  The information-content complexity of a specific sequence is measured by the length in bits of the smallest program capable of specifying it completely to a computer.  If the program can say of an n-digit sequence, "1, n times" or "0011, n/4 times," then the number of bits it requires is much less than n when n is large.  Such sequences with regular patterns have low complexity, for their information contents can be compressed into the short programs that specify them.  Maximum complexity occurs in sequences that are random, without any pattern whatsoever.  To specify a random sequence, the computer program must repeat the sequence, so that it requires the same amount of information as the sequence itself carries.  The impossibility of squeezing the information content of a sequence into a more compact form manifests the sequence's high complexity.
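
To make the idea of compressibility concrete, here is a minimal Python sketch, added only as an illustration, that uses the length of a zlib-compressed string as a rough practical stand-in for the length of the smallest specifying program (which is itself uncomputable):

    # A rough proxy for information-content complexity: how far a standard
    # compressor can squeeze a sequence of binary digits.
    import random
    import zlib

    regular = "001" * 7000                                             # "001, 7000 times"
    random.seed(0)
    patternless = "".join(random.choice("01") for _ in range(21000))   # no pattern to exploit

    for name, seq in [("regular", regular), ("random", patternless)]:
        compressed = zlib.compress(seq.encode())
        print(f"{name:8s}  {len(seq)} digits  ->  {len(compressed)} bytes compressed")

    # The regular sequence shrinks to a few dozen bytes, essentially the program
    # "001, 7000 times"; the random one cannot be squeezed much below its information
    # content of one bit per digit (about 2,600 bytes for 21,000 digits).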

 

Information-content complexity of a system:
The length in bits of the smallest program capable of specifying the system to a computer.
  • simple (regular patterns): 001001001001001001001 . . .
  • complex (random): 010101101001010111000 . . .

(Some characteristics of random systems can be represented quite simply by other means.)

Computation-time complexity of a problem:
How the computation time required by the most efficient algorithm for solving the problem varies with the problem’s size.
  • tractable problem: polynomial time; e.g., n^2 computation steps for a size-n problem
  • intractable problem: exponential time; e.g., 2^n steps for a size-n problem

E.g., a problem intractable by searching for a specific configuration:

  number of digits, n     number of configurations, 2^n
  4                       16
  40                      1.1 x 10^12
  400                     2.6 x 10^120

The second definition of complexity describes not systems but problems.  Suppose we have formulated a problem in a way that can be solved by algorithms, or step-by-step procedures executable by computers.  Now we want to find the most efficient algorithm to solve it.  We classify problems according to their “size”; if a problem has n parameters, then the “size” of the problem is proportional to n.  We classify algorithms according to their computation time which, given a computer, translates into the number of steps an algorithm requires to find the worst-case solution to a problem of a particular size.  The computation-time complexity of a problem is expressed by how the computation time of its most efficient algorithm varies with its size.

Two rough degrees of complexity are distinguished: tractable and intractable.  A problem is tractable if it has polynomial-time algorithms, whose computation times vary as the problem size raised to some power, for instance n^2 for a size-n problem.  It is intractable if it has only exponential-time algorithms, whose computation times vary exponentially with the problem size, for instance 2^n.  Exponential-time problems are deemed intractable because for sizable n, the amount of computation time they require exceeds any practical limit.

As an example, consider the problem of finding a specific sequence of binary digits among all its possible configurations.  The size of the problem is the length of the sequence, or the number of digits it contains, n.  A sequence with 4 digits has 16 possible configurations; a sequence with 40 digits has over a trillion.  Generally, as the number of digits n increases linearly, the number of possible configurations of the sequence increases exponentially.  This is the combinatorial explosion of composition.  If, given a certain criterion, we have to find a particular sequence by searching through all the possibilities, then the combinatorial explosion makes the problem intractably complex.
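
A small numerical sketch of the explosion, in Python; the rate of one billion configurations checked per second is an assumption chosen only for illustration:

    # Exponential growth of a brute-force search over all 2**n binary configurations,
    # assuming (hypothetically) 10**9 configurations can be checked per second.
    CHECKS_PER_SECOND = 1e9
    SECONDS_PER_YEAR = 3.15e7

    for n in (4, 40, 400):
        configurations = 2 ** n
        seconds = configurations / CHECKS_PER_SECOND
        print(f"n = {n:3d}:  {float(configurations):.3g} configurations,  "
              f"~{seconds:.3g} s  (~{seconds / SECONDS_PER_YEAR:.3g} years)")

    # n = 4 is checked instantly; n = 40 takes about 18 minutes; n = 400 would take
    # on the order of 10**104 years -- intractable on any conceivable hardware.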

Brute-force search is a venerable strategy in artificial intelligence (AI), and the combinatorial explosion explains why the progress of AI is rather slow.  Take chess for example.  Chess, a finite game with rigid rules, is conducive to the method of searching through all possible configurations to find the optimal move.  Shortly after the Soviets launched Sputnik in 1957, Simon predicted that a computer would be the world chess champion within ten years.  He was wrong by thirty years.  In the interim, computer technology developed so dramatically that the price of computing dropped by half every two to three years.  Economists estimate that if the rest of the economy progressed as rapidly, a Cadillac would now cost $4.98. It would almost be affordable to vacation on the moon.  Despite the unexpected advancement in hardware technology, computers' chess victory came so late because of the combinatorial explosion.  A chess game is a process made up of constituent moves, and the number of its possible configurations increases exponentially as one thinks more steps ahead.  The combinatorial explosion blunts the raw power of the computer to search through the possibilities.  This is why despite its victory in chess, the computer is still a novice in the board game go; there are simply too many possible go configurations.

Some people argue that chess and other AI problems are more difficult than physical science because there are more possible chess configurations than atoms in the universe.  The argument is wrong because it compares apples to oranges, or the number of possible configurations to the number of constituents – the two columns in the above table.  The proper comparison is between numbers within the same column.  We should compare the number of chess pieces, 32, to the number of atoms, or the possible chess configurations to the uncountably infinite possible configurations that the atoms in the universe can make up.  The comparison would show that physical science would have gone nowhere if scientists were as one-track-minded as chess machines.

Scientists have managed to understand the universe because they do not rely on brute force enumeration of atomic configurations but can adopt different intellectual perspectives.  They are like human chess players.  Human players do search, but unlike chess machines, they also recognize strategic patterns, discern good moves, and concentrate on them.  Similarly, scientists are not bogged down in microscopic details.  To solve complex problems regarding complex systems, they adopt different perspectives and different strategies. 

The laws of large numbers provide an example of how scientists combat complexity by the strategy of shifting to a higher level of organization.  Remember that according to the information-content definition, the highest degree of complexity belongs to random systems, whose information cannot be compressed or simplified in any way.  However, large and totally random systems also exhibit certain types of regularity that can be characterized rather simply, for instance by using the laws of large numbers and the probability calculus.
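
A brief Python illustration, added here for concreteness, of the kind of simple regularity the laws of large numbers pull out of a maximally complex random sequence:

    # Individual random digits are unpredictable, yet the fraction of 1s in a long
    # random sequence settles ever more tightly around 0.5 -- a simple higher-level
    # regularity of a system that is maximally complex digit by digit.
    import random

    random.seed(1)
    for n in (10, 1_000, 100_000, 1_000_000):
        ones = sum(random.randint(0, 1) for _ in range(n))
        print(f"n = {n:9,d}   fraction of 1s = {ones / n:.4f}")

No digit-by-digit description yields this one-number summary; it appears only at the level of the sequence as a whole.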

The probability calculus and other high-level characterizations leave out much detailed information about the constituents, but not all; they capture what is important.  Their conceptual frameworks provide some room for developing connections with theories about the constituents.  By themselves, they capture salient features of composite systems as wholes and are invaluable in explaining and predicting their behaviors.  The ability of scientists to expand their conceptual horizon to encompass various levels of organization, not losing sight of the connections between the levels even when they concentrate on one level, is essential to the study of complex systems.

 

Ontology and epistemology

Three ideas stand out in the formal definitions of complexity: composition, relation, and size.  These are also ideas you find in the scientists’ explanations cited earlier.  Complex systems are composed of large numbers of interrelated constituents.  Of course, not all large composite systems are complex; those that exhibit repetitive patterns are simple according to the information-content definition.  Yet composition, relation among constituents, and large numbers of constituents are ingredients found in most if not all complex systems.

If large size simply implied being overwhelming, then complex systems would not be so fascinating.  We would give up trying to understand them.  Fortunately, large size can also generate simplicity of a novel kind.  Patterns can emerge on a higher level as results of the self-organization of myriad jostling constituents.  These patterns can often be represented and explained rather simply on their own level.

In talking about emergent properties, we should distinguish between metaphysical and epistemological judgments.  Ontologically, let us all agree that a complex system is composed solely of its constituents and their interrelations; there is no extra mysterious substance, no extra higher power such as God.  Thus, we accept the ontological assertion below.  Does that imply that we should also accept the epistemological assertion?

  • Ontological assertion:  the states and interactions of all atoms in the universe completely determine the universe’s structure.
  • Epistemological assertion:  knowledge about the states and interactions of all atoms in the universe exhausts knowledge about the universe’s structure.

Consider a black-and-white screen with ten trillion pixels.  Suppose you have spent a week learning the color of every pixel.  Do you know everything there is to know?  Would you know, by citing pixel colors WWWBBWW . . . , that “Your son has had an accident and is now in the hospital”?  If you see the message, you are no longer stuck with colored pixels and their arrangement; you have jumped to a higher level of organization where texts emerge.

We are mainly concerned with epistemological questions of how we understand the world and how science explains structures of the universe.  Perhaps God can grasp everything in the universe from a single perspective, but we mortals cannot.  Ideologies that pretend to attain God’s position are illusory.  To understand the world and cope with its vagaries, we human beings cannot avoid adopting multiple perspectives and seeing things at different levels of organization.  It is in connecting perspectives that the notion of emergent properties becomes significant.

 

Holism, reductionism, synthetic analysis

Because large systems and their constituents are on two levels of organization, their properties can be quite different.  To recognize the system and its constituents and talk about them, we must already have used some concepts.  These concepts may be intuitive, they may constitute what philosophers call “folk theories,” or they may constitute scientific theories.  Historically, system theories (ST) and constituent theories (CT) are often developed independently, and there is no guarantee that their concepts will mesh.  What is the general nature of the relation between the theories?

 

[Diagram: systems (S) and constituents (C), the theories that represent them (ST and CT), and the relations among them, as described below.]

 

Systems (S) are represented (rep_S) by system theories (ST).  Constituents (C) are represented (rep_C) by constituent theories (CT), which dualists argue are unrelated to system theories, and which reductionists insist are also sufficient for all there is to know about systems.  Synthetic analysts argue that system and constituent theories can be connected to explain how systems are composed (comp) of constituents.  The connected theories explain how the system exerts macro constraints (mc) on its constituents, and how its own properties are determined by the micro mechanisms (mm) of the constituents.

At least three common attitudes exist.  The first asserts that ST and CT each generalizes in its own way, and theoretical connection between them is impossible.  Without theoretical connection, the notion of composition becomes obscure, and it is meaningless to talk of constituents.  We are left with two distinct types of systems described by two disjoint theories.  The result is dualism.

While dualism opts for isolation, reductionism opts for conquest.  It asserts that system concepts and theories are in principle superfluous.  They can be dispensed with and their territories annexed by constituent theories.  A single representation in terms of CT suffices.  It is a purely bottom-up approach, in which all the properties of the system are nothing but the mathematical consequences of the constituent theories.  The deductive and constructive approach is fruitful for small and simple systems, but it does not work for large and complex systems, because the combinatorial explosion generates overwhelming detail and complexity, so that a bottom-up approach quickly gets lost among all the trees and undergrowth.

For complex cases, the practical approach is first to get an aerial view of the forest, so that one does not lose one's way when descending among the trees.  The aerial view, which reductionism spurns, is crucial to synthetic analysis.  To gain an aerial view, you need proper equipment.  This is where the synthetic conceptual frameworks of the probability calculus and dynamics come in.  They accommodate bottom-up deduction, but guide it by a top-down perspective.

 

Analysis in a synthetic framework

If you examine how the sciences of large composite systems work, you will find that they do not put together constituents but take apart the systems they aim to understand.  They do not take the parts for granted but analyze the whole to find the parts appropriate for the mechanism underlying specific properties of the whole.  In short, their general theoretical approach is not constructive but analytic, analytic within a synthetic view.

 

[Diagram: synthetic analysis as a round trip from the system level down to the constituent level and back.]


Unlike holism, which stays at the top, and reductionism, which sticks to the bottom, synthetic analysis takes a round trip from the top to the bottom and back.  It encompasses two perspectives, looking at the system on its own level and looking at it on the level of its constituents.  To connect the two levels, it employs two kinds of explanations: macroexplanations and microexplanations.

Macroexplanations develop scientific concepts and theories for composite systems without mentioning their constituents.  They delineate system properties, represent them precisely, and find the causal regularities and laws among them.  Macroexplanations constitute the primary explanatory level of systems, and they enjoy a high degree of autonomy.  Hydrodynamics and thermodynamics can operate on their own.  However, for a full understanding of the systems, including their composition, macroexplanations are necessary but not sufficient.  For this we also need microexplanations that connect the properties delineated in macroexplanations to the properties of the constituents.  Microexplanation depends on macroexplanation, which first sets out what needs microexplanation.  Thus thermodynamics and hydrodynamics, which provide macroexplanations, matured before the development of statistical mechanics, which provides microexplanations.

Microexplanations use mathematical deduction as much as possible, but they also depend on ample realistic approximations.  They usually introduce their own postulates and assumptions that are not found in CT.  For example, statistical mechanics has its own postulate of equal weight.  Such extra postulates ensure the irreducibility of ST.  Microexplanations use both ST and CT essentially.  They explain system properties without explaining them away, as reductionism does.  They not only find the micromechanisms underlying various macroscopic properties, they also explain how the large structures of the systems constrain the behaviors of individual constituents.  They look at the whole causal structure spanning the system and the constituents from all angles, upward causation and downward causation, to get a comprehensive grasp of the complexity of composition.

 In short, the actual scientific approach to complex systems does not reduce the theoretical framework but expands it to accommodate more perspectives, more postulates, and more theoretical tools to filter out irrelevant microscopic details and define novel emergent macroscopic properties.  Multiplicity of approaches and models is a characteristic of sciences that wrestle with complex phenomena.

Multiplicity does not imply insularity.  Instead of incommensurate turfs, multidisciplinary centers proliferate.  For interdisciplinary cooperation to be possible, researchers must share certain general ideas that enable them to learn the specific knowledge acquired by alien disciplines.  These general ideas are most interesting to the philosophy of science.

 

Two classes of complexity: Mass phenomenon and nonlinear dynamics

Many natural phenomena are too complex for theorization.  Among those that have yielded to comprehensive theoretical representations, two classes stand out: mass phenomena and nonlinear dynamics.  They appear different, but in a general sense, they share the idea of complexity arising from composition, relation, and large size.

Mass phenomena occur in many-body systems, large systems made up of a great many interacting constituents belonging to a few types and interacting by a few types of relations.  Many-body systems are ubiquitous in the world; two examples are a solid made up of a few kinds of atoms and a national economy of consumers and producers.

Dynamics describes the temporal evolution of a dynamical system as a dynamical process governed by a dynamical equation.  The system itself may be a unitary entity without parts.  However, we can expand our conceptual framework to include time as the fourth dimension of the world.  Then a dynamical process appears as a composite whole comprising temporal parts.  Like a sequence of digits, a process is a one-dimensional entity made up of successive stages, each stage being the system’s state at a particular time.  The relation between two successive stages is governed by the dynamical equation.

Consider for example logistic systems governed by the dynamical equation

                        x_{n+1} = a x_n (1 - x_n),

where x_n represents a logistic system’s state at the nth time instant.  Do not be deceived by its simple appearance; logistic systems can exhibit chaos and other complex behaviors, as we shall see later.  Here it suffices to note that we can regard a logistic process as an entity made up of successive stages, (x_0, x_1, x_2, . . . , x_{n-1}, x_n, x_{n+1}, . . . ), the relation between x_n and x_{n+1} being provided by the logistic equation.  In this case, the “constituent level” describes the characteristics of individual stages x_n, and the “system level” describes dynamic processes as wholes.  It is by introducing “system level” concepts for processes as wholes that modern dynamics comes to grips with the complexity of nonlinear dynamics.

Chaos implies unpredictability, and logistic systems are chaotic when a assumes certain values.  However, one look at the equation will convince you that given x_n, x_{n+1} is easily predicted.  Indeed, that is why logistic systems and all dynamical systems are deterministic.  Unpredictability and the complexity it implies are significant only if we compare various processes in the long run, after the processes have accumulated many stages.
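
A short Python sketch of this point; the parameter values and the perturbation of one part in a million are chosen only for illustration.  Each step is trivially computable, yet for a = 4 two processes that start almost identically end up completely different after a few dozen stages, while for a = 2.5 they settle onto the same fixed point:

    # Iterate the logistic map x_{n+1} = a * x_n * (1 - x_n) for two nearly
    # identical initial states and compare the processes after 50 stages.
    def logistic_process(a, x0, steps):
        x = x0
        history = [x]
        for _ in range(steps):
            x = a * x * (1 - x)          # each single step is perfectly predictable
            history.append(x)
        return history

    for a in (2.5, 4.0):
        p1 = logistic_process(a, 0.300000, 50)
        p2 = logistic_process(a, 0.300001, 50)   # differs by one part in a million
        print(f"a = {a}:  |x_50 - x'_50| = {abs(p1[-1] - p2[-1]):.6f}")

    # For a = 2.5 the difference stays negligible; for a = 4.0 it grows to order one:
    # the unpredictability is a property of the process as a whole, not of any step.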

Mass phenomena and nonlinear dynamics cover a very wide range of topics in many sciences.  Nevertheless, they share some general commonalities.  The systems are large; many-body systems consist of millions or zillions of constituents; and novel features such as chaos show up in the long run.  The steps and constituents may be monotonous and predictable, but the systems they constitute may be highly volatile and unstable.  Chaos appears in deterministic processes.  When ice melts, its structure is destroyed.  Chaos and meltdown are examples of emergent properties.  Theoretically, the emergence of such large-scale properties is apparent only in a synoptic view that grasps the systems as wholes as well as their constituents.  Let us examine it more closely.

How science comprehends chaos
How science treats complexities of composition

First part of a talk presented at the Department of History and Philosophy of Science
University of Sydney
May 1999

 

Sunny Y. Auyang