Mass phenomena
and complexities of composition


Much of the complexity we see around us stems from a common source: structures generated by the interactive combination of many constituents.  The constituents themselves can be rather simple, and so can the relation between any two.  However, because there are so many constituents in a large system, their multiple relations generate a relational network that can be highly complex, variegated, and surprising.

Most interestingly, through countless relations pulling and tugging in all directions, sometimes the constituents organize themselves in such ways that the system as a whole exhibits large-scale patterns that can be conceptualized rather simply, just as crazy swirls of colors crystallize into a meaningful picture when we step back from the wall and take a broader view of a mural.  These salient patterns are the emergent properties of the systems.  They are absent in individual constituents, because they belong mainly to the structures arising from the relations among the constituents.  The rigidity of solids and the turbulence of fluids emerge from the intangible quantum phases of elementary particles; life emerges from inanimate matter; consciousness emerges from neural organization; social institutions emerge from individual actions.  Without emergent properties, the world would be dull indeed, but then we would not be here to be bored.

This talk examines two sciences of mass phenomena: economics and condensed matter physics.  Instead of trying to reduce everything to a single set of principles, each science employs a host of models and descriptive levels to explain the great variety of complexities and emergent properties arising in large-scale composition.


The combinatorial explosion and complexity

Notions of composition, self-organization, and the part-whole relation are inherent in most sciences.  Scientists everywhere wrestle with how things – atoms, cells, galaxies, societies – are made up of smaller entities.  The problem consumes so much research effort not only because everything in the universe, except electrons and quarks, is composite, but also because composition, especially large-scale composition, is a major source of complexity.  This is most apparent in the combinatorial explosion.

Consider a simple composite system made up of two kinds of constituents, e.g., black and white pixels.  If we string the pixels in one dimension, we get a binary string, the analogue of a sequence of coin tosses.  However, let us arrange the pixels in a square array.  If we neglect the relations among the pixels, then a system with n pixels is trivially the sum of x black pixels and n-x white ones.  What makes the system interesting is the relative arrangement of its constituents.  Taking account of the spatial relation among the pixels, a 2 × 2 array with 4 elements has 16 possible configurations; a 5 × 5 array has more than 33 million.  Generally, an array with n pixels has 2^n possible configurations.  As the number of a system's constituents increases linearly, the number of its possible configurations increases exponentially and quickly exceeds even the astronomical.  A mediocre printer has 320 pixels per inch, and good printers can quadruple that.  The combinatorial explosion explains why there are practically infinite varieties in pictures of any size.
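The exponential growth is easy to verify directly.  A minimal sketch (the function and its name are my own illustration, not part of the talk):

```python
# Combinatorial explosion: an array of n black-or-white pixels has
# 2**n possible configurations.

def configurations(side: int) -> int:
    """Configurations of a side x side array of black/white pixels."""
    return 2 ** (side * side)

for side in (2, 3, 5, 20):
    # Linear growth in constituents, exponential growth in configurations.
    print(f"{side} x {side}: {configurations(side):.3g} configurations")
```

A 20 × 20 array already admits about 2.6 × 10^120 configurations, far beyond any astronomical count.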

The combinatorial explosion

  Number of pixels      Number of configurations

  2 × 2                 2^4   =  16
  3 × 3                 2^9   =  512
  5 × 5                 2^25  ≈  3.4 × 10^7
  20 × 20               2^400 ≈  2.6 × 10^120


Notice that the combinatorial explosion is spectacular only for large systems.  Why are there only a hundred-odd kinds of atoms but virtually infinite kinds of larger things?  A major reason is that the strong nuclear interaction, which binds nucleons to form atomic nuclei, has a very short range.  The short range limits the number of nucleons in nuclei, and the size limitation puts a cap on combinatorial possibilities.  By contrast, the electromagnetic and gravitational interactions, which bind together atoms and larger entities, both have infinite range, so there is no limit to the size of composition, hence no limit to variety.


Emergent properties of composite systems

Faced with the combinatorial explosion, how do scientists systematically characterize and classify the properties of composite systems?  One possible way, called the microdescription, is to describe the system by describing each of its constituents, e.g., by specifying the color of each pixel in the array.  It works for small systems; we describe the solar system in terms of planetary orbits.  It is also useful in specifying particular large systems; that is the way pictures are encoded digitally.  However, science is not so much interested in particular cases as in typical variations.  Here microdescriptions become futile.  Of the 33 million possible configurations of a 5 × 5 array, only microscopic details separate one from another.  Overwhelmed by details, we see no meaningful bottom-up way for systematic classification.

Fortunately, large numbers can also spawn simplicity if viewed from proper perspectives.  As the number of pixels in the array increases, sometimes new regularities appear that make the idea of “picture” significant for the first time.  Switching from the level of pixels to the level of pictures, we recognize system-wide patterns describable concisely in their own terms, such as that the array is gray, or salt-and-pepper, or a geometric figure, a landscape, a portrait, a text.  You do not see myriad black and white pixels in this slide; you see letters.  The letters, figures, and other recognizable patterns are the emergent properties most interesting to theoretical science.  They constitute the macrodescriptions of the systems.

So we have two types of description for a composite system; how do we connect them?  The reductionist prescription is bottom-up: one should define macrodescriptions in terms of microdescriptions.  The actual scientific practice is top-down: scientists classify microdescriptions according to macro properties.

In synthetic analysis, we jump to the top for a proper perspective of the system as a whole, discern system-wide regularities, then use these regularities as criteria to classify and define the myriad possible configurations differing only in microscopic details.  For instance, we pick a group of arrays under the notion of “circle,” and use the characteristics of circles to define their typical configuration.  Such a top-down approach is the secret of the probability calculus.  It picks a gross feature of a sequence of coin tosses, say that the sequence contains 55% heads, then uses the feature to carve out a chunk of the sequences' state space and calculate its probability.  This approach is inherited by statistical mechanics.  At the heart of statistical mechanics are three “ensembles,” each of which uses a macroscopic property to pick out a group of microscopic configurations.  For instance, the canonical ensemble picks out a class of microscopic configurations by their conformity to the macroscopic property of a certain temperature.
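The state-space carving can be made concrete with a small count.  In this sketch (my own illustration; 55% of 20 tosses is rounded to 11 heads), one macroscopic property picks out all the microscopic sequences that share it:

```python
# Carve the state space of 20 coin tosses by a macro property:
# the number of heads.  One macrostate collects many microstates.
from math import comb

n = 20                      # tosses; 2**20 microscopic sequences in all
k = 11                      # 55% heads, rounded to a whole number of tosses

microstates = comb(n, k)    # sequences sharing the macro property
probability = microstates / 2 ** n

print(microstates, round(probability, 4))
```

One gross feature thus stands for 167,960 microscopically distinct sequences, just as the canonical ensemble lets a temperature stand for a whole class of microscopic configurations.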


Two sciences of mass phenomena

Many phenomena are too complex for theorization.  Among those that have yielded to comprehensive theoretical representations, two classes stand out: mass phenomena and nonlinear dynamics.  Condensed matter physics and economics are sciences of mass phenomena.


Two sciences of large-scale composition

                               Condensed-matter physics        Economics

  System                       solid, fluid                    decentralized economy
  System properties            strength, conductivity, . . .   allocation, production, . . .
  Constituents  i              ions, electrons                 consumers, producers
  Number of constituents       ~10^23                          millions
  Constituent properties  Pi   energy, momentum, . . .         taste, budget, profit, . . .
  Interaction  Rij             electromagnetism, collision     trade, contract


  Many-body problem:

  Known:  Typical properties and interactions of constituents, represented in terms of variables Pi, Rij

  To find:
                1.  Specific properties of individual constituents.
                2.  Typical properties of systems as wholes.
                3.  Relations between typical system and constituent properties.



Mass phenomena are the behaviors of many-body systems: large systems made up of a great many interacting constituents belonging to a few types and interacting through a few types of relations.  Many-body systems are ubiquitous in the physical, ecological, political, and socioeconomic spheres.  Familiar examples are solids, say gold bars.  The atoms in the solid may decompose into ions and electrons, which interact with each other via electromagnetism.  The solid has macroscopic properties such as strength, ductility, electric and thermal conductivities, and thermal expansion coefficients.

The modern individualistic and egalitarian society is a many-body system.  The systems studied in theoretical economics are decentralized economies made up of millions of consumers and producers.  Centrally planned economies are more suitable for structuralism and functionalism than for many-body theories, for the central planner has a controlling status, violating the requirement that the constituents of a many-body system be roughly equal in status.  Real-life consumers and producers are more complicated than electrons and ions.  In economic theories, however, they are grossly simplified and represented by variables such as the consumer's taste and budget and the producer's plan, technology, and profit.  Some theories also include partial information and knowledge.  The consumers and producers interact via trade, contracts, and other commercial activities.  Together they constitute a free market economy with a certain resource allocation, national product, inflation, unemployment, and other macroeconomic properties familiar from economic news.

A common reaction to comparisons between physics and other sciences is that people and organisms are not all the same; they vary, and people change as they interact with each other.  True, but electrons too vary, and electronic properties too change radically in different situations.

Some scholars deem many-body systems trivial because they consist of only a few types of constituents.  They tend to identify complexity with “heterogeneity,” a buzzword in technology studies.  Heterogeneity can undoubtedly contribute to complexity, but it seems to be neither necessary nor sufficient.  Insufficient, as a laundry list of the most heterogeneous miscellany hardly makes a complex system.  Unnecessary, as all the complexities of the digital world are generated by combining 0s and 1s, the ultimate candidates for homogeneous constituency.  Furthermore, the constituents of a many-body system are not as homogeneous as the scholars think.

To say that the constituents belong to one type does not mean that they are all the same.  We all belong to the species Homo sapiens, but that does not exclude infinite diversity among us.  Diversity among electrons is no less great; a law of quantum mechanics states that no two electrons can share exactly the same properties.  Entities of a type are similar in that their behaviors exhibit certain typicality that can be represented in general terms.  In theoretical sciences, a typical property is often represented by a mathematical variable or function, which can assume infinitely many values.  The function’s many values express the variety of specific properties of entities belonging to the type.  Examples of typical properties represented by mathematical functions are energy and momentum in physics and consumer taste and production plan in economics.

In a many-body problem, we assume that we know the typical properties and relations of the constituents.  We know how electrons typically move and repel one another, how consumers typically manage their budgets and trade with one another.  In physics at least, the typical properties are extrapolated from the studies of small systems.  They are usually well known and can be written down quite easily in terms of variables.  We do not know the specific properties of individual constituents in the system; specific properties such as how particular consumers fare in an economy belong to the solution of the many-body problem.

The central aim of many-body problems is the microexplanation that relates typical macroscopic properties of the systems to the typical properties of the constituents.  As it turns out, this is a very difficult problem, and there are many methods of solving it.


Models of many-body systems

Holding a magnifying glass to the wall, you see patches of cracked colors.  Standing back, you see figures with various facial expressions.  Retreating down the aisle, you see a mural, The Last Judgment, and understand why the expressions vary so drastically from joy to despair.  Similarly, the ability to adopt various intellectual focuses and perspectives suitable for various topics is essential to the study of complexity.  Scientists use different concepts and theories to describe large composite systems and their constituents, e.g., thermodynamics describes macroscopic systems and mechanics describes their molecular constituents.  The different theories are like the telescopes and microscopes we use to see distant and microscopic objects.

A striking feature of sciences of complexity is the diversity of their theoretical perspectives, models, and levels of description.  Economics divides into microeconomics and macroeconomics.  Condensed matter physics makes heavy use of statistical mechanics to connect mechanics on the micro level and thermodynamics on the macro level.


Three classes of equilibrium models with increasing systemic cohesion

                              Condensed-matter physics      Economics

  Independent individuals     self-consistent field         perfectly competitive market
                              electron gas

  Collective phenomena        theory of Fermi liquid        industrial organization
                              phonon, plasmon               information economics
                              elementary excitation         game theory

  Emergent properties         phase transition              endogenous growth
                                                            structural unemployment

Here I can barely touch a couple of models in physics and economics.  Both boast dynamic models for temporal changes and equilibrium models for unchanging configurations of composite systems.  Of equilibrium models, here are three classes that describe increasing systemic cohesion.

The perfectly competitive market theory in economics and the self-consistent field theory in physics are perhaps the most widely known models.  They are suitable for relatively simple many-body systems, where the bonds between the constituents are so weak that methods exist to approximate the constituents as independent individuals.

Going from independent individuals through collective phenomena to emergent properties, the bonds between constituents tighten and the systems become more cohesive.  Tighter integrations generate more textures and complexities.  Their theoretical treatments become more difficult. 

In both sciences, independent-individual models matured first.  The Hartree-Fock approximation in physics appeared in the late 1920s, shortly after the advent of quantum mechanics.  Although phase transitions have long been familiar, their microexplanations did not take off until 1970, with the introduction of the renormalization group.  Similar progress occurred in economics.  The Arrow-Debreu model in microeconomics appeared in the early 1950s.  Although von Neumann and Morgenstern introduced game theory into economics in 1944, its application to information economics and industrial organization mushroomed only in the last twenty years.  Emergent properties such as endogenous growth and structural unemployment are still at the frontier of research.  The three waves are not merely academic fashions.  The models address different types of objective phenomena, and their successive appearance reflects how science proceeds step by step to confront more and more difficult problems.


Self-consistent independent-individual models



FIG 1. (a)  In a many-body system, each constituent i has property Pi and binary relation Rij with every other constituent j.  The result is a complicated relational network.  (b) In independent-individual models, a part of the relations is folded into the situated property P*i of each constituent, another part of the relations is fused into a situation S created by all.  The reformed constituents have no explicit connection to each other.  They influence each other only through their individual contribution and response to the common situation.

Consider a system that consists of a single type of constituent with typical property P and a single type of binary relation R.  Each constituent i in the system has property Pi and relation Rij to every other constituent j.  Although each binary relation connects only two constituents, each constituent can engage in as many binary relations as there are partners.  This forms a complicated relational network.  Suppose we change a single constituent.  The effect of its change is passed on to all its relational partners, which change accordingly, and in turn pass their effects to their relational partners.  To track all the changes would be an intractable problem.  Without some simplifying approximations, scientists would be stuck in the relational network like a butterfly trapped in a spider's web.
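The size of the network is easy to quantify.  In this sketch (my own illustration), n constituents with one type of binary relation support n(n-1)/2 pairwise relations, so the network grows quadratically while the constituents grow only linearly:

```python
# Pairwise relations R_ij in a many-body system of n constituents.

def pair_count(n: int) -> int:
    """Number of distinct pairs {i, j} with i != j."""
    return n * (n - 1) // 2

for n in (10, 100, 10_000):
    # 10 constituents already give 45 relations; 10,000 give ~5 * 10**7.
    print(n, pair_count(n))
```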

Relations generate complexity and make the problem difficult.  The crudest approximations simply throw them away, but this usually does not work.  For relations cement the constituents, ensure the integrity of the composite system, and make the whole more than the sum of its parts.  To take account of relations, scientists reformulate the many-body problem.

In a widely used strategy, scientists analyze the system to find new constituents whose behaviors automatically harmonize with each other so that they naturally fit together without explicit relations.  In doing so, they divide the effects of the original relations into three groups.  The first group of relational effects is absorbed into the situated properties P*i of the newly defined constituents.  The second group is fused into a common situation S to which the new constituents respond.  Whatever relational effects are not accounted for in these two ways are neglected.  These steps transform a system of interacting constituents into a more tractable system of noninteracting constituents with situated properties, responding independently to a common situation jointly created by all.

In the reformulation of the problem, the original constituents are replaced by new entities with a typical situated property P*i that is custom-made for a situation S.  The constituents respond only to the situation and not to each other.  The situation in turn is created by the aggregate behaviors of the situated entities.  In the new formulation, the troublesome double indices in the relation Rij are eliminated.  There is only the single index indicating individual constituents.  Once we find the typical situated property P*i and the distribution of constituents having various values of the property, statistical methods enable us to sum them to obtain system properties.  The result is the independent-individual model.  It is akin to Leibniz's monadology, featuring a set of monads with no window to communicate with each other but with properties that automatically cohere into a harmony.  The difference is that here the harmony, the situation S, is not pre-established by God but falls out self-consistently with the monadic properties P*i.

By absorbing relations, the situated property P*i of the reformed constituents can be very different from the property Pi of the original constituents.  Take for example the electron, whose property is essentially its energy-momentum relation E(k).  In free space, the electronic property is quite simple.  Now consider an electron in a solid, e.g., a silicon crystal.  The crystal is a lattice of positively charged ions.  Each ion exerts an electromagnetic force on the negatively charged electron, and the force's strength varies with the distance between them.  It would be hopeless to keep track of all the electromagnetic interactions as the electron moves among zillions of ions.  Physicists examine the interactions, observe their regularities, and fold their essential effects into the electron's energy-momentum relation.  As you can see, the situated property E*(k) of an electron-in-silicon is much more complicated than the property E(k) of an electron in free space – without the complications, electronic processes that make possible computer chips and most other electronic gadgets would not occur.  Theoreticians can treat the electron with E*(k) as if it were an independent entity oblivious of the silicon lattice – it has internalized the lattice effects in its situated property to a reasonable approximation.  The independent-electron approximation greatly simplifies the calculation of electronic processes.  (This example illustrates only the situated property P*i, not the common situation S.)
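A toy contrast shows how a situated dispersion differs from the free one.  This is my own illustration, not silicon's actual band structure: a one-dimensional tight-binding band E*(k) = -2t cos(ka), with a hypothetical hopping energy t and lattice constant a = 1, set against a free-particle E(k) proportional to k².

```python
# Free-electron dispersion vs. a toy electron-in-lattice band.
import math

def E_free(k: float) -> float:
    """Free space: an unbounded parabola (units with hbar**2 / 2m = 1)."""
    return k ** 2

def E_lattice(k: float, t: float = 1.0) -> float:
    """Tight-binding band: the lattice's effects are folded into E*(k)."""
    return -2.0 * t * math.cos(k)

ks = [i * math.pi / 100 for i in range(-100, 101)]
band = [E_lattice(k) for k in ks]

# Unlike the free parabola, the situated dispersion is bounded between
# -2t and +2t: a 'band', whose edges and gaps shape electronic behavior.
print(min(band), max(band))
```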



An electron with property E(k) in a silicon lattice is subject to electromagnetic forces from zillions of lattice ions.  In the independent-particle approximation, the effects of the electron-lattice interaction are folded into the situated property E*(k) of the electron-in-silicon, which is theoretically treated as if it is “free,” i.e., as if the lattice is not there.

Independent particle approximations flourish in many branches of physics.  They have a variety of names, one of which is self-consistent field theory, for the situation S takes the form of an effective field that is determined self consistently with particle properties.  Another common name is the Hartree-Fock approximation for quantum mechanical systems generally.

The self-consistent field theory in physics has a close analog in microeconomics: the general equilibrium theory of the perfectly competitive market, commonly referred to as the Arrow-Debreu model.  It has long been the cornerstone of microeconomics, and many economists still take it to be so.  In real life, people bargain and trade with each other, and corporations compete with each other by price wars and things like that.  However, these commercial interactions, represented by Rij in Fig. 1a, are absent in perfectly competitive market models.  Instead, the models feature the market with a set of uniform commodity prices, which constitute the situation S in my terminology.  The constituents of the economy, consumers and producers, have their situated properties P*i defined in terms of the prices.  In economics, the property of an individual is literally what he possesses, and the prices determine the property by constraining the consumer's budget and the producer's profit.  The consumers and producers do not bother with each other; all they see are the market prices of commodities, according to which they decide what they want to buy or produce.  They respond only to the market and to no one else.  It is crucial that the commodity prices are not proclaimed by a central planner.  Rather, prices are determined by the equilibration of aggregate demand and supply, that is, by the properties of the consumers and producers.  In this way, individual properties and the common situation are determined self-consistently.  And the economy becomes a Leibnizian world with windowless monads in a market-established harmony.
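The self-consistency can be caricatured in a few lines.  This is my own toy, not the Arrow-Debreu model: linear demand and supply curves with invented coefficients, and a price that adjusts until the aggregate behaviors it constrains also reproduce it:

```python
# Toy price equilibration: the 'situation' (the price) is created by
# the aggregate demand and supply that respond only to the price.

def demand(p: float) -> float:
    return 10.0 - 2.0 * p          # consumers see only the price

def supply(p: float) -> float:
    return 1.0 + 1.0 * p           # producers see only the price

p = 1.0                             # initial guess
for _ in range(1000):
    excess = demand(p) - supply(p)
    p += 0.1 * excess               # raise the price when demand exceeds supply

print(round(p, 4))                  # clearing price where demand equals supply
```

For these curves the iteration settles at p = 3, where demand and supply both equal 4.  No agent ever refers to another, yet the price they jointly create is self-consistent with their individual responses.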

The independent-individual approximation has its limitations, but it is powerful and versatile.  The Hartree-Fock approximation is still a workhorse in physics; in many complex cases, it is the only manageable approximation, although physicists know that it is not satisfactory.  I think the independent-individual approximation will spread to other social sciences.  Families and communities are collective unities; as they break up and people are individually drawn into the impersonal media of television and the internet, our society is becoming more and more suitable for independent-individual models.  To interpret the models correctly, we must remember that the characteristics of the rugged individuals have already internalized much social interaction, and that their situation is not externally imposed but endogenously determined.


Synthetic analysis of phase transition

I cannot go into other models, but will use phase transition as an example to sum up the major points discussed earlier.

We call a catastrophe a meltdown, as in the recent meltdown of the Asian economies.  Such catastrophes occur whenever you put ice cubes in your drink.  In phase transitions such as melting, the entire structure of a system changes radically; that's instability.  The transformations of solid into liquid and liquid into gas are not the only phase transitions.  Another example is the transformation of iron from a paramagnetic phase to a ferromagnetic phase, where it becomes a bar magnet.  A third example is the structural transformation in binary alloys such as brass.  Phase transitions occur only in large systems; a few H2O molecules constitute neither solid nor liquid, not to mention transformation.

Melting and evaporation are familiar, and intuitively we know they must involve some radical rearrangement of atoms.  However, physicists do not write down the quantum mechanical equations for a bunch of H2O molecules and try to deduce phase transition; brute-force reductionism does not work.  The intuitive notion of phase transition is too vague to give hints about what one should look for in the jungle of the combinatorial explosion.  To understand phase transition, physicists started from macroexplanations in thermodynamics.  They systematically studied the macroscopic behaviors of all kinds of phase transitions, and introduced concepts such as the order parameter and broken symmetry to represent the causal regularities in the synoptic view.  They discovered that as the systems approach their phase transition temperatures, their thermodynamic variables change in a similar way that can be represented by certain parameters called the critical exponents.  Furthermore, the critical exponents of different thermodynamic variables are related by simple laws called scaling laws.  Most interestingly, the critical exponents and scaling laws are universal.  Systems as widely different as liquid-gas and paramagnetic-ferromagnetic transitions share the same exponents and laws.  Never mind the technical details; just notice the contribution of macroexplanations.  They make the notion of phase transition precise by bringing out the important features that call for microexplanation: critical exponents and scaling laws.  Furthermore, they offer a clue about what to look for: universality implies that the details of microscopic mechanisms are irrelevant on the large scale.  Why?
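Before turning to the why, the order parameter itself can be made concrete.  This is my own sketch in reduced units, the mean-field (Curie-Weiss) magnet: the magnetization m solves m = tanh(m/t), and it vanishes above the critical temperature t = 1.

```python
# Mean-field order parameter: solve m = tanh(m / t) by fixed-point
# iteration (reduced units, critical temperature t = 1).
import math

def magnetization(t: float) -> float:
    m = 1.0                          # start on the ordered side
    for _ in range(10_000):
        m = math.tanh(m / t)
    return m

for t in (0.5, 0.9, 1.1, 1.5):
    # Nonzero magnetization below t = 1; zero above it.
    print(t, round(magnetization(t), 4))
```

Near the transition this mean-field solution vanishes as m proportional to (1 - t)^(1/2), a crude example of a critical exponent; the exponents actually observed require the renormalization group.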

Microexplanations that answer the questions posed by macroexplanations appeared in the early 1970s with the renormalization group.  They explain how the peculiarities of the microscopic mechanisms are screened out as we systematically proceed to coarser and coarser grained views.
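The screening-out can be mimicked with a toy coarse-graining step.  This is my own sketch in the block-spin spirit, majority rule over 3 × 3 blocks, not a full renormalization-group calculation:

```python
# Block-spin coarse graining: replace each 3x3 block of +1/-1 'spins'
# by its majority sign.  Microscopic defects wash out; the large-scale
# pattern survives.

def coarse_grain(grid):
    """One majority-rule step on a square +1/-1 grid (side divisible by 3)."""
    n = len(grid)
    out = []
    for i in range(0, n, 3):
        row = []
        for j in range(0, n, 3):
            s = sum(grid[i + a][j + b] for a in range(3) for b in range(3))
            row.append(1 if s > 0 else -1)
        out.append(row)
    return out

# A mostly 'up' region with two microscopic defects.
grid = [[1] * 6 for _ in range(6)]
grid[0][1] = grid[4][4] = -1

print(coarse_grain(grid))   # the coarse view is uniformly 'up': [[1, 1], [1, 1]]
```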

In sum, to understand a class of familiar phenomena, a macroscopic science was first developed to represent clearly their causal regularities, and then a microexplanation was developed to find the underlying mechanism.  The irrelevancy of micro details and the universality of macro laws justify the autonomy of macroexplanations.  On the other hand, the microexplanations cement the federal unity of the two descriptive levels.  In phase transition, we have a classic example of synthetic analysis, of how theoretical science tackles complex systems.

Talk presented at Eidgenossische Technische Hochschule
December 6, 1999

Sunny Y. Auyang