Progress Report

Modeling in Engineering—The Challenge of Multiple Scales

by Rob Phillips



Whether we consider the design of a new generation of airliners such as the Boeing 777 or the development of the latest microprocessors, engineering relies increasingly on mathematical models to characterize these technologies. In the case of the 777, sophisticated models of the fluid mechanics of air flow over the wings were an integral part of the design process, just as structural mechanics models ensured that flight in turbulence would lead to nothing more grave than passenger discomfort.

Models of the complex materials that make up our modern technologies also pose a wide range of scientific challenges. Indeed, many of the most important recent advances in the study of materials, which have resulted in entirely new classes of materials such as the famed oxide high-temperature superconductors or fullerenes and their structural partners known as carbon nanotubes, have engendered a flurry of modeling efforts.

Important problems that such modeling must confront are those of an intrinsically multiscale nature. What this means is that analysis of a given problem requires simultaneous consideration of several spatial or temporal scales. This idea is well represented in drawings made some 500 years ago by Leonardo da Vinci, in which the turbulent flow of a fluid is seen to involve vortices within vortices over a range of scales. This sketch (see Fig. 1) serves as the icon for the new Caltech center known as the Center for Integrative Multi-scale Modeling and Simulation (CIMMS) [see article this issue]. CIMMS brings together faculty members from several different Options and Divisions, including Professors K. Bhattacharya (Mechanical Engineering), E. Candes (Applied & Computational Mathematics), J. Doyle (Control & Dynamical Systems, Electrical Engineering, and Bioengineering), M. Gharib (Aeronautics and Bioengineering), T. Hou (Applied & Computational Mathematics), H. Mabuchi (Physics and Control & Dynamical Systems), J. Marsden (Control & Dynamical Systems), R. Murray (Control & Dynamical Systems and Mechanical Engineering), M. Ortiz (Aeronautics and Mechanical Engineering), N. Pierce (Applied & Computational Mathematics), R. Phillips (Mechanical Engineering and Applied Physics), and P. Schröder (Computer Science and Applied & Computational Mathematics).

The aim of multiscale modeling is to construct models of relevance to the macroscopic scales usually observed in experiment and tailored in the engineering process, without losing sight of the microscopic processes that may dictate behavior at the macroscale. For example, although the relation between force and extension of a material can be observed macroscopically, it is often complex microscopic processes that give rise to the macroscopic force-extension curves. Examples include the breaking of hydrogen bonds during protein deformation and the motion of defects in the deformation of crystalline solids.

Figure 1. Sketch by Leonardo da Vinci illustrating the sense in which the turbulent flow of a fluid is a multiscale phenomenon. Parcels of fluid in a turbulent flow with a net rotation, vortices, are organized hierarchically in such a way that there are vortices within vortices.

A key outcome of the use of computers in science and engineering has been the ability to solve problems of ever-increasing complexity. Whereas the tools of nineteenth-century mathematical physics emphasized geometries of high symmetry (such as spheres and cylinders, each associated with a set of special functions such as the Legendre polynomials or Bessel functions), current modeling aims to consider problems in their full three-dimensional complexity. The key advance enabling such calculations is high-speed computation. As a representative case study of the high level to which such models have been taken, Fig. 2 shows the computational grid (finite-element mesh) used to model a human kidney subjected to ultrasonic shock waves, with the aim of degrading kidney stones (shock-wave lithotripsy). As noted above, no assumptions are required concerning the symmetry of the body. Moreover, the level of spatial resolution used to construct models of systems of interest may vary from one position in the system to another. Indeed, the finite-element method serves as a powerful tool in the multiscale modeling arsenal. Efforts in the Phillips group and that of Michael Ortiz are aimed at bringing these methods to bear on problems ranging from the deformation of dense metals such as tungsten, to the fragmentation of human bone, to the deformation of individual proteins.
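
To give a flavor of how the finite-element method works, the short sketch below (a minimal one-dimensional example in Python, and emphatically not the lithotripsy code itself) solves a toy model problem on a deliberately graded mesh; the model equation, the mesh grading, and the number of nodes are all illustrative assumptions, chosen only to show how spatial resolution can be varied from place to place.

```python
import numpy as np

# Minimal 1D finite-element sketch: solve -u''(x) = 1 on [0, 1] with u(0) = u(1) = 0
# using piecewise-linear elements on a *graded* mesh (fine near x = 0, coarse near x = 1).
# Illustrative only; the kidney simulation of Fig. 2 uses 3D meshes and far richer physics.

n_nodes = 21
s = np.linspace(0.0, 1.0, n_nodes)
x = s**2                            # grading: node spacing grows with x, so resolution varies in space

K = np.zeros((n_nodes, n_nodes))    # global stiffness matrix
f = np.zeros(n_nodes)               # global load vector

for e in range(n_nodes - 1):        # assemble element by element
    h = x[e + 1] - x[e]
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    fe = 0.5 * h * np.array([1.0, 1.0])                    # element load for f(x) = 1
    K[e:e + 2, e:e + 2] += ke
    f[e:e + 2] += fe

# Dirichlet boundary conditions: eliminate the first and last nodes
u = np.zeros(n_nodes)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], f[1:-1])

exact = 0.5 * x * (1.0 - x)         # exact solution of this model problem
print("max nodal error:", np.abs(u - exact).max())
```

The point of the grading is that refinement can be concentrated wherever the physics demands it, which is precisely the property exploited in the kidney mesh of Fig. 2.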

One of the precepts that presides over the field of computational science and engineering is Moore's law, which calls for a doubling in the number of transistors per integrated circuit every 18 months. For those of us who exploit computers to solve complex problems, this doubling translates into ever-increasing computational resources. From many perspectives, Moore's law should be seen as an expression of unbridled optimism, one that has set the agenda respected in the semiconductor technology roadmap (http://public.itrs.net). It also serves as a guide to understanding the way in which the resources available to computational scientists have increased since the first models were solved on primitive vacuum-tube computers.

Figure 2. Computational mesh used to evaluate the mechanical response of a kidney to ultrasonic shock waves (courtesy of Kerstin Weinberg and Michael Ortiz).

Figure 3. Illustration of the relation between molecular and continuum descriptions of the internal state of a gas. This figure comes from the original paper of Daniel Bernoulli, one of the developers of the multiscale modeling paradigm known as the kinetic theory of gases.

On the other hand, for those interested in brute-force atomic-level calculation of the properties of materials (or any of a wide range of other problems occurring in fluid mechanics, meteorology, computational biology, etc.), Moore's law paints an altogether gloomier picture. To see this, we need only remark that the number of atoms in a cubic micron of material is roughly 10¹⁰, since about 3,000 atoms fit along each edge of such a cube. Calculations of this size are at least three orders of magnitude larger than the 10 million atoms reached on today's best supercomputers for the simplest materials. Worse yet, this is but one facet of the problem. Just as the maximum size accessible by direct numerical calculation is too small, so too are the intervals of time being simulated, with the current standard being that a nanosecond of simulated time (10⁻⁹ seconds) counts as a long simulation. To drive home this point, we note that if our interest is in the simulation of semiconductor processing, we will need to simulate micron-sized regions for times far in excess of the nanosecond simulation times described above. Similarly, should our interest be in simulating the properties of the basic building blocks of life, what Francis Crick referred to as the "two great polymer languages," nucleic acids and proteins, there too we are faced with the simulation of scales in both space and time that will continue to defy our current brute-force computational schemes.
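
The arithmetic behind this pessimism is easy to reproduce. The back-of-the-envelope script below assumes an interatomic spacing of roughly 0.3 nm and a molecular-dynamics time step of about a femtosecond, both typical values rather than numbers taken from any particular simulation.

```python
import math

# Back-of-the-envelope arithmetic for brute-force atomistic simulation.
# The interatomic spacing (~0.3 nm) and the MD time step (~1 fs) are typical,
# assumed values, not numbers from any particular calculation.

spacing = 0.3e-9                 # interatomic spacing, m (~3 angstroms)
edge = 1.0e-6                    # edge of a cubic micron, m

atoms_per_edge = edge / spacing              # roughly 3,000 atoms along each edge
atoms_in_cube = atoms_per_edge ** 3          # roughly 10^10 atoms in the cube
print(f"atoms in a cubic micron: {atoms_in_cube:.1e}")

# Compare with the ~10 million atoms reached in today's largest direct simulations
shortfall = atoms_in_cube / 1.0e7
doublings = math.log2(shortfall)
print(f"shortfall: {shortfall:.0f}x, i.e. about {doublings:.0f} Moore's-law doublings "
      f"(~{1.5 * doublings:.0f} years at 18 months per doubling)")

# Time is no kinder: a femtosecond time step means that a single nanosecond of
# simulated time already requires a million steps.
steps_per_ns = 1.0e-9 / 1.0e-15
print(f"MD steps per simulated nanosecond: {steps_per_ns:.0e}")
```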

As an antidote to this scourge on the face of computational science, workers from a host of different fields ranging from applied mathematics to meteorology to computational biology are engaged in work that has been dubbed "multiscale modeling." From a computational perspective, the premise of multiscale modeling is that new methods must be developed as alternatives to the full brute-force approach described above. Though this vibrant field has been hyped by giving it a special name, I suggest that multiscale modeling is really as old as science itself and was being practiced by Newton when he treated the Earth as a point mass, by Hooke when he treated a spring as an elastic continuum, by Bernoulli in the development of the kinetic theory of gases, by Lorentz in his early and primitive models of the absorption of light in crystalline solids, and by Einstein in his treatment of both Brownian motion in liquids and the specific heats of crystalline solids. What all of these modeling efforts have in common is the idea of starting with a picture of the material of interest which is oppressively complex and finding a way to replace that complexity with a "coarse-grained" model. Said differently, such models can be thought of as viewing the problem of interest at lower resolution. An example from everyday experience comes from looking out the window of an airplane flying at 30,000 feet. At this resolution, forests are smeared out and topographical features smaller than several meters are no longer observable. Nevertheless, for understanding the overall forestation and topography of a given region, this level of resolution is likely more useful than a more accurate rendering at the meter scale.

Figure 4. Experimental apparatus used by Robert Hooke in his elucidation of the laws of elasticity.

History is replete with beautiful examples in which multiscale modeling ideas have been used to characterize a range of problems. One such example is related to the following question: given that a gas is a collection of atoms, is it possible to replace models of the gas which acknowledge the underlying graininess of matter by those in which the atomic degrees of freedom are smeared out into continuous fields such as density, temperature, and pressure? Of course, it is well known that the answer to this query can be posited in the affirmative. Further, it is through the multiscale vehicle of the kinetic theory of gases that this transformation in perspective is made.

As illustrated in Fig. 3, a gas may be thought of as a collection of molecules, each engaged in its own jiggling dance until, by chance, one molecule collides either with another molecule or with the surrounding walls. The realization of the early thermodynamicists was that the momentum delivered to the walls by all such collisions per unit time corresponds to our macroscopic impression of the pressure exerted on the walls by the gas. Through a well-defined statistical formalism, statistical mechanics and the kinetic theory of gases instruct us how to compute the macroscopic average quantities measured in the lab as a function of the underlying molecular coordinates. For the present argument, the key point is that by evaluating the molecular mechanics of the various collisions between molecules, it is possible to compute parameters such as viscosity, which show up in higher-level continuum descriptions of the fluid. Simple parameters such as viscosity capture the details of the underlying microscopic collisions and allow us to replace those details with continuum notions: an example of multiscale modeling at its best.
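
A tiny numerical experiment makes this coarse graining concrete. The sketch below samples molecular velocities from the Maxwell-Boltzmann distribution and recovers the macroscopic pressure of an ideal gas from purely molecular quantities; the choice of gas (argon), the temperature, and the box size are illustrative assumptions rather than anything taken from Bernoulli.

```python
import numpy as np

# Kinetic theory in miniature: recover the macroscopic pressure of an ideal gas
# from sampled molecular velocities. The gas (argon), temperature, and box volume
# are illustrative assumptions.

kB = 1.380649e-23          # Boltzmann constant, J/K
m = 6.63e-26               # mass of an argon atom, kg
T = 300.0                  # temperature, K
N = 1_000_000              # number of sampled molecules
V = 1.0e-21                # box volume, m^3 (a cube 0.1 micron on a side)

rng = np.random.default_rng(0)
# Each velocity component is Gaussian with variance kB*T/m (Maxwell-Boltzmann)
vx = rng.normal(0.0, np.sqrt(kB * T / m), size=N)

# Momentum delivered to the walls by collisions averages to n * m * <vx^2>
p_kinetic = (N / V) * m * np.mean(vx**2)
p_ideal = N * kB * T / V           # the continuum-level ideal-gas law

print(f"pressure from molecular velocities: {p_kinetic:.4e} Pa")
print(f"pressure from the ideal-gas law:    {p_ideal:.4e} Pa")
```

The two numbers agree to within the statistical noise of the sample, which is the whole point: once the average is in hand, the individual molecular coordinates can be forgotten.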

Work in the same vein as the kinetic theory of gases has continued unabated and now represents a cornerstone of the modern approach to understanding materials ranging from steel to proteins. In the remainder of this article, we examine one corner of this vast field, whose first objective is to understand, and ultimately to design and control, the response of materials when they are subjected to an applied force.

One of the key ways to understand different materials is to subject them to different external stimuli and watch their attendant responses. One classic example of this strategy is embodied in the formulation of the laws of elasticity. Using experimental apparatus like that shown in Fig. 4, Robert Hooke measured the extension of material bodies as a function of the imposed load and thereby formulated his justly famous law, which he expressed as the anagram CEIIINOSSSTTUV; unscrambled, it reads Ut tensio, sic vis: "As the extension, so is the force." In modern parlance, this is written

σ = Eε:

stress is proportional to strain, with the constant of proportionality given by the Young's modulus, E. This basic idea jibes with our intuition: the harder you pull on something, the more it stretches. Similar proportionalities have been formulated for material response in other settings, such as the relation between current and voltage (Ohm's law) and that between the diffusive flux and the concentration gradient (Fick's law). In each of these cases, the basic idea can be couched in the following terms:

response = material parameter × stimulus
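
As a concrete instance of this recipe, the short calculation below applies Hooke's law to a tungsten rod in tension; the rod dimensions, the load, and the value of the Young's modulus are illustrative assumptions.

```python
import math

# Linear (Hookean) response: response = material parameter x stimulus.
# Illustrative, assumed numbers: a tungsten rod, 1 m long and 1 cm in diameter,
# loaded in tension by 10 kN; E for tungsten is taken as roughly 400 GPa.

E = 400e9                          # Young's modulus, Pa (assumed)
L = 1.0                            # rod length, m
d = 0.01                           # rod diameter, m
F = 1.0e4                          # applied force, N

A = math.pi * (d / 2) ** 2         # cross-sectional area, m^2
stress = F / A                     # sigma = F / A
strain = stress / E                # Hooke's law: sigma = E * epsilon
elongation = strain * L

print(f"stress: {stress/1e6:.1f} MPa, strain: {strain:.2e}, "
      f"elongation: {elongation*1e6:.0f} microns")
```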

However, as one might guess, once the driving force (i.e., the stimulus) becomes too large, the simple linear relation between stimulus and response breaks down and calls for more sophisticated analysis. A particularly compelling example of these ideas is presented in the emerging field of single-molecule biomechanics, in which the force-extension curves of individual molecules, such as the muscle protein titin, are measured using the atomic-force microscope. An example of such a curve is shown in Fig. 5. The vertical axis shows the applied force (measured in piconewtons), while the horizontal axis shows the extension of the molecule (measured in nanometers). What is remarkable is that the molecule goes through a series of episodes in which the load increases (corresponding to the elastic stretching of the various domains), each followed by a precipitous drop in the load (corresponding to the breaking of a collection of hydrogen bonds in one of the globular domains of the protein).
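
Between such unfolding events, the rising portions of curves like the one in Fig. 5 are commonly fit with the worm-like-chain model of polymer elasticity. The sketch below evaluates the standard Marko-Siggia interpolation formula for the worm-like chain; the persistence length and contour length used here are illustrative values, not parameters fit to the data of Fig. 5.

```python
# Worm-like-chain (Marko-Siggia) force-extension relation, often used to fit the
# rising portions of single-molecule pulling curves. The persistence length and
# contour length below are illustrative assumptions, not fits to Fig. 5.

kB = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                # temperature, K
P = 0.4e-9               # persistence length, m (~0.4 nm, typical of an unfolded chain)
L = 60e-9                # contour length of the stretched segment, m (assumed)

def wlc_force(x):
    """Force (N) needed to hold the chain at end-to-end extension x (m)."""
    z = x / L
    return (kB * T / P) * (0.25 / (1.0 - z) ** 2 - 0.25 + z)

for z in (0.2, 0.5, 0.8, 0.9, 0.95):
    f_pN = wlc_force(z * L) * 1e12
    print(f"extension {z*L*1e9:4.0f} nm -> force {f_pN:6.1f} pN")
```

The force rises gently at small extensions and then steeply as the chain approaches its contour length, which is just the nonlinear behavior visible in each rising segment of the measured curve.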

Figure 5. Schematic of the force-extension measurement and the force-extension curve for the muscle protein titin. As shown in (A), the molecule is stretched using the atomic-force microscope, leading to (B), a force-extension spectrum that serves as a mechanical fingerprint for the molecule of interest (courtesy of Julio Fernandez).

A second example of this same type of massively nonlinear deformation is revealed by the process used to create the tungsten filaments that light our homes every evening. In this case, a cylindrical specimen of tungsten, roughly a meter long and several centimeters in diameter, is put through a series of deformation steps in which the tungsten is progressively elongated. By the end of this process, the tungsten rod of original length on the order of a meter has now been stretched to a length of hundreds of kilometers. This process takes place without changing the overall volume of the rod. We leave it to the reader to work out what this implies about the final diameter of the tungsten filament.
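
For readers who wish to check their answer, the one-line calculation below applies volume conservation to a set of assumed numbers (a rod one meter long and three centimeters in diameter, drawn out to three hundred kilometers) consistent with the rough figures quoted above.

```python
import math

# Volume conservation during wire drawing: (pi/4) * d0^2 * L0 = (pi/4) * df^2 * Lf,
# so df = d0 * sqrt(L0 / Lf). The numbers below are illustrative assumptions.

d0 = 0.03        # initial diameter, m (a few centimeters)
L0 = 1.0         # initial length, m
Lf = 3.0e5       # final length, m (hundreds of kilometers)

df = d0 * math.sqrt(L0 / Lf)
print(f"final filament diameter: about {df*1e6:.0f} microns")
```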

The nonlinear deformation of either proteins or tungsten (and most everything in between) is an intrinsically multiscale problem because in each case the macroscopic force response is engendered by microscopic processes. In the case of the deformation of a protein like that shown in Fig. 5, it is the breaking of particular sets of hydrogen bonds that gives rise to the steep drops in the force-extension curve, bonds characterized by a length scale of 10⁻¹⁰ m rather than the 10⁻⁸ m typical of the measured force-extension curves. Similarly, in the deformation of tungsten, it is the motion of atomic-scale defects known as dislocations that gives rise to the overall plastic deformation. As a result, in both of these cases a bridge is required which allows a modeling connection to be made between "microscopic" processes such as bond breaking and macroscopic observables such as the force-extension curve. Efforts in the Phillips group and that of Michael Ortiz have been aimed at constructing multiscale models which are sufficiently general to treat the force-extension curves in materials ranging from proteins to tungsten.

An intriguing alternative to the atom-by-atom simulation of force-extension curves like those discussed above has been the development of new techniques in which high resolution is kept only in those parts of the material where it is really needed. We close this essay with a brief exposition of the use of these methods to examine the way in which defects give rise to plastic deformation in strained materials, and how by virtue of entanglements of these defects, such materials are hardened. Without entering into a detailed exposition of the character of defects that populate materials, we note again that the plastic deformation of materials is often mediated by defects known as dislocations. Roughly speaking, dislocations are the crystal analog of the trick one might use to slide an enormous carpet. If we imagine such a carpet and we wish to slide it a foot in some direction, one way to do so is by injecting a bulge from one side as shown schematically in Fig. 6. Hence, rather than having to slide the whole carpet homogeneously, we are faced instead with only having to slide a little piece with a width equal to that of the bulge. Nevertheless, the net result of this action is overall translation of the carpet. This same basic idea is invoked in the setting of stressed crystals where the sliding of one crystal plane with respect to another is mediated by a line object (like the bulge described above) on which atomic bonds are being rearranged.

Figure 6. The sliding of a carpet by injecting a bulge is analogous to the deformation of crystals by injecting dislocations.

One of the key features of deformed crystals is the fact that the defects described above can encounter other such defects which exist on different crystal planes. The net result is the formation of a local entanglement known as a dislocation junction. The formation of such entanglements has the observable consequence that the crystal is harder: the critical stress needed to permanently deform the material (i.e., the plastic threshold) is raised by the presence of junctions. Although this entanglement is ultimately and intrinsically a particular configuration of the various atoms that make up a material, by exploiting ideas from elasticity theory it is possible to represent all of this atomic-level complexity in terms of two interacting lines. For present purposes, the replacement of the all-atom perspective by an elastic theoretical surrogate is exactly the type of multiscale analysis argued for earlier in this essay.

Figure 7 shows the structure of such a dislocation junction as computed not by considering the atoms that make up the material, but rather by treating the dislocations as a collection of interacting lines. Just as the various molecules that make up a gas can be eliminated from consideration by invoking an equation of state and exploiting hydrodynamics, so too, in modeling the deformation of materials, may we replace defects that are intrinsically atomistic by elastic surrogates which allow us to answer the multiscale challenge of material response. By exploiting the correspondence between the atomic-level and elastic descriptions of junctions, we have been able to evaluate the critical stress needed to disentangle the two dislocations that make up a given junction. The example presented here, that of interactions between dislocations, ferrets out part of the conspiracy between the various defects, such as dislocations, grain boundaries, and cracks, that make up materials and that are responsible for the observed macroscopic material response. Other problems we have examined using multiscale models include the nucleation of dislocations at crack tips, the interactions of dislocations with grain boundaries, and the response of proteins to external forcing (Fig. 5).
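
To get a rough feeling for the numbers involved, a simple line-tension argument (a textbook estimate, not the actual calculation behind Fig. 7) says that the stress needed to unzip a junction scales as τ ≈ βμb/L, where μ is the shear modulus, b the magnitude of the Burgers vector, L the spacing between pinning points, and β a dimensionless constant of order unity. The sketch below evaluates this estimate with assumed, order-of-magnitude values for tungsten.

```python
# Line-tension estimate of the stress needed to break a dislocation junction:
# tau ~ beta * mu * b / L. All numbers are illustrative, order-of-magnitude values
# for tungsten, not results from the simulations described in the text.

mu = 160e9        # shear modulus of tungsten, Pa (assumed)
b = 2.7e-10       # magnitude of the Burgers vector, m (assumed)
beta = 0.5        # dimensionless constant of order unity (assumed)

for L in (0.1e-6, 1.0e-6, 10.0e-6):   # spacing between pinning points, m
    tau = beta * mu * b / L
    print(f"L = {L*1e6:5.1f} microns -> critical stress ~ {tau/1e6:7.1f} MPa")
```

The estimate captures the essential physics of hardening: the more junctions there are, the smaller the spacing L between them, and the larger the stress required to keep the dislocations moving.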

Figure 7. A junction between two dislocations as modeled using the same theory of elasticity first developed by Robert Hooke and derived using the experimental apparatus of Fig. 4.

This essay has attempted to convey some of the excitement that has arisen from the newfound ability to build models of systems of interest to scientists and engineers that intrinsically involve multiple scales in space, in time, or in both. Though we have argued that multiscale modeling has always been part of the theoretical arsenal used to investigate problems ranging from turbulent flow to the magnetic properties of materials, high-speed computation has led to a resurgence of interest in the construction of coarse-grained models. This represents an amusing twist of fate, since naively one might have expected that such computational resources would allow for the "first principles" simulation of processes without the need for theoretical surrogates. On the other hand, I have argued that, as it has always been, the development of compelling models of the world around us must be based on a tasteful distinction between those features of a system which are really necessary and those which are not. This idea served as a cornerstone in many of the great historical examples of multiscale modeling and serves as an embodiment of Einstein's dictum that "Things should be made as simple as possible—but not simpler."


Rob Phillips is Professor of Mechanical Engineering and Applied Physics.