

Lecture 12: Room at the Bottom

In this part of the course, we are examining the different divisions of engineering. The old, familiar divisions are Civil, Mechanical, Electrical and Chemical. But we should not take these divisions too seriously. For example, we should not consider them as parallel to the division of the sciences into physics, chemistry, etc. The divisions of engineering, like the laws of engineering, are local to a particular place and a particular time. There are conservative forces -- university programs, professional associations -- which tend to preserve the existing classification, so these categories always lag behind actual practice. In today's lecture, we will look at two new types of engineering, both of which are just coming into existence.

I want to direct your attention to the second half of the Eames film, ``Powers of Ten'', in which we descend to the realm of the extremely small. This is a realm which engineers are now starting to enter. And it is a realm of immense potential. Space engineering, which has existed for more than half a century, is exciting because it offers the prospect of unlimited room for growth. But the microscopic also offers unlimited room for growth, since it allows us to find room on the head of a pin to store information that would otherwise fill many libraries.

In recent years, some very ambitious claims have been made for `nanotechnology'. One of our aims in this talk is to distinguish devices which are possible in principle from devices which we can shortly expect to see in the stores.

Richard Feynman illustrated the potential for growth in this direction in a talk called ``There's Plenty of Room at the Bottom'', given in 1959. This was long before the term `nanotechnology' was invented, and long before the first micromachines were built. So Feynman is looking at the question as a physicist rather than an engineer, and is asking `What is physically possible?', not `What is it practical to build?' Let me summarise some of his arguments:

We'll start with something easy, like the 24-volume Encyclopaedia Britannica. Each volume has about 1,000 pages, and each page is about 200 mm by 280 mm. So if we laid all the pages out flat, they'd cover an area of about 1,300 square metres; about the floor area of the East Gym. Suppose we could shrink that rectangle 25,000 times in width and 25,000 times in length. This would give us an area about the size of a pinhead. Could we read the shrunken text?

The smallest detail in the Encyclopaedia would be the dots making up the half-tone illustrations. In real life, these dots are about a fifth of a millimetre across. After our 25,000-fold reduction, they'd be about 8 nanometres across. That's small enough that we have to count individual atoms: it would take about 32 atoms to span a dot, so the area of the dot would contain about a thousand atoms. We can imagine that we define letters on the pinhead by raising them above their surroundings.
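
As a rough check on these numbers, here is a back-of-envelope calculation; the page size and page count are the figures quoted above, and the atomic spacing of about 0.25 nm is an assumed round figure.

    # Back-of-envelope check of the Encyclopaedia-on-a-pinhead figures.
    volumes = 24
    pages_per_volume = 1000
    page_area = 0.200 * 0.280                 # 200 mm x 280 mm, in square metres

    total_area = volumes * pages_per_volume * page_area
    print(total_area)                         # about 1,340 square metres

    shrink = 25000                            # linear reduction factor
    print(total_area / shrink**2 * 1e6)       # about 2 square millimetres: pinhead-sized

    dot = 0.0002 / shrink                     # a fifth of a millimetre, reduced
    atom_spacing = 0.25e-9                    # assumed spacing of about 0.25 nm between atoms
    print(dot / atom_spacing)                 # about 32 atoms across a dot
    print((dot / atom_spacing) ** 2)          # about 1,000 atoms per dot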

To read the encyclopaedia, we just press the pinhead into some plastic material, then carefully peel the plastic off, vacuum-deposit gold atoms onto it, then examine it through an electron microscope. And if we want more copies of the encyclopaedia, we press the pinhead into the plastic again.

That puts the Encyclopaedia Britannica onto a pinhead, but I originally said that we'd deal with whole libraries. In fact, let's go further; let's deal with all the books in the world. How many different books are there?

The Library of Congress in the USA has about 9,000,000 volumes; the British Museum library has about 5,000,000, and the National Library of France has another 5,000,000. Altogether there may be around 24,000,000 volumes; that's a million times as many as the Britannica, so we need a million pinheads. These million pinheads will fit in a square a thousand pinheads by a thousand pinheads, or about three square metres -- less than one issue of the Peak, and with considerably higher information content.
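
A quick check, taking a pinhead to be about 1.75 mm across (an assumed figure):

    pinhead = 1.75e-3                  # metres; an assumed pinhead diameter
    side = 1000 * pinhead              # a thousand pinheads in a row
    print(side ** 2)                   # about 3 square metres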

But this is still just scratching the surface. Suppose we replace the pictures by some form of encoding that turns them into sequences of letters, then replace each letter by a Morse-code-like sequence of atoms. For example, we could use gold and platinum atoms, like this:

[Figure code.eps: letters coded as patterns of gold and platinum atoms]

These are three-dimensional blocks, so we build up our message in a volume of space, rather than across a surface. Each block is a cube, five atoms on a side, so we have 125 atoms per letter. In our 24 million books, we may have ten to the fifteenth letters, so we need about ten to the seventeenth atoms. But that's nothing! A gram-mole of any substance contains more than a million times that many atoms. So our coded block of gold and platinum weighs a few tens of micrograms -- it's a speck only just visible to the naked eye.
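
The mass estimate works out as follows, taking gold and platinum at roughly 196 atomic mass units each:

    # Order-of-magnitude mass of the all-the-books-in-the-world speck.
    letters = 1e15
    atoms_per_letter = 5 ** 3                 # a 5 x 5 x 5 cube of atoms
    atoms = letters * atoms_per_letter        # about 1.25e17 atoms

    atom_mass = 196 * 1.66e-27                # kg; gold and platinum are both about 196 u
    print(atoms * atom_mass * 1e9)            # about 40 micrograms

    avogadro = 6.02e23
    print(avogadro / atoms)                   # a gram-mole has about 5 million times as many atoms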

This is nice, but the dust-speck library is a passive entity -- it doesn't do anything for us, and it requires some work to read. Can we make tiny devices that actually do things?

Before answering that question, let's ask another one: why would we want to? Part of the answer lies in one of our laws of engineering: in long production runs, unit cost depends only on unit mass and material. So if we can mass-produce tiny machines, this law says they'll be almost free.

For many applications, this is still not good enough. A microscopic power station will produce only microscopic amounts of power; and, thanks to another law of engineering, it will do so less efficiently than one of conventional size. But, as we have seen in the past few weeks, engineering deals not only with the transformation of power, but also with the transformation of information. And information does not obey the same scaling laws as power; the information content of a sentence is the same whether it's written on a billboard or on the head of a pin.

The best example of an information-transforming machine is the computer. And computer technology has been the first to take advantage of miniaturisation. This is the main reason why the cost of computing power has been falling steadily over four decades. Where the computers of the 1950's had stout copper wire, we now have conducting pathways less than a micron across.

Sensors and actuators are other examples of information-transforming machines: a sensor turns variations in the physical world into information, while an actuator does the reverse. Miniaturising these has the same advantages as miniaturising the computer. But it's more difficult, because sensors and actuators must interact with the outside world, while a computer can be self-contained. Constructing these sensors and actuators is the task of micromachining.

Micromachining is a new discipline of engineering: it is concerned with mechanical design, but it exploits the fabrication techniques developed by the manufacturers of electronic chips. And it has its own laws of engineering. You'll remember that in the third lecture of the course I mentioned that the engineer may classify the world in terms of non-dimensional numbers; at certain critical values of these numbers, the rules change qualitatively. As we go to smaller scales, one of the things that happens is that viscosity becomes more important. Recall how viscosity is defined:

[Figure visc.eps: the definition of viscosity]

This means that the values of Reynolds number characteristic of microscale engineering are very much lower than we're used to; we're in a new world, where materials don't behave the way we expect. In this new world, air is a sticky fluid, like partially set jello. Water behaves like molasses in an Ontario winter. So lubrication becomes a problem: the lightest lubricating oil will bind a micromachine like glue.
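
For reference, the viscosity mu relates shear stress to velocity gradient, and the Reynolds number compares inertial to viscous forces for flow at speed U past an object of size L:

    \[
      \tau = \mu \,\frac{du}{dy},
      \qquad
      Re = \frac{\rho U L}{\mu} .
    \]

Since Re is proportional to the length scale L, shrinking a device by a factor of a thousand cuts its Reynolds number by at least the same factor; viscous forces dominate, and our high-Reynolds-number intuitions fail.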

The rules of this new world aren't entirely unfamiliar. In fact, there is a group of engineers who'd find them commonplace. These are the designers of slurry pumps -- civil engineers, petroleum engineers and food technologists who routinely design pumps to move viscous fluids. Some of their design rules can be transferred from pipes full of mud to capillaries full of air.

Few of our intuitions can still be relied upon. We're used to seeing a hot object cooled by convection. In the micro-world, this is unlikely to happen; the fluid around the object is just too thick and sticky to be set in motion by thermally induced differences in density. If heat transfer takes place on the micro-scale, it will be by conduction or radiation. But these will be unusually effective, because of another scale relationship: remember how we discussed that a mouse must work harder to keep warm than an elephant, because more of it's on the outside? By comparison, a micromachine is all outside, so it will lose heat very effectively.
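
The scale relationship here is just the ratio of surface to volume: for an object of characteristic size L,

    \[
      \frac{\text{surface area}}{\text{volume}} \;\propto\; \frac{L^2}{L^3} \;=\; \frac{1}{L},
    \]

so a machine a thousand times smaller has, in proportion to its bulk, a thousand times more surface through which to lose heat.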

There is no new physics on the microscale -- we're still too large for quantum effects to be relevant -- but there is new engineering. We need new rules of thumb, new examples of what a design should look like.

Let's go down in scale by another three orders of magnitude. This puts us in another new world, with different engineering laws. We are now on a scale where we can count the atoms in our machines. The most interesting thing about engineering on this scale is that we have proof that nanoscale devices can exist. In fact, we have some of them inside the theatre right now, being manufactured while we speak.

[Figure ribo.eps: a ribosome assembling a protein from an RNA tape]

Our bodies contain the equivalent of an integrated CAD/CAM system. DNA supplies a master set of blueprints. Particular designs can be copied from the master set onto RNA. The RNA then acts as an instruction tape for an assembly machine known as a ribosome; each triplet of bases on the RNA tape directs the ribosome to find and attach one amino-acid building block to the protein being assembled. Finally, the interactions among the amino acids -- their charges, dipole moments and affinities for water -- cause the protein to fold into a particular three-dimensional shape, and that shape allows it to do a particular job -- for example, to form a rope, or to form a gripper that seizes and pulls on that rope.
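
As a toy illustration of the instruction-tape idea -- the three-letter codes below are a tiny subset of the real genetic code, and everything else about the ribosome is simplified away -- the translation step might be sketched like this:

    # Toy sketch: reading an RNA 'tape' three bases at a time and picking the
    # corresponding amino-acid building block.  Only a few real codons are shown.
    CODON_TABLE = {
        "AUG": "methionine",   # also the 'start' signal
        "UUU": "phenylalanine",
        "GGC": "glycine",
        "AAA": "lysine",
        "UAA": "STOP",
    }

    def translate(rna):
        protein = []
        for i in range(0, len(rna) - 2, 3):          # step along the tape in triplets
            amino_acid = CODON_TABLE.get(rna[i:i + 3], "?")
            if amino_acid == "STOP":
                break
            protein.append(amino_acid)
        return protein

    print(translate("AUGUUUGGCAAAUAA"))
    # ['methionine', 'phenylalanine', 'glycine', 'lysine']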

This fact -- that nanomachines exist -- justifies the whole field of nanoengineering. If these machines can be built by a blind process, the trial and error of evolution, then surely we can build them by deliberate design. What are these tiny machines going to do for us? I hinted at one thing they could do in Lecture 2, when discussing the uselessness of quantum mechanics to the bridge designer. Quantum mechanics is useless there because the figure it gives for the strength of structural steel is at least ten times too high. Steel is weaker than we'd predict from knowledge of the iron-iron bond strength because its crystal structure is imperfect; any macroscopic sample of steel contains dislocations, breaks in the atomic lattice like runs in a stocking. When we pull on the ends of a steel bar, separate parts of the lattice slide over each other at these dislocations, and the steel stretches, grows thinner, and fails. But if we could build the steel part up, atom by atom, we could achieve the theoretical strength.
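
To put rough numbers on this: a common rule of thumb puts the ideal strength of a perfect lattice at around a tenth of the Young's modulus E, whereas ordinary structural steel yields far sooner:

    \[
      \sigma_{\text{ideal}} \approx \frac{E}{10} \approx \frac{200\ \text{GPa}}{10} = 20\ \text{GPa},
      \qquad
      \sigma_{\text{yield}} \approx 0.25\ \text{GPa}\ \text{for mild steel},
    \]

so `at least ten times too high' is, if anything, an understatement; dislocations account for most of the gap.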

In fact, though, steel is not the best material for nanomachines. We want nanomachines to have a rigid structure; we want their atoms to stay put, not to wander under the influence of thermal vibrations. The metallic bond is not very suitable for this, being non-directional. Covalent bonds are better behaved. So the ideal nanomaterial is diamond, or something like diamond -- a rigid, strong, light structure. Atomic-level control of the assembly process would make it possible to design objects -- for example, an aerospace plane -- far lighter and stronger than existing technology allows. It would also be possible to develop new materials, unlike anything that currently exists. The `buckyballs' recently produced by conventional chemical engineering give a hint of what could be possible in this direction.

A second possibility for nanomachines is evolutionary design. This idea was borrowed from natural evolution by software engineering, under the title of `Genetic Algorithms'. To produce an optimum algorithm for an application, the engineer writes several sub-optimal algorithms, and tests them on a set of sample problems. A second generation of algorithms is then produced by combining elements from the most successful of the first generation. This process is repeated automatically for several generations; the algorithms finally emerging have better performance than any in the first generation, though they have been produced without the intervention of a programmer.
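
Here is a minimal sketch of the idea; the `design' is just a string of bits, and the fitness function below is a stand-in for the set of sample problems:

    import random

    GENES = 20                      # each candidate 'design' is a string of 20 bits

    def fitness(candidate):
        # Stand-in for testing on sample problems: here, simply count the 1s.
        return sum(candidate)

    def crossover(mother, father):
        # Combine elements of two successful candidates at a random cut point.
        cut = random.randrange(1, GENES)
        return mother[:cut] + father[cut:]

    def mutate(candidate, rate=0.02):
        return [1 - bit if random.random() < rate else bit for bit in candidate]

    # First generation: several sub-optimal candidates, produced at random.
    population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(30)]

    for generation in range(40):
        population.sort(key=fitness, reverse=True)   # keep the most successful candidates...
        parents = population[:10]
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(30)]            # ...and breed the next generation from them

    print(max(fitness(c) for c in population))
    # close to the maximum of 20, reached without a programmer writing the winning string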

We don't do the same thing with hardware, because it would be slow and expensive. (Also, there is no obviously sensible way to `interbreed' different designs.) But this approach might work with nanomachines, since they are cheap and fast-working. Suppose we want a family of nanomachines that will break down crude oil to some biologically inert form, and we have a crude design for such a machine. From that design we build a small nanomachine factory, perhaps the size of a sugar cube. Put it in a beaker of oil and let it run. It's programmed to vary the design slightly every few hundred machines. At intervals it samples the surviving machines, and makes more of those that are thriving. Eventually it may develop a design better than anyone could have thought up.

A third reason for building nanomachines is the one I referred to last week: they offer the possibility of redesigning and upgrading the human body. Harmful chemicals can be identified, then made harmless or removed from the body. Bacterial and viral invaders can be attacked and overwhelmed by swarms of specially designed machines. The natural wear and tear of the ageing process can be repaired as it happens. Microprocessors can be interfaced with the brain; hearing and sight can be augmented. We can expect, if not immortality, at least an extended and healthier life.

So far we've said why we think nanomachines can exist, and why they'd be useful. But how, exactly, can they be built? There are two obvious routes. The first is biochemical, and we are already working on this. We can hijack the existing CAD/CAM systems of bacteria, editing the DNA tapes and re-programming them to produce goods we value, such as human insulin. The pharmaceutical company Eli Lilly, for example, uses this method to produce the product `Humulin'. The difficulty lies in predicting how the proteins produced from our edited tapes will fold. Drexler, writing in ``Engines of Creation'', has this comment:

``Most biochemists work as scientists, not as engineers. They work at predicting how natural proteins will fold, not at designing proteins that will fold predictably. These tasks may sound similar, but they differ greatly: the first is a scientific challenge, the second is an engineering challenge. Why should natural proteins fold in a way that scientists will find easy to predict? All that nature requires is that they in fact fold correctly, not that they fold in a way obvious to people.''

A subsequent generation of nanomachines can be built from these artificial proteins. This second generation will be made of stronger materials -- for example, diamond-like carbon -- but will still be programmed by RNA. Here's a possible design for a shaft and bearing developed in this way.

The second route to nanotechnology is via this device [slide]: the scanning tunnelling microscope (STM), invented in 1981 and awarded half a Nobel prize in 1986. The microscope contains a needle with an extremely fine point. This point is held very close to the surface of a material -- less than a nanometre away. At this distance, electrons can quantum-mechanically tunnel across the gap between the surface and the point. If a small potential difference is applied between the surface and the point, a measurable current will flow. This current falls off exponentially with distance, so it's a very sensitive measure of how close the tip is to the surface: changing the distance by the diameter of a single atom changes the current by a factor of a thousand.
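
Roughly speaking, with phi the work function of the surface (a few electron-volts), m the electron mass and d the width of the gap,

    \[
      I \;\propto\; V e^{-2\kappa d},
      \qquad
      \kappa = \frac{\sqrt{2 m \phi}}{\hbar} \approx 10\ \text{nm}^{-1},
    \]

so changing the gap by the 0.2 or 0.3 nm of an atomic diameter changes the current by a factor of several hundred to a thousand.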

The tip position can be adjusted by fine increments, using piezoelectric actuators. By setting up a feedback circuit, its distance from the surface can be kept constant as it scans back and forth, thus building up a three-dimensional surface plot.
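
Here is a much-simplified sketch of the constant-current feedback idea; the surface profile, the current law and the feedback gain below are all invented for illustration:

    import math

    def tunnel_current(gap_nm):
        # Illustrative only: current falls off exponentially with the gap.
        return math.exp(-10.0 * gap_nm)

    surface_height = [0.00, 0.05, 0.12, 0.08, 0.02, -0.03]   # nm; a made-up profile
    setpoint = tunnel_current(0.5)       # the current we want to hold (a gap of 0.5 nm)

    tip_height = 0.5                     # nm above the first point
    recorded_profile = []

    for z_surface in surface_height:
        for _ in range(200):
            current = tunnel_current(tip_height - z_surface)
            error = math.log(current / setpoint)   # log-current is linear in the gap
            tip_height += 0.02 * error             # piezo nudges the tip towards the set-point gap
        recorded_profile.append(tip_height)        # the tip height traces the surface

    print(recorded_profile)              # rises and falls with surface_height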

The STM can also be used to move atoms on a surface. The most famous example of this may be the spelling out of the letters `IBM' with 35 atoms of xenon on a nickel surface, by a team of IBM researchers in 1990. Researchers at the Aono Atomcraft Project in Japan, using an STM, are now able to extract a single silicon atom from the surface of a silicon crystal and rebond it to the surface at a different location. Atoms moved in this manner can be removed again without disarranging the underlying atomic layers. Atoms brought from afar can be used to repair holes in the silicon surface, or to build structures on top of it. (J. Vac. Sci. Technol. B 12(4): 2429-2433, Jul/Aug 1994)

A group of researchers at Cornell University, led by Noel MacDonald, has developed a micro-machine version of the STM. Their techniques make it possible to fabricate tens of thousands of STM's on a single chip. The short-term application is for data storage and circuit design, but similar techniques could be used to build a nano-machine factory.

The advantages claimed for nanomachines are sometimes overstated, so it's useful to remember some things that nanomachines can't do. They can't, for example, produce any nuclear reactions. So they can't be used to synthesise elements not locally available.

Many of the most exciting envisaged applications for nanotechnology require more than the ability to place individual atoms accurately. For example, Drexler's ``Engines of Creation'' contains a memorable passage in which nanomachines create a rocket motor. Drexler describes a reaction tank which is gradually filled with a milky fluid -- a slurry of billions of nanomachines suspended in water. Some of these machines read the blueprints of a rocket engine from a pad at the bottom of the tank, and latch on to other nanomachines to form a scaffolding. Other fluids containing building materials and nanomachine fuel then flow into the tank, and the nanomachines construct a rocket motor structurally optimised down to the atomic level. Finally the nanomachines are flushed away, leaving a shining new motor.

To see how far away such a scenario is, consider a modified scenario: at a building site somewhere in Surrey, a truck pulls up and thousands of brick-size machines hop out of the back. Some of these machines position themselves around the building site, while others form a bucket brigade, passing bricks and mortar to the static machines. Yet other small machines distribute batteries or liquid fuel to their colleagues. After a few hours, the machines all hop back into the truck, which drives away, leaving a finished, custom-built house.

Everything about this second scenario should be easier than the rocket-engine-building scenario: we can build brick-sized robots right now, and they would have much more room than nanomachines to store instructions. The tolerances in building a house are much coarser than those in building a rocket engine. Nevertheless, we don't build houses this way. We can't -- even if we had a supply of free brick-sized robots, we have no clue how to program them to cooperate on the task.

So before we can realise the more exciting applications of nanotechnology, we have to solve harder problems than building atomic-sized machines. Specifically, we have to develop techniques for getting many simple agents to cooperate on a large task.

Again, we can derive some hope from the fact that the natural world contains existence proofs for this goal. Millimetre-sized ants cooperate to build metres-tall anthills, and these anthills may have macroscopic features, such as north-south orientation. Microscopic coral animals cooperate to build a circular coral reef hundreds of metres in diameter. The corals built by different species of scleractinia [stony coral organisms] have distinctive macroscopic structures, even though the coral organisms themselves look similar to one another.

Several routes are being pursued in the hope of discovering how to get complex behaviour to emerge from a few simple rules. One route is that of cellular automata: artificial environments in which successive generations of simple cells are generated from a small number of rules. The best-known cellular automaton is probably the `Life' game developed by the Cambridge mathematician John Horton Conway in the 1960's: cells on a two-dimensional grid die, survive, or reproduce depending on how many neighbours they have. Cellular automata can also be defined in a single dimension, or in three dimensions. (See `Artificial Life', Steven Levy, Vintage Books, 1993.)
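
A minimal sketch of the rules -- the standard Conway rules on a small grid with dead edges; the starting pattern is a `glider', which crawls diagonally across the grid:

    # Conway's Life: cells die, survive or reproduce according to neighbour counts.
    SIZE = 10
    grid = [[0] * SIZE for _ in range(SIZE)]
    for r, c in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:   # a 'glider'
        grid[r][c] = 1

    def neighbours(g, r, c):
        return sum(g[i][j]
                   for i in range(max(0, r - 1), min(SIZE, r + 2))
                   for j in range(max(0, c - 1), min(SIZE, c + 2))
                   if (i, j) != (r, c))

    def step(g):
        new = [[0] * SIZE for _ in range(SIZE)]
        for r in range(SIZE):
            for c in range(SIZE):
                n = neighbours(g, r, c)
                # A live cell survives with 2 or 3 neighbours; a dead cell
                # comes to life with exactly 3 neighbours; otherwise the cell is dead.
                new[r][c] = 1 if (n == 3 or (g[r][c] == 1 and n == 2)) else 0
        return new

    for _ in range(8):
        grid = step(grid)
    print(*("".join(".#"[x] for x in row) for row in grid), sep="\n")   # the glider has moved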

Other researchers are simulating the behaviour of social animals: for example, Craig Reynolds has created `boids', computer simulations of birds in which each boid follows a few simple rules, such that a collection of boids will fly in a flock, avoiding collisions with obstacles and with each other.
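
A stripped-down sketch of the usual three boid rules -- stay with the flock, avoid crowding your neighbours, match their velocity; the numerical weights below are arbitrary choices:

    import random

    class Boid:
        def __init__(self):
            self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
            self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

    def step(flock):
        for b in flock:
            others = [o for o in flock if o is not b]
            n = len(others)
            # Cohesion: steer towards the centre of the flock.
            cx = sum(o.x for o in others) / n - b.x
            cy = sum(o.y for o in others) / n - b.y
            # Separation: steer away from boids that are too close.
            sx = sum(b.x - o.x for o in others if abs(b.x - o.x) + abs(b.y - o.y) < 5)
            sy = sum(b.y - o.y for o in others if abs(b.x - o.x) + abs(b.y - o.y) < 5)
            # Alignment: match the average velocity of the others.
            ax = sum(o.vx for o in others) / n - b.vx
            ay = sum(o.vy for o in others) / n - b.vy
            b.vx += 0.01 * cx + 0.05 * sx + 0.1 * ax
            b.vy += 0.01 * cy + 0.05 * sy + 0.1 * ay
            speed = (b.vx ** 2 + b.vy ** 2) ** 0.5
            if speed > 2.0:                           # cap the speed
                b.vx, b.vy = 2.0 * b.vx / speed, 2.0 * b.vy / speed
        for b in flock:
            b.x += b.vx
            b.y += b.vy

    flock = [Boid() for _ in range(20)]
    for _ in range(200):
        step(flock)
    print(sum(b.x for b in flock) / 20, sum(b.y for b in flock) / 20)
    # by now the boids move roughly as one loose flock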

The trouble with cellular automata is that, although we can get quite complex behaviour to emerge from simple rules, we can't get particular complex behaviours, such as building a rocket engine, to emerge.

Conclusions

To illustrate the changing nature of engineering, two new fields of engineering have been introduced. These fields require a mix of talents not covered by any traditional engineering discipline. Moreover, although they involve no new science, they require new laws of engineering and new principles of design.

Suggestions for Further Work

1. An alien graduate student from an advanced civilization spends several centuries on Earth, studying our knowledge and culture. Before returning to her home planet, she writes up her thesis; as an appendix, she summarises the totality of human knowledge and belief. The appendix would cover many millions of pages if written out in English. Using a simple encoding scheme, the alien converts the message into a number, several trillion digits long. She writes this number down, then adds a decimal point to the left of the leftmost figure. She takes a bar of an extraordinarily pure metal, places it in an enclosure cooled to within a fraction of a degree of absolute zero, then slices it into two parts with a laser beam. One part is a fraction of the original length, that fraction being exactly equal to the trillion-digit decimal. The other part she throws away. Now she returns to her home planet with the fractional bar in her pocket, its length encoding all of our knowledge.

Comment on the feasibility of this recording mechanism.

2. Nanotechnology can be used to reproduce an existing structure down to the atomic level. Therefore, once we have a functioning nanotechnology, money will be worthless, since perfect counterfeits can be made. Rare works of art, such as the Mona Lisa, will also become worthless, since copies can be made indistinguishable from the original. Lastly, individuals will be able to clone themselves.

Comment.


Next: Engineering Design

John Jones
Thu Oct 17 17:19:39 PDT 1996