CHAPTER 2. THE REALM OF PHYSICS.
Physics is the study of the properties of matter and energy. We begin with physics, not because it is easier, harder, or more important than other sciences, but because it is, in a specific sense, more fundamental.
More fundamental, in that from the laws of physics we can construct the laws of chemistry; from the laws of chemistry and the laws of physics together we can in turn build the laws of biology, of properties of materials, of meteorology, computer science, medicine, and anything else you care to mention. The process cannot be reversed. We cannot deduce the laws of physics from the laws of chemistry, or those of biology.
In practice, we have a long way to go. The properties of atoms and small molecules can be calculated completely, from first principles, using quantum theory. Large molecules present too big a computational problem, but it is considered to be just that -- a problem of computation, not a failure of understanding or principles. In the same way, although most biologists believe that, by continuing effort, we will at last understand every aspect of living systems, we are still a huge distance away from explaining such things as consciousness.
A number of scientists, such as Roger Penrose, believe that this will never happen, at least with current physical theories (Penrose, 1990, 1995; see also Chapter 13). Others, such as Marvin Minsky, strongly disagree; our brains are no more than "computers made of meat." Some scientists, believers in dualism, strongly disagree with that, asserting the existence of a basic element of mind quite divorced from the mechanical operations of the brain (Eccles, 1977).
Furthermore, there is a "more is different" school of scientists, led by physicist Philip Anderson and evolutionary biologist Ernst Mayr. Both argue (Anderson, 1972; Mayr, 1982) that one cannot deduce the properties of a large, complex assembly by analysis of its separate components. In Mayr's words, "the characteristics of the whole cannot (even in theory) be deduced from the most complete knowledge of the components, taken separately or in other partial combinations." For example, study of single cells would never allow one to predict that a suitable collection of those cells, which we happen to call the human brain, could develop self-consciousness.
Who is right? The debate goes on, with no end in sight. Meanwhile, this whole area forms a potential gold mine for writers.
2.1 The small world: atoms and down. It was Arthur Eddington who pointed out that, in size, we are slightly nearer to the atoms than to the stars. It's a fairly close thing. I contain about 8 × 10²⁷ atoms. The Sun would contain about 2.4 × 10²⁸ of me. We will explore first the limits of the very small, and then the limits of the very large.
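Eddington's comparison is easy to check for yourself. Here is a rough Python sketch, assuming (my assumption, not the author's) a 70 kg person made mostly of water; all figures are order-of-magnitude only:

```python
# Order-of-magnitude check of Eddington's atoms-to-stars comparison.
AVOGADRO = 6.022e23          # molecules per mole
WATER_MOLAR_MASS_G = 18.0    # grams per mole of H2O
ATOMS_PER_MOLECULE = 3       # H, H, O

person_mass_kg = 70.0        # assumed mass of a person
sun_mass_kg = 1.99e30        # mass of the Sun

atoms_in_person = (person_mass_kg * 1000.0 / WATER_MOLAR_MASS_G
                   * ATOMS_PER_MOLECULE * AVOGADRO)
persons_in_sun = sun_mass_kg / person_mass_kg

print(f"atoms in a person: {atoms_in_person:.1e}")   # roughly 7e27
print(f"persons in the Sun: {persons_in_sun:.1e}")   # roughly 2.8e28
```

The two counts come out within a factor of a few of each other, which is exactly Eddington's point: on a logarithmic scale of size, we sit almost midway between atom and star.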
A hundred years ago, atoms were regarded as the ultimate, indivisible elements that make up the universe. That changed in a three-year period, when in quick succession Röntgen in 1895 discovered X-rays, in 1896 Becquerel discovered radioactivity, and in 1897 J.J. Thomson discovered the electron. Each of these can only be explained by recognizing that atoms have an interior structure, and the behavior of matter and radiation in that sub-atomic world is very different from what we are used to for events on a human scale.
The understanding of the micro-world took time to appear, and what emerged is peculiar indeed. In the words of Ilya Prigogine, a Nobel prize-winner in chemistry, "The quantum mechanics paradoxes can truly be said to be the nightmares of the classical mind."
The next step after Röntgen, Becquerel and Thomson came in 1900. Some rather specific questions as to how radiation should behave in an enclosure had arisen, questions that classical physics couldn't answer. Max Planck suggested a rather ad hoc assumption that the radiation was emitted and absorbed in discrete chunks, or quanta (singular, quantum; hence, a good deal later, quantum theory). Planck introduced a fundamental constant associated with the process. This is Planck's constant, denoted by h, and it is tiny. Its small size, compared with the energies, times, and masses of the events of everyday life, is the reason we are not aware of quantum effects all the time.
Most people thought that the Planck result was a gimmick, something that happened to give the right answer but did not represent anything either physical or of fundamental importance. That changed in 1905, when Einstein used the idea of the quantum to explain another baffling result, the photoelectric effect.
Einstein suggested that light must be composed of particles called photons, each with a certain energy decided by the wavelength of the light. He published an equation relating the energy of light to its wavelength, and again Planck's constant, h, appeared. (It was for this work, rather than for the theory of relativity, that Einstein was awarded the 1921 Nobel Prize in physics. More on relativity later.)
While Einstein was analyzing the photoelectric effect, the New Zealand physicist Ernest Rutherford was studying the new phenomenon of radioactivity. Rutherford found that instead of behaving like a solid sphere of electrical charges, an atom seemed to be made up of a very dense central region, the nucleus, surrounded by an orbiting cloud of electrons. In 1911 Rutherford proposed this new structure for the atom, and pointed out that while the atom itself was small -- a few billionths of an inch -- the nucleus was tiny, only about a hundred thousandth as big in radius as the whole atom. In other words, matter, everything from humans to stars, is mostly empty space and moving electric charges.
The next step was taken in 1913 by Niels Bohr. He applied the "quantization" idea of Planck and Einstein -- the idea that things occur in discrete pieces, rather than continuous forms -- to the structure of atoms proposed by Rutherford.
In the Bohr atom, electrons can only lose energy in chunks -- quanta -- rather than continuously. Thus they are permitted orbits only of certain energies, and when they move between orbits they emit or absorb radiation at specific wavelengths (light is a form of radiation, in the particular wavelength range that can be seen by human eyes). The electrons can't have intermediate positions, because to get there they would need to emit or absorb some fraction of a quantum of energy; by definition, fractions of quanta don't exist. The permitted energy losses in Bohr's theory were governed by the wavelengths of the emitted radiation, and again Planck's constant appeared in the formula.
It sounded crazy, but it worked. With his simple model, applied to the hydrogen atom, Bohr was able to calculate the right wavelengths of light emitted from hydrogen.
More progress came in 1923, when Louis de Broglie proposed that since Einstein had associated particles (photons) with light waves, wave properties ought to be assigned to particles such as electrons and protons. He tried it for the Bohr atom, and it worked.
The stage was set for the development of a complete form of quantum mechanics, one that would allow all the phenomena of the subatomic world to be tackled with a single theory. In 1925 Schrödinger employed the wave-particle duality of Einstein and de Broglie to come up with a basic equation that applied to almost all quantum mechanics problems; at the same time Heisenberg, using the fact that atoms emit and absorb energy only in finite and well-determined pieces, produced another set of procedures that could also be applied to almost every problem.
Soon afterwards, in 1926, Paul Dirac, Carl Eckart, and Schrödinger himself showed that the Heisenberg and Schrödinger formulations can be viewed as two different approaches within one general framework. In 1928, Dirac took another important step, showing how to incorporate the effects of relativity into quantum theory.
It quickly became clear that the new theory of Heisenberg, Schrödinger and Dirac allowed the internal structure of atoms and molecules to be calculated in detail. By 1930, quantum theory, or quantum mechanics as it was called, became the method for performing calculations in the world of molecules, atoms, and nuclear particles. It was the key to detailed chemical calculations, allowing Linus Pauling to declare, late in his long life, "I felt that by the end of 1930, or even the middle, that organic chemistry was pretty well taken care of, and inorganic chemistry and mineralogy -- except the sulfide minerals, where even now more work needs to be done." (Horgan, 1996, p. 270).
2.2 Quantum paradoxes. Quantum theory was well-formulated by the end of the 1920s, but many of its mysteries persist to this day. One of the strangest of them, and the most fruitful in science fiction terms, is the famous paradox that has come to be known simply as "Schrödinger's cat." (We are giving here a highly abbreviated discussion. A good detailed survey of quantum theory, its history and its mysteries, can be found in the book IN SEARCH OF SCHRÖDINGER'S CAT; Gribbin, 1984.)
The cat paradox was published in 1935. Put a cat in a closed box, said Schrödinger, with a bottle of cyanide, a source of radioactivity, and a detector of radioactivity. Operate the detector for a period just long enough that there is a fifty-fifty chance that one radioactive decay will be recorded. If such a decay occurs, a mechanism crushes the cyanide bottle and the cat dies.
The question is, without looking in the box, is the cat alive or dead? Quantum indeterminacy insists that until we open the box (i.e. perform the observation) the cat is partly in the two different states of being dead and being alive. Until we look inside, we have a cat that is neither alive nor dead, but half of each.
There are refinements of the same paradox, such as the one known as "Wigner's friend" (Eugene Wigner, born in 1902, was an outstanding Hungarian physicist in the middle of the action in the original development of quantum theory). In this version, the cat is replaced by a human being. That human being, as an observer, looks to see if the cyanide bottle is broken, and therefore automatically removes the quantum indeterminacy. But suppose that we had a cat smart enough to do the same thing, and press a button? The variations -- and the resulting debates -- are endless.
With quantum indeterminacy comes uncertainty. Heisenberg's uncertainty principle asserts that certain pairs of variables can never both be known precisely at the same time. Position and speed are one such pair. If we know exactly where an electron is located, we cannot know its speed.
With quantum indeterminacy we also have the loss of another classical idea: repeatability. For example, an electron has two possible spins, which we will label as "spin up" and "spin down." The spin state is not established until we make an observation. Like Schrödinger's half dead/half alive cat, an electron can be half spin up and half spin down pending a measurement.
This has practical consequences. At the quantum level an experiment, repeated under what appear to be identical conditions, may not always give the same result. Measurement of the electron spin is a simple example, but the result is quite general. When we are dealing with the subatomic world, indeterminacy and lack of repeatability are as certain as death and taxes.
Notice that the situation is not, as you might think, merely a statement about our state of knowledge, i.e. we know that the spin is either up or down, but we don't know which. The spin is up and down at the same time. This may sound impossible, but quantum theory absolutely requires that such "mixed states" exist, and we can devise experiments which cannot be explained without mixed states. In these experiments, the separate parts of the mixed states can be made to interfere with each other.
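For readers who like numbers, the difference between a mixed state and mere ignorance can be caricatured in a few lines. This is a toy illustration only, not the real quantum formalism; the amplitudes are invented for the example. Quantum theory adds amplitudes first and squares afterward, which is why the separate parts of a mixed state can reinforce or cancel:

```python
import math

# Two routes to the same outcome, each with amplitude 1/sqrt(2).
a1 = 1.0 / math.sqrt(2.0)
a2 = 1.0 / math.sqrt(2.0)

# Classical intuition adds probabilities; quantum theory adds
# amplitudes first, then squares the result.
classical = a1**2 + a2**2        # about 1
in_phase = (a1 + a2)**2          # about 2: constructive interference
out_of_phase = (a1 - a2)**2      # 0: complete cancellation

print(classical, in_phase, out_of_phase)
```

If the two possibilities were simply "unknown but definite," the answer would always be the classical sum; the reinforcement and cancellation are the experimental signature of genuine mixed states.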
To escape the philosophical problem of quantum indeterminacy (though not the practical one), Hugh Everett and John Wheeler in the 1950s offered an alternative "many-worlds" theory to the paradox of Schrödinger's cat. The cat is both alive and dead, they say -- but in different universes. Every time an observation is made, all possible outcomes occur. The universe splits at that point, one universe for each outcome. We see one result, because we live in only one universe. In another universe, the other outcome took place. This is true not only for cats in boxes, but for every other quantum phenomenon in which a mixed state is resolved by making a measurement. The change by measurement of a mixed state to a single defined state is often referred to as "collapsing the wave function."
An ingenious science fiction treatment of all this can be found in Frederik Pohl's novel, THE COMING OF THE QUANTUM CATS (Pohl, 1986).
Quantum theory has served since the 1920s as a reliable computational tool, but its philosophical mysteries continue today. As Niels Bohr said of the subject, "If you think you understand it, that only shows you don't know the first thing about it."
To illustrate the continuing presence of mysteries, we consider something which could turn out to be the most important physical experiment of the century: the demonstration of quantum teleportation.
2.3 Quantum teleportation. Teleportation is an old idea in science fiction. A person steps into a booth here, and is instantly transported to another booth miles or possibly lightyears away. It's a wonderfully attractive concept, especially to anyone who travels often by air.
Until 1998, the idea seemed like science fiction and nothing more. However, in October 1998 a paper was published in SCIENCE magazine with a decidedly science-fictional title: "Unconditional Quantum Teleportation." In that paper, the six authors describe the results of an experiment in which quantum teleportation was successfully demonstrated.
We have to delve a little into history to describe why the experiment was performed, and what its results mean. In 1935, Einstein, Podolsky, and Rosen published a "thought experiment" they had devised. Their objective was to show that something had to be wrong with quantum theory.
Consider, they said, a simple quantum system in which two particles are coupled together in one of their quantum variables. We will use as an example a pair of electrons, because we have already talked about electron spin. Einstein, Podolsky and Rosen chose a different example, but the conclusions are the same.
Suppose that we have a pair of electrons, and we know that their total combined spin is zero. However, we have no idea of the spin of either individual electron, and according to quantum theory we cannot know this until we make a measurement. The measurement itself then forces an electron into a particular state, with spin up or spin down.
We allow the two electrons to separate, until they are an arbitrarily large distance apart. Now we make an observation of one of the electrons. It is forced into a particular spin state. However, since the total spin of the pair was zero, the other electron must go into the opposite spin state. This happens at once, no matter how far apart the electrons may be.
Since nothing -- including a signal -- can travel faster through space than the speed of light, Einstein, Podolsky, and Rosen concluded that there must be something wrong with quantum theory.
Actually, the thought experiment leads to one of two alternative conclusions. Either there is something wrong with quantum theory; or the universe is "non-local" and distant events can be coupled by something other than signals traveling at or less than the speed of light.
It turns out that Einstein, Podolsky and Rosen, seeking to undermine quantum theory, offered the first line of logic by which the locality or non-locality of the universe can be explored; and experiments, first performed in the 1970s, came down in favor of quantum theory and a non-local universe. Objects, such as pairs of electrons, can be "entangled" at the quantum level, in such a way that something done to one instantaneously affects the other. This is true in principle if the electrons are close together, or lightyears apart.
Until now, the most common reaction to the experiments demonstrating non-locality has been to say, "All right. You can force action at a distance using 'entangled' particle pairs; but you can't make use of this to send information." The new experiment shows that this is not the case. Quantum states were transported (teleported) and information was transferred.
The initial experiment did not operate over large distances. It is not clear how far this technique can be advanced, or what practical limits there may be on quantum entanglement (coupled states tend to de-couple from each other, because of their interactions with the rest of the universe). However, at the very least, these results are fascinating. At most, this may be the first crack in the iron strait-jacket of relativity, the prodigiously productive theory which has assured us for most of the 20th century that faster-than-light transportation is impossible.
We now consider relativity and its implications.
2.4 Relativity. The second great physical theory of the twentieth century, as important to our understanding of Nature as quantum theory, is relativity. Actually, there are in a sense two theories of relativity: the special theory, published by Einstein in 1905, and the general theory, published by him in 1915.
2.5 Special relativity. The special theory of relativity concentrates on objects that move relative to each other at constant velocity. The general theory allows objects to be accelerated relative to each other in any way, and it includes a theory of gravity.
Relativity is often thought to be a "hard" subject. It really isn't, although the general theory calls for a good deal of mathematics. What relativity is, more than anything, is unfamiliar. Before the effects of relativity are noticed, things need to be moving relative to each other very fast (a substantial fraction of the speed of light), or they must involve a very strong gravitational field. We are as unaware of relativity as a moving snail is unaware of wind resistance, and for the same reason; our everyday speeds of motion are too slow for the effects to be noticed.
Everyone from Einstein himself to Bertrand Russell has written popular accounts of relativity. We name just half a dozen references, in increasing order of difficulty: EINSTEIN'S UNIVERSE (Calder, 1979); THE RIDDLE OF GRAVITATION (Bergmann, 1968); RELATIVITY AND COMMON SENSE (Bondi, 1981); EINSTEIN'S THEORY OF RELATIVITY (Born, 1965); THE MEANING OF RELATIVITY (Einstein, 1994); and THEORY OF RELATIVITY (Pauli, 1981). Rather than talk about the theory itself, we are going to confine ourselves here to its major consequences. In the case of special relativity, there are six main ones to notice and remember.
1) Mass and energy are not independent quantities, but can be converted to each other. The formula relating the two is the famous E = mc².
2) Time is not an absolute measure, the same for all observers. Instead, time passes more slowly on a moving object than it does relative to an observer of that object. The rule is, for an object traveling at a fraction F of the speed of light, when an interval T passes onboard the object, an interval of 1/√(1 - F²) times T passes for the observer. For example, if a spaceship passes you traveling at 99 percent of the speed of light, your clock will register that seven hours pass while the spaceship's clocks show that only one hour has passed on board. This phenomenon is known as "time dilation," or "time dilatation," and it has been well-verified experimentally.
3) Mass is not an absolute measure, the same for all observers. For an object traveling at a fraction F of the speed of light, its mass will appear to be increased by a factor of 1/√(1 - F²) so far as an outside observer is concerned. If a spaceship passes you traveling at 99 percent of the speed of light, its mass will appear to have increased by a factor of seven over its original value. This phenomenon has also been well-verified experimentally.
4) Nothing can be accelerated to travel faster than light. In fact, to accelerate something to the speed of light would take an infinite amount of energy. This is actually a consequence of the previous point. Note: this does not say that an object cannot vanish from one place, and appear at another, in a time less than it would take light to travel between those locations. Hence this is not inconsistent with the quantum teleportation discussion of the previous section.
5) Length also is not an absolute measure, the same for all observers. If a spaceship passes you traveling at 99 percent of the speed of light, it will appear to be foreshortened to one-seventh of its original length. This phenomenon is known as "Lorentz contraction," or "Fitzgerald-Lorentz contraction."
6) The speed of light is the same for all observers, regardless of the speed of the light source or the speed of the observer. This is not so much a consequence of the special theory of relativity as one of the assumptions on which the theory is based.
The consequences of special relativity theory are worked out more simply if instead of dealing with space and time separately, calculations are performed in a merged entity we term "spacetime." This is also not a consequence of the theory, but rather a convenient framework in which to view it.
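The factor 1/√(1 - F²) that appears in consequences 2), 3), and 5) is easy to compute directly. A minimal Python sketch (the function name is my own):

```python
import math

def dilation_factor(F):
    """The relativistic factor 1/sqrt(1 - F^2), for a speed F
    expressed as a fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - F * F)

# The factor barely differs from 1 at everyday speeds, then
# grows without bound as F approaches 1.
for F in (0.1, 0.5, 0.9, 0.99):
    print(f"F = {F}: factor = {dilation_factor(F):.2f}")
```

At 99 percent of lightspeed the factor comes out at about 7, matching the spaceship examples above; at the speeds of cars and aircraft it is indistinguishable from 1, which is why we never notice it.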
After it was proposed, the theory of relativity became the subject of much popular controversy. Detractors argued that the theory led to results that were preposterous and "obvious nonsense." That is not true, but certainly some of the consequences of relativity do not agree with "intuitive" common-sense evolved by humans traveling at speeds very slow compared with the speed of light.
Let us consider just one example. Suppose that we have two spaceships, A and B, each traveling toward Earth (O), but coming from diametrically opposite directions. Also, suppose that each of them is moving at 4/5 of the speed of light as measured from Earth. "Common-sense" would then insist that they are moving toward each other at 4/5 + 4/5 = 8/5 of light speed. Yet one of our tenets of relativity theory is that you cannot accelerate an object to the speed of light. But we seem to have done just that. Surely, A will think that B is approaching at 1.6 times lightspeed.
No. So far as A (or B) is concerned, we must use a relativistic formula for combining velocities in order to calculate B's speed relative to A. According to that formula, if O observes A and B approaching with speeds u and v, then A (and B) will observe that they are approaching each other at a speed U = (u + v)/(1 + uv/c²), where c = the speed of light. In this case, we find U = 40/41 of the speed of light.
Can U ever exceed c, for any permitted values of u and v? That's the same as asking, if u and v are less than 1, can (u + v)/(1 + uv) ever be greater than 1? It cannot: (1 + uv) - (u + v) = (1 - u)(1 - v), which is positive whenever u and v are both less than 1, so the numerator is always smaller than the denominator.
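The velocity-combination rule, with all speeds expressed as fractions of c, can also be checked numerically. A sketch (the function name is my own):

```python
from fractions import Fraction

def combine(u, v):
    """Relativistic velocity addition; u and v are fractions of c."""
    return (u + v) / (1 + u * v)

# The worked example: u = v = 4/5 gives 40/41 of lightspeed.
print(combine(Fraction(4, 5), Fraction(4, 5)))  # 40/41

# No pair of sub-light speeds ever combines to reach lightspeed.
for u in (0.5, 0.9, 0.99, 0.999):
    for v in (0.5, 0.9, 0.99, 0.999):
        assert combine(u, v) < 1.0
```

Using exact fractions for the worked example avoids rounding and reproduces the 40/41 of the text directly.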
Now let us take the next step. Let us look at the passage of time. Suppose that A sends a signal ("Hi") and nine seconds later, according to his time frame, sends the same message again. According to rule 2), above, the time between one "Hi" and the next, as measured by us, will be increased by a factor of 5/3. For us, 15 seconds have passed. And so far as B is concerned, since B thinks that A is traveling at 40/41 of lightspeed, an interval of 9/√(1 - (40/41)²) = 41 seconds has passed.
If you happen to be one of those people who read a book from back to front, you may now be feeling confused. In discussing the expansion of the universe in Chapter 4, we point out that signals from objects approaching us have higher frequencies, while signals from objects receding from us have lower frequencies. But here we seem to be saying the exact opposite: the interval between "Hi" signals is longer for O and B than it is for A, which would correspond to a lower frequency.
In fact, that is not the case. We have to allow for the movement of A between transmission of successive signals. When A sends the second "Hi," nine seconds later than the first according to his measurements, 15 seconds have passed for O, and in that time A has moved closer by 15 × 4/5 = 12 light-seconds (a light-second, in analogy to a lightyear, is the distance light travels in one second). Thus the travel time of the second "Hi" is decreased by 12 seconds so far as O is concerned. Hence the time between "Hi"s as measured by O is 15 - 12 = 3 seconds. The signal frequency has increased.
The same is true for B. The time between transmission of "Hi"s is 41 seconds as perceived by B, but in that time the distance between A and B, as measured by B, has decreased by 40 light-seconds. The time between successive "Hi"s is therefore just 41 - 40 = 1 second for B. The signal frequency, so far as B is concerned, has increased even more than it did for O.
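The bookkeeping in the spaceship example can be replayed in a few lines of Python (a sketch, with speeds as fractions of c and times in seconds; the variable names are my own):

```python
import math

v = 4 / 5          # speed of A (and B) toward O, as a fraction of c
t_A = 9.0          # seconds between "Hi" signals aboard A

# Interval between the two emissions as measured by O (time dilation).
gamma_O = 1.0 / math.sqrt(1.0 - v * v)     # 5/3
t_O_emit = t_A * gamma_O                   # 15 s

# A closes v * 15 = 12 light-seconds between emissions, so the
# second signal's travel time is 12 s shorter: O hears them 3 s apart.
t_O_recv = t_O_emit * (1.0 - v)

# As seen by B, A approaches at the combined speed 40/41 of lightspeed.
u = (v + v) / (1.0 + v * v)                # 40/41
gamma_B = 1.0 / math.sqrt(1.0 - u * u)     # 41/9
t_B_emit = t_A * gamma_B                   # 41 s
t_B_recv = t_B_emit * (1.0 - u)            # 1 s

print(round(t_O_recv, 6), round(t_B_recv, 6))
```

The received intervals of 3 seconds and 1 second are both shorter than the 9 seconds aboard A: an approaching source shows a raised frequency, exactly as the text argues.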
If the preceding few paragraphs seem difficult, don't worry about them. My whole point is that the results of relativity theory can be very counter-intuitive when your intuition was acquired in situations where everything moves much less than the speed of light. The moral, from a story-teller's point of view, is be careful when you deal with objects or people moving close to light-speed. An otherwise good book, THE SPARROW (Russell, 1996) was ruined for me by a grotesque error in relativistic time dilation effects. It could have been corrected with a simple change of target star.
Just for the fun of it, let us ask what happens to our signals between A, B, and O if we have a working quantum teleportation device, able to send signals instantaneously. What will the received signals have as their frequencies? No one can give a definite answer to this, but a likely answer is that quantum teleportation is totally unaffected by relative velocities. If that's the case, everyone sends and receives signals as though they were all in close proximity and at rest relative to each other. As a corollary, for quantum teleportation purposes the universe lacks any spatial dimension and can be treated as a single point.
2.6 General relativity. For the general theory of relativity, the main consequences to remember are:
1) The presence of matter (or of energy, which the special theory asserts are two forms of one and the same thing) causes space to curve. What we experience as gravity is a direct measure of the curvature of space.
2) Objects move along the shortest possible path in curved space. Thus, a comet that falls in toward the Sun and then speeds out again following an elongated elliptical trajectory is traveling a minimum-distance path in curved space. In the same way, light that passes a massive gravitational object follows a path significantly bent from the straight line of normal geometry. Light that emanates from a massive object will be lengthened in wavelength as it travels "uphill" out of the gravity field. Note that this is not the "red shift" associated with the recession of distant galaxies, which will be discussed in Chapter 4.
3) If the concentration of matter is high, it is possible for spacetime itself to curve so much that a knowledge of some regions becomes denied to the rest of the universe. The interior of a black hole is just such a region. We are unaware of this in everyday life, simply because the concentrations of matter known to us are too low for the effects to occur.
4) Since matter curves space, the total amount of matter in the universe has an effect on its overall structure. This will become profoundly important in Chapter 4, when we consider the large-scale structure and eventual fate of the universe.
To truly space-faring civilizations, the effects of special and general relativity will be as much a part of their lives as sun, wind, and rain are to us.
2.7 Beyond the atom. Quantum theory and the special theory of relativity together provide the tools for analysis of sub-atomic processes. But we have not yet defined the sub-atomic world to which they apply.
Before the work of Rutherford and J.J. Thomson, the atom was considered a simple, indivisible object. Even after Rutherford's work, the only known sub-atomic particles were electrons and protons (the nucleus of an atom was regarded as a mixture of electrons and protons).
The situation changed in 1932, with the discovery of the positron (a positively charged electron) and the neutron (a particle similar in mass to the proton, but with no charge). At that point the atom came to be regarded as a cloud of electrons encircling a much smaller structure, the nucleus. The nucleus is made up of protons (equal in number to the electrons) and neutrons.
However, this ought to puzzle and worry us. Electrons and protons attract each other, because they are oppositely charged; but protons repel each other. So how is it possible for the nucleus, an assembly of protons and neutrons, to remain intact?
The answer is, through another force of nature, known as the strong force. The strong force is attractive, but it operates only over very short distances (much less than the size of an atom). It holds the nucleus together -- most of the time. Sometimes another force, known as the weak force, causes a nucleus to emit an electron or a positron, and thereby become a different element. To round out the catalog of forces, we also have the familiar electromagnetic force, the one that governs attraction or repulsion of charged particles; and finally, we have the gravitational force, through which any two particles of matter, no matter how small or large, attract each other. The gravitational force ought really to be called the weakest force, since it is many orders of magnitude less powerful than any of the others. It dominates the large-scale structure of the universe only because it applies over long distances, to every single bit of matter.
We have listed four fundamental forces. Are there others?
We know of no others, but that might only be an expression of our ignorance. From time to time, experiments hint at the existence of a "fifth force." Upon closer investigation, the evidence is explained some other way, and the four forces remain. However, it is quite legitimate in science fiction to hypothesize a fifth force, and to give it suitable properties. If you do this, however, be careful. The fifth force must be so subtle, or occur in such extreme circumstances, that we would not have stumbled over it already in our exploration of the universe. You can also, if you feel like it, suggest modifications to the existing four forces, again with suitable caution.
The attempt to explain the four known forces in a single framework, a Theory Of Everything, assumes that no more forces will be discovered. This strikes me as a little presumptuous.
Returning to the discovery of fundamental particles, after the neutron came the neutrino, a particle with no charge and, it was long thought, no mass (recently, experiments have suggested that the neutrino does have a very small mass). The existence of the neutrino had been postulated by Pauli in 1931, in order to retain the principle of the conservation of energy, but it was not actually detected until 1953.
Then, too quickly for the theorists to feel comfortable, came the muon (1937), pions (predicted 1935, discovered 1947), the antiproton (1955), and a host of others, etas and lambdas and sigmas and rhos and omegas.
Quantum theory seemed to provide a theoretical framework suitable for all of these, but in 1960 the basic question "Why are there so many animals in the 'nuclear zoo'?" remained unanswered. In the early 1960s, Murray Gell-Mann and George Zweig independently proposed the existence of a new fundamental particle, the quark, from which all the heavy sub-atomic particles were made.
The quark is a peculiar object indeed. First, its charge is not a whole multiple of the electron or proton charge, but one-third or two-thirds of such a value. There are several varieties of quarks: the "up" and "down" quarks, the "top" and "bottom" quarks, and the "strange" and "charmed" quarks; each may have any of three "colors," red, green, or blue (the whimsical labels are no more than that; they lack physical significance). Taken together, the quarks provide the basis for a theory known as quantum chromodynamics, which is able to describe very accurately the forces that hold the atomic nucleus together.
A theory to explain the behavior of lighter particles (electrons, positrons, and photons) was developed earlier, mainly by Richard Feynman, Julian Schwinger, and Sin-Itiro Tomonaga. Freeman Dyson then proved the consistency and equivalence of the seemingly very different theories. The complete synthesis is known as quantum electrodynamics. Between them, quantum electrodynamics and quantum chromodynamics provide a full description of the sub-atomic world down to the scale at which we are able to measure.
However, the quark is a rather peculiar particle to employ as the basis for a theory. A proton consists of three quarks, two "up" and one "down;" a neutron is one "up" and two "down." Pions each contain just two quarks -- more precisely, a quark and an antiquark. An omega particle consists of three strange quarks. This is all based purely on theory, because curiously enough, no one has ever seen a quark. Theory suggests that we never will. The quark exists only in combination with other quarks. If you try to break a quark free, by hitting a proton or a neutron with high-energy electrons or a beam of radiation, at first nothing happens. However, if you keep increasing the energy of the interaction, something finally does happen. New particles appear -- not free quarks, but more protons, pions, and neutrons. Energy and mass are interchangeable; apply enough energy, and particles appear. The quark, however, keeps its privacy.
I have often thought that a good bumper sticker for a particle physicist would be "Free the Quarks!"
The reluctance of the free quark to put in an appearance makes it very difficult for us to explore its own composition. But we ask the question: What, if anything, is smaller than the quark?
Although recent experiments suggest that the quark does have a structure, no one today knows what it is. We are offshore of the physics mainland, and are allowed to speculate in fictional terms as freely as we choose.
Or almost. There are two other outposts that we need to be aware of in the world of the ultra-small. The proton and the neutron have a radius of about 0.8 × 10⁻¹⁵ meters. If we go to distances far smaller than that, we reach the realm of the superstring.
A superstring is a loop of something not completely defined (energy? force?) that oscillates in a space of ten dimensions. The string vibrations give rise to all the known particles and forces. Each string is about 10⁻³⁵ meters long. We live in a space of four dimensions (three space and one time), and the extra six dimensions of superstring theory are "rolled up" in such a way as to be unobservable. In practice, of course, a superstring has never been observed. The necessary mathematics to describe what goes on is also profoundly difficult.
Why is the concept useful? Mainly, because superstring theory includes gravity in a natural way, which quantum electrodynamics and quantum chromodynamics do not. In fact, superstring theory not only allows gravity to be included, it requires it. We might be closing in on the "Theory Of Everything" already mentioned, explaining the four known fundamental interactions of matter in a single set of equations.
There is a large literature on superstrings. If the concept continues to prove useful, we will surely find ways to make the mathematics more accessible. Remember, the calculus needed to develop Newton's theories was considered impossibly difficult in the seventeenth century. Meanwhile, the science fiction writer can be more comfortable with superstrings than many practicing scientists.
On the same small scale as the superstring we have something known as the Planck length. This is the place where vacuum energy fluctuations, inevitably predicted by quantum theory, radically affect the nature of space. Rather than a smooth, continuous emptiness, the vacuum must now be perceived as a boiling chaos of minute singularities. Tiny black holes constantly form and dissolve, and space has a foam-like character where even the concept of distance may not have meaning. (We have mentioned black holes but not really discussed them, though surely there is no reader who has not heard of them. They are so important a part of the science fiction writer's arsenal that they deserve a whole section to themselves. They can be found in Chapter 3.)
So far as science is concerned, the universe at the scale of the Planck length is true terra incognita, not to be found on any map. I know of no one who has explored its story potential. You, as story-teller, are free to roam as you choose.
2.8 Strange physics: Superconductivity. I was not sure where this ought to be in the book. It is a phenomenon which depends on quantum level effects, but its results show up in the macroscopic world of everyday events. The one thing that I was sure of is that this is too fascinating a subject to leave out, something that came as an absolute and total surprise to scientists when it was discovered, and remained a theoretical mystery for forty years thereafter. If superconductivity is not a fertile subject for writers, nothing is.
Superconductivity was first observed in materials at extremely low temperatures, so that is the logical place to begin.
The deliberate production of low temperatures is, in historical terms, relatively new. Ten thousand years ago, people already knew how to make things hot. It was easy. You put a fire underneath them. But as recently as two hundred years ago, it was difficult to make things cold. There was no "cold generator" that corresponded to fire as a heat generator. Low temperatures were something that came naturally; they were not man-made.
The Greeks and Romans knew that there were ways of lowering the temperature of materials, although they did not use that word, by such things as the mixture of salt and ice. But they had no way of seeking progressively lower temperatures. That had to wait for the early part of the nineteenth century, when Humphry Davy and others found that you could liquefy many gases merely by compressing them. The resulting liquid will be warm, because turning gas to liquid gives off the gas's so-called "latent heat of liquefaction." If you now allow this liquid to reach a thermal balance with its surroundings, and then reduce the pressure on it, the liquid boils; and in so doing, it drains heat from its surroundings -- including itself. The same result can be obtained if you take a liquid at atmospheric pressure, and put it into a partial vacuum. Some of the liquid boils, and what's left is colder. This technique, of "boiling under reduced pressure," was a practical and systematic way of pursuing lower temperatures. It first seems to have been used by a Scotsman, William Cullen, who cooled ethyl ether this way in 1748, but it took another three-quarters of a century before the method was applied to science (and to commerce; the first refrigerator was patented by Jacob Perkins in 1834).
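The numbers behind "boiling under reduced pressure" are easy to check. Here is a rough sketch using the Clausius-Clapeyron relation, with handbook values for water assumed purely for illustration (they are not taken from the text):

```python
import math

# Clausius-Clapeyron relation: ln(P2/P1) = -(L/R) * (1/T2 - 1/T1).
# Handbook values for water, assumed here for illustration only.
R = 8.314          # gas constant, J/(mol K)
L_WATER = 40700.0  # latent heat of vaporization of water, J/mol

def boiling_point(p_ratio, t1_kelvin, latent_heat):
    """Boiling point (K) when pressure is p_ratio times the reference pressure."""
    inv_t2 = 1.0 / t1_kelvin - (R / latent_heat) * math.log(p_ratio)
    return 1.0 / inv_t2

# Water under half an atmosphere boils well below 100 C:
t_half = boiling_point(0.5, 373.15, L_WATER)
print(round(t_half, 1))  # roughly 354 K, i.e. about 81 C instead of 100 C
```

Lowering the pressure always lowers the boiling point, which is why a liquid pumped down in a partial vacuum boils away and chills what remains.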
Another way to cool was found by James Prescott Joule and William Thomson (later Lord Kelvin) in 1852. Named the Joule-Thomson effect, or the Joule-Kelvin effect, it relies on the fact that a gas escaping from a valve into a chamber of lower pressure will, under the right conditions, suffer a reduction in temperature. If the gas entering the valve is first passed in a tube through that lower-temperature region, we have a cycle that will move the chamber to lower and lower temperatures.
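For readers who want the thermodynamics behind this, the cooling is usually summarized by the Joule-Thomson coefficient, a standard textbook expression (not taken from the text):

```latex
\mu_{JT} = \left(\frac{\partial T}{\partial P}\right)_{H}
         = \frac{1}{C_P}\left[\,T\left(\frac{\partial V}{\partial T}\right)_{P} - V\,\right]
```

When the coefficient is positive, which happens below a gas's inversion temperature, expansion to lower pressure at constant enthalpy produces cooling. Hydrogen and helium have inversion temperatures far below room temperature, which is one reason, with hindsight, that they proved so hard to liquefy: they must be pre-cooled before Joule-Thomson expansion will cool them further.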
Through the nineteenth century the Joule-Thomson effect and boiling under reduced pressure permitted the exploration of lower and lower temperatures. The natural question was, how low could you go?
A few centuries ago, there was no answer to that question. There seemed to be no limit to how cold something could get, just as today there is no practical limit to how hot something can become.
The problem of reaching low temperatures was clarified when scientists finally realized, after huge intellectual efforts, that heat is nothing more than motion at the atomic and molecular scale. "Absolute zero" could then be identified as no motion, the temperature of an object when you "took out all the heat." (Purists will object to this statement, since quantum theory tells us that even at absolute zero an object retains a zero-point energy; the thermodynamic definition of absolute zero is instead framed in terms of reversible isothermal processes.)
Absolute zero, it turns out, is reached at a temperature of -273.15 degrees Celsius. Temperatures measured with respect to this value are all positive, and are said to be in Kelvins (written K). One Kelvin is the same size as one degree Celsius, but it is measured with respect to a reference point of absolute zero, rather than to the Celsius zero value of the freezing point of water. We will use the two scales interchangeably, whichever is the more convenient at the time.
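The conversion between the two scales is a simple shift, using the standard value of -273.15°C for absolute zero:

```python
ABSOLUTE_ZERO_C = -273.15  # absolute zero on the Celsius scale

def celsius_to_kelvin(t_c):
    """Convert a Celsius temperature to kelvins."""
    return t_c - ABSOLUTE_ZERO_C

def kelvin_to_celsius(t_k):
    """Convert a temperature in kelvins to Celsius."""
    return t_k + ABSOLUTE_ZERO_C

print(round(celsius_to_kelvin(-268.9), 2))  # helium's boiling point: 4.25 K
print(round(kelvin_to_celsius(20.0), 2))    # liquid hydrogen: -253.15 C
```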
Is it obvious that this absolute zero temperature must be the same for all materials? Suppose that you had two materials which reached their zero heat state at different temperatures. Put them in contact with each other. Then thermodynamics requires that heat should flow from the higher temperature body to the other one, until they both reach the same temperature. Since there is by assumption no heat in either material (each is at its own absolute zero), no heat can flow; and when no heat flows between two bodies in contact, they must be at the same temperature. Thus absolute zero is the same temperature for every material.
Even before an absolute zero point of temperature was identified, people were trying to get down as low in temperature as they could, and also to liquefy gases. Sulfur dioxide (boiling point -10°C) was the first to go, when Monge and Clouet liquefied it in 1799 by cooling in a mixture of ice and salt. De Morveau produced liquid ammonia (boiling point -33°C) in 1799 using the same method, and in 1805 Northmore claimed to have produced liquid chlorine (boiling point -35°C) by simple compression.
In 1834, Thilorier produced carbon dioxide snow (dry ice, sublimation point -78.5°C) for the first time using gas expansion. Soon after that, Michael Faraday, who had earlier (1823) liquefied chlorine, employed a carbon dioxide and ether mixture to reach the record low temperature of -110°C (163 K). He was able to liquefy many gases, but not hydrogen, oxygen, or nitrogen.
In 1877, Louis Cailletet used gas compression to several hundred atmospheres, followed by expansion through a jet, to produce liquid mists of methane (boiling point -164°C), carbon monoxide (boiling point -192°C), and oxygen (boiling point -183°C). He did not, however, manage to collect a volume of liquid from any of these substances.
Liquid oxygen was finally produced in quantity in 1883, by Wroblewski and Olszewski, who reached the lowest temperature to date (-136°C). Two years later they were able to go as low as -152°C, and liquefied both nitrogen and carbon monoxide. In that same year, Olszewski reached a temperature of -225°C (48 K), which remained a record for many years. He was able to produce a small amount of liquid hydrogen for the first time. In 1886, James Dewar invented the Dewar flask (which we think of today as the thermos bottle) that allowed cold, liquefied materials to be stored for substantial periods of time at atmospheric pressure. In 1898, Dewar liquefied hydrogen in quantity and reached a temperature of 20 K. At that point, all known gases had been liquefied.
I have gone a little heavy on the history here, to make the point that most scientific progress is not the huge intellectual leap favored in bad movies. It is more often careful experiments and the slow accretion of facts, until finally one theory can be produced which encompasses all that is known. If a story is to be plausible and involves a major scientific development, then some (invented) history that preceded the development adds a feeling of reality.
However, we have one missing fact in the story so far. What about helium, which has not been mentioned?
In the 1890's, helium was still a near-unknown quantity. The gas had been observed in the spectrum of the Sun by Janssen and Lockyer, in 1868, but it had not been found on earth until the early 1890's. Its properties were not known. It is only with hindsight that we can find good reasons why the gas, when available, proved unusually hard to liquefy.
The periodic table had already been formulated by Mendeleyev, in about 1870. Forty years later, Moseley showed that the table could be written in terms of an element's atomic number, which corresponded to the number of protons in the nucleus of that element.
As other gases were liquefied, a pattern emerged. TABLE 2.1 shows the temperatures where a number of gases change from the gaseous to the liquid state, under normal atmospheric pressure, together with their atomic number and molecular weights.
What happens when we plot the boiling point of an element against its atomic number in the periodic table? For gases, there are clearly two different groups. Radon, xenon, krypton, argon and neon remain gases to much lower temperatures than other materials of similar atomic number. This is even more noticeable if we add a number of other common gases, such as ammonia, acetylene, carbon dioxide, methane, and sulfur dioxide, and look at the variation of their boiling points with their molecular weights. They all boil at much higher temperatures.
Now, radon, xenon, krypton and the others of the low-boiling point group are all inert gases, often known as noble gases, that do not participate in any chemical reactions. TABLE 2.1 also shows that the inert gases of lower atomic number and molecular weight liquefy at lower temperatures. Helium, the second lightest element, is the final member of the inert gas group, and the one with the lowest atomic number. Helium should therefore have an unusually low boiling point.
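The pattern is easy to verify numerically. The boiling points below are approximate handbook values supplied here for illustration; they stand in for TABLE 2.1, which is not reproduced:

```python
# Approximate normal boiling (or sublimation) points in kelvins.
# Handbook values assumed for illustration; TABLE 2.1 is not reproduced here.
pairs = [
    # (inert gas, bp K), (ordinary gas of similar molecular weight, bp K)
    (("Ne", 27.1), ("NH3", 239.8)),
    (("Ar", 87.3), ("CO2", 194.7)),   # CO2 sublimes rather than boils
    (("Kr", 119.9), ("SO2", 263.1)),
]

for (inert, t_inert), (gas, t_gas) in pairs:
    print(f"{inert}: {t_inert} K   vs   {gas}: {t_gas} K")
```

In every pairing the inert gas stays gaseous to a far lower temperature than an ordinary gas of comparable molecular weight, which is exactly the two-group pattern described above.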
It does. All through the late 1890's and early 1900's, attempts to liquefy it failed.
When the Dutch scientist Kamerlingh Onnes finally succeeded, in 1908, the reason for other people's failure became clear. Helium remains liquid until -268.9°C -- 16 degrees lower than liquid hydrogen, and only 4.2 degrees above absolute zero. As for solid helium, not even Onnes' most strenuous efforts could produce it. When he boiled helium under reduced pressure, the liquid helium went to a new form -- but it was a new and strange liquid phase, now known as Helium II, that exists only below 2.2 K. It turns out that the solid phase of helium does not exist at atmospheric pressure, or at any pressure less than 25 atmospheres. It was first produced in 1926, by W. H. Keesom.
The liquefaction of helium looked like the end of the story; it was in fact the beginning.
2.9 Super properties. Having produced liquid helium, Kamerlingh Onnes set about determining its properties. History does not record what he expected to find, but it is fair to guess that he was amazed.
Science might be defined as predicting what you do not know from what you do know, and then measuring to see whether the prediction holds. The biggest scientific advances often occur when what you measure does not agree with what you predict. What Kamerlingh Onnes measured for liquid helium, and particularly for Helium II, was so bizarre that he must have wondered at first what was wrong with his measuring equipment.
One of the things that he measured was viscosity. Viscosity is the gooiness of a substance, though there are more scientific definitions. We usually think of viscosity as applying to something like oil or molasses, but non-gooey substances like water and alcohol have well-defined viscosities.
Onnes tried to determine a value of viscosity for Helium II down around 1 K. He failed. It was too small to measure. As the temperature goes below 2 K, the viscosity of Helium II goes rapidly towards zero. It will flow with no measurable resistance through narrow capillaries and closely-packed powders. Above 2.2 K, the other form of liquid helium, known as Helium I, does have a measurable viscosity, low but highly temperature-dependent.
Helium II also conducts heat amazingly well. At about 1.9 K, where its conductivity is close to a maximum, this form of liquid helium conducts heat about eight hundred times as well as copper at room temperature -- and copper is usually considered an excellent conductor. Helium II is in fact by far the best known conductor of heat.
More disturbing, perhaps, from the experimenter's point of view is Helium II's odd reluctance to be confined. In an open vessel, the liquid creeps in the form of a thin film up the sides of the container, slides out over the rim, and runs down to the lowest available level. This phenomenon can be readily explained, in terms of the very high surface tension of Helium II; but it remains a striking effect to observe.
Liquid helium is not the end of the low-temperature story, and the quest for absolute zero is an active and fascinating field that continues today. New methods of extracting energy from test substances are still being developed, with the most effective ones employing a technique known as adiabatic demagnetization. Invented independently in 1926 by a German, Debye, and an American, Giauque, it was first used by Giauque and MacDougall in 1933, to reach a temperature of 0.25 K. A more advanced version of the same method was applied to nuclear adiabatic demagnetization in 1956 by Simon and Kurti, and they achieved a temperature within a hundred thousandth of a degree of absolute zero. With the use of this method, temperatures as low as a few billionths of a degree have been attained.
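The principle of adiabatic demagnetization can be stated compactly. For an ideal paramagnet the entropy depends on magnetic field and temperature only through their ratio, so reducing the field at constant entropy lowers the temperature in proportion (an idealized textbook relation, not from the text):

```latex
S = S\!\left(\frac{B}{T}\right)
\quad\Longrightarrow\quad
\frac{B_i}{T_i} = \frac{B_f}{T_f},
\qquad T_f = T_i\,\frac{B_f}{B_i}
```

Magnetize the sample in a strong field while a helium bath carries away the heat of magnetization; then isolate it thermally and let the field drop, and the temperature falls with it.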
However, the pursuit of absolute zero is not our main objective, and to pursue it further would take us too far afield. We are interested in another effect that Kamerlingh Onnes found in 1911, when he examined the electrical properties of selected materials immersed in a bath of liquid helium. He discovered that certain pure metals exhibited what is known today as superconductivity.
Below a few Kelvins, the resistance to the passage of an electrical current in these metals drops suddenly to a level too small to measure. Currents that are started in wire loops under these conditions continue to flow, apparently forever, with no sign of dissipation of energy. For pure materials, the cut-off temperature between normal conducting and superconducting is quite sharp, occurring within a couple of hundredths of a degree. Superconductivity today is a familiar phenomenon. At the time when it was discovered, it was an absolutely astonishing finding -- a physical impossibility, less plausible than anti-gravity. Frictional forces must slow all motion, including the motion represented by the flow of an electrical current. Such a current could not therefore keep running, year after year, without dissipation. That seemed like a fundamental law of nature.
Of course, there is no such thing as a law of nature. There is only the Universe, going about its business, while humans scurry around trying to put everything into neat little intellectual boxes. It is amazing that the tidying-up process called physics works as well as it does, and perhaps even more astonishing that mathematics seems important in every box. But the boxes have no reality or permanence; a "law of nature" is useful until we discover cases where it doesn't apply.
In 1911, the general theories that could explain superconductivity were still decades in the future. The full explanation did not arrive until 1957, forty-six years after the initial discovery.
To understand superconductivity, and to explain its seeming impossibility, it is necessary to look at the nature of electrical flow itself.
2.10 Meanwhile, electricity. While techniques were being developed to reach lower and lower temperatures, the new field of electricity and magnetism was being explored in parallel -- sometimes by the same experimenters. Just three years before the Scotsman, William Cullen, found how to cool ethyl ether by boiling it under reduced pressure, von Kleist of Pomerania and van Musschenbroek in Holland independently discovered a way to store electricity. Van Musschenbroek did his work at the University of Leyden -- the same university where, 166 years later, Kamerlingh Onnes would discover superconductivity. The Leyden Jar, as the storage vessel soon became known, was an early form of electrical capacitor. It allowed the flow of current through a wire to take place under controlled and repeatable circumstances.
Just what it was that constituted the current through that wire would remain a mystery for another century and a half. But it was already apparent to Ben Franklin by 1750 that something material was flowing. The most important experiments took place three-quarters of a century later. In 1820, just three years before Michael Faraday liquefied chlorine, the Danish scientist Oersted and then the Frenchman Ampère found that there was a relation between electricity and magnetism -- a flowing current would make a magnet move. In the early 1830's, Faraday then showed that the relationship was a reciprocal one, by producing an electric current from a moving magnet. However, from our point of view an even more significant result had been established a few years before, when in 1827 the German scientist Georg Simon Ohm discovered Ohm's Law: that the current in a wire is given by the ratio of the voltage between the ends of the wire, divided by the wire's resistance.
This result seemed too simple to be true. When Ohm announced it, no one believed him. He was discredited, resigned his teaching position in Cologne, and lived in poverty and obscurity for several years. Finally he was vindicated and recognized, and fourteen years later began to receive awards and medals for his work.
Ohm's Law is important to us because it permits the relationship between the resistance and temperature of a substance to be explored, without worrying about the starting value of the voltage or current. It turns out that the resistance of a conducting material is roughly proportional to its absolute temperature. Just as important, materials vary enormously in their conducting power. For instance, copper allows electricity to pass through it 10²⁰ times as well as quartz or rubber. The obvious question is, why? What makes a good conductor, and what makes a good insulator? And why should a conductor pass electricity more easily at lower temperatures?
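The two observations in this paragraph can be sketched together. The linear temperature model below is only the rough proportionality described above, not an exact law; real metals level off to a residual resistance at low temperatures, and superconductors abandon the rule entirely:

```python
def current(voltage, resistance):
    """Ohm's Law: I = V / R."""
    return voltage / resistance

def resistance_at(r_ref, t_ref_k, t_k):
    """Rough model from the text: resistance proportional to absolute temperature."""
    return r_ref * (t_k / t_ref_k)

# A hypothetical wire of 1 ohm at room temperature (293 K),
# dipped in liquid nitrogen (77 K):
r_cold = resistance_at(1.0, 293.0, 77.0)
print(round(r_cold, 2))  # about 0.26 ohm

# Same voltage, colder wire, more current:
print(current(10.0, r_cold) > current(10.0, 1.0))  # True
```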
The answers to these questions were developed little by little through the rest of the nineteenth century. First, heat was discovered to be no more than molecular and atomic motion. Thus changes of electrical resistance had somehow to be related to those same motions.
Second, in the 1860's, James Clerk Maxwell, the greatest physicist of the century, developed Faraday and Ampère's experimental results into a consistent and complete mathematical theory of electricity and magnetism, finally embodied in four famous differential equations. All observed phenomena of electricity and magnetism must fit into the framework of that theory.
Third, scientists began to realize that metals, and many other materials that conduct electricity well, have a regular structure at the molecular level. The atoms and molecules of these substances are arranged in a regular three-dimensional grid pattern, termed a lattice, and held in position by inter-atomic electrical forces.
Finally, in 1897, J.J. Thomson found the elusive carrier of the electrical current. He originally termed it the "corpuscle," but it soon found its present name, the electron. All electrical currents in metals are carried by electrons.
Again, lots of history before we have the tools in hand to understand the flow of electricity through conductors -- but not yet, as we shall see, to explain superconductivity.
An electrical current is the movement of electrons. Thus a good conductor must have plenty of electrons that can move readily; these are termed free electrons. An insulator has few or no free electrons; in such materials the electrons are all bound to atoms.
If the atoms of a material maintain exact, regularly-spaced positions, it is very easy for free electrons to move past them, and hence for current to flow. In fact, electrons are not interfered with at all if the atoms in the material stand in a perfectly regular array. However, if the atoms in the lattice can move randomly, or if there are imperfections in the lattice, the electrons are then impeded in their progress, and the resistance of the material increases.
This is exactly what happens when the temperature goes up. Recalling that heat is random motion, we expect that atoms in hot materials will jiggle about on their lattice sites with the energy provided by increased heat. The higher the temperature, the greater the movement, and the greater the obstacle to free electrons. Therefore the resistance of conducting materials increases with increasing temperature.
This was all well-known by the 1930's. Electrical conduction could be calculated very well by the new quantum theory, thanks largely to the efforts of Arnold Sommerfeld, Felix Bloch, Rudolf Peierls and others. However, those same theories predicted a steady decline of electrical resistance as the temperature went towards absolute zero. Nothing predicted, or could explain, the precipitous drop to zero resistance that was encountered in some materials at their critical temperature. Superconductivity remained a mystery for another quarter of a century. To provide its explanation, it is necessary to delve a little farther into quantum theory itself.
2.11 Superconductivity and statistics. Until late 1986, superconductivity was a phenomenon never encountered at temperatures above 23 K, and usually at just a couple of degrees Kelvin. Even 23 K is below the boiling point of everything except hydrogen (20 K) and helium (4.2 K). Most superconductors become so only at far lower temperatures (see TABLE 2.2). Working with them is thus a tiresome business, since such low temperatures are expensive to achieve and hard to maintain. Let us term superconductivity below 20 K "classical superconductivity," and for the moment confine our attention to it. TABLE 2.2 shows the temperature at which selected materials become superconducting when no magnetic field is present.
Note that all these temperatures are below the temperature of liquid hydrogen (20 K), which means that superconductivity cannot be induced by bathing the metal sample in a liquid hydrogen bath, although such an environment is today readily produced. For many years, the search was for a material that would sustain superconductivity above 20 K.
For another fifteen years after the 1911 discovery of superconductivity, there seemed little hope of explaining it. However, in the mid-1920's a new tool, quantum theory, encouraged physicists to believe that they at last had a theoretical framework that would explain all phenomena of the sub-atomic world. In the late 1920's and 1930's, hundreds of previously-intractable problems yielded to a quantum mechanical approach. And the importance of a new type of statistical behavior became clear.
On the atomic and nuclear scale, particles and systems of particles can be placed into two well-defined and separate groups. Electrons, protons, neutrons, positrons, muons and neutrinos all satisfy what is known as Fermi-Dirac statistics, and they are collectively known as fermions. For our purposes, the most important point about such particles is that their behavior is subject to the Pauli Exclusion Principle, which states that no two identical particles obeying Fermi-Dirac statistics can have the same values for all physical variables (so, for example, two electrons in an atom cannot have the same spin, and the same energy level). The Pauli Exclusion Principle imposes very strong constraints on the motion and energy levels of identical fermions, within atoms and molecules, or moving in an atomic lattice.
The other kind of statistics is known as Bose-Einstein statistics, and it governs the behavior of photons, alpha particles (i.e. helium nuclei), and mesons. These are all termed bosons. The Pauli Exclusion Principle does not apply to systems satisfying Bose-Einstein statistics, so bosons are permitted to have the same values of all physical variables; in fact, since they seek to occupy the lowest available energy level, they will group around the same energy.
In human terms, fermions are loners, each with its own unique state; bosons love a crowd, and they all tend to jam into the same state.
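The two kinds of statistics can be compared directly through their average occupation numbers. These are the standard Fermi-Dirac and Bose-Einstein formulas; the particular energies chosen below are hypothetical, picked only to show the contrast:

```python
import math

def fermi_dirac(energy, mu, kT):
    """Average occupancy of a fermion state: never exceeds 1 (the exclusion principle)."""
    return 1.0 / (math.exp((energy - mu) / kT) + 1.0)

def bose_einstein(energy, mu, kT):
    """Average occupancy of a boson state (valid for energy > mu): can grow without limit."""
    return 1.0 / (math.exp((energy - mu) / kT) - 1.0)

# A fermion state well below the chemical potential is full -- but never more than full:
print(round(fermi_dirac(-5.0, 0.0, 1.0), 3))    # 0.993

# A boson state just above the chemical potential is massively occupied:
print(round(bose_einstein(0.01, 0.0, 1.0), 1))  # about 99.5
```

The contrast is the whole story of the section: no fermion state ever holds more than one particle, while bosons pile without limit into the lowest states available.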
Single electrons are, as stated, fermions. At normal temperatures, which are all well above a few Kelvins, electrons in a metal are thus distributed over a range of energies and momenta, as required by the Pauli Exclusion Principle.
In 1950, H. Fröhlich suggested a strange possibility: that the fundamental mechanism responsible for superconductivity was somehow the interaction of free electrons with the atomic lattice. This sounds at first sight highly improbable, since it is exactly this lattice that is responsible for the resistance of metals at normal temperatures. However, Fröhlich had theoretical reasons for his suggestion, and in that same year, 1950, there was experimental evidence -- unknown to Fröhlich -- that also suggested the same thing: superconductivity is caused by electron-lattice interactions.
This does not, of course, explain superconductivity. The question is, what does the lattice do? What can it possibly do, that would give rise to superconducting materials? Somehow the lattice must affect the free electrons in a fundamental way, but in a way that is able to produce an effect only at low temperatures.
The answer was provided by John Bardeen, Leon Cooper, and Robert Schrieffer, in 1957 (they got the physics Nobel prize for this work in 1972). They showed that the atomic lattice causes free electrons to pair off. Instead of single electrons, moving independently of each other, the lattice encourages the formation of electron couplets, which can then each be treated as a unit. The coupling force is tiny, and if there is appreciable thermal energy available it is enough to break the bonds between the electron pairs. Thus any effect of the pairing should be visible only at very low temperatures. The role of the lattice in this pairing is absolutely fundamental, yet at the same time the lattice does not participate in the pairing -- it is more as if the lattice is a catalyst, which permits the electron pairing to occur but is not itself affected by that pairing.
The pairing does not mean that the two electrons are close together in space. It is a pairing of angular momentum, in such a way that the total angular momentum of a pair is zero. The two partners may be widely separated in space, with many other electrons between them; like husbands and wives at a crowded party, paired electrons remain paired however far apart they may be.
Now for the fundamental point implied by the work of Bardeen, Cooper, and Schrieffer. Once two electrons are paired, that pair behaves like a boson, not a fermion. Any number of these electron pairs can be in the same low-energy state. More than that, when a current is flowing (so all the electron pairs are moving) it takes more energy to stop the flow than to continue it. To stop the flow, some boson (electron pair) will have to move to a different energy level; and as we already remarked, bosons like to be in the same state.
To draw the chain of reasoning again: superconductivity is a direct result of the boson nature of electron pairs; electron pairs are the direct result of the mediating effect of the atomic lattice; and the energy holding the pairs together is very small, so that they exist only at very low temperatures, when no heat energy is around to break up the pairing.
2.12 High-temperature superconductors. We now have a very tidy explanation of classical superconductivity, one that suggests we will never find anything that behaves as a superconductor at more than a few degrees above absolute zero. Thus the discovery of materials that turn into superconductors at much higher temperatures is almost an embarrassment. Let's look at them and see what is going on.
The search for high-temperature superconductors began as soon as superconductivity itself was discovered. Since there was no good theory before the 1950s to explain the phenomenon, there was also no reason to assume that a material could not be found that exhibited superconductivity at room temperature, or even above it. That, however, was not the near-term goal. The main hope of researchers in the field was more modest, to find a material with superconductivity well above the temperature of liquid hydrogen (20 K). Scientists would certainly have loved to find something better yet, perhaps a material that remained superconducting above the temperature of liquid nitrogen (77 K). That would have allowed superconductors to be readily used in many applications, from electromagnets to power transmission. But as recently as December, 1986, that looked like an impossible dream.
The first signs of the breakthrough had come early that year. In January, 1986, Alex Müller and Georg Bednorz, at the IBM Research Division in Zurich, Switzerland, produced superconductivity in a ceramic sample containing barium, copper, oxygen, and lanthanum (one of the rare-earth elements). The temperature was 11 K, which was not earth-shaking, but much higher than anyone might have expected. Müller and Bednorz knew they were on to something good. They produced new ceramic samples, and little by little worked the temperature for the onset of superconductivity up to 30 K. The old record, established in 1973, had been 23 K. By November, Paul Chu and colleagues at the University of Houston, and Tanaka and Kitazawa at the University of Tokyo had repeated the experiments, and also found the material superconducting at 30 K.
Once those results were announced, every experimental team engaged in superconductor research jumped onto the problem. In December, Robert Cava, Bruce van Dover, and Bertram Batlogg at Bell Labs had produced superconductivity in a strontium-lanthanum-copper-oxide combination at 36 K. Also in December, 1986, Chu and colleagues had positive results over 50 K.
In January, 1987, there was another astonishing breakthrough. Chu and his fellow workers substituted yttrium, a metal with many rare-earth properties, for lanthanum in the ceramic pellets they were making. The resulting samples went superconducting at 90 K. The researchers could hardly believe their result, but within a few days they had pushed up to 93 K, and had a repeatable, replicable procedure. Research groups in Tokyo and in Beijing also reported results above 90 K in February.
Recall that liquid nitrogen boils at 77 K. For the first time, superconductors had passed the "nitrogen barrier." In a bath of that liquid, a ceramic wire using yttrium, barium, copper, and oxygen was superconducting.
The end of the road has still not been reached. There have been hints of superconductive behavior at 234 K. This is about -39 C, just a few degrees below the temperature at which ammonia boils.
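A quick arithmetic check on the temperatures just quoted (celsius = kelvin - 273.15; ammonia boils near 239.8 K at one atmosphere):

```python
# Unit check on the temperatures quoted above.
def kelvin_to_celsius(t_k):
    """Convert kelvin to degrees Celsius."""
    return t_k - 273.15

hint = kelvin_to_celsius(234.0)        # reported superconductivity hint
ammonia_bp = kelvin_to_celsius(239.8)  # boiling point of ammonia at 1 atm

print(f"234 K = {hint:.1f} C")              # about -39 C
print(f"ammonia boils at {ammonia_bp:.1f} C")  # about -33 C
```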
Fascinating, and the natural question is, can room-temperature superconductors, the Holy Grail of this field, ever be produced?
Unfortunately, the question cannot be answered. There is no accepted model to explain what is going on, and it would not be unfair to say that at the moment experiment is still ahead of theory.
The Bardeen, Cooper, and Schrieffer (BCS) theory of superconductivity leads to a very weak binding force between electron pairs. Thus according to this theory the phenomenon ought not to occur at 90 K, still less at 234 K. At the same time, the theory tells us that any superconductivity, high-temperature or otherwise, is almost certainly the result of free electrons forming into pairs, and then behaving as bosons. In classical superconductivity, at just a few degrees above absolute zero, the mediating influence that operates to form electron pairs can be shown to be the atomic lattice itself. That result, in quantitative form, comes from the BCS approach. The natural question to ask is, what other factor could work to produce electron pairs? To be useful, it must bind the electron pairs strongly, otherwise they would be dissociated by the plentiful thermal energy at higher temperatures. And any electron pairs so produced must be free to move, in order to carry the electric current.
Again, we are asking questions that take us beyond the frontiers of today's science. Any writer has ample scope for speculation.
2.13 Making it work. Does this mean that we now have useful, work-horse superconductors above the temperature of liquid nitrogen, ready for industrial applications? It looks like it. But there are complications.
Soon after Kamerlingh Onnes discovered superconductivity, he also discovered (in 1913) that superconductivity was destroyed when he sent a large current through the material. This is a consequence of the effect that Oersted and Ampère had noticed in 1820; namely, that an electric current creates a magnetic field. The temperature at which a material goes superconducting is lowered when it is placed in a magnetic field. That is why the stipulation was made in TABLE 2.2 that those temperatures apply only when no magnetic field is present. A large current creates its own magnetic field, so it may itself destroy the superconducting property.
For a superconductor to be useful in power transmission, it must remain superconducting even though the current through it is large. Thus we want the critical temperature to be insensitive to the current through the sample. One concern was that the new high-temperature superconductors might perform poorly here, and the first samples made were in fact highly affected by imposed magnetic fields. However, some of the new superconductors have been found to remain superconducting at current densities up to 1,000 amperes per square millimeter, and this is more than adequate for power transmission.
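To get a feel for what a critical current density of 1,000 amperes per square millimeter means in practice, consider a round wire. The 2 mm diameter below is an illustrative choice, not a figure from any actual cable design:

```python
# What a critical current density of 1,000 A per square millimetre
# implies for a wire of ordinary size.  The 2 mm diameter is an
# illustrative value only.
import math

J_CRIT = 1000.0  # critical current density, amperes per square millimetre

def max_current(diameter_mm):
    """Largest current a round wire can carry while superconducting."""
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    return J_CRIT * area_mm2

# A wire thinner than a pencil lead carries thousands of amperes.
print(f"2 mm wire: {max_current(2.0):.0f} A")  # ~3142 A
```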
A second concern is a practical one: Can the new materials be worked with, to make wires and coils that are not too brittle or too variable in quality? Again the answers are positive. The ceramics can be formed into loops and wires, and they are not unduly brittle or fickle in behavior.
The only thing left is to learn where the new capability of high-temperature superconductors will be most useful. Some applications are already clear.
First, we will see smaller and faster computers, where carrying off the heat generated by electrical currents in densely packed components is a major problem. This application will exist even if the highest-temperature superconductors cannot tolerate high current densities.
Second, as Ampère demonstrated in the 1820s, any tightly wound coil of wire with a current running through it becomes an electromagnet. Superconducting coils can produce very powerful magnets of this type, which will keep their magnetic properties without consuming any energy. Today's electromagnets that operate at room temperature are limited in their strength, because very large currents through the coils also produce large heating effects.
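The strength of such a magnet can be estimated from the textbook formula for the field inside a long solenoid, B = mu0 * n * I. The turn density and currents below are illustrative values, chosen only to show why large, loss-free currents matter:

```python
# Field at the centre of an ideal long solenoid, B = mu0 * n * I.
# The turn density and currents are illustrative values.
MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, T*m/A

def solenoid_field(turns_per_metre, current_amps):
    """Magnetic field (tesla) inside an ideal long solenoid."""
    return MU0 * turns_per_metre * current_amps

# A copper coil at 1 A versus a superconducting coil at 100 A:
print(f"{solenoid_field(10000, 1):.3f} T")    # ~0.013 T
print(f"{solenoid_field(10000, 100):.3f} T")  # ~1.257 T, with zero ohmic loss
```

The hundredfold increase in current gives a hundredfold increase in field, and in the superconducting coil it costs no power at all.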
Third, superconductors have another important property that we have not so far mentioned, namely, they do not allow a magnetic field to be formed within them. In the language of electromagnetic theory, they are perfectly diamagnetic. This is known as the Meissner Effect, discovered by Walther Meissner and Robert Ochsenfeld in 1933. It could easily have been found in 1913, but it was considered so unlikely a possibility that no one did the experiment to test superconductor diamagnetism for another twenty years.
As a consequence of the Meissner Effect, a superconductor that is near a magnet will form an electric current layer on its surface. That layer is such that the superconductor is then strongly repelled by the magnetic field, rather than being attracted to it. This permits a technique known as magnetic levitation to be used to lift and support heavy objects. Magnets, suspended above a line of superconductors, will remain there without needing any energy to hold them up. Friction-free support systems are the result, and they should be useful in everything from transportation to factory assembly lines. For many years, people have talked of super-speed trains, suspended by magnetic fields and running at a fraction of the operating costs of today's locomotives. When superconductors could operate only when cooled to liquid hydrogen temperatures and below, such transportation ideas were hopelessly expensive. With high-temperature superconductors, they become economically attractive.
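The lifting ability involved can be estimated from the magnetic pressure B**2 / (2 * mu0) that an ideal diamagnet feels when it excludes a field B. The 0.5 tesla field below is an illustrative value, not a figure for any particular levitation system:

```python
# Rough estimate of the lifting pressure available from the Meissner
# effect: an ideal diamagnet excluding a field B experiences a magnetic
# pressure of B**2 / (2 * mu0).  The 0.5 T field is illustrative only.
MU0 = 1.2566370614e-6  # vacuum permeability, T*m/A

def magnetic_pressure(b_tesla):
    """Pressure (pascals) exerted by an excluded magnetic field."""
    return b_tesla ** 2 / (2.0 * MU0)

p = magnetic_pressure(0.5)
print(f"{p / 1000:.0f} kPa")  # ~99 kPa, about one atmosphere
# Spread over one square metre, that pressure supports roughly ten
# tonnes of weight -- ample for a train car.
```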
And of course, there is the transmission of electrical power. Today's transmission grids are full of transformers that boost the electrical signal to hundreds of thousands of volts for sending the power through the lines, and then bring that high-voltage signal back down to a hundred volts or so for household use. However, the only reason for doing this is to minimize energy losses. Line heating is less when electrical power is transmitted at low current and high voltage, so the higher the voltage, the better. With superconductors, however, there are no heat dissipation losses at all. Today's elaborate system of transformers would be unnecessary. The implications are enormous: the entire electrical transmission systems of the world could be replaced by an alternative that is less expensive both to build and to operate.
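The voltage argument can be made concrete. For a fixed delivered power P over a line of resistance R, the current is I = P / V and the ohmic loss is I**2 * R, so losses fall as 1 / V**2. The 10-ohm line resistance and the power and voltage figures below are illustrative values, not data for any real grid:

```python
# Why grids transmit at high voltage: for fixed power P and line
# resistance R, current is I = P / V, and line loss I**2 * R falls as
# 1 / V**2.  All figures here are illustrative, not real grid data.
def loss_fraction(power_watts, volts, line_resistance_ohms):
    """Fraction of transmitted power dissipated as heat in the line."""
    current = power_watts / volts
    return current ** 2 * line_resistance_ohms / power_watts

P = 100e6  # 100 MW delivered over the same hypothetical 10-ohm line

print(f"{loss_fraction(P, 345e3, 10.0):.2%}")  # at 345 kV: under 1% lost
print(f"{loss_fraction(P, 100e3, 10.0):.0%}")  # at 100 kV: 10% lost as heat
```

A superconducting line sets R to zero, and with it the entire rationale for the transformer chain.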
However, before anyone embarks on such an effort, they will want to be sure that the technology has gone as far as it is likely to go. It would be crazy to start building a power-line system based on the assumption that the superconductors need to be cooled to liquid nitrogen or liquid ammonia temperatures, if next year sees the discovery of a material that remains superconducting at room temperature and beyond.
Super-computers, heavy lift systems, magnetically cushioned super-trains, cheap electrical power transmission; these are the obvious prospects. Are there other important uses that have not yet been documented?
Almost certainly, there are. We simply have to think of them; and then, before scientists prove that our ideas are impossible, it would be nice to write and publish stories about them. It will not be enough, by the way, to simply refer to room-temperature superconductors. That was done long ago, by me among others ("A Certain Place In History," GALAXY, 1977).
The boiling points of gases.
TABLE 2.2. Temperatures at which materials become superconducting (in the absence of a magnetic field).