The Fabric of the Cosmos: Space, Time, and the Texture of Reality - Brian Greene (2004)
Part II. TIME AND EXPERIENCE
Chapter 7. Time and the Quantum
INSIGHTS INTO TIME’S NATURE FROM THE QUANTUM REALM
When we think about something like time, something we are within, something that is fully integrated into our day-to-day existence, something that is so pervasive, it is impossible to excise—even momentarily—from common language, our reasoning is shaped by the preponderance of our experiences. These day-to-day experiences are classical experiences; with a high degree of accuracy, they conform to the laws of physics set down by Newton more than three centuries ago. But of all the discoveries in physics during the last hundred years, quantum mechanics is far and away the most startling, since it undermines the whole conceptual schema of classical physics.
So it is worthwhile to expand upon our classical experiences by considering some experiments that reveal eyebrow-raising features of how quantum processes unfold in time. In this broadened context, we will then continue the discussion of the last chapter and ask whether there is a temporal arrow in the quantum mechanical description of nature. We will come to an answer, but one that is still controversial, even among physicists. And once again it will take us back to the origin of the universe.
The Past According to the Quantum
Probability played a central role in the last chapter, but as I stressed there a couple of times, it arose only because of its practical convenience and the utility of the information it provides. Following the exact motion of the 10²⁴ H₂O molecules in a glass of water is well beyond our computational capacity, and even if it were possible, what would we do with the resulting mountain of data? To determine from a list of 10²⁴ positions and velocities whether there were ice cubes in the glass would be a Herculean task. So we turned instead to probabilistic reasoning, which is computationally tractable and, moreover, deals with the macroscopic properties—order versus disorder, for example, ice versus water—we are generally interested in. But keep in mind that probability is by no means fundamentally stitched into the fabric of classical physics. In principle, if we knew precisely how things were now—knew the positions and velocities of every single particle making up the universe—classical physics says we could use that information to predict how things would be at any given moment in the future or how they were at any given moment in the past. Whether or not you actually follow its moment-to-moment development, according to classical physics you can talk about the past and the future, in principle, with a confidence that is controlled by the detail and the accuracy of your observations of the present.1
Probability will also play a central role in this chapter. But because probability is an inescapable element of quantum mechanics, it fundamentally alters our conceptualization of past and future. We’ve already seen that quantum uncertainty prevents simultaneous knowledge of exact positions and exact velocities. Correspondingly, we’ve also seen that quantum physics predicts only the probability that one or another future will be realized. We have confidence in these probabilities, to be sure, but since they are probabilities we learn that there is an unavoidable element of chance when it comes to predicting the future.
When it comes to describing the past, there is also a critical difference between classical and quantum physics. In classical physics, in keeping with its egalitarian treatment of all moments in time, the events leading up to something we observe are described using exactly the same language, employing exactly the same attributes, we use to describe the observation itself. If we see a fiery meteor in the night sky, we talk of its position and its velocity; if we reconstruct how it got there, we also talk of a unique succession of positions and velocities as the meteor hurtled through space toward earth. In quantum physics, though, once we observe something we enter the rarefied realm in which we know something with 100 percent certainty (ignoring issues associated with the accuracy of our equipment, and the like). But the past—by which we specifically mean the “unobserved” past, the time before we, or anyone, or anything has carried out a given observation—remains in the usual realm of quantum uncertainty, of probabilities. Even though we measure an electron’s position as right here right now, a moment ago all it had were probabilities of being here, or there, or way over there.
And as we’ve seen, it is not that the electron (or any particle for that matter) really was located at only one of these possible positions, but we simply don’t know which.2 Rather, there is a sense in which the electron was at all of the locations, because each of the possibilities—each of the possible histories—contributes to what we now observe. Remember, we saw evidence of this in the experiment, described in Chapter 4, in which electrons were forced to pass through two slits. Classical physics, which relies on the commonly held belief that happenings have unique, conventional histories, would say that any electron that makes it to the detector screen went through either the left slit or the right slit. But this view of the past would lead us astray: it would predict the results illustrated in Figure 4.3a, which do not agree with what actually happens, as illustrated in Figure 4.3b. The observed interference pattern can be explained only by invoking an overlap between something that passes through both slits.
Quantum physics provides just such an explanation, but in doing so it drastically changes our stories of the past—our descriptions of how the particular things we observe came to be. According to quantum mechanics, each electron’s probability wave does pass through both slits, and because the parts of the wave emerging from each slit commingle, the resulting probability profile manifests an interference pattern, and hence the electron landing positions do, too.
Compared with everyday experience, this description of the electron’s past in terms of criss-crossing waves of probability is thoroughly unfamiliar. But, throwing caution to the wind, you might suggest taking this quantum mechanical description one step further, leading to a yet more bizarre-sounding possibility. Maybe each individual electron itself actually travels through both slits on its way to the screen, and the data result from an interference between these two classes of histories. That is, it’s tempting to think of the waves emerging from the two slits as representing two possible histories for an individual electron—going through the left slit or going through the right slit—and since both waves contribute to what we observe on the screen, perhaps quantum mechanics is telling us that both potential histories of the electron contribute as well.
Surprisingly, this strange and wonderful idea—the brainchild of the Nobel laureate Richard Feynman, one of the twentieth century’s most creative physicists—provides a perfectly viable way of thinking about quantum mechanics. According to Feynman, if there are alternative ways in which a given outcome can be achieved—for instance, an electron hits a point on the detector screen by traveling through the left slit, or hits the same point on the screen but by traveling through the right slit—then there is a sense in which the alternative histories all happen, and happen simultaneously. Feynman showed that each such history would contribute to the probability that their common outcome would be realized, and if these contributions were correctly added together, the result would agree with the total probability predicted by quantum mechanics.
Feynman called this the sum over histories approach to quantum mechanics; it shows that a probability wave embodies all possible pasts that could have preceded a given observation, and illustrates well that to succeed where classical physics failed, quantum mechanics had to substantially broaden the framework of history.3
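Feynman's rule lends itself to a short calculation. The sketch below is a toy model of my own, not anything from the text: the wavelength, slit spacing, and screen geometry are invented, and each history is reduced to a single complex number whose phase tracks the path length. Adding the amplitudes for the two histories first and then squaring produces the interference cross-term; squaring each history separately, as classical reasoning would have it, does not.

```python
import numpy as np

# Toy double-slit model (all numbers invented for illustration): each of
# the two histories, "went through the left slit" and "went through the
# right slit", contributes a complex amplitude whose phase is fixed by
# the length of that path to a given point x on the screen.
lam, d, L = 0.5e-6, 10e-6, 1.0        # wavelength, slit spacing, screen distance (meters)
k = 2 * np.pi / lam                    # wave number

x = np.linspace(-0.2, 0.2, 1001)       # positions along the detector screen
path_left = np.sqrt(L**2 + (x - d / 2)**2)
path_right = np.sqrt(L**2 + (x + d / 2)**2)

amp_left = np.exp(1j * k * path_left)    # amplitude of the left-slit history
amp_right = np.exp(1j * k * path_right)  # amplitude of the right-slit history

# Quantum rule: sum the histories, then square -> interference fringes.
p_quantum = np.abs(amp_left + amp_right)**2
# Classical rule: square each history separately -> no fringes.
p_classical = np.abs(amp_left)**2 + np.abs(amp_right)**2
```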
To Oz
There is a variation on the double-slit experiment in which the interference between alternative histories is made even more evident because the two routes to the detector screen are more fully separated. It is a little easier to describe the experiment using photons rather than electrons, so we begin with a photon source—a laser—and we fire it toward what is known as a beam splitter. This device is made from a half-silvered mirror, like the kind used for surveillance, which reflects half of the light that hits it while allowing the other half to pass through. The initial single light beam is thus split in two, the left beam and the right beam, similar to what happens to a light beam that impinges on the two slits in the double-slit setup. Using judiciously placed fully reflecting mirrors, as in Figure 7.1, the two beams are brought back together further downstream at the location of the detector. Treating the light as a wave, as in the description by Maxwell, we expect—and, indeed, we find—an interference pattern on the screen. The length of the journey to all but the center point on the screen is slightly different for the left and right routes, and so while the left beam might be reaching a peak at a given point on the detector screen, the right beam might be reaching a peak, a trough, or something in between. The detector records the combined height of the two waves and hence displays the characteristic interference pattern.
Figure 7.1 (a) In a beam-splitter experiment, laser light is split into two beams that travel two separate paths to the detector screen. (b) The laser can be turned down so that it fires individual photons; over time, the photon impact locations build up an interference pattern.
The classical/quantum distinction becomes apparent as we drastically lower the intensity of the laser so that it emits photons singly, say, one every few seconds. When a single photon hits the beam splitter, classical intuition says that it will either pass through or be reflected. Classical reasoning doesn’t even allow a hint of any kind of interference, since there is nothing to interfere with: all we have are single, individual, particulate photons passing from source to detector, one by one, some going left, some going right. But when the experiment is done, the individual photons recorded over time, much as in Figure 4.4, do yield an interference pattern, as in Figure 7.1b. According to quantum physics, the reason is that each detected photon could have gotten to the detector by the left route or by the right route. Thus, we are obliged to combine these two possible histories in determining the probability that a photon will hit the screen at one particular point or another. When the left and right probability waves for each individual photon are merged in this way, they yield the undulating probability pattern of wave interference. And so, unlike Dorothy, who is perplexed when the Scarecrow points both left and right in giving her directions to Oz, the data can be explained perfectly by imagining that each photon takes both left and right routes toward the detector.
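For readers who like to see the bookkeeping, here is a minimal sketch of the beam-splitter arithmetic. Everything in it is an assumption of the sketch rather than a detail from the text: it uses the common convention that a 50/50 splitter transmits with amplitude 1/√2 and reflects with amplitude i/√2, and it recombines the two paths at a second splitter rather than on a screen (the Mach-Zehnder variant), so that the phase accumulated along the paths steers a single photon's detection probability between 0 and 1.

```python
import numpy as np

# Minimal Mach-Zehnder-style sketch (conventions assumed, not from the text):
# a 50/50 beam splitter maps the (left-path, right-path) amplitudes by a
# unitary matrix; transmission gets amplitude 1/sqrt(2), reflection 1j/sqrt(2).
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

def click_probability(phase):
    """Probability of detecting the photon at one output, as a function of
    the relative phase the two internal paths accumulate."""
    state = BS @ np.array([1.0 + 0j, 0.0])   # one photon enters one input port
    state[1] *= np.exp(1j * phase)            # extra path length on one route
    state = BS @ state                        # the two routes are recombined
    return abs(state[0])**2                   # Born rule at the detector

# Sweeping the phase traces out the fringes, one photon at a time:
for phase in np.linspace(0, 2 * np.pi, 5):
    print(f"phase {phase:4.2f}  ->  P = {click_probability(phase):.2f}")
```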
Prochoice
Although we have described the merging of possible histories in the context of only a couple of specific examples, this way of thinking about quantum mechanics is general. Whereas classical physics describes the present as having a unique past, the probability waves of quantum mechanics enlarge the arena of history: in Feynman’s formulation, the observed present represents an amalgam—a particular kind of average— of all possible pasts compatible with what we now see.
In the case of the double-slit and beam-splitter experiments, there are two ways for an electron or photon to get from the source to the detector screen—going left or going right—and only by combining the possible histories do we get an explanation for what we observe. If the barrier had three slits, we’d have to take account of three kinds of histories; with 300 slits, we’d need to include the contributions of the whole slew of resulting possible histories. Taking this to the limit, if we imagine cutting an enormous number of slits—so many, in fact, that the barrier effectively disappears—quantum physics says that each electron would then traverse every possible path on its way to a particular point on the screen, and only by combining the probabilities associated with each such history could we explain the resulting data. That may sound strange. (It is strange.) But this bizarre treatment of times past explains the data of Figure 4.4, Figure 7.1b, and every other experiment dealing with the microworld.
You might wonder how literally you should take the sum over histories description. Does an electron that strikes the detector screen really get there by traveling along all possible routes, or is Feynman’s prescription merely a clever mathematical contrivance that gets the right answer? This is among the key questions for assessing the true nature of quantum reality, so I wish I could give you a definitive answer. But I can’t. Physicists often find it extremely useful to envision a vast assemblage of combining histories; I use this picture in my own research so frequently that it certainly feels real. But that’s not the same thing as saying that it is real. The point is that quantum calculations unambiguously tell us the probability that an electron will land at one or another point on the screen, and these predictions agree with the data, spot on. As far as the theory’s verification and predictive utility are concerned, the story we tell of how the electron got to that point on the screen is of little relevance.
But surely, you’d continue to press, we can settle the issue of what really happens by changing the experimental setup so that we can also watch the supposed fuzzy mélange of possible pasts melding into the observed present. It’s a good suggestion, but we already know that there has to be a hitch. In Chapter 4, we learned that probability waves are not directly observable; since Feynman’s coalescing histories are nothing but a particular way of thinking about probability waves, they, too, must evade direct observation. And they do. Observations cannot tease apart individual histories; rather, observations reflect averages of all possible histories. So, if you change the setup to observe the electrons in flight, you will see each electron pass by your additional detector in one location or another; you will never see any fuzzy multiple histories. When you use quantum mechanics to explain why you saw the electron in one place or another, the answer will involve averaging over all possible histories that could have led to that intermediate observation. But the observation itself has access only to histories that have already merged. By looking at the electron in flight, you have merely pushed back the notion of what you mean by a history. Quantum mechanics is starkly efficient: it explains what you see but prevents you from seeing the explanation.
You might further ask: Why, then, is classical physics—commonsense physics—which describes motion in terms of unique histories and trajectories, at all relevant to the universe? Why does it work so well in explaining and predicting the motion of everything from baseballs to planets to comets? How come there is no evidence in day-to-day life of the strange way in which the past apparently unfolds into the present? The reason, discussed briefly in Chapter 4 and to be elaborated shortly with greater precision, is that baseballs, planets, and comets are comparatively large, at least when compared with particles like electrons. And in quantum mechanics, the larger something is, the more skewed the averaging becomes: All possible trajectories do contribute to the motion of a baseball in flight, but the usual path—the one single path predicted by Newton’s laws—contributes much more than do all other paths combined. For large objects, it turns out that classical paths are, by an enormous amount, the dominant contribution to the averaging process and so they are the ones we are familiar with. But when objects are small, like electrons, quarks, and photons, many different histories contribute at roughly the same level and hence all play important parts in the averaging process.
You might finally ask: What is so special about the act of observing or measuring that it can compel all the possible histories to ante up, merge together, and yield a single outcome? How does our act of observing somehow tell a particle it’s time to tally up the histories, average them out, and commit to a definite result? Why do we humans and equipment of our making have this special power? Is it special? Or might the human act of observation fit into a broader framework of environmental influence that shows, quantum mechanically speaking, we aren’t so special after all? We will take up these perplexing and controversial issues in the latter half of this chapter, since not only are they pivotal to the nature of quantum reality, but they provide an important framework for thinking about quantum mechanics and the arrow of time.
Calculating quantum mechanical averages requires significant technical training. And understanding fully how, when, and where the averages are tallied requires concepts that physicists are still working hard to formulate. But one key lesson can be stated simply: quantum mechanics is the ultimate prochoice arena: every possible “choice” something might make in going from here to there is included in the quantum mechanical probability associated with one possible outcome or another.
Classical and quantum physics treat the past in very different ways.
Pruning History
It is totally at odds with our classical upbringing to imagine one indivisible object—one electron or one photon—simultaneously moving along more than one path. Even those of us with the greatest of self-control would have a hard time resisting the temptation to sneak a peek: as the electron or photon passes through the double-slit screen or the beam splitter, why not take a quick look to see what path it really follows on its way to the detector? In the double-slit experiment, why not put little detectors in front of each slit to tell you whether the electron went through one opening, the other, or both (while still allowing the electron to carry on toward the main detector)? In the beam-splitter experiment, why not put, on each pathway leading from the beam splitter, a little detector that will tell if the photon took the left route, the right route, or both routes (again, while allowing the photon to keep going onward toward the detector)?
The answer is that you can insert these additional detectors, but if you do, you will find two things. First, each electron and each photon will always be found to go through one and only one of the detectors; that is, you can determine which path each electron or photon follows, and you will find that it always goes one way or the other, not both. Second, you will also find that the resulting data recorded by the main detectors have changed. Instead of getting the interference patterns of Figures 4.3b and 7.1b, you get the results expected from classical physics, as in Figure 4.3a. By introducing new elements—the new detectors—you have inadvertently changed the experiments. And the change is such that the paradox you were just about to reveal—that you now know which path each particle took, so how could there be any interference with another path that the particle demonstrably did not take?—is averted. The reason follows immediately from the last section. Your new observation singles out those histories that could have preceded whatever your new observation revealed. And since this observation determined which path the photon took, we consider only those histories that traverse this path, thus eliminating the possibility of interference.
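A compact way to see why which-path detectors kill the fringes is to attach a marker state to each history. The sketch below is my own illustration, with all amplitudes chosen arbitrarily: the probability at the screen picks up a cross-term weighted by the overlap of the two marker states, so identical markers (no which-path information) leave the interference intact, while orthogonal markers (perfect which-path information) wipe it out.

```python
import numpy as np

# Illustration (amplitudes invented): each history carries a marker state
# recording which path was taken. The screen probability is the two
# single-path probabilities plus a cross term weighted by the markers' overlap.
def screen_probability(amp_L, amp_R, marker_L, marker_R):
    overlap = np.vdot(marker_L, marker_R)
    cross = 2 * np.real(np.conj(amp_L) * amp_R * overlap)
    return abs(amp_L)**2 + abs(amp_R)**2 + cross

up = np.array([1.0, 0.0])      # marker left in its initial state
down = np.array([0.0, 1.0])    # marker flipped: orthogonal, fully informative

amp_L, amp_R = 1 / np.sqrt(2), -1 / np.sqrt(2)     # two paths, opposite phase
print(screen_probability(amp_L, amp_R, up, up))    # 0.0: interference (a dark band)
print(screen_probability(amp_L, amp_R, up, down))  # 1.0: cross term gone, no fringes
```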
Niels Bohr liked to summarize such things using his principle of complementarity. Every electron, every photon, everything, in fact, has both wavelike and particlelike aspects. They are complementary features. Thinking purely in the conventional particle framework—in which particles move along single, unique trajectories—is incomplete, because it misses the wavelike aspects demonstrated by interference patterns.16 Thinking purely in the wavelike framework is incomplete, because it misses the particlelike aspects demonstrated by measurements that find localized particles that can be, for example, recorded by a single dot on a screen. (See Figure 4.4.) A complete picture requires both complementary aspects to be taken into account. In any given situation you can force one feature to be more prominent by virtue of how you choose to interact. If you allow the electrons to travel from source to screen unobserved, their wavelike qualities can emerge, yielding interference. But if you observe the electron en route, you know which path it took, so you’d be at a loss to explain interference. Reality comes to the rescue. Your observation prunes the branches of quantum history. It forces the electron to behave as a particle; since particles go one way or the other, no interference pattern forms, so there’s nothing to explain.
Nature does weird things. It lives on the edge. But it is careful to bob and weave from the fatal punch of logical paradox.
The Contingency of History
These experiments are remarkable. They provide simple but powerful proof that our world is governed by the quantum laws found by physicists in the twentieth century, and not by the classical laws found by Newton, Maxwell, and Einstein—laws we now recognize as powerful and insightful approximations for describing events at large enough scales. Already we have seen that the quantum laws challenge conventional notions of what happened in the past—those unobserved events that are responsible for what we now see. Some simple variations of these experiments take this challenge to our intuitive notion of how things unfold in time to an even greater, even more surprising level.
The first variation is called the delayed-choice experiment and was suggested in 1980 by the eminent physicist John Wheeler. The experiment brushes up against an eerily odd-sounding question: Does the past depend on the future? Note that this is not the same as asking whether we can go back and change the past (a subject we take up in Chapter 15). Instead, Wheeler’s experiment, which has been carried out and analyzed in considerable detail, exposes a provocative interplay between events we imagine having taken place in the past, even the distant past, and those we see taking place right now.
To get a feel for the physics, imagine you are an art collector and Mr. Smithers, chairman of the new Springfield Art and Beautification Society, is coming to look at various works you have put up for sale. You know, however, that his real interest is in The Full Monty, a painting in your collection that you never felt quite fit, but one that was left to you by your beloved great-uncle Monty Burns, so that deciding whether to sell it is quite an emotional struggle. After Mr. Smithers arrives, you talk about your collection, recent auctions, the current show at the Metropolitan; surprisingly, you learn that, years back, Smithers was your great-uncle’s top aide. By the end of the conversation you decide that you are willing to part with The Full Monty: There are so many other works you want, and you must exercise restraint or your collection will have no focus. In the world of art collecting, you have always told yourself, sometimes more is less.
As you reflect back upon this decision, in retrospect it seems that you had actually already decided to sell before Mr. Smithers arrived. Although you have always had a certain affection for The Full Monty, you have long been wary of amassing a sprawling collection and late-twentieth-century erotic-nuclear realism is an intimidating area for all but the most seasoned collector. Even though you remember that before your visitor’s arrival you had been thinking that you didn’t know what to do, from your current vantage point it seems as though you really did. It is not quite that future events have affected the past, but your enjoyable meeting with Mr. Smithers and your subsequent declaration of your willingness to sell have illuminated the past in a way that makes definite particular things that seemed undecided at the time. It is as though the meeting and your declaration helped you to accept a decision that was already made, one that was waiting to be ushered forth into the light of day. The future has helped you tell a more complete story of what was going on in the past.
Of course, in this example, future events are affecting only your perception or interpretation of the past, so the events are neither puzzling nor surprising. But the delayed-choice experiment of Wheeler transports this psychological interplay between the future and the past into the quantum realm, where it becomes both precise and startling. We begin with the experiment in Figure 7.1a, modified by turning the laser down so it fires one photon at a time, as in Figure 7.1b, and also by attaching a new photon detector next to the beam splitter. If the new detector is switched off (see Figure 7.2b), then we are back in the original experimental setup and the photons generate an interference pattern on the photographic screen. But if the new detector is switched on (Figure 7.2a), it tells us which path each photon traveled: if it detects a photon, then the photon took that path; if it fails to detect a photon, then the photon took the other path. Such “which-path” information, as it’s called, compels the photon to act like a particle, so the wavelike interference pattern is no longer generated.
Figure 7.2 (a) By turning on “which-path” detectors, we spoil the interference pattern. (b) When the new detectors are switched off, we’re back in the situation of Figure 7.1 and the interference pattern gets built up.
Now let’s change things, à la Wheeler, by moving the new photon detector far downstream along one of the two pathways. In principle, the pathways can be as long as you like, so the new detector can be a considerable distance away from the beam splitter. Again, if this new photon detector is switched off, we are in the usual situation and the photons fill out an interference pattern on the screen. If it is switched on, it provides which-path information and thus precludes the existence of an interference pattern.
The new weirdness comes from the fact that the which-path measurement takes place long after the photon had to “decide” at the beam splitter whether to act as a wave and travel both paths or to act as a particle and travel only one. When the photon is passing through the beam splitter, it can’t “know” whether the new detector is switched on or off—as a matter of fact, the experiment can be arranged so that the on/off switch on the detector is set after the photon has passed the splitter. To be prepared for the possibility that the detector is off, the photon’s quantum wave had better split and travel both paths, so that an amalgam of the two can produce the observed interference pattern. But if the new detector turns out to have been on—or if it was switched on after the photon fully cleared the splitter—it would seem to present the photon with an identity crisis: on passing through the splitter, it had already committed itself to its wavelike character by traveling both paths, but now, sometime after making this choice, it “realizes” that it needs to come down squarely on the side of being a particle that travels one and only one path.
Somehow, though, the photons always get it right. Whenever the detector is on—again, even if the choice to turn it on is delayed until long after a given photon has passed through the beam splitter—the photon acts fully like a particle. It is found to be on one and only one route to the screen (if we were to put photon detectors way downstream along both routes, each photon emitted by the laser would be detected by one or the other detector, never both); the resulting data show no interference pattern. Whenever the new detector is off—again, even if this decision is made after each photon has passed the splitter—the photons act fully like a wave, yielding the famous interference pattern showing that they’ve traveled both paths. It’s as if the photons adjust their behavior in the past according to the future choice of whether the new detector is switched on; it’s as though the photons have a “premonition” of the experimental situation they will encounter farther downstream, and act accordingly. It’s as if a consistent and definite history becomes manifest only after the future to which it leads has been fully settled.4
There is a similarity to your experience of deciding to sell The Full Monty. Before meeting with Mr. Smithers, you were in an ambiguous, undecided, fuzzy, mixed state of being both willing and unwilling to sell the painting. But talking together about the art world and learning of Smithers’s affection for your great-uncle made you increasingly comfortable with the idea of selling. The conversation led to a firm decision, which in turn allowed a history of the decision to crystallize out of the previous uncertainty. In retrospect it felt as if the decision had really been made all along. But if you hadn’t gotten on so well with Mr. Smithers, if he hadn’t given you confidence that The Full Monty would be in trustworthy hands, you might very well have decided not to sell. And the story of the past that you might tell in this situation could easily involve a recognition that you’d actually decided long ago not to sell—that no matter how sensible it might be to sell the painting, deep down you’ve always known that the sentimental connection was just too strong to let it go. The actual past, of course, did not change one bit. Yet a different experience now would lead you to describe a different history.
In the psychological arena, rewriting or reinterpreting the past is commonplace; our story of the past is often informed by our experiences in the present. But in the arena of physics—an arena we normally consider to be objective and set in stone—a future contingency of history makes one’s head spin. To make the spinning even more severe, Wheeler imagines a cosmic version of the delayed-choice experiment in which the light source is not a laboratory laser but, instead, a powerful quasar in deep space. The beam splitter is not a laboratory variety, either, but is an intervening galaxy whose gravitational pull can act like a lens that focuses passing photons and directs them toward earth, as in Figure 7.3. Although no one has as yet carried out this experiment, in principle, if enough photons from the quasar are collected, they should fill out an interference pattern on a long-exposure photographic plate, just as in the laboratory beam-splitter experiment. But if we were to put another photon detector right near the end of one route or the other, it would provide which-path information for the photons, thereby destroying the interference pattern.
What’s striking about this version is that, from our perspective, the photons could have been traveling for many billions of years. Their decision to go one way around the galaxy, like a particle, or both ways, like a wave, would seem to have been made long before the detector, any of us, or even the earth existed. Yet, billions of years later, the detector was built, installed along one of the paths the photons take to reach earth, and switched on. And these recent acts somehow ensure that the photons under consideration act like particles. They act as though they have been traveling along precisely one path or the other on their long journey to earth. But if, after a few minutes, we turn off the detector, the photons that subsequently reach the photographic plate start to build up an interference pattern, indicating that for billions of years they have been traveling in tandem with their ghostly partners, taking opposite paths around the galaxy.
Figure 7.3 Light from a distant quasar, split and focused by an intervening galaxy, will, in principle, yield an interference pattern. If an additional detector, which allows the determination of the path taken by each photon, were switched on, the ensuing photons would no longer fill out an interference pattern.
Has our turning the detector on or off in the twenty-first century had an effect on the motion of photons some billions of years earlier? Certainly not. Quantum mechanics does not deny that the past has happened, and happened fully. Tension arises simply because the concept of past according to the quantum is different from the concept of past according to classical intuition. Our classical upbringing makes us long to say that a given photon did this or did that. But in a quantum world, our world, this reasoning imposes upon the photon a reality that is too restrictive. As we have seen, in quantum mechanics the norm is an indeterminate, fuzzy, hybrid reality consisting of many strands, which only crystallizes into a more familiar, definite reality when a suitable observation is carried out. It is not that the photon, billions of years ago, decided to go one way around the galaxy or the other, or both. Instead, for billions of years it has been in the quantum norm—a hybrid of the possibilities.
The act of observation links this unfamiliar quantum reality with everyday classical experience. Observations we make today cause one of the strands of quantum history to gain prominence in our recounting of the past. In this sense, then, although the quantum evolution from the past until now is unaffected by anything we do now, the story we tell of the past can bear the imprint of today’s actions. If we insert photon detectors along the two pathways light takes to a screen, then our story of the past will include a description of which pathway each photon took; by inserting the photon detectors, we ensure that which-path information is an essential and definitive detail of our story. But, if we don’t insert the photon detectors, our story of the past will, of necessity, be different. Without the photon detectors, we can’t recount anything about which path the photons took; without the photon detectors, which-path details are fundamentally unavailable. Both stories are valid. Both stories are interesting. They just describe different situations.
An observation today can therefore help complete the story we tell of a process that began yesterday, or the day before, or perhaps a billion years earlier. An observation today can delineate the kinds of details we can and must include in today’s recounting of the past.
Erasing the Past
It is essential to note that in these experiments the past is not in any way altered by today’s actions, and that no clever modification of the experiments will accomplish that slippery goal. This raises the question: If you can’t change something that has already happened, can you do the next best thing and erase its impact on the present? To one degree or another, sometimes this fantasy can be realized. A baseball player who, with two outs in the bottom of the ninth inning, drops a routine fly ball, allowing the opposing team to close within one run, can undo the impact of his error by a spectacular diving catch on the ball hit by the next batter. And, of course, such an example is not the slightest bit mysterious. Only when an event in the past seems definitively to preclude another event’s happening in the future (as the dropped fly ball definitively precluded a perfect game) would we think there was something awry if we were subsequently told that the precluded event had actually happened. The quantum eraser, first suggested in 1982 by Marlan Scully and Kai Drühl, hints at this kind of strangeness in quantum mechanics.
A simple version of the quantum eraser experiment makes use of the double-slit setup, modified in the following way. A tagging device is placed in front of each slit; it marks any passing photon so that when the photon is examined later, you can tell through which slit it passed. The question of how you can place a mark on a photon—how you can do the equivalent of placing an “L” on a photon that passes through the left slit and an “R” on a photon that passes through the right slit—is a good one, but the details are not particularly important. Roughly, the process relies on using a device that allows a photon to pass freely through a slit but forces its spin axis to point in a particular direction. If the devices in front of the left and right slits manipulate the photon spins in specific but distinct ways, then a more refined detector screen that not only registers a dot at the photon’s impact location, but also keeps a record of the photon’s spin orientation, will reveal through which slit a given photon passed on its way to the detector.
When this double-slit-with-tagging experiment is run, the photons do not build up an interference pattern, as in Figure 7.4a. By now the explanation should be familiar: the new tagging devices allow which-path information to be gleaned, and which-path information singles out one history or another; the data show that any given photon passed through either the left slit or the right slit. And without the combination of left-slit and right-slit trajectories, there are no overlapping probability waves, so no interference pattern is generated.
Now, here is Scully and Drühl’s idea. What if, just before the photon hits the detection screen, you eliminate the possibility of determining through which slit it passed by erasing the mark imprinted by the tagging device? Without the means, even in principle, to extract the which-path information from the detected photon, will both classes of histories come back into play, causing the interference pattern to reemerge? Notice that this kind of “undoing” the past would fall much further into the shocking category than the ballplayer’s diving catch in the ninth inning. When the tagging devices are turned on, we imagine that the photon obediently acts as a particle, passing through the left slit or the right slit. If somehow, just before it hits the screen, we erase the which-slit mark it is carrying, it seems too late to allow an interference pattern to form. For interference, we need the photon to act like a wave. It must pass through both slits so that it can cross-mingle with itself on the way to the detector screen. But our initial tagging of the photon seems to ensure that it acts like a particle and travels either through the left or through the right slit, preventing interference from happening.
Figure 7.4 In the quantum eraser experiment, equipment placed in front of the two slits marks the photons so that subsequent examination can reveal through which slit each photon passed. In (a) we see that this which-path information spoils the interference pattern. In (b) a device that erases the mark on the photons is inserted just in front of the detector screen. Because the which-path information is eliminated, the interference pattern reappears.
In an experiment carried out by Raymond Chiao, Paul Kwiat, and Aephraim Steinberg, the setup was, schematically, as in Figure 7.4, with a new erasure device inserted just in front of the detection screen. Again, the details are not of the essence, but briefly put, the eraser works by ensuring that regardless of whether a photon from the left slit or the right slit enters, its spin is manipulated to point in one and the same fixed direction. Subsequent examination of its spin therefore yields no information about which slit it passed through, and so the which-path mark has been erased. Remarkably, the photons detected by the screen after this erasure do produce an interference pattern. When the eraser is inserted just in front of the detector screen, it undoes—it erases—the effect of tagging the photons way back when they approached the slits. As in the delayed-choice experiment, in principle this kind of erasure could occur billions of years after the influence it is thwarting, in effect undoing the past, even undoing the ancient past.
How are we to make sense of this? Well, keep in mind that the data conform perfectly to the theoretical prediction of quantum mechanics. Scully and Drühl proposed this experiment because their quantum mechanical calculations convinced them it would work. And it does. So, as is usual with quantum mechanics, the puzzle doesn’t pit theory against experiment. It pits theory, confirmed by experiment, against our intuitive sense of time and reality. To ease the tension, notice that were you to place a photon detector in front of each slit, the detector’s readout would establish with certainty whether the photon went through the left slit or through the right slit, and there’d be no way to erase such definitive information—there’d be no way to recover an interference pattern. But the tagging devices are different because they provide only the potential for which-path information to be determined—and potentialities are just the kinds of things that can be erased. A tagging device modifies a passing photon in such a way, roughly speaking, that it still travels both paths, but the left part of its probability wave is blurred out relative to the right, or the right part of its probability wave is blurred out relative to the left. In turn, the orderly sequence of peaks and troughs that would normally emerge from each slit—as in Figure 4.2b—is also blurred out, so no interference pattern forms on the detector screen. The crucial realization, though, is that both the left and the right waves are still present. The eraser works because it refocuses the waves. Like a pair of glasses, it compensates for the blurring, brings both waves back into sharp focus, and allows them once again to combine into an interference pattern. It’s as if after the tagging devices accomplish their task, the interference pattern disappears from view but patiently lies in wait for someone or something to resuscitate it.
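The same toy bookkeeping used above (again my illustration, not Scully and Drühl's calculation) captures the refocusing just described: tagging rotates the two markers into orthogonal states, and the eraser projects both onto one and the same direction, after which the markers overlap completely and the fringes return.

```python
import numpy as np

# Toy version of the eraser (all states illustrative). Fringe visibility is
# set by the magnitude of the overlap between the two which-path markers.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (up + down) / np.sqrt(2)   # the one fixed direction the eraser enforces

def visibility(marker_L, marker_R):
    return abs(np.vdot(marker_L, marker_R))

print(visibility(up, down))       # 0.0: tagged photons, no interference

# Erasure: keep only each marker's component along the common direction.
erased_L = np.vdot(plus, up) * plus
erased_R = np.vdot(plus, down) * plus
erased_L /= np.linalg.norm(erased_L)
erased_R /= np.linalg.norm(erased_R)
print(visibility(erased_L, erased_R))   # 1.0: overlap restored, fringes return
```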
That explanation may make the quantum eraser a little less mysterious, but here is the finale—a stunning variation on the quantum-eraser experiment that challenges conventional notions of space and time even further.
Shaping the Past
This experiment, the delayed-choice quantum eraser, was also proposed by Scully and Drühl. It begins with the beam-splitter experiment of Figure 7.1, modified by inserting two so-called down-converters, one on each pathway. Down-converters are devices that take one photon as input and produce two photons as output, each with half the energy (“downconverted”) of the original. One of the two photons (called the signal photon) is directed along the path that the original would have followed toward the detector screen. The other photon produced by the down-converter (called the idler photon) is sent in a different direction altogether, as in Figure 7.5a. On each run of the experiment, we can determine which path a signal photon takes to the screen by observing which down-converter spits out the idler-photon partner. And once again, the ability to glean which-path information about the signal photons—even though it is totally indirect, since we are not interacting with any signal photons at all—has the effect of preventing an interference pattern from forming.
Now for the weirder part. What if we manipulate the experiment so as to make it impossible to determine from which down-converter a given idler photon emerged? What if, that is, we erase the which-path information embodied by the idler photons? Well, something amazing happens: even though we’ve done nothing directly to the signal photons, by erasing the which-path information carried by their idler partners we can recover an interference pattern from the signal photons. Let me show you how this goes because it is truly remarkable.
Take a look at Figure 7.5b, which embodies all the essential ideas. But don’t be intimidated. It’s simpler than it appears, and we’ll now go through it in manageable steps. The setup in Figure 7.5b differs from that of Figure 7.5a with regard to how we detect the idler photons after they’ve been emitted. In Figure 7.5a, we detected them straight out, and so we could immediately determine from which down-converter each was produced—that is, which path a given signal photon took. In the new experiment, each idler photon is sent through a maze, which compromises our ability to make such a determination. For example, imagine that an idler photon is emitted from the down-converter labeled “L.” Rather than immediately entering a detector (as in Figure 7.5a), this photon is sent to a beam splitter (labeled “a”), and so has a 50 percent chance of heading onward along the path labeled “A,” and a 50 percent chance of heading onward along the path labeled “B.” Should it head along path A, it will enter a photon detector (labeled “1”), and its arrival will be duly recorded. But should the idler photon head along path B, it will be subject to yet further shenanigans. It will be directed to another beam splitter (labeled “c”) and so will have a 50 percent chance of heading onward along path E to the detector labeled “2,” and a 50 percent chance of heading onward along path F to the detector labeled “3.” Now—stay with me, as there is a point to all this—the exact same reasoning, when applied to an idler photon emitted from the other down-converter, labeled “R,” tells us that if the idler heads along path D it will be recorded by detector 4, but if it heads along path C it will be detected by either detector 3 or detector 2, depending on the path it follows after passing through beam splitter c.
Figure 7.5 (a) A beam-splitter experiment, augmented by down-converters, does not yield an interference pattern, since the idler photons yield which-path information. (b) If the idler photons are not detected directly, but instead are sent through the maze depicted, then an interference pattern can be extracted from the data. Idler photons that are detected by detectors 2 or 3 do not yield which-path information and hence their signal photons fill out an interference pattern.
Now for why we’ve added all this complication. Notice that if an idler photon is detected by detector 1, we learn that the corresponding signal photon took the left path, since there is no way for an idler that was emitted from down-converter R to find its way to this detector. Similarly, if an idler photon is detected by detector 4, we learn that its signal photon partner took the right path. But if an idler photon winds up in detector 2, we have no idea which path its signal photon partner took, since there is an equal chance that it was emitted by down-converter L and followed path B–E, or that it was emitted by down-converter R and followed path C–E. Similarly, if an idler is detected by detector 3, it could have been emitted by down-converter L and have traveled path B–F, or by down-converter R and traveled path C–F. Thus, for signal photons whose idlers are detected by detector 1 or 4, we have which-path information, but for those whose idlers are detected by detector 2 or 3, the which-path information is erased.
Does this erasure of some of the which-path information—even though we’ve done nothing directly to the signal photons—mean interference effects are recovered? Indeed it does—but only for those signal photons whose idlers wind up in either detector 2 or detector 3. Namely, the totality of impact positions of the signal photons on the screen will look like the data in Figure 7.5a, showing not the slightest hint of an interference pattern, as is characteristic of photons that have traveled one path or the other. But if we focus on a subset of the data points—for example, those signal photons whose idlers entered detector 2—then that subset of points will fill out an interference pattern! These signal photons—whose idlers happened, by chance, not to provide any which-path information— act as though they’ve traveled both paths! If we were to hook up the equipment so that the screen displays a red dot for the position of each signal photon whose idler was detected by detector 2, and a green dot for all others, someone who is color-blind would see no interference pattern, but everyone else would see that the red dots were arranged with bright and dark bands—an interference pattern. The same holds true with detector 3 in place of detector 2. But there would be no such interference pattern if we single out signal photons whose idlers wind up in detector 1 or detector 4, since these are the idlers that yield which-path information about their partners.
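The sorting into fringe, anti-fringe, and flat subsets can be checked with a short calculation. In the sketch below everything except the layout of Figure 7.5b is my assumption: each beam splitter transmits with amplitude 1/√2 and reflects with amplitude i/√2, and the signal photon's two paths are reduced to a pair of toy phases. Conditioning on detector 1 gives a flat pattern, while conditioning on detectors 2 and 3 gives complementary fringe patterns that wash out when added together.

```python
import numpy as np

# Toy amplitudes for the idler maze of Figure 7.5b (conventions assumed):
# transmit = 1/sqrt(2), reflect = 1j/sqrt(2) at each beam splitter.
t, r = 1 / np.sqrt(2), 1j / np.sqrt(2)

# Idler from down-converter L: splitter a to detector 1, or path B to splitter c.
idler_L = {1: t, 2: r * t, 3: r * r}
# Idler from down-converter R: splitter b to detector 4, or path C to splitter c.
idler_R = {4: t, 2: r * r, 3: r * t}

def signal_pattern(x, detector):
    """Unnormalized probability that a signal photon lands at screen phase x,
    given that its idler partner fired the chosen detector."""
    s_L, s_R = np.exp(1j * x), np.exp(-1j * x)   # toy phases for the two signal paths
    amp = idler_L.get(detector, 0) * s_L + idler_R.get(detector, 0) * s_R
    return np.abs(amp)**2

x = np.linspace(0, 2 * np.pi, 9)
print(np.round(signal_pattern(x, 1), 2))   # flat: detector 1 reveals the path
print(np.round(signal_pattern(x, 2), 2))   # fringes: which-path info erased
print(np.round(signal_pattern(x, 3), 2))   # anti-fringes: the complementary subset
```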
These results—which have been confirmed by experiment5—are dazzling: by including down-converters that have the potential to provide which-path information, we lose the interference pattern, as in Figure 7.5a. And without interference, we would naturally conclude that each photon went along either the left path or the right path. But we now learn that this would be a hasty conclusion. By carefully eliminating the potential which-path information carried by some of the idlers, we can coax the data to yield up an interference pattern, indicating that some of the photons actually took both paths.
Notice, too, perhaps the most dazzling result of all: the three additional beam splitters and the four idler-photon detectors can be on the other side of the laboratory or even on the other side of the universe, since nothing in our discussion depended at all on whether they receive a given idler photon before or after its signal photon partner has hit the screen. Imagine, then, that these devices are all far away, say ten light-years away, to be definite, and think about what this entails. You perform the experiment in Figure 7.5b today, recording—one after another—the impact locations of a huge number of signal photons, and you observe that they show no sign of interference. If someone asks you to explain the data, you might be tempted to say that because of the idler photons, which-path information is available and hence each signal photon definitely went along either the left or the right path, eliminating any possibility of interference. But, as above, this would be a hasty conclusion about what happened; it would be a thoroughly premature description of the past.
You see, ten years later, the four photon detectors will receive—one after another—the idler photons. If you are subsequently informed about which idlers wound up, say, in detector 2 (e.g., the first, seventh, eighth, twelfth . . . idlers to arrive), and if you then go back to data you collected years earlier and highlight the corresponding signal photon locations on the screen (e.g., the first, seventh, eighth, twelfth . . . signal photons that arrived), you will find that the highlighted data points fill out an interference pattern, thus revealing that those signal photons should be described as having traveled both paths. Alternatively, if 9 years, 364 days after you collected the signal photon data, a practical joker should sabotage the experiment by removing beam splitters a and b—ensuring that when the idler photons arrive the next day, they all go to either detector 1 or detector 4, thus preserving all which-path information—then, when you receive this information, you will conclude that every signal photon went along either the left path or the right path, and there will be no interference pattern to extract from the signal photon data. Thus, as this discussion forcefully highlights, the story you’d tell to explain the signal photon data depends significantly on measurements conducted ten years after those data were collected.
Again, let me emphasize that the future measurements do not change anything at all about things that took place in your experiment today; the future measurements do not in any way change the data you collected today. But the future measurements do influence the kinds of details you can invoke when you subsequently describe what happened today. Before you have the results of the idler photon measurements, you really can’t say anything at all about the which-path history of any given signal photon. However, once you have the results, you conclude that signal photons whose idler partners were successfully used to ascertain which-path information can be described as having—years earlier—traveled either left or right. You also conclude that signal photons whose idler partners had their which-path information erased cannot be described as having—years earlier—definitely gone one way or the other (a conclusion you can convincingly confirm by using the newly acquired idler photon data to expose the previously hidden interference pattern among this latter class of signal photons). We thus see that the future helps shape the story you tell of the past.
These experiments are a magnificent affront to our conventional notions of space and time. Something that takes place long after and far away from something else nevertheless is vital to our description of that something else. By any classical—commonsense—reckoning, that’s, well, crazy. Of course, that’s the point: classical reckoning is the wrong kind of reckoning to use in a quantum universe. We have learned from the Einstein-Podolsky-Rosen discussion that quantum physics is not local in space. If you have fully absorbed that lesson—a tough one to accept in its own right—these experiments, which involve a kind of entanglement across space and through time, may not seem thoroughly outlandish. But by the standards of daily experience, they certainly are.
Quantum Mechanics and Experience
For a few days after I first learned about these experiments, I remember feeling elated. I felt I’d been given a glimpse into a veiled side of reality. Common experience—mundane, ordinary, day-to-day activities—suddenly seemed part of a classical charade, hiding the true nature of our quantum world. The world of the everyday suddenly seemed nothing but an inverted magic act, lulling its audience into believing in the usual, familiar conceptions of space and time, while the astonishing truth of quantum reality lay carefully guarded by nature’s sleights of hand.
In recent years, physicists have expended much effort in trying to explain nature’s ruse—to figure out precisely how the fundamental laws of quantum physics morph into the classical laws that are so successful at explaining common experience—in essence, to figure out how the atomic and subatomic shed their magical weirdness when they combine to form macroscopic objects. Research continues, but much has already been learned. Let’s look at some aspects of particular relevance to the question of time’s arrow, but now from the standpoint of quantum mechanics.
Classical mechanics is based on equations that Newton discovered in the late 1600s. Electromagnetism is based on equations Maxwell discovered in the late 1800s. Special relativity is based on equations Einstein discovered in 1905, and general relativity is based on equations he discovered in 1915. What all these equations have in common, and what is central to the dilemma of time’s arrow (as explained in the last chapter), is their completely symmetric treatment of past and future. Nowhere in any of these equations is there anything that distinguishes “forward” time from “backward” time. Past and future are on an equal footing.
Quantum mechanics is based on an equation that Erwin Schrödinger discovered in 1926.6 You don’t need to know anything about this equation beyond the fact that it takes as input the shape of a quantum mechanical probability wave at one moment of time, such as that in Figure 4.5, and allows one to determine what the probability wave looks like at any other time, earlier or later. If the probability wave is associated with a particle, such as an electron, you can use it to predict the probability that, at any specified time, an experiment will find the electron at any specified location. Like the classical laws of Newton, Maxwell, and Einstein, the quantum law of Schrödinger embraces an egalitarian treatment of time-future and time-past. A “movie” showing a probability wave starting like this and ending like that could be run in reverse—showing a probability wave starting like that and ending like this— and there would be no way to say that one evolution was right and the other wrong. Both would be equally valid solutions of Schrödinger’s equation. Both would represent equally sensible ways in which things could evolve.7
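The time symmetry of Schrödinger evolution can be exhibited directly. Below is a minimal sketch with an invented, discretized Hamiltonian and arbitrary units: because the evolution is unitary, running a wave packet forward for a stretch of time and then applying the reversed evolution returns it exactly to its starting shape, with neither direction preferred.

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch (grid size, units, and Hamiltonian invented): Schrodinger
# evolution U(t) = exp(-i H t) is unitary, so U(-t) exactly undoes U(t).
n = 64
H = -0.5 * (np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1) - 2 * np.eye(n))

psi0 = np.exp(-((np.arange(n) - n / 2)**2) / 20).astype(complex)
psi0 /= np.linalg.norm(psi0)               # a normalized wave packet

U_forward = expm(-1j * H * 5.0)            # run the "movie" forward
U_backward = expm(+1j * H * 5.0)           # run the same movie in reverse

psi_later = U_forward @ psi0
psi_back = U_backward @ psi_later
print(np.allclose(psi_back, psi0))         # True: both directions are valid evolutions
```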
Of course, the “movie” now referred to is quite different from the ones used in analyzing the motion of a tennis ball or a splattering egg in the last chapter. Probability waves are not things we can see directly; there are no cameras that can capture probability waves on film. Instead, we can describe probability waves using mathematical equations and, in our mind’s eye, we can imagine the simplest of them having shapes such as those in Figures 4.5 and 4.6. But the only access we have to the probability waves themselves is indirect, through the process of measurement.
That is, as outlined in Chapter 4 and seen repeatedly in the experiments above, the standard formulation of quantum mechanics describes the unfolding of phenomena using two quite distinct stages. In stage one, the probability wave—or, in the more precise language of the field, the wavefunction—of an object such as an electron evolves according to the equation discovered by Schrödinger. This equation ensures that the shape of the wavefunction changes smoothly and gradually, much as a water wave changes shape as it travels from one side of a lake toward the other. In the standard description of the second stage, we make contact with observable reality by measuring the electron’s position, and when we do so, the shape of its wavefunction sharply and abruptly changes. The electron’s wavefunction is unlike more familiar examples like water waves and sound waves: when we measure the electron’s position, its wavefunction spikes or, as illustrated in Figure 4.7, it collapses, dropping to the value 0 everywhere the particle is not found and surging to 100 percent probability at the single location where the particle is found by the measurement.
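The two stages can be caricatured in a few lines; this is my own cartoon, with the grid, the double-lump wavefunction, and the idealization of the collapsed wavefunction as a single grid-point spike all assumed for illustration:

```python
# Cartoon of the standard two-stage story: a spread-out wavefunction,
# a Born-rule position measurement, and collapse to a spike.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 401)
dx = x[1] - x[0]

psi = np.exp(-(x - 2)**2) + np.exp(-(x + 2)**2)   # electron partly "here," partly "there"
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize

prob = np.abs(psi)**2 * dx                        # Born rule: probability per grid point
i = rng.choice(len(x), p=prob / prob.sum())       # stage two: one outcome is realized

collapsed = np.zeros_like(psi)                    # wavefunction drops to 0 everywhere...
collapsed[i] = 1 / np.sqrt(dx)                    # ...except a spike where it was found

print(f"electron found at x = {x[i]:+.2f}")
```

Everything before the sampling step is stage one; the abrupt replacement of the wavefunction by the spike is stage two, and it is precisely this second step that Schrödinger’s equation nowhere describes.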
Stage one—the evolution of wavefunctions according to Schrödinger’s equation—is mathematically rigorous, totally unambiguous, and fully accepted by the physics community. Stage two—the collapse of a wavefunction upon measurement—is, to the contrary, something that during the last eight decades has, at best, kept physicists mildly bemused, and at worst, posed problems, puzzles, and potential paradoxes that have devoured careers. The difficulty, as mentioned at the end of Chapter 4, is that according to Schrödinger’s equation, wavefunctions do not collapse. Wavefunction collapse is an add-on. It was introduced after Schrödinger discovered his equation, in an attempt to account for what experimenters actually see. Whereas a raw, uncollapsed wavefunction embodies the strange idea that a particle is here and there, experimenters never see this. They always find a particle definitely at one location or another; they never see it partially here and partially there; the needle on their measuring devices never hovers in some ghostly mixture of pointing at this value and also at that value.
The same goes, of course, for our own casual observations of the world around us. We never observe a chair to be both here and there; we never observe the moon to be in one part of the night sky as well as another; we never see a cat that is both dead and alive. The notion of wavefunction collapse aligns with our experience by postulating that the act of measurement induces the wavefunction to relinquish quantum limbo and usher one of the many potentialities (particle here, or particle there) into reality.
The Quantum Measurement Puzzle
But how does an experimenter’s making a measurement cause a wavefunction to collapse? In fact, does wavefunction collapse really happen, and if it does, what really goes on at the microscopic level? Do any and all measurements cause collapse? When does the collapse happen and how long does it take? Since, according to the Schrödinger equation, wavefunctions do not collapse, what equation takes over in the second stage of quantum evolution, and how does the new equation dethrone Schrödinger’s, usurping its usual ironclad power over quantum processes? And, of importance to our current concern with time’s arrow, while Schrödinger’s equation, the equation that governs the first stage, makes no distinction between forward and backward in time, does the equation for stage two introduce a fundamental asymmetry between time before and time after a measurement is carried out? That is, does quantum mechanics, including its interface with the world of the everyday via measurements and observations, introduce an arrow of time into the basic laws of physics? After all, we discussed earlier how the quantum treatment of the past differs from that of classical physics, and by past we meant before a particular observation or measurement had taken place. So do measurements, as embodied by stage-two wavefunction collapse, establish an asymmetry between past and future, between before and after a measurement is made?
These questions have stubbornly resisted complete solution and they remain controversial. Yet, through the decades, the predictive power of quantum theory has hardly been compromised. The stage one / stage two formulation of quantum theory, even though stage two has remained mysterious, predicts probabilities for measuring one outcome or another. And these predictions have been confirmed by repeating a given experiment over and over again and examining the frequency with which one or another outcome is found. The fantastic experimental success of this approach has far outweighed the discomfort of not having a precise articulation of what actually happens in stage two.
But the discomfort has always been there. And it is not simply that some details of wavefunction collapse have not quite been worked out. The quantum measurement problem, as it is called, is an issue that speaks to the limits and the universality of quantum mechanics. It’s simple to see this. The stage one / stage two approach introduces a split between what’s being observed (an electron, or a proton, or an atom, for example) and the experimenter who does the observing. Before the experimenter gets into the picture, wavefunctions happily and gently evolve according to Schrödinger’s equation. But then, when the experimenter meddles with things to perform a measurement, the rules of the game suddenly change. Schrödinger’s equation is cast aside and stage-two collapse takes over. Yet, since there is no difference between the atoms, protons, and electrons that make up the experimenter and the equipment he or she uses, and the atoms, protons, and electrons that he or she studies, why in the world is there a split in how quantum mechanics treats them? If quantum mechanics is a universal theory that applies without limitations to everything, the observed and the observer should be treated in exactly the same way.
Niels Bohr disagreed. He claimed that experimenters and their equipment are different from elementary particles. Even though they are made from the same particles, they are “big” collections of elementary particles and hence governed by the laws of classical physics. Somewhere between the tiny world of individual atoms and subatomic particles and the familiar world of people and their equipment, the rules change because the sizes change. The motivation for asserting this division is clear: a tiny particle, according to quantum mechanics, can be located in a fuzzy mixture of here and there, yet we don’t see such behavior in the big, everyday world. But exactly where is the border? And, of vital importance, how do the two sets of rules interface when the big world of the everyday confronts the minuscule world of the atomic, as in the case of a measurement? Bohr forcefully declared these questions to be out of bounds, by which he meant, truth be told, that they were beyond the bounds of what he or anyone else could answer. And since even without addressing them the theory makes astonishingly accurate predictions, for a long time such issues were far down on the list of critical questions that physicists were driven to settle.
But to understand quantum mechanics completely, to determine fully what it says about reality, and to establish what role it might play in setting a direction to time’s arrow, we must come to grips with the quantum measurement problem.
In the next two sections, we’ll describe some of the most prominent and promising attempts to do so. The upshot, should you at any point want to rush ahead to the last section focusing on quantum mechanics and the arrow of time, is that much ingenious work on the quantum measurement problem has yielded significant progress, but a broadly accepted solution still seems just beyond our reach. Many view this as the single most important gap in our formulation of quantum law.
Reality and the Quantum Measurement Problem
Over the years, there have been many proposals for solving the quantum measurement problem. Ironically, although they entail differing conceptions of reality—some drastically different—when it comes to predictions for what a researcher will measure in most every experiment, they all agree and each one works like a charm. Each proposal puts on the same show, even though, were you to peek backstage, you’d see that their modi operandi differ substantially.
When it comes to entertainment, you generally don’t want to know what’s happening off in the wings; you are perfectly content to focus solely on the production. But when it comes to understanding the universe, there is an insatiable urge to pull back all curtains, open all doors, and expose completely the deep inner workings of reality. Bohr considered this urge baseless and misguided. To him, reality was the performance. Like a Spalding Gray soliloquy, an experimenter’s bare-bones measurements are the whole show. There isn’t anything else. According to Bohr, there is no backstage. Trying to analyze how, and when, and why a quantum wavefunction relinquishes all but one possibility and produces a single definite number on a measuring device is missing the point. The measured number itself is all that’s worthy of attention.
For decades, this perspective held sway. However, its calmative effect on the mind struggling with quantum theory notwithstanding, one can’t help feeling that the fantastic predictive power of quantum mechanics means that it is tapping into a hidden reality that underlies the workings of the universe. One can’t help wanting to go further and understand how quantum mechanics interfaces with common experience—how it bridges the gap between wavefunction and observation, and what hidden reality underlies the observations. Over the years, a number of researchers have taken up this challenge; here are some proposals they’ve developed.
One approach, with historical roots that go back to Heisenberg, is to abandon the view that wavefunctions are objective features of quantum reality and, instead, view them merely as an embodiment of what we know about reality. Before we perform a measurement, we don’t know where the electron is and, this view proposes, our ignorance of its location is reflected by the electron’s wavefunction describing it as possibly being at a variety of different positions. At the moment we measure its position, though, our knowledge of its whereabouts suddenly changes: we now know its position, in principle, with total precision. (By the uncertainty principle, if we know its location we will necessarily be completely ignorant of its velocity, but that’s not an issue for the current discussion.) This sudden change in our knowledge, according to this perspective, is reflected in a sudden change in the electron’s wavefunction: it suddenly collapses and takes on the spiked shape of Figure 4.7, indicating our definite knowledge of the electron’s position. In this approach, then, the abrupt collapse of a wavefunction is completely unsurprising: it is nothing more than the abrupt change in knowledge that we all experience when we learn something new.
A second approach, initiated in 1957 by Wheeler’s student Hugh Everett, denies that wavefunctions ever collapse. Instead, each and every potential outcome embodied in a wavefunction sees the light of day; the daylight each sees, however, streams through its own separate universe. In this approach, the Many Worlds interpretation, the concept of “the universe” is enlarged to include innumerable “parallel universes”—innumerable versions of our universe—so that anything that quantum mechanics predicts could happen, even if only with minuscule probability, does happen in at least one of the copies. If a wavefunction says that an electron can be here, there, and way over there, then in one universe a version of you will find it here; in another universe, another copy of you will find it there; and in a third universe, yet another you will find the electron way over there. The sequence of observations that we each make from one second to the next thus reflects the reality taking place in but one part of this gargantuan, infinite network of universes, each one populated by copies of you and me and everyone else who is still alive in a universe in which certain observations have yielded certain outcomes. In one such universe you are now reading these words, in another you’ve taken a break to surf the Web, in yet another you’re anxiously awaiting the curtain to rise for your Broadway debut. It’s as though there isn’t a single spacetime block as depicted in Figure 5.1, but an infinite number, with each realizing one possible course of events. In the Many Worlds approach, then, no potential outcome remains merely a potential. Wavefunctions don’t collapse. Every potential outcome comes out in one of the parallel universes.
A third proposal, developed in the 1950s by David Bohm—the same physicist we encountered in Chapter 4 when discussing the Einstein-Podolsky-Rosen paradox—takes a completely different approach.8 Bohm argued that particles such as electrons do possess definite positions and definite velocities, just as in classical physics, and just as Einstein had hoped. But, in keeping with the uncertainty principle, these features are hidden from view; they are examples of the hidden variables mentioned in Chapter 4. You can’t determine both simultaneously. For Bohm, such uncertainty represented a limit on what we can know, but implied nothing about the actual attributes of the particles themselves. His approach does not fall afoul of Bell’s results because, as we discussed toward the end of Chapter 4, possessing definite properties forbidden by quantum uncertainty is not ruled out; only locality is ruled out, and Bohm’s approach is not local.9 Instead, Bohm imagined that the wavefunction of a particle is another, separate element of reality, one that exists in addition to the particle itself. It’s not particles or waves, as in Bohr’s complementarity philosophy; according to Bohm, it’s particles and waves. Moreover, Bohm posited that a particle’s wavefunction interacts with the particle itself—it “guides” or “pushes” the particle around—in a way that determines its subsequent motion. While this approach agrees fully with the successful predictions of standard quantum mechanics, Bohm found that changes to the wavefunction in one location are able to immediately push a particle at a distant location, a finding that explicitly reveals the nonlocality of his approach. In the double-slit experiment, for example, each particle goes through one slit or the other, while its wavefunction goes through both and suffers interference. Since the wavefunction guides the particle’s motion, it should not be terribly surprising that the equations show the particle is likely to land where the wavefunction value is large and it is unlikely to land where it is small, explaining the data in Figure 4.4. In Bohm’s approach, there is no separate stage of wavefunction collapse since, if you measure a particle’s position and find it here, that is truly where it was a moment before the measurement took place.
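For readers who want to see the “guiding” in symbols, the standard de Broglie-Bohm guidance equation, which the description above leaves implicit, fixes a particle’s velocity from the local phase of its wavefunction:

\[
\frac{d\vec{x}}{dt} \;=\; \frac{\hbar}{m}\,\mathrm{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\Bigg|_{\vec{x}=\vec{x}(t)}
\]

Because the right-hand side is evaluated using a wavefunction shaped by conditions everywhere, including both slits, a change to ψ far away can instantly alter the velocity of a particle here; the nonlocality of the approach is written directly into the equation.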
A fourth approach, developed by the Italian physicists Giancarlo Ghirardi, Alberto Rimini, and Tullio Weber, makes the bold move of modifying Schrödinger’s equation in a clever way that results in hardly any effect on the evolution of wavefunctions for individual particles, but has a dramatic impact on quantum evolution when applied to “big” everyday objects. The proposed modification envisions that wavefunctions are inherently unstable; even without any meddling, these researchers suggest, sooner or later every wavefunction collapses, of its own accord, to a spiked shape. For an individual particle, Ghirardi, Rimini, and Weber postulate that wavefunction collapse happens spontaneously and randomly, kicking in, on average, only once every billion years or so.10 This is so infrequent that it entails only the slightest change to the usual quantum mechanical description of individual particles, and that’s good, since quantum mechanics describes the microworld with unprecedented accuracy. But for large objects such as experimenters and their equipment, which have billions and billions of particles, the odds are high that in a tiny fraction of any given second the posited spontaneous collapse will kick in for at least one constituent particle, causing its wavefunction to collapse. And, as argued by Ghirardi, Rimini, Weber, and others, the entangled nature of all the individual wavefunctions in a large object ensures that this collapse initiates a kind of quantum domino effect in which the wavefunctions of all the constituent particles collapse as well. As this happens in a brief fraction of a second, the proposed modification ensures that large objects are essentially always in one definite configuration: pointers on measuring equipment always point to one definite value; the moon is always at one definite location in the sky; brains inside experimenters always have one definite experience; cats are always either dead or alive.
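The arithmetic behind “a tiny fraction of any given second” is easy to check; the particle count below is a rough assumption for a macroscopic object, and the collapse rate follows the text’s “once every billion years or so”:

```python
# Back-of-the-envelope GRW estimate: collapse is rare for one particle,
# but essentially immediate for an object built from ~10^24 of them.
SECONDS_PER_YEAR = 3.15e7
tau = 1e9 * SECONDS_PER_YEAR   # mean time between collapses for a single particle
N = 1e24                       # particles in a rough macroscopic object (assumed)

rate = N / tau                                     # expected collapses per second
print(f"collapses per second: {rate:.1e}")         # roughly 3e7
print(f"time to first collapse: {1 / rate:.1e} s") # roughly 3e-8 seconds
```

So while any given particle waits eons, somewhere in the object a collapse fires within tens of nanoseconds, and the quantum domino effect does the rest.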
Each of these approaches, as well as a number of others I won’t discuss, has its supporters and detractors. The “wavefunction as knowledge” approach finesses the issue of wavefunction collapse by denying any reality for wavefunctions, turning them instead into mere descriptors of what we know. But why, a detractor asks, should fundamental physics be so closely tied to human awareness? If we were not here to observe the world, would wavefunctions never collapse, or, perhaps, would the very concept of a wavefunction not exist? Was the universe a vastly different place before human consciousness evolved on planet earth? What if, instead of human experimenters, mice or ants or amoebas or computers are the only observers? Is the change in their “knowledge” adequate to be associated with the collapse of a wavefunction?11
By contrast, the Many Worlds interpretation avoids the whole matter of wavefunction collapse, since in this approach wavefunctions don’t collapse. But the price to pay is an enormous proliferation of universes, something that many a detractor has found intolerably exorbitant.12 Bohm’s approach also avoids wavefunction collapse; but, its detractors claim, in granting independent reality to both particles and waves, the theory lacks economy. Moreover, the detractors correctly argue, in Bohm’s formulation the wavefunction can exert faster-than-light influences on the particles it pushes. Supporters note that the former complaint is subjective at best, and the latter conforms to the nonlocality Bell proved unavoidable, so neither criticism is convincing. Nevertheless, perhaps unjustifiably, Bohm’s approach has never caught on.13 The Ghirardi-Rimini-Weber approach deals with wavefunction collapse directly, by changing the equations to incorporate a new spontaneous collapse mechanism. But, detractors point out, there is as yet not a shred of experimental evidence supporting the proposed modification to Schrödinger’s equation.
Research seeking a solid and fully transparent connection between the formalism of quantum mechanics and the experience of everyday life will no doubt go on for some time to come, and it’s hard to say which, if any, of the known approaches will ultimately achieve a majority consensus. Were physicists to be polled today, I don’t think there would be an overwhelming favorite. Unfortunately, experimental input is of limited help. While the Ghirardi-Rimini-Weber proposal does make predictions that can, in certain situations, differ from standard stage one / stage two quantum mechanics, the deviations are too small to be tested with today’s technology. The situation with the other three proposals is worse because they stymie experimental adjudication even more definitively. They agree fully with the standard approach, and so each yields the same predictions for things that can be observed and measured. They differ only regarding what happens backstage, as it were. They only differ, that is, regarding what quantum mechanics implies for the underlying nature of reality.
Even though the quantum measurement problem remains unsolved, during the last few decades a framework has been under development that, while still incomplete, has widespread support as a likely ingredient of any viable solution. It’s called decoherence.
Decoherence and Quantum Reality
When you first encounter the probabilistic aspect of quantum mechanics, a natural reaction is to think that it is no more exotic than the probabilities that arise in coin tosses or roulette wheels. But when you learn about quantum interference, you realize that probability enters quantum mechanics in a far more fundamental way. In everyday examples, various outcomes—heads versus tails, red versus black, one lottery number versus another—are assigned probabilities with the understanding that one or another result will definitely happen and that each result is the end product of an independent, definite history. When a coin is tossed, sometimes the spinning motion is just right for the toss to come out heads and sometimes it’s just right for the toss to come out tails. The 50-50 probability we assign to each outcome refers not just to the final result—heads or tails— but also to the histories that lead to each outcome. Half of the possible ways you can toss a coin result in heads, and half result in tails. The histories themselves, though, are totally separate, isolated alternatives. There is no sense in which different motions of the coin reinforce each other or cancel each other out. They’re all independent.
But in quantum mechanics, things are different. The alternate paths an electron can follow from the two slits to the detector are not separate, isolated histories. The possible histories commingle to produce the observed outcome. Some paths reinforce each other, while others cancel each other out. Such quantum interference between the various possible histories is responsible for the pattern of light and dark bands on the detector screen. Thus, the telltale difference between the quantum and the classical notions of probability is that the former is subject to interference and the latter is not.
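The difference can be captured in a few lines; the equal amplitudes and the range of phase differences below are assumptions of this toy illustration, not numbers from any particular experiment:

```python
# Two routes to the same point on the screen: classically their probabilities
# add; quantum mechanically their amplitudes add first, and the cross-term
# produces the light and dark bands.
import numpy as np

phase = np.linspace(0, 2 * np.pi, 9)         # relative phase between the two paths
a1 = 0.5 + 0j                                # amplitude via slit 1 (assumed equal)
a2 = 0.5 * np.exp(1j * phase)                # amplitude via slit 2

classical = np.abs(a1)**2 + np.abs(a2)**2    # independent histories: always 0.5
quantum = np.abs(a1 + a2)**2                 # commingled histories: swings 0 to 1

for p, c, q in zip(phase, classical, quantum):
    print(f"phase {p:4.2f}: classical {c:.2f}   quantum {q:.2f}")
```

The quantum column swings between 0 and 1 as the phase varies, which is exactly the banded pattern; erase the cross-term and the two columns agree everywhere.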
Decoherence is a widespread phenomenon that forms a bridge between the quantum physics of the small and the classical physics of the not-so-small by suppressing quantum interference—that is, by diminishing sharply the core difference between quantum and classical probabilities. The importance of decoherence was realized way back in the early days of quantum theory, but its modern incarnation dates from a seminal paper by the German physicist Dieter Zeh in 1970,14 and has since been developed by many researchers, including Erich Joos, also from Germany, and Wojciech Zurek, of the Los Alamos National Laboratory in New Mexico.
Here’s the idea. When Schrödinger’s equation is applied in a simple situation such as single, isolated photons passing through a screen with two slits, it gives rise to the famous interference pattern. But there are two very special features of this laboratory example that are not characteristic of real-world happenings. First, the things we encounter in day-to-day life are larger and more complicated than a single photon. Second, the things we encounter in day-to-day life are not isolated: they interact with us and with the environment. The book now in your hands is subject to human contact and, more generally, is continually struck by photons and air molecules. Moreover, since the book itself is made of many molecules and atoms, these constantly jittering constituents are continually bouncing off each other as well. The same is true for pointers on measuring devices, for cats, for human brains, and for just about everything you encounter in daily life. On astrophysical scales, the earth, the moon, asteroids, and the other planets are continually bombarded by photons from the sun. Even a grain of dust floating in the darkness of outer space is subject to continual hits from low-energy microwave photons that have been streaming through space since a short time after the big bang. And so, to understand what quantum mechanics says about real-world happenings—as opposed to pristine laboratory experiments—we should apply Schrödinger’s equation to these more complex, messier situations.
In essence, this is what Zeh emphasized, and his work, together with that of many others who have followed, has revealed something quite wonderful. Although photons and air molecules are too small to have any significant effect on the motion of a big object like this book or a cat, they are able to do something else. They continually “nudge” the big object’s wavefunction, or, in physics-speak, they disturb its coherence: they blur its orderly sequence of crest followed by trough followed by crest. This is critical, because a wavefunction’s orderliness is central to generating interference effects (see Figure 4.2). And so, much as adding tagging devices to the double-slit experiment blurs the resulting wavefunction and thereby washes out interference effects, the constant bombardment of objects by constituents of their environment also washes out the possibility of interference phenomena. In turn, once quantum interference is no longer possible, the probabilities inherent to quantum mechanics are, for all practical purposes, just like the probabilities inherent to coin tosses and roulette wheels. Once environmental decoherence blurs a wavefunction, the exotic nature of quantum probabilities melts into the more familiar probabilities of day-to-day life.15 This suggests a resolution of the quantum measurement puzzle, one that, if realized, would be just about the best thing we could hope for. I’ll describe it first in the most optimistic light, and then stress what still needs to be done.
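Before turning to that description, the blurring itself can be mimicked in a toy model; the Gaussian “kick” size and trial count below are arbitrary assumptions, chosen only to show the trend:

```python
# Toy decoherence: each environmental encounter adds a small random phase
# between the two alternatives; averaging over encounters erases interference.
import numpy as np

rng = np.random.default_rng(1)

def visibility(n_kicks, kick=0.3, trials=200_000):
    """|<exp(i*phase)>| after n random kicks; 1 = full interference, 0 = none."""
    # n independent Gaussian kicks accumulate to a total of width kick * sqrt(n)
    total = rng.normal(0.0, kick * np.sqrt(n_kicks), size=trials)
    return abs(np.exp(1j * total).mean())

for n in [0, 1, 10, 100, 1000]:
    print(f"{n:5d} environmental kicks -> interference visibility {visibility(n):.3f}")
```

A handful of kicks barely matters, but after enough of them the visibility is indistinguishable from zero: the cross-terms are gone, and the remaining probabilities behave like those of coins and roulette wheels.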
If a wavefunction for an isolated electron shows that it has, say, a 50 percent chance of being here and a 50 percent chance of being there, we must interpret these probabilities using the full-fledged weirdness of quantum mechanics. Since both of the alternatives can reveal themselves by commingling and generating an interference pattern, we must think of them as equally real. In loose language, there’s a sense in which the electron is at both locations. What happens now if we measure the electron’s position with a nonisolated, everyday-sized laboratory instrument? Well, corresponding to the electron’s ambiguous whereabouts, the pointer on the instrument has a 50 percent chance of pointing to this value and a 50 percent chance of pointing to that value. But because of decoherence, the pointer will not be in a ghostly mixture of pointing at both values; because of decoherence, we can interpret these probabilities in the usual, classical, everyday sense. Just as a coin has a 50 percent chance of landing heads and a 50 percent chance of landing tails, but lands either heads or tails, the pointer has a 50 percent chance of pointing to this value and a 50 percent chance of pointing to that value, but it will definitely point to one or the other.
Similar reasoning applies for all other complex, nonisolated objects. If a quantum calculation reveals that a cat, sitting in a closed box, has a 50 percent chance of being dead and a 50 percent chance of being alive—because there is a 50 percent chance that an electron will hit a booby-trap mechanism that subjects the cat to poison gas and a 50 percent chance that the electron misses the booby trap—decoherence suggests that the cat will not be in some absurd mixed state of being both dead and alive. Although decades of heated debate have been devoted to issues like “What does it mean for a cat to be both dead and alive?” and “How does the act of opening the box and observing the cat force it to choose a definite status, dead or alive?,” decoherence suggests that long before you open the box, the environment has already completed billions of observations that, in almost no time at all, turned all mysterious quantum probabilities into their less mysterious classical counterparts. Long before you look at it, the environment has compelled the cat to take on one, single, definite condition. Decoherence forces much of the weirdness of quantum physics to “leak” from large objects since, bit by bit, the quantum weirdness is carried away by the innumerable impinging particles from the environment.
It’s hard to imagine a more satisfying solution to the quantum measurement problem. By being more realistic and abandoning the simplifying assumption that ignores the environment—a simplification that was crucial to making progress during the early development of the field—we would find that quantum mechanics has a built-in solution. Human consciousness, human experimenters, and human observations would no longer play a special role since they (we!) would simply be elements of the environment, like air molecules and photons, which can interact with a given physical system. There would also no longer be a stage one / stage two split between the evolution of the objects and the experimenter who measures them. Everything—observed and observer—would be on an equal footing. Everything—observed and observer—would be subject to precisely the same quantum mechanical law as is set down in Schrödinger’s equation. The act of measurement would no longer be special; it would merely be one specific example of contact with the environment.
Is that it? Does decoherence resolve the quantum measurement problem? Is decoherence responsible for wavefunctions’ closing the door on all but one of the potential outcomes to which they can lead? Some think so. Researchers like Robert Griffiths, of Carnegie Mellon; Roland Omnès, of Orsay; the Nobel laureate Murray Gell-Mann, of the Santa Fe Institute; and Jim Hartle, of the University of California at Santa Barbara, have made great progress and claim that they have developed decoherence into a complete framework (called decoherent histories) that solves the measurement problem. Others, like myself, are intrigued but not yet fully convinced. You see, the power of decoherence is that it successfully removes the artificial barrier Bohr erected between large and small physical systems, making everything subject to the same quantum mechanical formulas. This is important progress and I think Bohr would have found it gratifying. Although the unresolved quantum measurement problem never diminished physicists’ ability to reconcile theoretical calculations with experimental data, it did lead Bohr and his colleagues to articulate a quantum mechanical framework with some distinctly awkward features. Many found the framework’s reliance on fuzzy words about wavefunction collapse, and on the imprecise notion of “large” systems belonging to the dominion of classical physics, unnerving. To a significant extent, by taking account of decoherence, researchers have rendered these vague ideas unnecessary.
However, a key issue that I skirted in the description above is that even though decoherence suppresses quantum interference and thereby coaxes weird quantum probabilities to be like their familiar classical counterparts, each of the potential outcomes embodied in a wavefunction still vies for realization. And so we are still left wondering how one outcome “wins” and where the many other possibilities “go” when that actually happens. When a coin is tossed, classical physics gives an answer to the analogous question. It says that if you examine the way the coin is set spinning with adequate precision, you can, in principle, predict whether it will land heads or tails. On closer inspection, then, precisely one outcome is determined by details you initially overlooked. The same cannot be said in quantum physics. Decoherence allows quantum probabilities to be interpreted much like classical ones, but does not provide any finer details that select one of the many possible outcomes to actually happen.
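The classical half of that contrast is easy to dramatize; this little model of a tossed coin is my own invention, with the spin rate and airtime as made-up inputs:

```python
# Deterministic toy coin: the outcome is fixed by the toss's fine details.
def coin_outcome(spin_hz, airtime_s):
    """Heads if the coin completes an even number of half-turns while aloft."""
    half_turns = int(2 * spin_hz * airtime_s)
    return "heads" if half_turns % 2 == 0 else "tails"

print(coin_outcome(20.0, 2.000))   # heads
print(coin_outcome(20.0, 2.030))   # tails: a 30-millisecond change flips it
```

Look closely enough at the initial conditions and the 50-50 uncertainty evaporates. Decoherence offers no quantum analog of this closer look; it tames the probabilities without supplying the hidden details that would pick the outcome.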
Much in the spirit of Bohr, some physicists believe that searching for such an explanation of how a single, definite outcome arises is misguided. These physicists argue that quantum mechanics, with its updating to include decoherence, is a sharply formulated theory whose predictions account for the behavior of laboratory measuring devices. And according to this view, that is the goal of science. To seek an explanation of what’s really going on, to strive for an understanding of how a particular outcome came to be, to hunt for a level of reality beyond detector readings and computer printouts betrays an unreasonable intellectual greediness.
Many others, including me, have a different perspective. Explaining data is certainly part of what science is about. But science is also about embracing the theories the data confirm and going further, using them to gain maximal insight into the nature of reality. I strongly suspect that there is much insight to be gained by pushing onward toward a complete solution of the measurement problem.
Thus, although there is wide agreement that environment-induced decoherence is a crucial part of the structure spanning the quantum-to-classical divide, and while many are hopeful that these considerations will one day coalesce into a complete and cogent connection between the two, far from everyone is convinced that the bridge has yet been fully built.
Quantum Mechanics and the Arrow of Time
So where do we stand on the measurement problem, and what does it mean for the arrow of time? Broadly speaking, there are two classes of proposals for linking common experience with quantum reality. In the first class (for example, wavefunction as knowledge; Many Worlds; decoherence), Schrödinger’s equation is the be-all and end-all of the story; the proposals simply provide different ways of interpreting what the equation means for physical reality. In the second class (for example, Bohm; Ghirardi-Rimini-Weber), Schrödinger’s equation must be supplemented with other equations (in Bohm’s case, an equation that shows how a wavefunction pushes a particle around) or it must be modified (in the Ghirardi-Rimini-Weber case, to incorporate a new, explicit collapse mechanism). A key question for determining the impact on time’s arrow is whether these proposals introduce a fundamental asymmetry between one direction in time and the other. Remember, Schrödinger’s equation, just like those of Newton, Maxwell, and Einstein, treats forward and backward in time on a completely equal footing. It provides no arrow to temporal evolution. Do any of the proposals change this?
In the first class of proposals, the Schrödinger framework is not at all modified, so temporal symmetry is maintained. In the second class, temporal symmetry may or may not survive, depending on the details. For example, in Bohm’s approach, the new equation proposed does treat time future and time past on an equal footing and so no asymmetry is introduced. However, the proposal of Ghirardi, Rimini, and Weber introduces a collapse mechanism that does have a temporal arrow—an “uncollapsing” wavefunction, one that goes from a spiked to a spread-out shape, would not conform to the modified equations. Thus, depending on the proposal, quantum mechanics, together with a resolution to the quantum measurement puzzle, may or may not continue to treat each direction in time equally. Let’s consider the implications of each possibility.
If time symmetry persists (as I suspect it will), all of the reasoning and all of the conclusions of the last chapter can be carried over with little change to the quantum realm. The core physics that entered our discussion of time’s arrow was the time-reversal symmetry of classical physics. While the basic language and framework of quantum physics differ from those of classical physics—wavefunctions instead of positions and velocities; Schrödinger’s equation instead of Newton’s laws—time-reversal symmetry of all quantum equations would ensure that the treatment of time’s arrow would be unchanged. Entropy in the quantum world can be defined much as in classical physics so long as we describe particles in terms of their wavefunctions. And the conclusion that entropy should always be on the rise—increasing both toward what we call the future and toward what we call the past—would still hold.
We would thus come to the same puzzle encountered in Chapter 6. If we take our observations of the world right now as given, as undeniably real, and if entropy should increase both toward the future and toward the past, how do we explain how the world got to be the way it is and how it will subsequently unfold? And the same two possibilities would present themselves: either all that we see popped into existence by a statistical fluke that you would expect to happen every so often in an eternal universe that spends the vast majority of its time being totally disordered, or, for some reason, entropy was astoundingly low just following the big bang and for the last 14 billion years things have been slowly unwinding and will continue to do so toward the future. As in Chapter 6, to avoid the quagmire of not trusting memories, records, and the laws of physics, we focus on the second option—a low-entropy bang—and seek an explanation for how and why things began in such a special state.
If, on the other hand, time symmetry is lost—if the resolution of the measurement problem that is one day accepted reveals a fundamental asymmetric treatment of future versus past within quantum mechanics— it could very well provide the most straightforward explanation of time’s arrow. It might show, for instance, that eggs splatter but don’t unsplatter because, unlike what we found using the laws of classical physics, splattering solves the full quantum equations but unsplattering doesn’t. A reverse-run movie of a splattering egg would then depict motion that couldn’t happen in the real world, which would explain why we’ve never seen it. And that would be that.
Possibly. But even though this would seem to provide a very different explanation of time’s arrow, in reality it may not be as different as it appears. As we emphasized in Chapter 6, for the pages of War and Peace to become increasingly disordered they must begin ordered; for an egg to become disordered through splattering, it must begin as an ordered, pristine egg; for entropy to increase toward the future, entropy must be low in the past so things have the potential to become disordered. However, just because a law treats past and future differently does not ensure that the law dictates a past with lower entropy. The law might still imply higher entropy toward the past (perhaps entropy would increase asymmetrically toward past and future), and it’s even possible that a time-asymmetric law would be unable to say anything about the past at all. The latter is true of the Ghirardi-Rimini-Weber proposal, one of the only substantive time-asymmetric proposals on the market. Once their collapse mechanism does its trick, there is no way to undo it—there is no way to start from the collapsed wavefunction and evolve it back to its previous spread-out form. The detailed form of the wavefunction is lost in the collapse—it turns into a spike—and so it’s impossible to “retrodict” what things were like at any time before the collapse occurred.
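The loss of retrodictive power is simple to see in miniature; the grid and the particular wavefunctions below are assumptions of my sketch, and the spike idealizes the localized post-collapse wavefunction:

```python
# Collapse is many-to-one: distinct pre-measurement wavefunctions map to the
# identical spike, so no equation can run the process backward uniquely.
import numpy as np

x = np.linspace(-10, 10, 401)
dx = x[1] - x[0]

def normalize(psi):
    return psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

psi_a = normalize(np.exp(-(x - 3)**2) + np.exp(-(x + 3)**2))  # two separated lumps
psi_b = normalize(np.exp(-x**2 / 8))                          # one broad lump

def collapse_at(psi, x0):
    """Idealized collapse: a grid-point spike at x0, whatever psi looked like."""
    spike = np.zeros_like(psi)
    spike[np.argmin(np.abs(x - x0))] = 1 / np.sqrt(dx)
    return spike

# Two very different pasts, one identical present:
print(np.array_equal(collapse_at(psi_a, 0.0), collapse_at(psi_b, 0.0)))  # True
```

Since the spike carries no memory of whether it came from the two-lump past or the broad-lump past, retrodiction in the Ghirardi-Rimini-Weber scheme is blocked in principle, not merely in practice.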
Thus, even though a time-asymmetric law would provide a partial explanation for why things unfold in one temporal order but never in the reverse order, it could very well call for the same key supplement required by time-symmetric laws: an explanation for why entropy was low in the distant past. Certainly, this is true of the time-asymmetric modifications to quantum mechanics that have so far been proposed. And so, unless some future discovery reveals two features, both of which I consider unlikely— a time-asymmetric solution to the quantum measurement problem that, additionally, ensures that entropy decreases toward the past—our effort to explain the arrow of time leads us, once again, back to the origin of the universe, the subject of the next part of the book.
As these chapters will make clear, cosmological considerations wend their way through many mysteries at the heart of space, time, and matter. So on the journey toward modern cosmology’s insights into time’s arrow, it’s worth our while not to rush through the landscape, but rather, to take a well-considered stroll through cosmic history.