The decoherent histories (DH) approach was initiated in 1984 by Robert Griffiths [10] (who spoke, however, of consistent histories) and independently proposed by Roland Omnès [11] a little later; it was subsequently rediscovered by Murray Gell-Mann and James Hartle [12], who made crucial contributions to its development. (DH should not be confused with the environment-induced superselection approach of Wojciech Zurek [13], in which decoherence also plays a crucial role, but one that differs significantly from its role in DH. In Zurek's approach the environment is fundamental, acting in effect as an observer, so that it is difficult to regard this proposal as genuinely providing a quantum theory without observers.)

DH may be regarded as a minimalist
approach to the conversion of the quantum measurement formalism to a theory
governing sequences of objective events, including, but not limited to,
those that we regard as directly associated with measurements. Where the
Copenhagen interpretation talks about *finding* (and thereby typically
disturbing) such and such observables with such and such values at such and
such times, the decoherent histories approach speaks of such and such
observables *having* such and such values at such and such times. To
each such *history* *h*, DH assigns the same probability
*P*(*h*) of *happening* that the quantum measurement
formalism--the wave function reduction postulate for ideal measurements
together with the Schrödinger evolution--would assign to the
probability of *observing* that history in a sequence of ideal
(coarse-grained) measurements of the respective observables at the respective
times: If the (initial) wave function of the system is $\psi$,

$$P(h) = \|E(h)\psi\|^2, \qquad (1)$$

where $E(h) = P_k(t_k)\cdots P_1(t_1)$, with $P_i(t_i)$ the Heisenberg projection operators corresponding to the
time-ordered sequence of events defining the history *h*. For example,
for a spin 1/2 particle initially (at *t*=0) in the state $\psi$ with
$\sigma_x\psi = \psi$, we might consider the history *h* for which
$\sigma_z = 1$ at *t*=1 and $\sigma_x = 1$ at *t*=2. For
Hamiltonian *H*=0, (1) then yields
*P*(*h*)=1/4.
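This number is easy to check numerically. The following sketch (plain NumPy; my own illustration of formula (1), with the state and projectors taken from the example above) evaluates $P(h) = \|E(h)\psi\|^2$ for $E(h) = P_{\sigma_x=1}(t{=}2)\,P_{\sigma_z=1}(t{=}1)$; since *H*=0, the Heisenberg projectors coincide with the ordinary spectral projectors.

```python
import numpy as np

# Initial state: eigenstate of sigma_x with eigenvalue +1, i.e. sigma_x psi = psi.
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)

# Spectral projectors onto the +1 eigenspaces of sigma_z and sigma_x.
P_z_up = np.array([[1.0, 0.0], [0.0, 0.0]])
P_x_up = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])

# With H = 0, E(h) = P_x_up(t=2) P_z_up(t=1), applied in time order
# (earliest projector acts first, so it stands rightmost).
E_h = P_x_up @ P_z_up

# Formula (1): P(h) = ||E(h) psi||^2.
P_h = np.linalg.norm(E_h @ psi) ** 2
print(P_h)  # ~ 0.25, i.e. P(h) = 1/4
```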

DH can be regarded as describing a stochastic process, a process with intrinsic randomness. Think, for example, of a random walk, with histories corresponding to a sequence of jumps and probabilities of histories generated by the probabilities for the individual jumps. The histories with which DH is concerned are histories of observables, for example of positions of particles. While Schrödinger's spherical wave impinges continuously on a screen over its full expanse, the screen lights up at one instant at one spot because it is precisely with such events that DH is concerned and to such events that DH assigns nonvanishing probability.

*To understand DH one must
appreciate that the histories with which it is concerned are not histories
of wave functions.* For DH the wave function is by no means the complete
description of a quantum system; it is not even the most important part of
that description. DH is primarily concerned with histories of observables,
not of wave functions, which play only a secondary role, as a theoretical
ingredient in the formulation of laws governing the evolution of quantum
observables via the probabilities assigned to histories. Thus DH avoids the
measurement problem in exactly the manner suggested by Einstein.

It should come as no surprise that the consistent development of the DH idea, of assigning probabilities to objective histories, is not so easy to achieve. After all, Bohr and Heisenberg were no fools; they surely would not have insisted that all is observation were such a radical conclusion easily avoidable. It is only as a first approximation that DH can be regarded as merely describing a stochastic process. There are, in fact, some very significant differences. Perhaps the most crucial of these concerns the role of coarse graining. Because of quantum interference effects, coarse graining plays an essential role for DH, not just for the description of events of interest to us, but in the very formulation of the theory itself. A fine-grained history--given for a system of particles by, for example, the precise specification of the positions of all particles at all times in some time-interval--will normally not be assigned any probability. In fact, most coarse-grained histories won't either.

For example, for the two-slit experiment DH assigns no probability to
the history in which the particle passes unobserved through, say, the upper
slit and lands in a small neighborhood of a specific point on the
scintillation-screen. Nor, indeed, does it assign any probability to the
spin history that differs from the one described after formula (1) only by
the replacement of $\sigma_x = 1$ by $\sigma_x = -1$ at *t*=2. This is
because (1) yields the value 1/4 also for this history,
which is inconsistent with the value 0 for the corresponding coarse-grained
history with *t*=1 ignored! (These values involve no inconsistency for
the usual quantum theory, in which they concern the results of
measurements, since the *measurement* of $\sigma_z$ at *t*=1
would be expected to disturb
$\sigma_x$.)
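The interference behind this inconsistency can be made explicit. In the sketch below (NumPy; my own illustration, using the same state and spin projectors as in the earlier example), the two fine-grained histories ending with $\sigma_x = -1$ at *t*=2 each receive the value 1/4 from formula (1), while the coarse-grained history that ignores *t*=1 receives 0: the amplitudes of the two branches cancel.

```python
import numpy as np

psi = np.array([1.0, 1.0]) / np.sqrt(2.0)            # sigma_x psi = psi
P_z_up = np.array([[1.0, 0.0], [0.0, 0.0]])          # sigma_z = +1
P_z_dn = np.array([[0.0, 0.0], [0.0, 1.0]])          # sigma_z = -1
P_x_dn = 0.5 * np.array([[1.0, -1.0], [-1.0, 1.0]])  # sigma_x = -1

# Fine-grained histories (H = 0): sigma_z = +/-1 at t=1, then sigma_x = -1 at t=2.
h1 = np.linalg.norm(P_x_dn @ P_z_up @ psi) ** 2   # ~ 1/4
h2 = np.linalg.norm(P_x_dn @ P_z_dn @ psi) ** 2   # ~ 1/4

# Coarse-grained history: sigma_x = -1 at t=2, with t=1 ignored.
coarse = np.linalg.norm(P_x_dn @ psi) ** 2        # ~ 0

print(h1, h2, coarse)   # additivity fails: 1/4 + 1/4 != 0
```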

DH assigns probabilities only to histories belonging to special families
$\mathcal{F}$, closed under coarse graining, which satisfy a certain
*decoherence condition* (DC)

$$\mathrm{Re}\,\langle E(h)\psi,\, E(h')\psi\rangle = 0 \quad\text{for all distinct } h, h' \in \mathcal{F},$$

guaranteeing that *P*(*h*) is additive on $\mathcal{F}$ and hence provides a
consistent assignment of probabilities to elements of $\mathcal{F}$. (The
decoherence condition in fact has several versions, the differences between
which I shall here ignore. There is also a perhaps simpler version of
(1), with a linear dependence on *E*(*h*), that involves a much more
robust decoherence condition than DC [14].) Whether or not a family $\mathcal{F}$
satisfies the DC depends not only on a sequence of times and
coarse-grained observables at those times, but also upon the (initial) wave
function $\psi$ (or density matrix $\rho$) as well as the Hamiltonian *H* of
the relevant system, so it is convenient to regard these too as part of
the specification of $\mathcal{F}$. DH thus assigns probabilities *P*(*h*) to
those histories *h* that belong to at least one decoherent family
(as I will call those families satisfying the DC).
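The DC can be checked mechanically. A minimal sketch (NumPy; one common version of the condition, with function and variable names of my own): given the branch vectors $E(h)\psi$ of a family, additivity of $P(h) = \|E(h)\psi\|^2$ under coarse graining amounts to the vanishing of the real parts of their mutual inner products.

```python
import numpy as np

def decoherent(branches, tol=1e-10):
    """Check Re<E(h) psi, E(h') psi> = 0 for all distinct histories h, h'."""
    for i in range(len(branches)):
        for j in range(i + 1, len(branches)):
            if abs(np.vdot(branches[i], branches[j]).real) > tol:
                return False
    return True

psi = np.array([1.0, 1.0]) / np.sqrt(2.0)   # sigma_x psi = psi
P_z = {+1: np.diag([1.0, 0.0]), -1: np.diag([0.0, 1.0])}
P_x = {+1: 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]]),
       -1: 0.5 * np.array([[1.0, -1.0], [-1.0, 1.0]])}

# Single-time family (sigma_z at t=1 only): always decoherent.
single = [P_z[s] @ psi for s in (+1, -1)]
print(decoherent(single))   # True

# Two-time family (sigma_z at t=1, then sigma_x at t=2): not decoherent.
double = [P_x[a] @ P_z[b] @ psi for a in (+1, -1) for b in (+1, -1)]
print(decoherent(double))   # False
```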

It turns out, naturally enough, that a family of histories describing the results of a sequence of measurements will normally be decoherent, regardless of whether or not we actually observe the measurement devices involved. Moreover, interaction with a measurement device is incidental; satisfaction of the DC may be induced by any suitable interaction--or by none at all.

In fact, families defined by conditions on (commuting) observables at a *
single* time are always decoherent. After all, it is for precisely such
families that textbook quantum mechanics supplies perfectly straightforward
probability formulas--via spectral measures. It is important to bear in
mind, however, that even for such standard families, the textbook
probability formulas have an entirely different meaning for DH than for
orthodox quantum theory, describing the probability distribution of the
*actual* value of the relevant observable at the time under
consideration, and not merely the distribution of the value that would be
found were the appropriate measurement performed. This difference is the
source of a very serious difficulty for DH.

The difficulty arises already for the standard families, involving
observables at a single time. The problem is that the way that the
probabilities *P*(*h*) are intended in the DH approach, as probabilities of
what objectively happens and not merely of what would be observed upon
measurement, is precisely what is precluded by the no-hidden-variables
theorems of, e.g., Gleason [15, 25, 33], Kochen and Specker
[16, 33], or Bell [17, 25, 33]! It is a consequence
of these theorems that the totality of joint quantum mechanical
probabilities for the various sets of commuting observables is genuinely
inconsistent: the ascription of these probabilities to actual joint values,
as relative frequencies over an ensemble of systems--a single ensemble,
defined by the wave function under consideration, for the
totality--involves a contradiction, albeit a hidden one. For example, the
correlations between spin components for a pair of spin 1/2 particles in
the singlet state, if consistent, would have to satisfy Bell's inequality.
They don't.
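The violation is easy to exhibit. In the sketch below (NumPy; my own illustration, with angles chosen for maximal violation), the singlet-state correlation for spin components along coplanar directions at angles *a* and *b* is $E(a,b) = -\cos(a-b)$, and the CHSH combination of four such correlations, which Bell's inequality bounds by 2 for any consistent joint assignment of values, comes out to $2\sqrt{2}$.

```python
import numpy as np

# Singlet state of two spin-1/2 particles: (|01> - |10>)/sqrt(2).
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def spin(angle):
    """Spin component along a direction at `angle` in the x-z plane."""
    return np.cos(angle) * sz + np.sin(angle) * sx

def E(a, b):
    """Correlation <psi| sigma.a (x) sigma.b |psi> in the singlet state."""
    return singlet @ np.kron(spin(a), spin(b)) @ singlet

a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # ~ 2*sqrt(2) > 2: the CHSH form of Bell's inequality fails
```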

A simple and dramatic example of the sort of inconsistency I have in mind
was recently found by Lucien Hardy [18]. For almost all spin states
of a pair of spin 1/2 particles there are spin components *A*, *B*, *C*, and *D* such that
the quantum probabilities for appropriate pairs would imply that in a large
ensemble of such systems (1) it sometimes happens that *A*=1 and also
*B*=1; (2) whenever *A*=1, also *C*=1; (3) whenever *B*=1, also *D*=1; and
(4) it never happens that *C*=1=*D*. The quantum probabilities are thus
inconsistent: there clearly is no such ensemble. (The probability that
*A*=1=*B* is about 9% with optimal choices of state and observables.) The
inconsistency implied by violation of Bell's inequality is of a similar
nature.
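Hardy's example can be reconstructed numerically. In the sketch below (NumPy; the parametrization and all names are mine, not Hardy's), each particle admits two measurements, *U* (projection onto a direction *u*) and *D* (projection onto a direction *d*), with *A*=*U* on particle 1, *B*=*U* on particle 2, *C*=*D* on particle 2, and *D*=*D* on particle 1. Conditions (2)-(4) then say that the state is orthogonal to $|d\rangle|d\rangle$, $|u\rangle|d^\perp\rangle$ and $|d^\perp\rangle|u\rangle$, which fixes it up to normalization; scanning over the angle between *u* and *d* recovers the optimal value of *P*(*A*=1, *B*=1), about 9%.

```python
import numpy as np

def hardy_probability(theta):
    """P(A=1, B=1) for the state meeting Hardy's conditions (2)-(4),
    with u fixed and d at angle theta on each side."""
    u = np.array([1.0, 0.0])
    d = np.array([np.cos(theta), np.sin(theta)])
    d_perp = np.array([-np.sin(theta), np.cos(theta)])

    constraints = np.array([np.kron(d, d),        # (4): never C=1=D
                            np.kron(u, d_perp),   # (2): A=1 implies C=1
                            np.kron(d_perp, u)])  # (3): B=1 implies D=1
    # psi spans the one-dimensional null space of the constraint matrix.
    _, _, vh = np.linalg.svd(constraints)
    psi = vh[-1]
    return np.dot(np.kron(u, u), psi) ** 2        # (1): P(A=1, B=1)

best = max(hardy_probability(t) for t in np.linspace(0.01, np.pi / 2 - 0.01, 2000))
print(best)   # ~ 0.0902: about 9%, as quoted in the text
```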

Thus, as so far formulated, DH is not well defined. For a given
system, with specified Hamiltonian and fixed initial wave function, the
collection of numbers *P*(*h*), with *h* belonging to at least one decoherent
family, cannot consistently be regarded as intended--as the probability
for the occurrence of *h*. Too many histories have been assigned
probabilities: to be well defined, DH must restrict, by some further
condition or other, the class of decoherent families whose elements are to
be assigned probabilities. It is not absolutely essential that there be
only one such family. But if there be more than one, it is essential that
the probabilities defined on them be mutually consistent.

Without directly addressing this problem of mutual inconsistency, Gell-Mann
and Hartle (GMH) [12, 19] have emphasized that for various reasons the DC
alone allows far too many families. They have therefore introduced
additional conditions on families, such as "fullness" and "maximality,"
and have proposed distinguishing families according to certain tentative
measures of "classicity." With such ingredients they hope to define an
optimization procedure--and hence what might be called an optimality
condition--that will yield a possibly unique "quasiclassical domain of
familiar experience," a family that should be thought of as describing
familiar (coarse-grained) macroscopic variables, for example hydrodynamical
variables. When the probability formula *P*(*h*) is applied to this family,
it is hoped that the usual macroscopic laws, including those of
phenomenological hydrodynamics, will emerge, together with quantum
corrections permitting occasional random fluctuations on top of the
deterministic macrolaws (and classical fluctuations).

GMH do not seem to regard their additional conditions as fundamental, but rather merely as ingredients crucial to their analysis of a theory they believe already defined by the DC alone. While I've argued that there is no such theory, a physical theory could well be defined by the decoherence condition together with suitable additional conditions (DC+) like those proposed by GMH, also regarded as fundamental. In this way, DH becomes a serious program for the construction of a quantum theory without observers.

It is true that much work remains to be done in the detailed construction of a theory along these lines, even insofar as nonrelativistic quantum mechanics is concerned. It is also true that many questions remain concerning exactly what is going on in a universe governed by DH, particularly with regard to the irreducible coarse graining. Nonetheless it seems likely that the program of DH can be brought successfully to completion. It is, however, not at all clear that the theory thus achieved will possess the simplicity and clarity expected of a fundamental physical theory. The approach to which I shall now turn has already led to the construction of several precise and reasonably simple versions of quantum theory without observers.
