COSMOLOGY, BIG BANG THEORY

BERNARD E.J. PAGEL

"Big Bang" is the name given (originally by Sir Fred Hoyle with semifacetious intent) to theories which assume that the whole observable universe has expanded in a finite time (according to our usual time reckoning) from an earlier state of much higher density. The scope of such theories is limited at present by quantum effects that are expected to become significant (and cause existing concepts of a smooth space-time to break down) at some density not exceeding the Planck density, c^5/(ħG^2) ≈ 5 × 10^93 g cm^-3 (where c is the speed of light, ħ is Planck's action constant divided by 2π, and G is the gravitational constant), corresponding to the Planck time (ħG/c^5)^1/2 ≈ 5 × 10^-44 s, so that even the boldest extrapolations of current physical ideas into the past are confined to epochs later than this.

Observations (the microwave background radiation and the abundances of light elements) strongly suggest the existence of very high temperatures as well as densities in the past. Attention is therefore confined here to "hot Big Bang" theories, which envisage a "universal fireball" at early times, dominated by radiation and relativistic particles. These were pioneered by George Gamow, Ralph Alpher, and Robert Herman in the late 1940s. Application of such theories can test high-energy physics and basic properties of matter at energies greatly exceeding the 10,000 GeV or so of the largest man-made accelerator currently planned (the Superconducting Super Collider), if we can interpret the data; conversely, some of their predictions can be tested by laboratory experiments with accelerators.

BIG BANG COSMOLOGICAL MODELS

Development of consistent cosmological models became possible only after that of the general relativity theory, which accounts for the mutual effects of mass-energy and space-time and can thus deal with both finite and infinite amounts of matter, avoiding difficulties with boundary conditions that arise in cosmology based on classical dynamics. However, many properties of cosmological models can be qualitatively and even quantitatively described in Newtonian terms, which makes them easier to visualize.

Most models are based on the cosmological principle, which asserts that the universe (considered on a sufficiently large scale) is homogeneous and isotropic. Mass-energy in the universe is approximated as a smooth fluid called the substratum, populated by imaginary fundamental observers, each one at rest relative to the substratum at his position. Homogeneity implies that the universe presents the same large-scale aspect, at any given time, to all fundamental observers; this in turn permits them to agree on a universal cosmic time, which can be standardized by observing a property of the universe (e.g., the average density or the radiation temperature). Isotropy implies that, to a fundamental observer, the universe presents the same large-scale aspect in all directions; this in itself implies homogeneity if all fundamental observers are equivalent, and it also provides a local standard of rest. Anisotropic (e.g., rotating) models have been studied theoretically, but they are so severely constrained by the observed isotropy of the microwave background (apart from an effect that can be attributed to motion of the Earth at a few hundred kilometers per second relative to the substratum) that they need not be considered further.
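The Planck limits quoted in the introduction follow directly from the three fundamental constants named there. The short sketch below (a Python illustration added for concreteness; it is not part of the original article, and the constants are standard CGS values) evaluates the Planck density c^5/(ħG^2) and the Planck time (ħG/c^5)^1/2.

```python
# Minimal sketch (illustrative, not from the article): evaluate the Planck
# density and Planck time from c, hbar, and G in CGS units.

c = 2.998e10        # speed of light [cm/s]
hbar = 1.055e-27    # Planck's constant divided by 2*pi [erg s]
G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]

# Planck density: rho_Pl = c^5 / (hbar * G^2)
rho_pl = c**5 / (hbar * G**2)

# Planck time: t_Pl = sqrt(hbar * G / c^5)
t_pl = (hbar * G / c**5) ** 0.5

print(f"Planck density ~ {rho_pl:.1e} g/cm^3")   # ~5e93 g/cm^3
print(f"Planck time    ~ {t_pl:.1e} s")          # ~5e-44 s
```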
The real universe is, of course, lumpy on scales up to at least that of superclusters of galaxies; but the microwave background temperature is isotropic (apart from the effect of the Earth's motion) to a few parts in 100,000 or less, implying that homogeneity is a good approximation on large enough scales, unless the Earth is in a unique central position, a proposition that is unthinkable after Copernicus.

The cosmological principle restricts the evolution of the substratum to one of three possibilities: it may be static, or it may be expanding or contracting, with the relative velocity of any two points, at some instant of cosmic time, directed along the line (or geodesic curve) joining them and proportional to the distance between them. The expanding case is the one corresponding to observation, as expressed in Edwin P. Hubble's law of redshifts (1929). Thus the relative position of any two points of the substratum can be expressed by "comoving" coordinates (a distance, and two angles specifying a direction) that remain fixed during expansion (or contraction), but with a universal scale factor R(t) that depends on time. In addition, space may be curved in one of three different ways: spherical (a three-dimensional analog of the surface of a balloon), flat (i.e., Euclidean), or hyperbolic (analogous to a saddle-shaped surface).

Mathematically, these possibilities are embodied in an equation first clearly formulated by Howard P. Robertson (in the USA) and by Arthur G. Walker (in the UK), which relates four-dimensional intervals in space-time between two events to the corresponding differences in time and in spatial coordinates. A consequence of this equation is that when a light signal is emitted by one fundamental observer at time t and received by another at a later time t_0, the wavelength received is scaled up relative to the wavelength emitted in the ratio 1 + z = R(t_0)/R(t). Thus expansion leads to a redshift z caused by a dilation of space; when small, this is given by the usual Doppler formula as it appears in Hubble's law, V = H_0 D = cz = c(λ_o - λ_e)/λ_e, where V is the recession velocity, D is the distance, and λ_o and λ_e are, respectively, the observed and the emitted (or laboratory) wavelengths of light or any other electromagnetic waves. H_0 is the Hubble constant (actually a function of time; the subscript zero refers to some epoch of observation such as the present), which is estimated from observation to be somewhere between about 50 and 100 km s^-1 Mpc^-1 (1 Mpc = 1 megaparsec ≈ 3.26 million light-years), corresponding to a Hubble time 1/H_0 between about 2 × 10^10 and 10^10 years, respectively.

The Robertson-Walker equation permits a threefold infinity of cosmological (or "world") models differing in the sign (or absence) of curvature and in the way in which the scale factor depends on time. These properties depend in turn on the form of the field equation of general relativity (involving, or not, an arbitrary "cosmological" constant Λ, which acts, when positive, as though producing a repulsive force that increases with distance from the observer), on the density of mass-energy, and on its equation of state. Two important limiting cases of the latter are (i) radiation and relativistic particles (particles traveling at, or almost at, the speed of light), dominant in the early stages, for which pressure makes a significant contribution to the mass-energy density; and (ii) effectively pressure-free "cold" matter, which dominates today.
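The redshift and Hubble-law relations above lend themselves to a quick numerical illustration. The following sketch (illustrative only; the values chosen for H_0 and D are hypothetical round numbers, not measurements) evaluates 1 + z = R(t_0)/R(t), the Hubble time for the quoted range of H_0, and the small-z form V = H_0 D.

```python
# Illustrative sketch of the redshift and Hubble-law relations in the text.

KM_PER_MPC = 3.086e19        # kilometers in one megaparsec
SEC_PER_YEAR = 3.156e7       # seconds in one year

def redshift(R_emit, R_obs):
    """1 + z = R(t_0)/R(t): redshift from scale factors at emission and observation."""
    return R_obs / R_emit - 1.0

def hubble_time_years(H0_km_s_Mpc):
    """Hubble time 1/H_0, converted to years."""
    H0_per_sec = H0_km_s_Mpc / KM_PER_MPC
    return 1.0 / H0_per_sec / SEC_PER_YEAR

# Expansion by a factor of 2 since emission gives z = 1:
print(redshift(R_emit=0.5, R_obs=1.0))           # 1.0

# The quoted range H_0 = 50..100 km/s/Mpc brackets the Hubble time:
print(f"{hubble_time_years(50):.1e} yr")         # ~2e10
print(f"{hubble_time_years(100):.1e} yr")        # ~1e10

# Small-z Doppler form of Hubble's law, V = H_0 * D (hypothetical inputs):
H0, D = 75.0, 100.0                              # km/s/Mpc and Mpc
print(f"V = {H0 * D:.0f} km/s")                  # 7500 km/s
```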
The first relativistic cosmological model was that of Albert Einstein (1917), who envisaged a static, finite spherical universe in which the gravitational attraction of matter is balanced by the repulsion arising from a positive Λ. Willem de Sitter produced in the same year an alternative, flat model, also with positive Λ, that is empty of matter; it was later interpreted as an (exponentially) expanding universe with a Hubble constant that is independent of time. Explicitly nonstatic models were studied first by Alexander Friedmann in the USSR (1922) and independently by Georges Lemaître in Belgium (1927). Sir Arthur Eddington showed theoretically in 1930 that Einstein's static model is unstable, and Hubble's announcement of the redshift law in 1929 had already ruled nonexpanding models out of court. Lemaître first formulated explicitly the idea of an initial state of enormous density (called by him "the primeval atom"), which is nowadays referred to as the Big Bang, and Friedmann calculated the properties of models with Λ = 0, an assumption which is quite commonly made today, at least for the present stage of expansion, although a hypothesis with certain attractions, known as the inflationary universe scenario, holds that this phase was preceded, in the first 10^-35 s or so, by a very rapid expansion according to de Sitter's formula.

The fate of a Friedmann universe depends on whether the kinetic energy of expansion is, or is not, sufficient for it to continue indefinitely despite the deceleration produced by the self-gravitation of the matter within it (see Fig. 1), analogously to the fate of a projectile that is, or is not, thrown with enough velocity to escape from the Earth. This condition is expressed by the value of a dimensionless parameter Ω, the ratio of the actual average density to the so-called critical density ρ_crit = 3H_0^2/(8πG) ≈ 1.9 × 10^-29 h^2 g cm^-3. Here h is another dimensionless parameter expressing the uncertainty in the Hubble constant (or its variation with time); it is defined by H_0 = 100h km s^-1 Mpc^-1, so that h lies between about 0.5 and 1. Ω less than 1 (low density) leads to an infinite hyperbolic universe that will expand forever with Ω decreasing. Ω = 1 (critical density) gives an infinite flat universe that will just barely expand forever with Ω = 1 always (Einstein-de Sitter model). Ω greater than 1 (high density) leads to a closed spherical universe that will eventually stop expanding and recontract, possibly to reach another Big Bang (or Big Crunch) and perhaps start the process over again (oscillating universe).

The appropriate Friedmann model to describe the actual universe can be specified (if such an assumption is valid) by measuring two parameters, H_0 and Ω, at the present time. There are several ways of estimating Ω. One is to find a dimensionless deceleration parameter q_0 (equal to Ω/2 when Λ = 0) from the Hubble diagram relating redshifts to apparent magnitudes of, for example, the brightest galaxies in clusters treated as standard candles. There are difficulties due to evolutionary effects on the luminosities of the galaxies, but the observations imply that q_0 is not more than a few, and it could even be slightly negative, that is, Λ > 0. Another method relies on departures from a uniform Hubble flow caused by the lumpy distribution of matter; depending on what kinds of galaxies are assumed to represent the distribution of gravitating matter, this yields values of Ω between about 0.15 and 1.0.
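The critical density and the threefold choice of fates can likewise be made concrete. The sketch below (illustrative; it simply applies the ρ_crit = 3H_0^2/(8πG) formula above and the trichotomy in Ω) evaluates ρ_crit at both ends of the quoted range of H_0 and classifies a model by its density parameter.

```python
# Illustrative sketch: critical density and the Friedmann trichotomy.

import math

G = 6.674e-8                 # gravitational constant [cm^3 g^-1 s^-2]
KM_PER_MPC = 3.086e19        # kilometers in one megaparsec

def critical_density(H0_km_s_Mpc):
    """rho_crit = 3 H_0^2 / (8 pi G), returned in g/cm^3."""
    H0 = H0_km_s_Mpc / KM_PER_MPC        # convert to s^-1
    return 3.0 * H0**2 / (8.0 * math.pi * G)

def fate(Omega):
    """Classify a Friedmann (Lambda = 0) model by its density parameter."""
    if Omega < 1.0:
        return "open (hyperbolic): expands forever, Omega decreasing"
    if Omega == 1.0:
        return "flat (Einstein-de Sitter): barely expands forever, Omega = 1 always"
    return "closed (spherical): eventually recollapses toward a Big Crunch"

# h = H_0 / (100 km/s/Mpc); the quoted range 0.5 <= h <= 1 gives:
print(f"{critical_density(50):.1e} g/cm^3")      # ~4.7e-30 (h = 0.5)
print(f"{critical_density(100):.1e} g/cm^3")     # ~1.9e-29 (h = 1)
print(fate(0.15))                                # low-density example from the text
```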
A third method comes from certain arguments relating to Big Bang nucleosynthesis: According to the standard model, the abundances of deuterium and helium limit the contribution to Ω from normal baryonic matter (protons and neutrons) to below about 0.1, but an arbitrary additional contribution from nonbaryonic matter, in the form of exotic elementary particles not yet discovered, is not excluded. Quasiaesthetic considerations (and the inflationary scenario) lead many scientists to believe that Ω is very close to 1, in which case the age of the universe according to the Friedmann model is 2/3 of the Hubble time. Other estimates of the age come from the colors and luminosities of the oldest stars, which give an age of about 1.5 × 10^10 years, and from the abundances of radioactive elements, which give a less precise number of the order of 10^10 years. Stellar age dating seems to require H_0 to be nearer to 50 than to 100 km s^-1 Mpc^-1 if this cosmological model applies.

HORIZONS

Because the universe has a finite age, there is a limit to the distance over which two fundamental observers can communicate; it is of the order of the product of the speed of light and the Hubble time and is known as the particle horizon. This distance is thus roughly as many light-years as the age of the universe in years, and in Friedmann models more and more of the universe becomes visible as time goes on. The converse of this effect leads to a paradox: at early times, such as that of the last scattering of the microwave background radiation (which took place at an epoch corresponding to a redshift of the order of 1000), regions that we now see at large angular separations cannot ever have been in causal contact with one another, and yet they manage to have closely equal temperatures. The inflationary scenario provides a possible solution to this "horizon problem." In models with positive Λ one can have accelerated, rather than decelerated, expansion, and in that case (and also in oscillating models) galaxies can also disappear from view, giving rise to another sort of horizon known as an event horizon. When both sorts of horizon exist, there are parts of the universe that remain forever unobservable (absolute horizon).

THERMAL HISTORY OF THE UNIVERSE

Our neighborhood, and presumably the whole universe, is now filled with blackbody radiation at a temperature of 2.7 K above absolute zero, interpreted as a consequence of adiabatic expansion from higher temperatures at earlier times, which follow the law T = T_0(1 + z), where T is the radiation temperature at some past epoch and T_0 is its value now. Thus the energy density of radiation varies as (1 + z)^4, whereas the density of matter varies only as (1 + z)^3, leading to the result that radiation dominated over matter at redshifts above 1000 or so, when the universe was less than about 100,000 years old and the temperature more than a few thousand degrees (see Fig. 2). Before more or less the same epoch, matter (mostly hydrogen) would have been ionized and closely coupled to radiation by electron scattering, leading to thermal equilibrium, whereas afterwards protons combined with electrons to make neutral hydrogen, which is transparent, and the radiation underwent its last scattering from the cosmic "photosphere"; the latter forms an opaque screen through which it is impossible to see.
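The scalings that govern this thermal history are simple enough to evaluate directly. The sketch below (illustrative; the present-day radiation-to-matter density ratio used to locate the crossover is a hypothetical round number chosen for the example, not a value from the text) applies T = T_0(1 + z) and the (1 + z)^4 versus (1 + z)^3 behavior.

```python
# Illustrative sketch of the thermal-history scalings in the text:
# T = T_0 (1 + z); radiation density ~ (1+z)^4, matter density ~ (1+z)^3.

T0 = 2.7          # present radiation temperature [K]

def radiation_temperature(z):
    """Past radiation temperature from the adiabatic law T = T_0 (1 + z)."""
    return T0 * (1.0 + z)

# At the last-scattering epoch, z ~ 1000:
print(f"T(z=1000) ~ {radiation_temperature(1000):.0f} K")   # a few thousand K

# Radiation gains one extra factor of (1+z) over matter, so their ratio
# grows into the past. With a hypothetical present ratio of 1e-3,
# radiation-matter equality sits near z ~ 1000, as the text states:
ratio_today = 1e-3
z_eq = 1.0 / ratio_today - 1.0
print(f"z_eq ~ {z_eq:.0f}")
```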
Only in the subsequent matter-dominated era would it have been possible for galaxies to form, presumably as a consequence of growing density enhancements arising from small fluctuations that had occurred at much earlier epochs. The comoving density of quasars (which may be associated with newly formed galaxies) is found to increase up to a redshift of 2 or 3, with a possible decline somewhere beyond that, and the largest measured redshifts are between 4 and 5.

THE BIG BANG, PARTICLE PHYSICS, AND PRIMORDIAL NUCLEOSYNTHESIS

At the earliest times, according to theory, the temperature was still higher, reaching typical thermal energies equal to the rest-mass energy mc^2 of elementary particles (electrons and positrons at 0.5 MeV, protons and antiprotons at 1 GeV, quarks and antiquarks, etc.), leading to copious production of these particles together with neutrinos, mesons, W and Z bosons (100 GeV), and, at a very early stage, presumably particles associated with grand unification theories (10^15 GeV). As time went on and the universe cooled down, particles whose rest-mass energy by then exceeded the thermal energy would have annihilated with their antiparticles to make gamma-ray photons, leaving a small excess of ordinary matter over antimatter. After about a second, at a thermal energy just below 1 MeV, electrons would have annihilated with positrons, leaving a sea of photons and neutrinos (and antineutrinos) with a small admixture of protons, electrons, and neutrons. After 100 s, at a temperature corresponding to 0.1 MeV, nuclear reactions would first become possible as a result of the capture of neutrons by protons to make deuterium nuclei; at higher temperatures these would have been destroyed by photodisintegration. Deuterium formation then led rapidly through further nuclear reactions to primordial nucleosynthesis, in which it appears that 23% of nuclear matter ended up as ordinary helium (4He), with 77% hydrogen and small traces of deuterium (2H), light helium (3He), and the commoner isotope of lithium (7Li). Observed abundances of these elements are well explained by the standard theory, assuming that there are just three species of light neutrinos and antineutrinos, corresponding to three (and no more) families of quarks and leptons, and that the contribution of baryons to Ω is between about 0.01 and 0.1.

Standard Big Bang nucleosynthesis theory assumes a homogeneous universe (in this phase, Ω is close to 1 in all Friedmann models and Λ is unimportant even if nonzero); but there are also nonstandard models assuming the existence either of density fluctuations or of unstable particles whose decay products modified the outcome of primordial synthesis by further nuclear reactions. Such theories may be able to account for the abundances even if there is a larger baryonic contribution to Ω, but the standard model seems to give the best fit to the data. The ratio of the number of baryons to the number of photons, on which the predicted primordial abundances primarily depend, is found from application of the standard model to be about 3 × 10^-10. This ratio, unchanged since the epoch of electron-positron annihilation, represents the excess of ordinary matter over antimatter that may have been originally set up at (or close to) the grand unification stage, and its reciprocal is closely related to a thermodynamic property of the universe, the entropy per baryon (not counting the entropy of any black holes).
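For concreteness, the quoted baryon-to-photon ratio can be turned into present-day number densities using the standard blackbody photon number density n_γ = (2ζ(3)/π^2)(kT/ħc)^3, a textbook formula not given explicitly in this article. The sketch below is illustrative only.

```python
# Illustrative sketch: relate the baryon-to-photon ratio quoted in the text
# to present-day number densities, using the standard blackbody photon
# number density n_gamma = (2 zeta(3) / pi^2) * (kT / hbar c)^3. CGS units.

import math

k = 1.381e-16       # Boltzmann constant [erg/K]
hbar = 1.055e-27    # Planck's constant / 2*pi [erg s]
c = 2.998e10        # speed of light [cm/s]
T0 = 2.7            # microwave background temperature [K]
zeta3 = 1.202       # Riemann zeta(3)

n_gamma = (2.0 * zeta3 / math.pi**2) * (k * T0 / (hbar * c))**3
print(f"n_gamma ~ {n_gamma:.0f} photons/cm^3")      # ~400 per cm^3

# With eta = n_b / n_gamma ~ 3e-10 (the value quoted in the text):
eta = 3e-10
n_b = eta * n_gamma
print(f"n_b ~ {n_b:.1e} baryons/cm^3")              # ~1e-7 per cm^3
```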
The value of this ratio appears so far as an ad hoc parameter, but it may be expected to emerge (when understanding improves) as a consequence of fundamental properties of elementary particles and their interactions.

Additional Reading

Harwit, M. (1988). Astrophysical Concepts, 2nd ed. Springer, New York.
Hawking, S.W. (1988). A Brief History of Time. Bantam Press, London.
Peebles, P.J.E. (1980). The Large-Scale Structure of the Universe. Princeton University Press, Princeton, NJ.
Rowan-Robinson, M. (1981). Cosmology, 2nd ed. Clarendon Press, Oxford.
Rowan-Robinson, M. (1985). The Cosmological Distance Ladder. W.H. Freeman, New York.
Silk, J. (1989). The Big Bang, rev. ed. W.H. Freeman, New York.
Weinberg, S. (1977). The First Three Minutes. Basic Books, New York.

See also Antimatter in Astrophysics; Background Radiation, Microwave; Cosmology, Clustering and Superclustering; Cosmology, Cosmochronology; Cosmology, Galaxy Formation; Cosmology, Inflationary Universe; Cosmology, Nucleogenesis; Cosmology, Observational Tests; Cosmology, Theories; Dark Matter, Cosmological; Gravitational Theories; Neutrinos, Cosmic; Quasistellar Objects, Statistics and Distribution; Star Clusters, Stellar Evolution.