

5.1. Cosmological parameters

Well before the discovery of the first lensed quasar, Refsdal (1964) pointed out that a lensed quasar system could be used to measure the Hubble constant. The basic idea is simple. Light travels along two or more different paths from the source to the observer, via deflections at different points in the lens plane. If the background source is variable, the resulting time delay can be measured by comparing light curves from the two images; multiplying by the speed of light gives the path difference. If the source and lens redshifts are known, this provides an absolute distance in the system together with a redshift, a combination which gives the distance scale. Conveniently, the expected time delays for typical lens configurations are on timescales of weeks to months. In principle, this method offers a clean, one-step determination of the Hubble constant on cosmological scales. Even better, measurements in a number of different lens systems at different redshifts could in principle also allow measurement of H(z) and hence of other cosmological parameters [4].
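
Refsdal's argument can be made concrete in a short numerical sketch, assuming a flat Lambda-CDM model. Every distance in the delay expression dt = (1 + z_l) D_l D_s dphi / (c D_ls) carries a factor c/H0, so the predicted delay scales as 1/H0 and a measured delay converts a fiducial model directly into an H0 estimate. The redshifts, the Fermat-potential difference dphi and the 100-day "measured" delay below are hypothetical illustrative values, not taken from any real system.

```python
import math

C_KM_S = 299792.458   # speed of light, km/s
MPC_KM = 3.0857e19    # kilometres per megaparsec
DAY_S = 86400.0       # seconds per day

def E(z, om=0.3):
    """Dimensionless Hubble rate for flat LambdaCDM."""
    return math.sqrt(om * (1 + z)**3 + (1 - om))

def comoving_mpc(z1, z2, h0, om=0.3, n=2000):
    """Comoving distance between z1 and z2 (Mpc), trapezoidal rule."""
    dz = (z2 - z1) / n
    s = 0.5 * (1.0 / E(z1, om) + 1.0 / E(z2, om))
    for i in range(1, n):
        s += 1.0 / E(z1 + i * dz, om)
    return (C_KM_S / h0) * s * dz

def time_delay_days(h0, zl, zs, dphi, om=0.3):
    """Refsdal delay dt = (1+zl) * Dl*Ds/(c*Dls) * dphi, in days."""
    dl = comoving_mpc(0.0, zl, h0, om) / (1 + zl)   # angular-diameter distances
    ds = comoving_mpc(0.0, zs, h0, om) / (1 + zs)   # (flat universe)
    dls = comoving_mpc(zl, zs, h0, om) / (1 + zs)
    dt_s = (1 + zl) * dl * ds / dls * dphi * MPC_KM / C_KM_S
    return dt_s / DAY_S

# Hypothetical system: lens at z = 0.5, source at z = 2; a Fermat-potential
# difference dphi ~ 3e-11 rad^2 gives a delay of order 100 days.
zl, zs, dphi = 0.5, 2.0, 3e-11
dt_fid = time_delay_days(70.0, zl, zs, dphi)   # predicted delay for H0 = 70

# Since dt scales exactly as 1/H0, a measured delay rescales the fiducial model:
dt_obs = 100.0                                 # hypothetical measured delay, days
h0_inferred = 70.0 * dt_fid / dt_obs
```

Because the model delay scales exactly as 1/H0, doubling H0 halves the predicted delay; the real difficulty is that dphi itself depends on the assumed mass model.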

Historically, ecstasy at the cosmological prospects after the 1979 detection of Q0957+561 quickly turned to agony, partly because of the long path to a secure determination of the time delay in Q0957+561 itself (Kundic et al. 1997), and partly following the appreciation of the extent of the method's major systematic. This systematic is closely related to the problem of determining the macromodel in a lensed system: the derived Hubble constant is effectively degenerate with the macro-properties of the lens model, in the sense that steeper mass profiles produce higher H0 for a given time delay [5]. Worse still, the mass-sheet degeneracy rescales the time delay while leaving the image positions and fluxes unchanged, and thus has an effect on H0 which is unknown in a single-source system, unless a census of all the mass along the line of sight can be taken.
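
The mass-sheet degeneracy can be demonstrated with a toy one-dimensional point-mass lens (a sketch with hypothetical numbers, not a realistic model): replacing the convergence kappa by lam * kappa + (1 - lam), and rescaling the unobservable source position by lam, leaves the image positions unchanged but multiplies the Fermat-potential difference, and hence the predicted time delay and the inferred H0, by lam.

```python
import math

THETA_E = 1.0    # Einstein radius, arbitrary angular units
LAM = 0.8        # mass-sheet parameter lambda (sheet convergence 1 - LAM)

def alpha(t):
    """Deflection of a 1-D point lens."""
    return THETA_E**2 / t

def psi(t):
    """Lensing potential of a 1-D point lens."""
    return THETA_E**2 * math.log(abs(t))

def images(beta):
    """Two image positions: solutions of t - alpha(t) = beta."""
    d = math.sqrt(beta**2 + 4.0 * THETA_E**2)
    return (beta + d) / 2.0, (beta - d) / 2.0

def fermat(t, beta):
    """Fermat potential of the original model."""
    return 0.5 * (t - beta)**2 - psi(t)

def fermat_sheet(t, beta, lam):
    """Transformed model: sheet (1 - lam), potential rescaled, source at lam*beta."""
    return 0.5 * (t - lam * beta)**2 - (lam * psi(t) + 0.5 * (1 - lam) * t**2)

beta = 0.3
t1, t2 = images(beta)

# The same image positions satisfy the sheeted lens equation for source lam*beta:
resid = t1 - LAM * alpha(t1) - (1 - LAM) * t1 - LAM * beta   # ~ 0

# ...but the delay-controlling Fermat-potential difference is rescaled by lam,
# so the H0 inferred from a measured delay is rescaled by lam as well:
dphi = fermat(t1, beta) - fermat(t2, beta)
dphi_sheet = fermat_sheet(t1, beta, LAM) - fermat_sheet(t2, beta, LAM)
```

The transformation changes nothing observable about the images themselves, which is exactly why external information (e.g. a line-of-sight mass census) is needed to break it.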

There are a number of responses to the problem. One is to abandon the attempt to measure H0 or cosmological parameters in general, regarding them as solved problems at the level that lenses will constrain, and to treat time delays instead as a means of breaking degeneracies in mass models, using them together with a "known" H0 – for example, from the HST Key Project measurements of Cepheid variables. Many workers in the field would regard this as an unnecessarily defeatist approach, given three facts: lens H0 work requires much less high-cost observing time than the alternatives; large numbers of time delays will become available in future; and although the lens-modelling systematic is serious, it is only one systematic, not many.

The second response is to investigate a statistical approach. Can the systematic error be reduced to a random error, albeit a large one for an individual object, which can then be beaten down by root-n statistics? This approach again begins with the average properties of the SLACS galaxy-galaxy lenses, which appear to have a mass slope very close to isothermal (Koopmans et al. 2006) and which mostly lie at low redshift (z < 0.3). More recently, a higher-redshift lens sample, known as BELLS (Brownstein et al. 2012, Bolton et al. 2012), has become available from a parent population drawn from the BOSS spectroscopic survey. These lenses show that the mass slope changes slightly with redshift, steepening by a few tenths in the overall power-law index. Matter along the line of sight is harder to control, but here again a statistical argument may be appropriate: multiple sight-lines through large cosmological simulations can indicate the possible range of intervening matter along a particular direction (although not a random direction, since lensing takes place preferentially along more crowded lines of sight). A large Bayesian engine can then be employed to marginalise over the nuisance parameters and yield the desired information. Statistical approaches have been taken by a number of authors, including Dobke et al. (2009), who calculated the number of time-delay lenses which would be required to constrain cosmological parameters other than H0 in this way. Similar attempts have been made to investigate H0 using existing time-delay information and best-attempt mass models (Oguri 2007), yielding results around 70 km s^-1 Mpc^-1, although leaving the uncomfortable feeling that mutually incompatible results for different lenses are being shoehorned into a harmonious conclusion.
Another approach is to explore the variety of possible lens models using non-parametric methods, and thereby the possible range of H0 for each system (Saha et al. 2006); this again yields H0 ≈ 72 km s^-1 Mpc^-1, with errors of about 15%.
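
Whether the modelling systematic can really be beaten down statistically depends on it behaving like a random error. Under that optimistic assumption, the combined uncertainty falls as 1/sqrt(n), as this minimal Monte Carlo sketch shows; the 10 km s^-1 Mpc^-1 per-lens scatter is a hypothetical illustrative figure.

```python
import random
import statistics

def combined_error(n_lenses, sigma_per_lens=10.0, h0_true=70.0,
                   trials=2000, seed=1):
    """Scatter of the ensemble-mean H0 when averaging n_lenses noisy
    per-lens estimates, measured across many Monte Carlo trials."""
    rng = random.Random(seed)
    means = []
    for _ in range(trials):
        sample = [rng.gauss(h0_true, sigma_per_lens) for _ in range(n_lenses)]
        means.append(sum(sample) / n_lenses)
    return statistics.stdev(means)

# If the per-lens error really were a random 10 km/s/Mpc, averaging 25 lenses
# would shrink it by a factor ~5, to about 2 km/s/Mpc.
err_1 = combined_error(1)
err_25 = combined_error(25)
```

The catch, of course, is that a shared systematic (such as a common wrong assumption about the mass slope) does not average away in this fashion.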

The third response, and in my view the most fruitful in the long run, is to grit one's teeth and do the hard work in individual cases, in the knowledge that the task will become easier as telescopes become more powerful. Two recent examples of the work required are provided by the detailed investigations of the time-delay lenses RX J1131-1231 and CLASS B1608+656 by Suyu et al. (2010) and Suyu et al. (2012). A time delay is only the start of such a programme. Other ingredients include: deep multi-colour HST imaging, in order to model properly the extended structure associated with lensed emission from the quasar host and to disentangle it from the effects of reddening in the lensing galaxy; spectroscopic investigation of the surrounding matter, in order to quantify the effects of mass sheets, taking into account the richness of the environment and comparison with cosmological simulations; radio imaging, where the lensed object is radio-loud; and blind modelling to avoid unconscious biases, so that the conversion from an arbitrary scale to H0 is done only once all the systematics have been estimated. When all this is done, however, the resulting H0 values from these two lenses, including all the systematic and random errors, are of comparable quality to that of the HST Key Project (6% error). There is a serious prospect that further such studies, once a few dozen lenses have been thoroughly investigated, will make a competitive contribution to the determination of w which is largely orthogonal to other probes (e.g. Linder 2011).

5.2. Galaxy evolution

Individual lenses can be used to determine the Hubble constant (or, if the cosmological world model is assumed known, to obtain ambiguity-free galaxy mass models). The statistics of well-selected lens samples, however, can be used to investigate galaxy evolution, because the number of lenses as a function of galaxy and source redshift in such a sample depends on the evolution in both the density and the mass of the available lens population. This requires a sample whose selection effects are under control, and in practice the easily identifiable properties of quasars, together with the possibility of clean separation between lensed and unlensed objects, make quasar lens samples the obvious choice.

At least two fairly complete quasar lens samples exist. The CLASS statistically complete sample resulted from a systematic attempt to identify all lenses with image separations > 300 mas and primary:secondary flux ratios < 10:1. The SQLS survey, although probably somewhat less complete at the low-separation end, is slightly larger. A number of authors, beginning with Chae & Mao (2003), have used these samples, together with a plausible cosmological world model, to derive useful constraints on early-type galaxy evolution. Most find consistency with no evolution in either number density or mass (Chae & Mao 2003, Chae 2005, Chae, Mao & Kang 2006, Matsumoto & Futamase 2008, Chae 2010, Oguri et al. 2012), although because of small-number statistics the error bars on the index of number-density evolution remain large. In the most recent work (Oguri et al. 2012), the velocity-dispersion evolution index nu_sigma ≡ d ln sigma / d ln(1 + z) is zero within errors of ~ 0.2, assuming a standard Lambda-cosmology.
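
The quoted evolution index is simply a logarithmic derivative. As an illustration, with entirely hypothetical mean dispersions for two lens subsamples, a finite-difference estimate would read:

```python
import math

def evolution_index(sigma_lo, z_lo, sigma_hi, z_hi):
    """Finite-difference estimate of nu_sigma = d ln sigma / d ln(1 + z)."""
    return ((math.log(sigma_hi) - math.log(sigma_lo)) /
            (math.log(1.0 + z_hi) - math.log(1.0 + z_lo)))

# Hypothetical subsamples: identical mean dispersions of 220 km/s at
# z = 0.2 and z = 0.6 give nu_sigma = 0, i.e. no evolution.
nu_sigma = evolution_index(220.0, 0.2, 220.0, 0.6)
```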

[4] Early in the history of lensing, the number of lenses in a complete sample appeared to be a useful way of constraining Lambda, because a higher Lambda increases distances at high redshift and hence the optical depth to lensing (Kochanek 1996); indeed, this was the justification for the thoroughness of the CLASS survey in attempting to assemble a complete sample of lenses. Although this line of research yielded a fairly clear result of non-zero Lambda (Kochanek 1996, Chae et al. 2002, Mitchell et al. 2005, but see also Keeton 2002), the constraints on dark energy improve only very slowly with increasing sample size, and the approach has since been abandoned.

[5] Strictly, the dependence is not directly on the mass profile; it is related to the surface mass density in the annulus between the lensed images (Kochanek 2002, but see also Read, Saha & Macciò 2007).
