-- Eric Linder
This is page 2. The most recent reviews are on the main review page.
Post-WMAP thoughts on Dark Energy, with Lyman
alpha and LISA too
posted 3/4/03
Cluster Number Count Surveys for Dark Energy
posted 1/28/03
A Theoretician's Analysis of the Supernova Data
posted 1/28/03
The Cosmological Constant and Dark Energy
posted 8/6/02
Rethinking Lensing and Lambda posted 8/6/02
Can Clustered Dark Matter and Smooth Dark Energy Arise
from the Same Scalar Field posted 5/9/02
UCLA DM2002 conference Review posted 2/26/02
Constraints in Cosmological Parameter Space from the
Sunyaev-Zel'dovich Effect posted 01/23/02
Supernovae, CMB, and Gravitational Leakage into
Extra Dimensions posted 01/14/02
AAS conference review posted 01/14/02;
updated 2/6/02
Measuring the Equation of State of the Universe:
Pitfalls and Prospects posted 01/02/02
Measuring the Sound Speed of Quintessence posted
12/20/01
CfCP Dark Energy Workshop Review posted 12/18/01
Dimming Supernovae without Cosmic Acceleration
posted 11/29/01; updated 12/21/01
Supernovae as a Probe of Particle Physics and Cosmology
posted 12/01/01
Feasibility of Probing Dark Energy with Strong
Gravitational Lensing Systems posted 11/04/01
Post-WMAP thoughts on Dark Energy, with Lyman
alpha and LISA too
The knives start to come out, post-WMAP...
Just a few quick comments on some interesting takes on the WMAP results. Caldwell et al. (astro-ph/0302505) point out, quite rightly, that dark energy models with evolving equation of state can have nonnegligible amounts of dark energy density at the time of recombination. This is clear from Fig. 1 of my astro-ph/0210217, or simply from my parametrization w(a)=w0 + wa*(1-a), which asymptotically has w(z>>1)=w0+wa. For example, the SUGRA model has w=-0.2 at recombination. Such dark energy density of course affects the CMB power spectrum. Caldwell et al. point out that it will remove power from small scales, giving a possible alternate explanation of the red tilt of the running scalar index of the power spectrum. But fluctuations in the dark energy also reduce the late time ISW effect, thus offering some help with the very low first three multipoles in the CMB power spectrum. While cosmic variance clouds the issue, it is not impossible that we are seeing the first hints of a dynamical dark energy with time varying equation of state.
Others in the community, playing the contrarians,
point out that degeneracies allow one to go rather
far afield from the consensus model. For example,
the overall WMAP power spectrum can also be fit by
an out of favor w=-0.6, h=0.55. It has long been
recognized that low Hubble constants are pernicious
in opening up the rest of phase space. So supernovae
will play an important role in closing this loophole,
both by restricting w to be more negative and maybe
by constraining h to be more positive (but right now
SN prefer an h lower than the HST Key Project value).
Seljak, McDonald, and Makarov
astro-ph/0302571 revisit Lyman alpha forest
constraints on the matter power spectrum amplitude
and slope. They take issue with WMAP's assertion
that Ly alpha results support a running index of
the primordial power spectrum, with less power on
smaller scales. They claim WMAP analyzed their
results incorrectly (though
they did thank Lyman Page, which shows class),
then go on to trash several
other researchers in the Ly alpha field. Those
people have taken umbrage, and say that they already
included the appropriate systematic errors for the
difficult fitting of the mean optical depth
(equivalently Seljak's \bar F). They say some other
less printable things as well. Since WMAP's claim
of a running index was only 2\sigma or so anyway,
those of us not directly involved won't lose much
sleep either way. Seljak et al. also cast doubt
on the combination of the 2dF galaxy results with
WMAP, saying they actually make the running result less
significant and that dark energy will affect the
growth factor. This is true, but one must also
include that effect for the Lyman alpha analysis,
though this only enters at the 2-3% level. So dark
energy is showing itself as an important element in
comparing experiments at this level.
Finally, note that the previous
Caldwell paper actually pointed out that because of
the degeneracies in the low multipole region (which
have a big lever arm in determining any running),
evolving values of equation of state w can actually
push the best fit index to the blue, i.e. in the
opposite sense that WMAP claims (and again helping
with low l's). So the game isn't
over yet.
An unrelated topic of interest concerns the use of
inspiraling binary black holes as standard candles. In
the recent Carnegie symposium, Sterl Phinney claimed
that LISA could use gravitational wave signatures
from such systems to map the expansion history with
0.1% precision out to z=4! This comes from an
old idea by Bernard Schutz, very nicely summarized
in his "Lighthouses" paper
gr-qc/0111095. Optical or other electromagnetic
follow up allows one to break the degeneracy between
redshift and mass, and measure the luminosity distance.
There are a number of other fascinating topics treated
in this paper, such as the ability of gravitational
wave detectors to automatically do spectroscopy and
polarimetry, and the analog to K-corrections (actually
very similar to that for seismographs).
[Two points of odd, tangential interest: Throwing
modesty to the winds, I'll also mention that Schutz
left out from Section 6 the strong limits on a
cosmic GW background, from the CMB and the matter
correlation function, calculated in my thesis and
carried through to some of the first and most precise
limits on the energy scale of inflation. Second,
Schutz presents an analysis method he calls a version
of the Hough transform used in bubble chamber photographs --
from bubble chambers to satellite gravitational wave
laser interferometers!]
However, as with everything else, systematics rule.
In a very well thought out and presented paper,
Holz & Hughes
astro-ph/0212218 show that gravitational lensing
will dilute the GW method distance precision to 5-10% per event.
Since there are many more Type Ia supernovae than inspiraling
binary black holes, LISA distance precision will not reach SNAP's.
Self-Consistency and Calibration of Cluster Number
Count Surveys for Dark Energy
W. Hu
astro-ph/0301416
Reviewed 01/28/03
Cluster counts are tremendously interesting but
quite frustrating to assess for their practical
value. This paper emphasizes a point suggested before but not well addressed: the data are likely to be rich enough that we can learn the cluster properties needed to use them as cosmological probes within the same survey that delivers the cosmological constraints.
Wayne presents a simple power law parametrization
of the mass function and shows that substantial
leverage can be available under certain circumstances
through tomography of the sample. While there are
a couple of minor points I balked at (he is admirably
thorough in spelling out the assumptions; though
additionally I
worry somewhat about environmental effects), my main
reaction was deja vu. This is quite similar
to the methods developed in the 1980s to deal with
possible luminosity evolution for galaxy counts -
the famous weak point of the Loh & Spillar
cosmological determination that Omega_m=1.
The details are presented in Exercise *3.3 of my
textbook, First Principles of Cosmology (surely you
have a copy?!) (starred exercises meant "think hard
about"), based on a nice paper by Caditz & Petrosian
1990, ApJ 357, 335. The general idea is that survey
data has a lot of information via tomography, or
redshift subsamples, and this can correct for evolution
so long as the form of the mass function (originally
the Schechter luminosity function) was preserved.
This was originally in terms of L_*(z); now we can
call it M_*(z) for cluster masses.
To get around sparse data and selection effects,
Caditz & Petrosian used moments of the distribution
rather than sample cuts -- but both give tomography.
If one has two unknowns: a cosmology function and a
mass evolution function, then two moments are
sufficient, e.g. the number counts
N(z) = n(z) dV \int_{x_0(z)} dx phi(x)
[n(z) would be the spatial density, phi the mass distribution, like the Jenkins mass function, x = M/M_*(z) a dimensionless mass variable, x_0 a limiting mass] and the mass aggregate
M(z) = n(z) dV \int_{x_0(z)} dx x phi(x).
Further evolution parameters can be drawn out through higher moments. Redshift tomography acts equivalently -- it is probably a less coarse but more noisy method. A comparison of the two would be interesting for the cluster method.
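To make the moment idea concrete, here is a minimal illustrative sketch (not from the paper; the Schechter-like shape phi(x) = x^alpha e^{-x} with alpha = -1, the cutoff values, and the function names are assumptions for the example). The ratio of the two moments depends only on the shape and on x_0 = M_lim/M_*(z), not on n(z) dV, so it plays the role of Caditz & Petrosian's L_*(z) calibration, leaving the counts themselves to carry the cosmological volume information.

import numpy as np
from scipy.integrate import quad

alpha = -1.0  # toy faint-end slope

def phi(x):
    # toy Schechter-like shape for the scaled mass function, x = M/M_*(z)
    return x ** alpha * np.exp(-x)

def moments(x0, n_dV=1.0):
    # zeroth moment (number counts N) and first moment (mass aggregate M) above x0
    N = n_dV * quad(phi, x0, np.inf)[0]
    M = n_dV * quad(lambda x: x * phi(x), x0, np.inf)[0]
    return N, M

for x0 in (0.5, 1.0, 2.0):
    N, M = moments(x0)
    # M/N is independent of n(z) dV: it tracks only the shape and x0 = M_lim/M_*(z),
    # i.e. the mass evolution, while N itself carries the volume (cosmology) factor
    print(x0, round(M / N, 3))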
A Theoretician's Analysis of the Supernova Data
T. Padmanabhan and T. Roy Choudhury
astro-ph/0212573
Reviewed 01/28/03
This paper presents pedagogically the method of
comparison of supernova data to cosmological models.
Unfortunately it is severely out of date, presenting
results that have been known and published for several
years. The useful pedagogical aspect is further compromised by slightly unconventional phrasing. Since there are no new results, but some confusions (such as about the role of observations at multiple redshifts and the precision of observations), plus assorted typos, it doesn't add much to the literature. Its main virtue
is to bring many known points into a single article.
Some further notes: Appendix B is incorrect due to
neglect of correlations, thus
canceling out the nice touch of showing a slightly
different approach in Figure 2 [done correctly in
my previous astro-ph/0208550 though without individual
data points]. Also note Fig. 2 switches top and bottom
in the labeling. Fig. 7 would also be useful except
that they neglect the role of initial conditions,
e.g. phi_0. As for the two parameters they "introduce",
these are just the standard Fisher sensitivities.
Given Padmanabhan's gift for pedagogy (as evident from
his books), I would have liked to see a more considered
explication of probing cosmological models, not
dressed up as a regrettably out of date research article.
The Cosmological Constant and Dark Energy
P.J.E. Peebles and B. Ratra
astro-ph/0207347
Reviewed 08/06/02
Even for a Reviews of Modern Physics article this provides a
wealth of historical perspective on the cosmological constant
and dark energy issues. It applies the trademark Peebles careful consideration to the dark energy revolution, and while it is full of cautionary notes about checking the theoretical framework one uses, it finds no major flaws. The authors also suggest
some refreshing new tests of gravitation applied to large scales.
They serve to remind us of the extraordinary extrapolation
required for cosmological research and the necessity to stop
every once in a while to check the firmness of the underpinnings.
Otherwise there are no surprises and the assessment of recent
developments is a bit conservative and focused on the authors'
personal research. Small omissions include the uncited, earlier
work on general equations of state by Wagoner (1985; plots by
Linder) and Linder (1988). ;-)
The true gems of this article for me were the tests of gravitation:
see between eqs. (23)-(24), (56)-(58), and (63)-(64). In a more
recent article,
astro-ph/0208037,
Peebles also suggested testing
(through the supernova magnitude-redshift relation) that the curvature
constant appearing in the metric is the same as that appearing in the
Friedmann expansion equation. We certainly gain from the kind of
considered overview of history and review of a too-taken-for-granted
framework that this article presents.
Rethinking Lensing and Lambda
C.R. Keeton
astro-ph/0206496
Reviewed 08/06/02
This is an interesting, insightful article that points out
a logical flaw in the analysis of strong gravitational lensing
statistics to estimate the cosmological parameters. Recall that the statistics of the number of lensed systems detected in a survey were not only one of the first probes brought to bear on a cosmological constant (e.g. Rix 1988), but also placed an upper limit on the value of Lambda.
The optical depth for lensing is proportional to a ratio of
angular distance factors and the number density of lenses. The
product of these two is highly sensitive to the cosmological
parameters. But Keeton points out that we can now measure the
deflecting galaxy population - in which case it is an observed
quantity not a function of cosmology. Using the observations
instead of the theory removes much of the sensitivity, leaving
only the ratio d_ls/d_s, which is primarily dependent only on
Omega_M. This effectively reduces the sensitivity from a factor of three in the number of lens systems as Lambda goes from 0 to 0.7, down to only a 10-40% change, depending on the average source redshift.
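To see the point numerically, here is a minimal illustrative sketch (not Keeton's calculation; the single lens/source redshifts and the constant-comoving-density singular isothermal sphere optical depth kernel are assumptions for the example) comparing a flat matter-only universe with the (0.3, 0.7) flat Lambda model.

import numpy as np
from scipy.integrate import quad

def E(z, Om, OL):
    # dimensionless Hubble parameter for a flat universe (Om + OL = 1 assumed here)
    return np.sqrt(Om * (1 + z) ** 3 + OL)

def chi(z, Om, OL):
    # comoving distance in units of c/H0
    return quad(lambda zp: 1.0 / E(zp, Om, OL), 0.0, z)[0]

def dls_over_ds(zl, zs, Om, OL):
    # for a flat universe, D_ls/D_s = 1 - chi(z_l)/chi(z_s)
    return 1.0 - chi(zl, Om, OL) / chi(zs, Om, OL)

def sis_kernel(zl, zs, Om, OL):
    # differential lensing optical depth for singular isothermal spheres of constant
    # comoving density: dtau/dz_l ~ (1+z_l)^2 / E(z_l) * (D_l D_ls / D_s)^2,
    # with angular diameter distances in units of c/H0
    Dl = chi(zl, Om, OL) / (1 + zl)
    Ds = chi(zs, Om, OL) / (1 + zs)
    Dls = (chi(zs, Om, OL) - chi(zl, Om, OL)) / (1 + zs)
    return (1 + zl) ** 2 / E(zl, Om, OL) * (Dl * Dls / Ds) ** 2

zl, zs = 0.5, 2.0  # illustrative lens and source redshifts
for Om, OL in [(1.0, 0.0), (0.3, 0.7)]:
    print(Om, OL, round(dls_over_ds(zl, zs, Om, OL), 3), round(sis_kernel(zl, zs, Om, OL), 4))
# The rate kernel changes by a factor of a few between the two models, while the
# distance ratio d_ls/d_s shifts by only about ten percent.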
The one way out is to step deeper into theory and derive the
galaxy number density from the cosmology sensitive growth factor.
But Keeton argues that it is better to take advantage of the
insensitivity to cosmology to learn something about galaxy halos
instead.
Can Clustered Dark Matter and Smooth Dark Energy Arise
from the Same Scalar Field
T. Padmanabhan, T. Roy Choudhury
hep-th/0205055
Reviewed 05/09/02
The composition of the universe fails Occam's razor rather spectacularly,
with two unknown components required: dark matter clumping on galaxy
cluster and
below scales and dark energy smooth out to horizon scales. This paper
attempts to find a model explaining both as manifestations of a single
field. The authors motivate by analogy a lagrangian of the form
L=-V(\phi) sqrt{1-\partial^i phi \partial_i phi}. This is similar to, but
with several differences from, Sen's tachyon field model. I would think
there should be problems with dimensional regularization and hence
renormalization of such a form, but let's accept it for now.
After deriving the density and pressure they interpret it as the sum
of two components: a dark matter and a dark energy. However, it is not
legitimate to linearly add components unless they are noninteracting.
Since they both arise from the same field, one would expect coupling.
The rest of the paper is mathematical manipulation. They assume a uniform
power law index n for expansion, a \sim t^n, but of course this cannot
hold to even moderate redshifts as it must change from an accelerating
epoch to a decelerating one. Similarly, they take a constant ratio of
rho_DE/rho_DM in time (but not space) which would not reproduce cosmic
history such as structure formation or nucleosynthesis.
UCLA Dark Matter/Dark Energy conference review
February 20-22, 2002 at Marina del Rey, CA
This was a very cordial conference, with everyone being complimentary
about each others' complementary probes of dark energy. Tyson presented
DEEPLENS weak lensing survey results, covering 28 sq.deg. with 4 colors
and 4 redshift bins to 29 mag/sq.arcsec. By contrast LSST will do
14000 sq.deg. to 24 mag, 3 times/month. They found a cluster at z=0.3
purely from the mass map, not the light. The map provided the location,
redshift, and mass structure. For the survey they use the Spergel-Hennawi
xCDM N-body simulations to correct the mass function sampling.
Kamionkowski spoke on polarization in the CMB, how lensing could induce
a curl that mimics a tensor perturbation signal. It can be removed by
looking at higher order correlation functions, but this pushes the optimal
survey size away from small surveys up to the 20 degree linear size.
Steinhardt gave a tour-de-force no-visual-aids talk on the cyclic universe
when his laptop broke down. He explained the scenario clearly from both
a field theory and brane point of view. One prediction is a blue
spectrum of gravitational waves. Scale invariant density fluctuations
can be achieved for w >>1 since V< 0. In the brane picture the crunch
when the branes collide is easily seen as nonsingular - only the extra
dimension vanishes, density remains finite, the collision is inelastic
so radiation is created for a new cycle.
Mohr emphasized the importance of adding preheating (entropy) or radiative
cooling to simulations to resolve discrepancies in the mass-temperature
relation used in cluster Xray surveys. Such entropy biases Xray surveys
but not SZ ones because of the different dependences on emissivity. Davis
summarized the DEEP2 galaxy survey to measure 50000 redshifts between
z=0.7-1.4 using BRI photometric redshifts. Cluster count methods are
very sensitive to sigma_8: a shift by 5% gives a 2 sigma bias (mostly in
the Omega_m direction). But he indicated that he now believes that
counting galaxies of fixed mass or SZ decrement will be more sensitive
to cosmological parameters than halos of fixed rotation speed due to
systematic problems with baryonic infall differing from z=1 to 0.
Wright summarized CMB experiments status. TopHat maps of 5% of the sky
should be released this summer; MAP completes a full sky cover March 31,
with data release January 2003. Polarization data from PIQUE etc. is
expected soon. Frieman mentioned that SDSS+MAP can probe the neutrino
mass to 0.5 eV. The Early Data Release gives sigma_8=0.915+-0.06,
Gamma=0.188+-0.04; this disagrees with other recent results but is only
from a 2 night strip. High order galaxy correlation functions are
consistent with nonlinear evolution from gaussian initial conditions.
In galactic structure, tidal tail precession of the dwarf satellite Sgr
indicates flattening of the Milky Way halo to q=0.75. Assorted other
news: Scuba found 35 new Lyman break galaxies. Tyson found 2 halos with
soft cores - i.e. no cusps. Multiphase cluster medium (e.g. cooling
flows) will ruin a simple M-T relation. EROS microlensing rules out >
10% of our halo in MACHOs with M=1e-7 to 1 solar mass.
Constraints in Cosmological Parameter Space from the
Sunyaev-Zel'dovich Effect
S.M. Molnar, M. Birkinshaw, R.F. Mushotzky
astro-ph/0201223
Reviewed 01/23/02
This is a clear, well reasoned article analyzing the systematics
associated with using SZ and X-ray properties of clusters as a
cosmological probe of the angular distance-redshift relation and the
dark energy. (Note it
does not treat the cluster counts probe which predominantly depends on
the growth factor-redshift relation). Examining the astrophysics of the
clusters and the techniques of observations, it serves as a model for
the sort of analysis of systematics that all cosmological probes should
present along with their parameter estimation contours.
In the end they treat a 4% statistical error in distance in conjunction
with either
a 3% constant systematic or a coherent systematic that ramps up from
zero to 3% at z=1. The fiducial cluster distribution is 500 clusters
uniform in z out to z=1. For this they find 3 sigma constraints on
Omega_m of +-0.08, Omega_Lambda of +-0.12, h of +-0.008 statistical only.
The flat systematic affects only h strongly, biasing it by 3%. The linear
systematic affects h little, but biases Omega_m by almost 3 sigma and
Omega_L by 1 sigma. For the flat w-Omega_m parameter set, they find
w to +-0.18, with a linear systematic biasing it by 0.5 sigma.
The paper also considers a nearer term sample of 70 clusters with a
7% statistical error and 5% systematic. Here the parameters are
determined to within Omega_m of +-0.17, Omega_L of +-0.3, h of +-0.027.
[Note: my numbers often disagree somewhat with those in the paper,
possibly because
of the 3D projection.] The constant systematic will have the same effect
as before, biasing h by 5%, and the linear biases about the same number
of sigma as before. The accuracy on w becomes 0.4 (0.14 for 1 sigma)
with about the same bias as before. Two important points to note: since this method is a distance measurement, it enjoys the same complementarity with the CMB, cluster counts, and Omega_m measurements as the SN method does; but because the cluster properties are not standardizable, this probe carries an additional dependence on the Hubble constant h. This enlarges the parameter space and somewhat weakens the constraints.
Supernovae, CMB, and Gravitational Leakage into
Extra Dimensions
C. Deffayet, S.J. Landau, J. Raux, M. Zaldarriaga, P. Astier
astro-ph/0201164
Reviewed 01/14/02
In striving to explain the results of the supernova Hubble diagram,
some researchers take a different path than dark energy. The
observations have proven remarkably robust against astrophysical
explanations, e.g. evolution or dust, the downfall of many earlier
cosmological tests, so the presence of true cosmological acceleration
requires rethinking some area of fundamental physics. These can be the
ingredients in the Friedmann equations - the densities and pressures
given by particle physics - as in the dark energy route, or the
equations themselves given by the theory of gravitation. This is not at
all surprising, since the cosmological constant itself is equally at
home on the right hand side of the Einstein equations as a vacuum
energy component, or on the left hand side as a new term in the
Einstein-Hilbert action.
Weyl derived a cosmological constant type term from his theory of
conformal gravity, which generically predicts an acceleration, making q
approach -1; Starobinsky showed that scalar-tensor gravity often gives
a (time varying) accelerating term due to self interaction of the
scalar field (and may slightly cluster on noncosmological scales). This
paper considers a braneworld scenario where gravity in our 4D universe
(the brane) is modified by "leakage" into the 5D bulk. Such an induced
gravity model turns out to
be highly predictive and calculable, despite its M-theory origins.
Basically gravity is only affected on scales larger than a crossover
scale r_c=M_Pl^2/(2M_5^3), where M_5 is the 5D reduced Planck mass;
above this scale gravity acts like in 5D, going as 1/r^3 (it will also
be affected on very small scales M_5^{-1}).
Cosmologically, the effect is to add to the Friedmann equation for the Hubble parameter (besides the usual \rho term) a term proportional to sqrt{\rho} and a constant term. A new
parameter Omega_rc=1/(4 r_c^2 H_0^2) can be defined, and the curvature
is no longer defined by a linear sum of Omega's. For a flat universe,
Omega_rc=[(1-Omega_m)/2]^2. Acceleration occurs naturally, with no need
for a cosmological constant or dark energy, when the matter density
drops sufficiently low or equivalently when the Hubble radius H^{-1}
approaches the scale r_c. The induced gravity on the brane acts as a
self-inflating source. Remarkably, the dynamics is the same as a dark
energy model with time varying equation of state w(z; Omega_m,
Omega_rc). At large redshift w approaches -1/2, and at the present epoch it is about -0.77 for a flat, Omega_m=0.3 universe (flat, but no
Lambda!).
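For orientation, here is a minimal numerical sketch of the flat braneworld expansion history using the relations quoted above (not from the paper; the closed-form shortcut w_eff(z) = -1/(1+Omega_m(z)) is the standard flat DGP identity rather than anything the authors supply).

import numpy as np

def E_dgp(z, Om=0.3):
    # flat braneworld (DGP) expansion history:
    # H/H0 = sqrt(Omega_rc) + sqrt(Omega_rc + Omega_m (1+z)^3),
    # with Omega_rc = [(1 - Omega_m)/2]^2 as quoted above
    Orc = ((1.0 - Om) / 2.0) ** 2
    return np.sqrt(Orc) + np.sqrt(Orc + Om * (1.0 + z) ** 3)

def w_eff(z, Om=0.3):
    # effective equation of state of the leakage term: w_eff = -1/(1 + Omega_m(z)),
    # where Omega_m(z) = Omega_m (1+z)^3 / (H/H0)^2
    Om_z = Om * (1.0 + z) ** 3 / E_dgp(z, Om) ** 2
    return -1.0 / (1.0 + Om_z)

print(round(w_eff(0.0), 2))    # about -0.77 today for Omega_m = 0.3 (flat, no Lambda)
print(round(w_eff(100.0), 2))  # approaches -0.5 at large redshift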
In terms of the Hubble parameter, one constructs the supernova
magnitude-redshift relation the same way, so one can fit for Omega_m,
Omega_rc, and Omega_k. The authors fit 18 low z SN and 36 high z ones from the SCP, finding (assuming a flat universe) Omega_m=0.18+0.07-0.06, Omega_rc=0.17+0.03-0.02 (yes, this is flat) with chi^2=57.96 for 52 dof. This gives r_c=1.21\pm0.09 H_0^{-1} [note in
the paper the errors are typoed as 0.9]. The Hubble diagram is also
consistent with SN1997ff, lying slightly above the usual curve, though
it was not included in the fit. [The magnitude deviation from the usual
curve looks to be about 0.03 mag at z=1.7]. They marginalize over the
stretch coefficient alpha, though they plot their curves with
alpha=0.6. Note that a braneworld model with
Omega_m=0.3 fits much more poorly.
They also analyze CMB fits and claim flatness is preferred, along with
Omega_rc=0.12 and Omega_m=0.3. However, the marginal distribution
actually peaks at Omega_rc=0.04 [is this a typo for Omega_rc h^2?] and
the one for Omega_m peaks around 0.3 in one graph and 0.39 in another
[different h's adopted?]. The SN fit just lies on the boundary of the
95% confidence region. For their given CMB parameters they say the
angular distance to decoupling differs by 4% from the usual Lambda
model, which can be distinguished by future CMB experiments. They
intend to pursue the calculations for large scale structure formation,
which should differ since perturbations on the brane source bulk terms
which backreact on the brane. Finally, they point out that an advantage of the braneworld leakage model is that it has the same number of free parameters as the Lambda model, fewer than generic dark energy models.
AAS conference review
January 6-10, 2002 at Washington, DC
AAS meetings are often more about meeting people than the talks, so
there were not very many new results. Following is a biased selection
of developments that caught my eye.
Supernovae: Ned Wright had a poster showing that the supernovae flux
offset between a pure matter flat universe and the (0.3,0.7) universe
scales close to an exponential in lookback time. Since dust extinction goes as exp(-\tau), a constant physical density of dust fits the observed Hubble diagram without needing Lambda. Wright points out
though that this behaves like no known dust and also overproduces the
far-IR background [and it disagrees with other estimates of Omega_m and
requires an Omega in grey dust of 0.2% - almost as much as in stars.
Anyway, it exceeds SNAP's 0.02 mag discrimination for 0.15 < z < 0.55, reaching 0.028. -
Eric]. No physical model fitting the constraints is suggested. [Note:
this is now
astro-ph/0201196]
Wide Field session: Steven Beckwith mentioned the selected GOODS Legacy and Treasury surveys, covering 300 sq.arcmin in 2 fields. Ground observing achieves wide fields easily but reaches background limits quickly (4\pi in 1.5 days possible), while space fields go much deeper.
Pat Hall said SNAP should find 5000 quasars to M_B=-23, 10^5 AGN to
M_B=-16, with ability to select by color or variability; also about 20
galactic nuclei flares, so look at nuclei. Ken Lanzetta claimed
photometric z's are accurate to 6% out to z=6, which brought many
questions. No individual object errors were shown, and other groups
have mentioned problems, such as with different galaxy types and doubly
peaked likelihoods. Alberto Conti said that to get an error of 5% in
measuring the galaxy angular correlation function at z=1 one needed a
survey with linear dimension 12 deg. Neill Reid said SNAP's stable PSF
was crucial to measuring proper motions of 25 km/s at 5 kpc for
galactic structure. Daniela Calzetti said SNAP's multiwavelength
coverage was key to discriminate dust, age effects in star formation;
one could find 10^4 Msun, 10 My old star clusters at 12.5 Mpc. Megan
Donahue said a good cluster survey needed to go wide, deep, and red.
Prime (2006-9) could discover 1000 clusters to z=2; SNAP could be combined
in cluster studies with SZ photometric followup.
SNAP Cosmology session: Tim McKay emphasized the incredible community
resource of SNAP - a continuous ground datalink means every pixel on
every frame is available to the community without "science biased"
preprocessing. Roger Blandford estimated 30000 lensed images would
occur in SNAP's deep survey, with image separations from 0.3-3". An ACS
proposal for a 1 sq.deg survey to I=25 in 2 colors would pick up 100
lenses. Strong lensing provides another route for cosmological
parameter estimation, with dOmega_m\approx (1/2)dz_source. He advocated
synergy with SKA.
Gravitational wave limits: Lommen presented a 17 year data set with the Pulsar Timing Array with less than 3 microsecond residuals, for a quoted limit of Omega_GW < 2e-9 h^-2. It was not said what GW spectrum was assumed, which is crucial.
Weak lensing: A 75 sq.deg survey with CTIO of 1.4 million galaxies (150,000 shapes measured) with average z=0.5 presented by Jarvis yielded rms shear on a 2.5 degree scale of 0.24%, scaling as theta^-0.38 \pm 0.03. Looking at the B mode (systematics check) of the aperture mass, this is zero - as it must be from lensing - above 10', so systematics remain for smaller scales. They estimate sigma8=0.77+0.06-0.08, assuming Omega_m=0.35. This is in good agreement with the recent low values, differing from the old sigma8=1.
Star formation: Rodger Thompson presented star formation rates fairly constant from z=1-6, with a slight bump at z=2. These include extinction and surface brightness corrections; the largest remaining error is cosmic variance.
CMB: CBI presented results to l=4000, showing the expected damping and then an unexpected leveling. Uros Seljak speculated this could be a very high SZ effect, but this would require a high sigma8, Omega_m. Combining CBI's SZ study of 8 clusters with ROSAT Xray measurements, Udomprasert gave a rough estimate of h=0.67+0.29-0.17. There was a huge scatter in the dependent quantity h^-1/2, from 0.34-2.48, mainly due to primary CMB anisotropies (now relegated to being called "that pesky noise!").
Measuring the Equation of State of the Universe:
Pitfalls and Prospects
I. Maor, R. Brustein, J. McMahon, P. Steinhardt
astro-ph/0112526
Reviewed 01/02/02
A dark energy component enters into the distance-redshift relation in an
integral form involving both its density Omega_w(z) and its equation of
state w(z). Because of this one can obtain the same distance or magnitude
by trading off one property vs. the other, referred to as a degeneracy. In
the face of this, and further errors in determining the distance as a
function of redshift - i.e. the lack of a perfect probe and experiment - one
has to choose a personal level of comfort: how pessimistic/optimistic do you
want to be in drawing conclusions about our ability to determine the dark
energy model.
It is not in dispute that certain classes of models will be able to be ruled
out relative to the predictions of others. Nor is it in doubt that some
degeneracies between individual models will remain even in the presence of
a strong data set such as will be provided by SNAP. Yet many papers have been
written either bemoaning the inability or promising the ability to
distinguish the dark energy. I emphasize that the numbers - the error
estimates - for the most part are not in doubt, only the comfort level. In
this well researched paper the authors adopt a cautionary view, noting the
half empty nature of the glass.
They basically discuss in detail the degeneracies: trading Omega_w for w(z),
and different functional forms of w(z) for each other, mostly concentrating
on the interplay of w0 and w1 in the linear form w(z)=w0+w1*z. They
summarize their conclusions very concisely in the last section, and here is
where the degree of pessimism enters. I agree with their numbers, which fall
within the ballpark of SNAP estimates, but notice how interpretation matters
in my rephrasing of their concluding points (see the paper for the full
original conclusions):
1. Degeneracy: because w(z) enters as an integrated quantity, the current value and its time variation cannot be resolved to arbitrary accuracy. [They say "useful accuracy".]
2. Degeneracy: it is crucial to know Omega_m and Omega_w accurately.
3. A sweet spot in finding a value of w exists at low z, but high redshift is needed to find its functional form. [They don't mention high redshift, but say in point 4 that poor knowledge of Omega_m will wipe out even the low redshift w determination.]
4. One needs to know Omega_m and Omega_w accurately.
5. Assuming w is constant or restricting to w(z)>-1 can give distorted results. Don't do it. [But such distortion will likely still leave some hint in a bad chi^2 or a deviant Omega_m that would alert competent researchers. But don't do it. --Eric]
6. Time variation of w is more sensitive to w increasing with z than decreasing.
7. Complementary tests to SN are very helpful. It is crucial to know Omega_m and Omega_w.
In addition, they mention a few other good points. External information,
such as CMB constraints on curvature or large scale structure estimates
of Omega_m, can indeed have some slight dependence on w(z). It would be
nice to calculate these accurately since ignoring this could lead to a small
bias or further degeneracy problem. The explicit nature of their cautions on
showing restricted contour plots, with w constant or w>-1 only, is well
taken in many instances and should be paid attention to by researchers.
A couple of cases I thought they overstated: a large variation w1 is
partially degenerate with a shift in w0, but this does not hold well over
a range in redshift. For example for Fig. 1 the models' magnitudes become
distinguishable at z>0.6. In Figs. 3 and 5 the main effect to hide a large
w1 is in fact not a biased w0 but a large shift in Omega_m, and even so the
open and solid points in Fig. 3 would be distinguishable by SNAP; the
constrained fit in Fig. 5 is distinguishable for z>0.25. As for the bias in
Fig. 5, I hope researchers would realize something is up when 1) the contour
is pushed against the Monte Carlo boundary, and 2) the value of Omega is 3-10
sigma away from the concordance value. Their detection limit on w1 below
Fig. 6 is based on somewhat larger data errors than SNAP and an uncertainty in
Omega_m of 0.1. Caution and open eyes are certainly required for determining
dark energy, but there is room for optimism as well.
Measuring the Sound Speed of Quintessence
J. Erickson, R. Caldwell, P. Steinhardt, C. Armendariz-Picon, V. Mukhanov
astro-ph/0112438
Reviewed 12/20/01
This is a clear, compact paper, providing a definite prescription for
constraining classes of dark energy models. The idea is that while for
canonical scalar fields the sound speed is fixed at c_s=1, for models with
nonlinear kinetic energy terms the sound speed is a function additional to
the equation of state (though both of course are determined from the
Lagrangian). One example is k-essence models which include terms polynomial
in the canonical kinetic energy (as well as generically having negative
w').
The authors consider a fiducial model that in the early universe acts like
radiation with a fractional density contribution of about 20% and still
contributes of order 10% at the time of recombination. At recent times
the dark energy comes to dominate the dynamics. Because it is nonnegligible
at the time of recombination, the height of the CMB acoustic peaks can probe
the sound speed through the clustering properties, and hence gravitational
potential well deepening, of the dark energy. They find that canonical
models with c_s=1 can be strongly distinguished from k-essence models with
c_s^2<<1, even for the same equation of state. The dark energy power
spectrum for k-essence will also have distinctive, but probably
unobservable, oscillations. Finally, they show that the new parameter
of the dark energy sound speed is not degenerate in the CMB power spectrum
with the other CMB
parameters, such as densities, spectral index, etc.
Inaugural CfCP Workshop on Dark Energy
Eric Linder - A Rapporteur's View
December 12-15, 2001 at University of Chicago
This was the inaugural workshop of the new Center for Cosmological Physics
at the University of Chicago. It drew a larger than
expected participation of 60 researchers. I think its main function
ended up being getting the dark energy community on the same page, as
certain important points were emphasized, but I saw no new results or
surprises, though a couple of proposed missions were new. All talks
are planned to be posted on the
workshop
website. What follow
are impressions lensed through a single person's viewpoint.
The theory overview talks were what we've heard before, laying out the
case for exciting physics ahead. The observational viewpoint talk was
overly catholic, it seemed to me, indiscriminately quoting limits from
papers in the literature that are not always well regarded. This worried
me a bit since people outside the field may well take these as gospel, so
it points up that it behooves us to be clear and vocal and accurate about
both the current state and future prospects of dark energy detection.
Albrecht gave a call to arms to defend the future of dark energy
determination against naysayers who claim degeneracy reigns. He pointed
out in strong terms that one can certainly discriminate between classes
of theories despite one also being able to find individual models that
are closely degenerate. This is quite refreshing after reading the
seemingly eternal progression of small articles that calculate a few
special models and "discover" that their deviation is tiny. He also
cautioned that priors imposed, on Omega_m say, may not be independent
of the dark energy model.
Krauss talked about the cosmological age test, saying that by itself it
imposes w<-0.4 at 68% cl. This is sensitive to the adopted h=0.7 though.
He finds a best fit age of 13.3 Gyr, with 95% lower/upper limits of 11.0 and 16.7 Gyr.
Note that Knox has a paper with consistent ages (14.0+-0.5) from the CMB.
In the supernova session I say, without bias of course, that Saul's
talk
blew all the others away. SNAP came across as extremely well thought
out and professional. Later mission talks were confronted with questions
on topics covered by Saul and Michael, such as calibration and orbits,
that made those missions look ill prepared. I heard several comments
afterward regarding the clarity of the "like vs. like" approach, which
seemed to reduce several people's doubts. Nugent showed that stretch
corrections work well in UBV with other correctors possible for RIJ.
Hoeflich and Niemeyer emphasized the point that the explosion mechanism
is only
microphysics: given the initial nuclear state and the final state (the fused nickel mass), one knows the energy output independent of the details of the
burning.
Short term supernovae searches
were discussed by Riess (200 SN in z=0.3-0.7) and Garnavich (200 SN in
z<0.8). Riess adopted a magnitude systematic linearly ramping up to
0.03 at z=0.5; Garnavich claimed total errors below 0.02 mag and used
a dispersion per SN of 0.12. For a constant w he obtained
sigma(w)=0.24; with no systematics and a prior of sigma(Omega_m)=0.06
he got sigma(w)=0.1. Leckrone presented a clever new idea of HUFI -
the Hubble Ultra Wide Field Imager. One of the fine guidance sensors
would be replaced by a 90 square arcminute imager (ACS is 11 sq. arcmin)
composed of three 4Kx4K CCDs. Its I band sensitivity is the same as
ACS. It can be run in parallel to the other science instruments and
rough estimates give 1 SN/day to follow. Multiplexing seems low.
Unfortunately, its deployment is problematic due to NASA policy/inertia,
despite that very sensor being replaced in the servicing mission two
years from now. Bennett presented a rough outline of GEST and an upgrade
with IR
capability, STEP, to find 1000s of SN in 0.6 < z < 1.7.
In the SZ and cluster game, the new proposal was DUET - the Dark
Universe
Exploration Telescope - presented by Don Lamb. This is given as a MidEx,
PI Robert Petre at Goddard, with Chicago and MIT joining in. It would
find 20000 Xray clusters to z=1.5, aiming for Omega_m to 0.015,
Omega_nu to 0.001, w to 0.07, w' to 0.3. It has a wide field of 10000
square degrees in the north and deep field of 150 sq.deg. in the south.
It uses the spare mirror (0.7m) from XMM-Newton and a CCD camera a la HETE, aimed for 2007. Mohr pointed out that the parameter estimations depend on having accurate cluster redshifts and that
10% deviations in log(limiting mass) bias the results by more than 3 sigma.
Haiman discussed how the cluster power spectrum could give an
angular distance test from the scales of features, with the theoretical
possibility of tracing it over redshift by binning.
Holder reviewed
the SZ Array, covering 10 sq.deg. to mass limit 1e14, which should provide
100-500 clusters. For no systematics and a uniform distribution to z=2
this would give w to 0.3. Future experiments such as the South Pole
Telescope or Atacama Cosmology Telescope with 1e5 clusters could reach
dw=0.04 as a best case. A large fly in the ointment is understanding
the mass-temperature relation; this is currently normalized by SPH
simulations but without star formation included (see below as well).
Weller warned about
bias from variation of the limiting mass with redshift and showed very
asymmetric errors, e.g. dw'=+0.05,-0.55 for SPT.
On the weak lensing front, Frieman mentioned that there's a
natural synergy with cluster surveys, since they require multiple
bands for photometric redshifts so going just a little deeper picks
up weak lensing data. Photo z's seem to work fairly well, out to
z=0.5 they find delta z=0.03 with slight blow ups in dispersion whenever
the 4000 Angstrom break changes to the next filter. [I have since been
told that these results hold only for a subset of galaxies: old
ellipticals. Others are much more uncertain.] Parameter estimation
likes very wide fields - 10000 sq.deg say. Sloan South is 225 sq.deg. with
an optimistic coadd to R=25.1. Parameters are extremely sensitive to the
lens model, e.g. NFW vs. SIS halos. [Given the new results that one may
need to greatly increase particle numbers to accurately resolve the halo
structure in simulations, this is not a good thing - Eric.]
McKay pointed out that cluster surveys are fundamentally different from
supernovae ones in being a counting exercise. Thus one needs a good
estimate of efficiency, whereas for supernovae you can miss some without
bias. Errors in the source redshift distribution have a strong effect
on shallow (mag<25) surveys, especially for M_lim>1e14. Furthermore,
projection effects are very important when the mass spectrum is steeply
falling. Refregier said that the sensitivity to the mass-temperature
relation could be seen by the huge change in sigma_8 from the old value
of unity to the new of 0.72+-0.04, which uses the observed M-T instead of
the simulated. Bernstein pointed out that weak lensing includes a
systematics check in that lensing itself produces no B modes (shears at
45 degrees to the mass sheet vector).
Huterer said that one needs to know the nonlinear power spectrum to better
than 5% to keep weak lensing parameter estimation bias below the statistical
errors. Newman mentioned that DEEP2 observations begin at Keck on July 5.
Bernstein said that DEEP should take care of the redshift distribution
of lenses for R<24.5. Stebbins presented the Alcock-Paczynski method,
which is still far from application. Knox mentioned peculiar velocities
as probes, but they are insensitive to w because they depend on the time
derivative of the growth factor, which is dominated by Omega_m. Hu talked
about future possibilities of cross correlation between CMB polarization
and lensing - this is limited by cosmic variance to dw=0.06 and Planck
can achieve dw=0.14.
Overall I think the main results were agreement that systematics must
be taken seriously and one's best estimation of them should be included
on all parameter contour plots presented. I'd like to believe that
people are more aware of the powers and limits of constraining classes
of dark energy models, and are more comfortable with the supernova method
and its robustness with respect to the progenitor state and deflagration
details. Certainly there will be great activity in the use and analysis
of all these probes in the near future.
Dimming Supernovae without Cosmic Acceleration
C. Csaki, N. Kaloper, J. Terning
hep-ph/0111311
Reviewed 11/29/01 (also see the related next review)
Neither cosmic acceleration nor the expansion rate of the universe is
directly observed; rather one must interpret the astrophysics of the
source, the propagation characteristics of the light through the
cosmology, and the selection biases of the detector. Just as the
observed redshift can be superficially attributed to "tired light", i.e.
photon interactions, this article postulates the decreased apparent
luminosity of supernovae with distance as due to photon oscillations
into (undetected) axions, rather than a distance-redshift relation of
an accelerating universe.
It contains a number of clever points, as well as a number of the usual
fine tunings common in hypothetical particle astrophysics. The basic
idea is simple: consider an axion coupling to electromagnetism through
the usual F-Fdual term. In the presence of a magnetic field mixing is
induced between photon and axion states. These oscillations will be
path length dependent, and in some regimes energy dependent. So photons
from supernovae can be "lost", dimming them variably with distance. The
clever part is that with maximal mixing one will lose up to 1/3 of the
photons (equilibrium division between 2 photon states and the axion).
This puts it comfortably in the regime to explain the Hubble diagram dimming: -2.5 log(2/3)=0.44 mag. If the mixing length is greater than a Hubble
length then one gets a fraction of this. Thus the Hubble diagram curve
will rise (dim) just as for an accelerating universe. Because not more
than 1/3 of the photons are lost, the curve does not rise without limit.
At high enough redshift the deceleration of the universe forces a
turnover, just as it does for the dark energy case. By adjusting
parameters, the authors replicate the Omega_m=0.3, Omega_Lambda=0.7
magnitude-redshift curve to within about 0.01-0.05 mag with a model containing
Omega_m=0.3, Omega_{-1/3}=0.7, axion scale M=4e11 GeV, axion mass m=1e-16 eV
(see Figure; note that a shift of only about 0.05 in
w will actually offset the curve from the cosmological constant model by
more than 0.1 mag - almost at the current data errors).
So the question is, how realistic are the assumptions and what constraints can be placed on such a mechanism? The authors address these points in
some detail. One objection is that they take a minimally nonaccelerating
universe with Omega=0.7 in a w=-1/3 component. They do this to match LSS
and CMB constraints, but it replaces one unknown (dark energy) with two
(axions plus a cosmic string network or whatever has w=-1/3). I won't go
into details on their mixing calculations explaining the energy dependence
(so CMB photons aren't affected) - the math looks ok to me; the basic
conclusion is that photons above a few hundredths of an eV are maximally
mixed in an energy independent fashion and below that energy the oscillations
are weak. So only submillimeter and shorter wavelength sources are affected.
Because photons don't actually lose energy, a la tired light, those
constraints, e.g. momentum smearing, don't apply - rather the photons
themselves are lost. I won't address weak points in the particle physics,
since the authors are much more expert and presumably did a careful job.
(I have been told that these cannot be "normal" axions that enter in
Peccei-Quinn symmetry breaking because they violate the relation between
m and M - see Kolb & Turner eq. 10.7).
What are the astrophysical doubts? Anywhere there's a magnetic field
there should be this mixing - stellar interiors (with consequent loss of
radiation pressure), quasars, active galactic
regions. They do a rough calculation showing the oscillation on galactic
scales should be of the same order as from cluster size magnetic domains,
considering B=few millionths vs. billionths gauss. (Note they have a typo
on p6: it
should read L_0^G \sim B^{-1}). This will also cause a decay in source
polarization, as well as generating polarization if the magnetic
field is properly aligned. Axions can oscillate back to photons, so
there will be some level of regenerated flux.
A couple more bits: They do make the nice point that
cluster counts (or weak lensing), for example, would be unaffected by
photon-axion mixing and give a true determination of w. Also, because of
the probabilistic nature of the interaction, they claim to expect increased
dispersion in the Hubble diagram. I'll add that this would also mean
variance with direction on the sky.
[Figure 1: The dashed blue curve is the usual cosmological constant (accelerating) universe, the solid purple is the axion case, the green dot-dash is the same cosmology (w=-1/3) as the axion case but with no axion mixing, and the gold dot-dot-dash is pure matter (flat).]
Supernovae as a Probe of Particle Physics and
Cosmology
J. Erlich, C. Grojean
hep-ph/0111335
Reviewed 12/01/01 (also see the related previous review: CKT)
Think of this paper as CKT lite. No new calculations (e.g. physics) are
presented but that is not its purpose. Rather it gives more and clearer
background than CKT and so will be useful to those from either side not
comfortable at the intersection of particle physics and cosmology. Their
conclusion and figures can be simply summarized by the well known generic
result that anything causing extinction of supernova photon flux - whether
dust or oscillation to axions - dims the supernova (raises the magnitude
curve in the Hubble diagram), while any cosmological model with a
differential deceleration relative to a fiducial model will brighten the
supernova (lower the Hubble diagram curve). The interplay of these factors
will let you match the accelerating model, or any other, over a certain
redshift range. If you want heuristic arguments plus detailed analysis, see
my paper, but in its
simplest form it really is obvious.
For those who want a little more quantitative argument now, the photon-axion
loss mechanism at asymptotically high redshift (really above z=0.5) raises
the magnitude by -(5/2)log(2/3)=+0.44 while any cosmological models that
become matter dominated, regardless of the other component equation of
state w, will be asymptotically at constant offsets - the more decelerating
ones will have negative magnitude offsets. [This is because the luminosity
distances all asymptotically scale linearly with redshift z (the comoving
distances \int dz/H(z) become constant since H(z) asymptotically goes as
a power more negative than -1: the mark of deceleration), so upon taking
the logarithm to form the magnitude m\sim 5 log d, the differential
magnitudes lose all z dependence.] So any such models will lie closer to
the usual accelerating case than 0.44 mag at worst. By playing off the
mixing (or dust) and deceleration, one can always arrange the curves to
match very precisely over some redshift range. It turns out that over the
range z=1-2 a model with w=-1/3 (and no mixing) lies about -0.45 mag from
the accelerating w=-1 case; thus the sum of the effects give a near match
to that case. Asymptotically the w=-1/3 curve is offset by -0.3, so the curve
will turn up, go positive, and level off about +0.1 above the w=-1 model.
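As a rough check on those numbers, here is a minimal sketch (not from either paper) that computes the magnitude offset of a flat Omega_m=0.3, w=-1/3 model from the flat Lambda model and then adds the saturated 0.44 mag of photon-axion dimming.

import numpy as np
from scipy.integrate import quad

def dc(z, Om=0.3, w=-1.0):
    # comoving distance in units of c/H0 for a flat universe with constant-w dark energy
    E = lambda zp: np.sqrt(Om * (1 + zp) ** 3 + (1 - Om) * (1 + zp) ** (3 * (1 + w)))
    return quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

def dmu(z, w):
    # magnitude offset of the flat (Omega_m = 0.3, w) model from the w = -1 model;
    # the (1+z) factors in the luminosity distances cancel in the ratio
    return 5.0 * np.log10(dc(z, w=w) / dc(z, w=-1.0))

for z in (1.0, 1.5, 2.0):
    print(z, round(dmu(z, -1.0 / 3.0), 2), round(dmu(z, -1.0 / 3.0) + 0.44, 2))
# Over z ~ 1-2 the w = -1/3 model alone sits several tenths of a magnitude below the
# Lambda model; adding the saturated 0.44 mag of photon-axion dimming brings it back
# close to the Lambda curve, illustrating the near match described above.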
One interesting point that is not mentioned by either paper is that if
the magnetic field changes on a timescale shorter than the supernova
lightcurve observations (up to 100 days), then in fact the flux is not
simply diluted as a whole but the shape of the lightcurve will be altered.
Let's consider orders of magnitude. The important quantity is the ratio of
the coherence length through the magnetic field to the oscillation length.
The latter scales as 1/B. So in comparison to CKT's Mpc size magnetic
domain picture,
(L_path/L_osc)_SN = (L_path/L_osc)_dom * (L_path/Mpc)_SN * (B/1e-9 G)_SN
                  = (L_path/L_osc)_dom * (L_path/3e15 cm)_SN * (B/1 G)_SN.
It may not be unreasonable to take supernova magnetic fields of order 1-100 Gauss, extending over distances 3e15-3e16 cm, so their effect would be 1-1000 times stronger than a single domain. But as the magnetic field changes due to the expansion of the supernova material, it would have time dependent effects on the photon flux. Since the light curve shape of Type Ia supernovae is well understood in terms of the early fused nickel mass and late radioactive cobalt decay, this seems to pose difficulties for allowing photon-axion mixing. But the numbers describing the supernova magnetic field need to be better estimated to improve this argument. [Note that massive stars (pre-supernova) often have fields of 100 G.] Another testing place would be pulsars, with fields up to 1e14 G. Even if the path coherence length were only 1 km, this would give an effect 3000 times larger than the extragalactic one. While radio waves lie in the no mixing regime, perhaps Xrays would show oscillation.
[For those wondering, no measurable time delay would be seen for those
photons regenerated from axions. For optical photons with energy of order
1 eV, and axion masses of 1e-16 eV, the axion velocity deviates from c
fractionally by 1e-32. Over a Hubble distance this gives a time delay of
1e-15 seconds.]
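The quoted delay follows from simple kinematics; a minimal sketch of the arithmetic (the Hubble time value is an assumption, roughly h=0.7):

# order-of-magnitude check of the quoted time delay for photons regenerated from axions
E_photon, m_axion = 1.0, 1e-16               # eV (optical photon, axion mass from the text)
dv_over_c = 0.5 * (m_axion / E_photon) ** 2  # fractional velocity deficit of the axion
t_hubble = 4.4e17                            # seconds, roughly 1/H0 for h ~ 0.7 (assumed)
print(dv_over_c * t_hubble)                  # ~ 2e-15 s, the same order as the quoted 1e-15 s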
Feasibility of Probing Dark Energy with Strong
Gravitational Lensing Systems
K. Yamamoto, Y. Kadoya, T. Murata, T. Futamase
astro-ph/0110595
Reviewed 11/04/01
Strong gravitational lensing, where discernible multiple images are
formed, probes a different combination of cosmological parameters
than other methods. It can actually be sensitive to a number of
different combinations depending on exactly what is measured: time
delays, image magnifications, image positions, etc. The best
constraints though would come from the rare circumstances where an
almost perfect Einstein ring was formed. This article therefore
considers only the combination D_ls/D_s.
Unfortunately they take a highly suspect Fisher matrix approach. This
requires the errors to be gaussian random variables which seems unlikely.
There are few systems and even if the individual distances are gaussian
distributed their ratio would not be. Everything depends on this
approximation. The parameter of virtue is N/eps^2 where N is the number
of lensing systems observed and eps is the error in determining the
distance ratio for one system.
Strong lensing alone is not very efficient at constraining the equation
of state w. Contours lie parallel to lines of constant Omega_m (flat
universe). 1 sigma errors on w are about 0.25 for N/eps^2=3x10^4 and
0.8 for 3x10^3. The combination with supernovae data helps because of
the complementarity and seems to reduce the errors by a factor of 4,
using 100 SN distributed randomly in z=0.5-1.5.
Several sources of systematic errors exist. The Einstein radius depends
on the model for the lens density profile (they adopt an isothermal
ellipsoid potential) and velocity dispersion. The uncertainty
contributed by the power law index defined for the profile is about
10% or greater and from the velocity dispersion a realistic estimate is
20-30%. This works out to eps=0.25-0.4. For N=10 systems, as estimated
by Holz for SNAP, this gives N/eps^2=60-160, far short of anything
useful. (The article's conclusions are slightly more optimistic.) They
also analyze what redshift distribution is optimal, finding that there
exists a local maximum for sources z_s<1. Thus low redshift lens systems
are almost as good for probing dark energy (and even its evolution),
but unfortunately this is not very good.
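To put these numbers on a common footing, here is a minimal sketch assuming the errors simply scale as 1/sqrt(N/eps^2), anchored to the paper's quoted sigma(w)=0.25 at N/eps^2=3x10^4 (the scaling assumption is mine, not the paper's full Fisher matrix):

import numpy as np

# assume the w error simply scales as 1/sqrt(N/eps^2), anchored to the quoted
# sigma(w) ~ 0.25 at N/eps^2 = 3e4
anchor_fom, anchor_sigma = 3.0e4, 0.25

def sigma_w(fom):
    return anchor_sigma * np.sqrt(anchor_fom / fom)

print(round(sigma_w(3.0e3), 2))        # ~ 0.8, reproducing the other quoted point
for eps in (0.25, 0.4):                # realistic per-system distance-ratio errors
    fom = 10 / eps ** 2                # N = 10 systems, as estimated by Holz for SNAP
    print(eps, round(sigma_w(fom), 1)) # sigma(w) of several -- far short of useful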