These were some suggested topics for discussion during our workshop,
updated with some of the results of the discussions.
Markers:
- What is best to use as markers for the power spectrum - Halpha
galaxies?
The consensus seemed to be that these were the most likely choice.
Strong emission features make detection easier, certainly easier than
absorption features, and Halpha is expected to remain strong past z=1.
Quasars are just not numerous enough: e.g. from z=2-2.1 over 6000 sq.deg.
one picks up 60000 quasars, which is a factor of 10 low relative to the
optimal marker number density (see the rough check below). One slight
hope is that because they are point sources one can resolve lines better,
and one might be able to get down to AB 23-24, which may allow a more
nearly sufficient number density. But generally we expect that whenever
quasars are sufficient, galaxies will be better. One question is whether
LRGs retain a tight homogeneity for z>1. But the overall conclusion is
that Halpha emitters are probably the best bet.
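As a rough check of the quasar shortfall (a sketch only: the flat
LambdaCDM parameters and the round value P ~ 3000 (Mpc/h)^3 for the
power spectrum near the oscillation scales are assumptions, not
workshop numbers):

    import numpy as np
    from scipy.integrate import quad

    # Rough check of the quasar shortfall. Flat LambdaCDM with
    # Omega_m = 0.3 is assumed; distances are in Mpc/h.
    c_H0 = 2998.0                                  # c/H0 in Mpc/h
    Om = 0.3

    def E(z):                                      # dimensionless Hubble rate
        return np.sqrt(Om*(1 + z)**3 + 1 - Om)

    def D_C(z):                                    # comoving distance, Mpc/h
        return c_H0 * quad(lambda zp: 1.0/E(zp), 0.0, z)[0]

    # Comoving volume of the z = 2 to 2.1 shell over 6000 sq.deg.
    fsky = 6000.0 / 41253.0
    V = fsky * (4*np.pi/3) * (D_C(2.1)**3 - D_C(2.0)**3)

    n = 6.0e4 / V                                  # quasar number density
    P = 3.0e3                                      # assumed P, (Mpc/h)^3
    print(f"n = {n:.1e} (h/Mpc)^3, nP = {n*P:.2f}")

With these assumptions nP comes out around 0.07, i.e. an order of
magnitude below the nP ~ 1 optimum; the exact factor of course tracks
the assumed P.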
- Is there a way to productively use the higher number densities of
galaxies we are likely to find, at least from space?
One gains precision only very slowly as the number density increases
beyond the optimum, given approximately by nP=1 (see the sketch after
this answer). What a higher number density does allow is to be more
selective, picking special or "perfect" markers. One idea is to select
only, e.g., edge-on spirals, which also helps with calibration issues.
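A minimal illustration of the diminishing returns, using the standard
effective-volume weighting V_eff/V = [nP/(1+nP)]^2 (the
Feldman-Kaiser-Peacock weighting; treating nP as constant over the
survey is a simplification):

    # Diminishing returns beyond nP ~ 1, using the effective-volume
    # weighting V_eff/V = [nP/(1+nP)]^2 with constant nP.
    for nP in (0.3, 1.0, 3.0, 10.0, 100.0):
        print(f"nP = {nP:6.1f}: V_eff/V = {(nP/(1 + nP))**2:.3f}")

Going from nP=1 to nP=100 buys only a factor of ~4 in effective volume,
i.e. a factor of 2 in power spectrum errors.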
- How homogeneous does the selection of markers need to be?
Nothing quantitative discussed.
- Are there issues with Halpha line identification - e.g. confusion with
NII, OIII?
Many broadband filters, e.g. the 9 on SNAP, will allow use of photometric
z's to narrow in on the redshift and hence avoid line confusion. The NII
lines lie close on either side of Halpha, but are generally substantially
weaker except in certain object classes, such as LINERs, that could be
avoided. Even if the resolution were too poor to separate NII, and the
lines were strong, this would only shift the centroid by about 12
Angstroms (see the sketch below).
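A small illustration of the blend centroid (the Halpha:[NII] strength
ratio here is an invented worst case, not a measurement; note the
rest-frame shift maps to observed wavelength as (1+z), which lands near
the ~12 Angstrom figure at z~1):

    # Flux-weighted centroid of the Halpha + [NII] blend, rest
    # wavelengths in Angstroms. NII comparable to Halpha is an assumed
    # worst case (outside LINER-like objects); the ~3:1 [NII] doublet
    # ratio is fixed by atomic physics.
    lam  = {"NII_6548": 6548.0, "Halpha": 6563.0, "NII_6583": 6583.0}
    flux = {"NII_6548": 1.0/3.0, "Halpha": 1.0, "NII_6583": 1.0}
    cen = sum(lam[k]*flux[k] for k in lam) / sum(flux.values())
    shift = cen - lam["Halpha"]
    print(f"rest shift = {shift:.1f} A, observed at z=1: {2*shift:.1f} A")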
- What spectral resolution is needed for line identification and redshift
measurement? d[ln(1+z)]=0.002 is 13A (rest frame) for Halpha.
For photometric redshifts, one would like an accuracy of at least 0.04.
Further precision gains radial wavemodes and hence reduces the volume
needed to match a spectroscopic survey. For 3D information, one wants at
least 100 Mpc resolution, or dz/(1+z)=0.003 (see the conversion below).
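A quick conversion between redshift precision and spectral resolution
(this R is in the standard-deviation convention; the FWHM caveat under
Observations still applies):

    # For a single line, d[ln(1+z)] = dlam/lam_obs, so the required
    # resolution is R = lam/dlam = 1/d[ln(1+z)].
    lam_Ha = 6563.0                                # Angstroms, rest frame
    for dlnz in (0.002, 0.003, 0.04):
        print(f"d[ln(1+z)] = {dlnz}: R = {1/dlnz:.0f}, "
              f"rest-frame dlam(Halpha) = {lam_Ha*dlnz:.1f} A")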
Observations:
- What signal-to-noise measurement of the markers is needed? Issues of
Malmquist bias.
Simulations being started.
- What is the true resolution of grisms needed, given multipixel
objects?
Because of background noise, one takes a hit in spectral resolution for
extended objects relative to point sources, typically a factor of 3-5.
Raw, before centroiding, one wants an effective resolution of at least
100 per pixel. There is also an issue of convention: dz/(1+z) is quoted
as a standard deviation, while spectral resolution is often quoted as a
FWHM, which for a Gaussian is larger by a factor 2 sqrt(2 ln 2) = 2.35.
- How practical are these grisms given, e.g., the SNAP focal plane?
In terms of raw space, there is no problem with tiling of order 0.1
sq.deg. of the focal plane. Mechanical, stray light, etc. issues are
under study, but the science case needs to be strong.
- Should the survey field be continuous or are long strips with gaps
between them ok?
There was general horror at the idea of noncontiguous fields, with
expressions of worry about aliasing, even for long strips. Solid patches
of 10-20 degree extent, i.e. 300 Mpc, were advocated.
- How best to take advantage of SNAP's 9 filters and photo-z
accuracy?
The photo-z's will be enormously helpful in preselecting markers
and avoiding confusion. The ability to cover several thousand sq.deg.
at z=1-2 is a strong argument for space. To maximize this, one wants
coverage as far into the infrared as possible. For example, covering
1.25-1.7 microns gives Halpha from z=0.9-1.6, OIII from 1.6-2.4, and
OII from 2.4-3.6 (checked in the sketch below).
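The redshift windows follow from z = lam_obs/lam_rest - 1; a quick
check (small differences from the rounded ranges above are expected):

    # Redshift windows for each line across an assumed 1.25-1.7 micron
    # coverage; rest wavelengths in Angstroms.
    lines = {"Halpha": 6563.0, "OIII": 5007.0, "OII": 3727.0}
    lo, hi = 12500.0, 17000.0
    for name, lam in lines.items():
        print(f"{name:6s}: z = {lo/lam - 1:.2f} to {hi/lam - 1:.2f}")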
- Other issues --
Combining the localization from the 9 broad filters with extra,
physical, narrow "comb" filters sounded good, but offered little
improvement when simulated.
The ratios of wavelengths between Halpha (6564), OIII (5007), and OII
(3727) are all roughly 1.33 (and one could extend further to Mg), so one
could have continuous redshift coverage with detectors of 33% bandwidth.
However, each redshift shell would have a different galaxy type
selection. Also, OII (3727) is weaker than Halpha and sits in a region
of higher background. At a fixed observer wavelength it is about 24
times fainter, hence requiring 600 times more exposure time in the
background-limited regime (see below). OIII (5007) is maybe 15 times
weaker. So this idea may not be practical.
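The 600x figure is just the square of the flux ratio, assuming
background-limited observations:

    # Background-limited scaling: S/N ~ F*sqrt(t), so matching S/N on a
    # line X times fainter costs ~X^2 the exposure. Flux ratios relative
    # to Halpha are the rough values quoted above.
    for line, fainter in (("OII(3727)", 24.0), ("OIII(5007)", 15.0)):
        print(f"{line}: {fainter:.0f}x fainter -> "
              f"~{fainter**2:.0f}x exposure")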
Analysis:
- How serious are redshift space distortions and what to do about
them?
Nothing quantitative discussed.
- How does a d[ln(1+z)] translate into a resolution R and vice versa,
realistically?
See above discussion under Observations.
- What are the subtleties in the method of oscillation fitting?
It is not clear whether one needs to model the underlying power
spectrum to high accuracy or not. One idea is to model it as, say, a
4th-order polynomial expansion and see how much the extra fitting
parameters smear the determination of the oscillation frequency. If not
by much, then there is little worry about needing a robust theoretical
prediction of P_k (a toy sketch follows below).
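A toy sketch of the idea (the spectrum shape, wiggle amplitude, and
scales are all invented numbers): absorb the broadband shape with a
low-order polynomial and see whether the oscillation survives in the
residual; one could then ask how the polynomial coefficients correlate
with the fitted wiggle scale.

    import numpy as np

    # Smooth power spectrum times a small damped wiggle; fit a 4th-order
    # polynomial in (ln k, ln P) for the broadband shape and inspect the
    # residual oscillation.
    k = np.logspace(-1.7, -0.3, 300)               # h/Mpc
    P = 2e4 * (k/0.05)**(-1.6) * (1 + 0.05*np.sin(2*np.pi*k/0.06)
                                  * np.exp(-(k/0.25)**2))

    coeffs = np.polyfit(np.log(k), np.log(P), 4)   # broadband fit
    residual = P / np.exp(np.polyval(coeffs, np.log(k))) - 1
    print(f"peak surviving oscillation: {np.abs(residual).max():.3f}")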
Another possibility is a full maximum likelihood approach. For a
Gpc^3 volume resolved down to 10 Mpc, this gives 10^6 independent cells.
For 2D, a million-by-million matrix is quite tractable (cf. the CMB); if
we want 3D, we have a million-cubed matrix. This may well be ok,
especially if one can treat it in terms of small deviations: a ratio of
determinants, when expanded, becomes a trace, which is numerically much
faster (checked below).
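The expansion in question is ln det C = tr ln C, so a determinant ratio
for C = C0 + eps*D becomes, to first order, eps*tr(C0^{-1} D), avoiding
any explicit determinant. A small numerical check (toy sizes; the full
problem would exploit structure in C0):

    import numpy as np

    # Check: ln det(C0 + eps*D) - ln det(C0) = tr ln(I + eps*C0^{-1} D)
    #                                        ~ eps * tr(C0^{-1} D).
    rng = np.random.default_rng(0)
    N = 50                                         # toy size, not 10^6
    A = rng.standard_normal((N, N))
    C0 = A @ A.T + N*np.eye(N)                     # well-conditioned cov.
    D = rng.standard_normal((N, N)); D = (D + D.T)/2
    eps = 1e-3

    exact = (np.linalg.slogdet(C0 + eps*D)[1]
             - np.linalg.slogdet(C0)[1])
    first_order = eps * np.trace(np.linalg.solve(C0, D))
    print(f"exact = {exact:.6e}, trace approx = {first_order:.6e}")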
- Calibration issues?
To avoid problems with the 0th order dispersion and where specifically
on the galaxy one is looking, one could select only starburst or edge-on
spirals. Open questions are how much of a deviation from alignment one
could tolerate and whether these galaxies have sufficient number density.
Theory:
- Redshift shells, redshift ranges, 2D vs. 3D
These ideas have been pretty well worked out in the literature, and
we'll continue in more detail. The range z=1-2, well suited for space
observations, is quite an interesting one theoretically. Also 2D
should not be ignored.
- Other issues --
Extra physics that alters the CMB determination of the sound horizon,
such as an additional neutrino species, does not seem to ruin the
baryon oscillation method, since it cancels out of the wavemode ratio.
Daniel and Martin are writing a paper analyzing the various degeneracies.
It appears that, even aside from this insensitivity, the degeneracies
might be broken by the potential forcing of the oscillations.
- Where will baryon oscillation surveys/detections be in 5 years?
(OmegaCam, FMOS, AAOmega, VIMOS, (V)ISTA, HET, etc.)
This seems strongly dependent on TACs (time allocation committees).
A useful survey would require several hundred nights to get sufficient
area. Until there is a major program such as KAOS, or a dedicated
telescope, we don't expect good characterization of the baryon
oscillations. The one wildcard is Pan-STARRS, whose capabilities we
don't know enough about.
- How best to take advantage of ground/space complementarity?
Definitely the redshift range. And crosschecking. Note that from
the ground the survey markers are targeted, which adds noise to the
power measurement through, e.g., avoided zones and slit/fiber collision
areas. From space we just mow the sky, getting more uniform coverage.