LBL Cosmology Visitor Program Weak Lensing workshop Day 4 discussions

* Morning discussion focused on weak lensing as a cluster-finding tool.
* Alexandre Refregier presented background on the topic, Joe Hennawi covered some technical aspects, and David Wittman discussed the DLS experience. Martin pointed out that if one wanted to use the cluster correlation function, one would need thousands of square degrees. But at that point the observations would be less noisy in the optical, so WL would mostly serve as a calibrator. For leverage on dark energy one wants to measure the high-redshift end of the function, which drives observations to space.
* Considerable discussion focused on lensing efficiency in cluster finding, and on completeness vs. contamination, i.e. false negatives and false positives, related to the issue of "dark clusters" and what fraction of peaks in the maps is real (a toy completeness/purity sketch is appended after these notes). Marc Davis expressed concern about the large correction factors needed for selection effects: with DEEP2 he would get concerned about a factor of two, while lensing seemed to require up to a factor of 10. One idea was to use the shape of the counts with redshift rather than the raw number, but this too will be sensitive to mass limits.
* The summary of the opinions of those present seemed to be that WL for cluster science would be very tough, and we decided to have Josh Frieman do it (he being absent ;-).
* The lensing role is likely to be predominantly one of calibration, and space would be valuable for going deep, reaching low mass, and giving better calibration. A case for cross-correlating weak lensing and optical surveys needs to be developed in detail.

- - Afternoon - -

* One interesting aspect of the STEP comparison program is that the VLT data will enable examination of the dependence on seeing. This is expected to be quite steep, as Gary Bernstein's recent document demonstrates.
* Eric pressed the assembled observers on when the various weak lensing methods would be ready for prime time. For tomography, the feeling was that CFHTLS would be the first step, although having only 15 galaxies/arcmin^2 was a little worrying; this matters less on large scales, where shape noise averages down (see the shape-noise sketch appended after these notes).
* For the bispectrum, the answer was possibly VIRMOS; it was unclear how large an area would be required. The point was made that new systematics would enter as one scaled surveys up from 1 deg^2 to 100 deg^2 or more. Right now the data quality is such that one can only aim at a single number, the amplitude of the non-Gaussian signal, whether this is called skewness or M_ap^3 or whatever (a toy skewness estimator is appended after these notes). Martin said this is all one expects as long as one stays within the Born approximation. Concern was also voiced regarding holes within the survey area and the resulting S/N hit.
* Geometric methods, CCC etc., would be extremely difficult without space observations, and possibly even with them. Certainly the stability of photo-z determination is a critical push toward space.
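
For concreteness, a toy sketch of the completeness vs. contamination bookkeeping discussed in the morning session; the catalog names, the flat-sky matching, and the match radius are all illustrative assumptions, not anything presented at the workshop:

    import numpy as np

    def completeness_and_purity(peaks, clusters, match_radius_arcmin=1.0):
        """Match shear peaks to true clusters by angular separation.

        peaks, clusters: (N, 2) arrays of (RA, Dec) in degrees, small-field
        flat-sky approximation (no cos(Dec) factor). Completeness is the
        fraction of clusters recovered (1 - false-negative rate); purity is
        the fraction of peaks with a real counterpart (1 - contamination).
        """
        r = match_radius_arcmin / 60.0  # match radius in degrees
        # Pairwise peak-cluster separations; fine for single-field catalogs.
        d = np.hypot(peaks[:, None, 0] - clusters[None, :, 0],
                     peaks[:, None, 1] - clusters[None, :, 1])
        completeness = (d.min(axis=0) < r).mean()  # clusters that were found
        purity = (d.min(axis=1) < r).mean()        # peaks that are real
        return completeness, purity

In this language the "dark clusters" issue is a purity below one, while the large selection corrections correspond to low completeness near the mass limit.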
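
On the 15 galaxies/arcmin^2 worry for CFHTLS tomography: the standard shape-noise estimate shows why low source density matters less on large scales. A back-of-the-envelope sketch, with an assumed intrinsic ellipticity dispersion of 0.3 per component (an illustrative value, not a workshop number):

    import numpy as np

    sigma_eps = 0.3  # intrinsic ellipticity dispersion per component (assumed)
    n_gal = 15.0     # source density, galaxies per arcmin^2 (CFHTLS figure)

    # Mean-shear noise in a square cell of side theta is sigma_eps/sqrt(N);
    # N grows as theta^2, so the noise falls linearly with smoothing scale.
    for theta in [1.0, 2.0, 5.0, 10.0, 30.0]:  # cell side in arcmin
        n = n_gal * theta**2
        print(f"theta = {theta:5.1f}'  N = {n:8.0f}  "
              f"sigma_gamma = {sigma_eps / np.sqrt(n):.4f}")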
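
On the "single number" point in the bispectrum discussion: whatever the name, the statistic amounts to a reduced third moment of the smoothed convergence field. A minimal sketch of one common convention, S_3 = <kappa^3>/<kappa^2>^2, assuming a pixelized kappa map (M_ap^3 would use a compensated filter rather than the Gaussian used here):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def convergence_skewness(kappa_map, smoothing_sigma_pix=4.0):
        """Reduced skewness S_3 = <k^3>/<k^2>^2 of a smoothed,
        mean-subtracted convergence map: the single non-Gaussian
        amplitude one can hope to measure at current data quality."""
        k = gaussian_filter(kappa_map, smoothing_sigma_pix)
        k = k - k.mean()
        return np.mean(k**3) / np.mean(k**2) ** 2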