LBL Cosmology Visitor Program Weak Lensing Workshop
Day 5 Discussions

* Gary Bernstein showed effective source redshift distributions that were rather shallow at high redshift, falling less steeply even than exp(-z). This means more attention may need to be paid to the very high-redshift (z > 3) sources. From Huan Lin's GOODS-N photo-z catalog he finds effective number densities (using the white-noise weighting criterion) of 15/35/107 per arcmin^2 for DES/LSST/SNAP. One caution is that the catalog has photo-z's for only about 50% of the shape measurements. Gary also emphasized the importance of the S/N cut at around 10-15.
* There was general agreement that z_median is a better single-number description of the source redshift distribution; z_mean is better still, and giving the full distribution is best. Ideally one would want a multivariate catalog a la Lanzetta, e.g. magnitude-size-spectroscopic redshift. Pixel-level catalogs enabling high-redshift and systematics tests are needed.
* Martin showed that the bispectrum statistic is rather insensitive to cosmology once the amplitude is normalized. This indicates that in the near future a single number describing the amplitude is likely sufficient; one possibility, easy to implement from observations and to interpret from theory, is M_ap^3. Eric pointed out that even if the bispectrum is insensitive to cosmology, it can still be useful for breaking degeneracies, and for other roles such as its different dependence on systematics and its use in self-calibration.
* Henk emphasized again that for higher-order statistics, holes in the survey have large effects, e.g. reducing 11 deg^2 for the power spectrum to an effective 2-3 deg^2 for the bispectrum.
* Tony Tyson and Vera Margoniner presented DLS data on n_g vs. magnitude.
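The "white-noise weighting criterion" mentioned above is not spelled out in the notes; a common inverse-variance form, sketched here as an assumption (the dispersion value and function name are illustrative, not from the discussion), down-weights each galaxy by its shear measurement noise relative to the intrinsic shape dispersion:

```python
import numpy as np

SIGMA_GAMMA = 0.2  # assumed intrinsic per-component shear dispersion

def n_eff_weighted(sigma_meas, area_arcmin2):
    """Inverse-variance ("white noise") weighted effective number density.

    sigma_meas: per-galaxy shear measurement errors (array-like).
    Each galaxy contributes a weight w_i = sigma_g^2 / (sigma_g^2 + sigma_i^2),
    so a noiseless galaxy counts fully and a very noisy one hardly at all.
    Returns the effective count per square arcminute.
    """
    sigma_meas = np.asarray(sigma_meas, dtype=float)
    w = SIGMA_GAMMA**2 / (SIGMA_GAMMA**2 + sigma_meas**2)
    return w.sum() / area_arcmin2
```

Under this convention, perfectly measured galaxies reproduce the raw number density, while a galaxy measured with noise equal to the intrinsic dispersion counts as half a galaxy.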
* There was general agreement that the groups needed to operate on the same data set to track down the discrepancies in the values found for n_g, and that this should have high priority, so that the quoted n_g numbers have the same meaning and are persuasive to the larger community.
* Eric encouraged all participants to send him links to post on this website: data plots, catalogs, or relevant calculations.
* For n_g we need clear calculations showing both what is theoretically possible (for given observing conditions, exposures, etc.) and what is achievable (taking systematics into account), both on real data and on uniform simulations.
* We agreed to characterize the galaxy shape recovery (i.e. as the common quoted number) in terms of the mean shear error in a one-square-arcminute field. The effective number density of galaxies is then defined as (0.2 per-component shear)^2 times the inverse variance of that mean shear.
* One open question was a seeming conflict with the Subaru data, which in 0.4" seeing give n_g = 30/arcmin^2. Why so low?
* To check the realism of image simulations, one would like to see a known applied shear recovered.
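The agreed convention above amounts to a one-line formula: if sigma_mean is the measured mean shear error per component in a field of area A = 1 arcmin^2, then n_eff = (0.2)^2 / (sigma_mean^2 * A). A minimal sketch (the function name is illustrative):

```python
SIGMA_GAMMA = 0.2  # the agreed per-component shear dispersion convention

def n_eff_per_arcmin2(mean_shear_error, area_arcmin2=1.0):
    """Effective number density from the mean shear error in a field.

    Defined so that the inverse variance of the mean shear equals
    n_eff * area / SIGMA_GAMMA^2, i.e. the density of "ideal" galaxies
    with dispersion 0.2 that would give the same mean shear error.
    """
    return SIGMA_GAMMA**2 / (mean_shear_error**2 * area_arcmin2)
```

For example, a mean shear error of 0.2/sqrt(30) per component in one square arcminute corresponds to n_eff = 30/arcmin^2, matching the Subaru figure quoted above.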