SCP Meeting Notes, May 12, 1999

Peter's escrow seems to have closed, or nearly closed. He'll be moving over the course of June and such.


Yet Another Update on the Gerson/Don New Template

Gerson tells us again about this paper that he and Don have been working on. They've done an improved (again? Or is this the one we've heard about before?) version of the lightcurve. Cf. earlier meeting notes for a description of this. He's showing an improved version. Every datapoint is within the error. These datapoints are residuals (I think), for various cuts on the sample. They've cut on redshift, on stretch, etc. It all looks good. The conclusion is that the new lightcurve is improved upon the previous version (which was already modified from Leibundgut), and works well for all the supernovae that we have.

Don says that there will be one more step of iteration. Despite this, there is much discussion over the fact that the "complete sample" fit has a <1 sigma fluctuation up on one side of max, and down on the other side.

It's better than you ever imagined.

-G. Goldhaber

Don says that one corollary of this is that we can no longer locate the maximum as well as we thought; it's possible to slide the fit around a few tenths of a day. He prefers to reference things to the half-max on the rise side. Peter does not want to go away from referencing things to maximum light.

The new conclusion and the strong statement, Don points out, is that stretch works on the rise side too. He says that this is a statement that should boggle the mind of any theorist. Gerson says that the paper is further along, but that there is still a ways to go. He's shooting for the meeting (early June), but Don will not be available in the meantime. This discussion gets a little gory, and will not be fully transcribed.
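For the record, the stretch idea under discussion is a one-parameter rescaling of the lightcurve's time axis. The template function and all numbers below are invented for illustration (the real B-band template is tabulated from data), so this is only a toy sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for the B-band template: an asymmetric bump peaking at
# t = 0, with a faster rise (width 8 days) than decline (width 15 days).
def template(t):
    return np.where(t < 0, np.exp(-0.5*(t/8.0)**2), np.exp(-0.5*(t/15.0)**2))

# Fake photometry generated with a true stretch of 0.9, plus noise.
t_obs = np.linspace(-15.0, 40.0, 25)
s_true = 0.9
flux = template(t_obs/s_true) + rng.normal(0.0, 0.02, t_obs.size)

# Recover the stretch with a simple grid-search least squares; the
# model is just the template with its time axis rescaled by s.
s_grid = np.linspace(0.6, 1.4, 161)
chi2 = [np.sum((flux - template(t_obs/s))**2) for s in s_grid]
s_fit = s_grid[int(np.argmin(chi2))]
print(f"recovered stretch: {s_fit:.3f}")
```

Don's point about sliding the fit by a few tenths of a day would correspond to also floating the time of maximum, which is pinned at zero here for simplicity.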


CCD Update

Don says that they now have a working back-illuminated CCD. The polish job wasn't great, so they can't call it science grade. What's frustrating them at the moment is Richard's QE calculations, which did not match at all what Don had calculated. Don shows us a plot that compares data taken a year (?) ago with some pathetic AR coating, and the current chip with a supposedly really good AR coating. There is a dip that his simulations can't predict, and there is a low point somewhere in the red, I think, that is too low despite all the really good AR coatings. Don says they don't know whether Richard's screwing up, or whether the quartz fell off (so he says). In other words, it's something that they are in the middle of and don't know the answer to yet, and it requires more work, and most of us here probably don't have any conceivable need to know the details at this point, so it's exactly the sort of thing we spend most of these meetings on.

As far as yield goes, they have yet to have a large format CCD that does not image. That sounds good.


Preliminary Nearsearch Lightcurves

Maria shows some lightcurve results she and Ana have been getting. These, I think, represent their quick measurements for a nearby supernova. The one example we're looking at seems to put the spectrum about seven days later relative to maximum than what Peter (?) had thought from looking at the spectrum itself. Peter notes that the curve plotted is V-band, while the points are B-band. He wants to know if the redshift is near 0.1. Robert Quimby says that even if you plot the matching theoretical lightcurve, it's shifted by at least five days.

Peter does point out that there are lots of possible reasons: stretch, reddening, etc.

Saul says he thinks it's time for Maria and Ana to start learning how to do data reduction and start dealing with reduced, full, real images. Rob and Susana are working on a manual for data reduction, so that people can learn how to do this.


MLCS vs. Template Fitting

Alex Lewin is reading a paper by Drell (which both Peter and Greg are casting aspersions upon, for reasons unclear). Drell did something which is supposed to be similar to what Alex is doing. Alex hasn't read the whole paper yet, so she can't say too much. However, it compares the MLCS method versus the template fitting method for the other group's data. They plot the difference in distance modulus as a function of average absolute magnitude. They claim there is a trend, but Alex points out that there are only two supernovae. You see a trend if you plot difference in distance modulus versus MLCS magnitude, or average magnitude, but you don't see much trend if you plot it against template fitting magnitude. Conclusions are not immediately intuitively obvious, though.

Alex points out that the coefficient between stretch (or, equivalently, delta-M_15) and luminosity is steeper for their MLCS data than for what we get from our supernovae. The slope they get for template fitting is in between. Peter and Greg note that if we were to extinction-correct our data, we would get a higher alpha (steeper slope). (I.e. there are correlations... our brightest supernovae tend to be the more extincted ones.) (Is that right? I think it means the low redshift ones, which have the most weight in calibrating the stretch/luminosity relation. Somebody should check this. Indeed, as a general note, be very careful citing anything you read in these meeting notes as an actual result of the group.)

Greg notes that the results they get for the distance modulus differences could just come from the fact that they used different delta-M_15 coefficients for the two methods.

That was where I started to drift. I think Peter's talking about reddening correction.

Peter is also asking whether the same set of filters (BVRI vs. BV, or something else) were used for both the low redshift calibrators and the high redshift supernovae for the MLCS thingies whatchamadugit bleah. He notes you should make sure you are comparing apples to apples before you draw any conclusions about evolution. (Even if apples evolved from Jurassic dinofruits, in which case you need to compare apples to Jurassic dinofruits, but that is a completely different issue altogether.)

Greg notes that problems in the data sample are more likely than any evolution conclusion. He says they are doing a microscopic detailed analysis assuming that there is a perfect, clean, laboratory sample and set of measurements. In short, Peter and Greg think that the whole thing is just so much hooey. Well, at least, there are "misguided elements" that have had Saul getting phone calls asking if everybody should give up on the supernova results.


Our Goof in Color Limits

Greg is pointing out a mistake we made in our 42 supernova paper. He's telling us about the histogram of E(B-V) for Hamuy supernovae, and for our supernovae. Hamuy's was this nice narrow spike thing, and ours was broader (mostly due to measurement errors). We calculated the mean in both of these data sets; they were very similar. We had <E(B-V)>=0.01 +/- 0.02, and Hamuy had <E(B-V)>=0.01 +/- 0.01. Our first reaction was saying that, hey, they are similar, so you probably don't gain much with reddening corrections. However, when setting an upper limit, adding the errors and multiplying by R_B (about 4), the errors become non-trivial.
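As a sanity check on that arithmetic (R_B is "about 4" per the notes; whether the two errors get added linearly or in quadrature isn't stated, so both are shown):

```python
# Errors on the two <E(B-V)> means, from the numbers above.
sigma_ours, sigma_hamuy = 0.02, 0.01
R_B = 4.0  # approximate reddening coefficient, "about 4" per the notes

# A_B = R_B * E(B-V), so the error on the A_B difference scales the same way.
linear = R_B * (sigma_ours + sigma_hamuy)             # straight sum
quad = R_B * (sigma_ours**2 + sigma_hamuy**2) ** 0.5  # quadrature sum
print(f"error on A_B difference: {linear:.2f} mag (linear), {quad:.2f} mag (quadrature)")
```

Either way the limit is of order 0.1 mag, which is the sense in which "the errors become non-trivial."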

What we did was we tried eliminating some supernovae, including what we thought was the reddest quartile. (After that we did another Omega-Lambda fit, to see how much things moved.) However, when we throw in the measurement errors, the measured reddest quartile isn't necessarily the intrinsically reddest quartile. With big measurement errors, eliminating that quartile doesn't buy you much at all. What we have to do now is figure out how meaningful what we did really was.

Branch and somebody did some modelling by putting random supernovae in disks and halos, and looked at them at all sorts of different angles. The result is a fairly strong spike at zero, with a tail that can go out to fairly large values of A_B. Our paper points out that we have a selection effect that will trim out the supernovae with the highest A_B. Greg applied a selection effect, and then assumed that this was the intrinsic supernova distribution. He then took our errors, and generated fake measurements from that.

Greg found that the true A_B is pretty low, 0.019 (that's just the result of the Branch et al. work, modulated by the selection effect). After you clip the reddest quartile (and the others clipped by our criterion), the intrinsic A_B (i.e. of the parent population, something you can't do with real data) was 0.015, but the measured A_B (done exactly as in our paper) was -0.051. In other words, the clipped distribution isn't really that much less extincted as a set than the whole distribution, and we fooled ourselves in making the argument. That is, we vastly overstated our sigma limit on how much redder than the Hamuy sample we are. Our results (especially some results on greyer dust) were in fact way overstated. Oh well.
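A toy version of Greg's experiment; the intrinsic distribution, error sizes, and sample size here are all made up, and only the procedure follows the description above:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Toy intrinsic A_B distribution: a spike at zero plus an exponential tail,
# loosely in the spirit of the Branch-style disk/halo models (all numbers
# here are illustrative assumptions, not the group's actual values).
intrinsic = np.where(rng.random(n) < 0.7, 0.0, rng.exponential(0.1, n))

# Measurement errors comparable to (or larger than) the intrinsic spread.
measured = intrinsic + rng.normal(0.0, 0.2, n)

# Clip the reddest quartile *as measured*, then compare the means.
keep = measured < np.quantile(measured, 0.75)
print(f"intrinsic mean, full sample:    {intrinsic.mean():+.3f}")
print(f"intrinsic mean, clipped sample: {intrinsic[keep].mean():+.3f}")
print(f"measured mean, clipped sample:  {measured[keep].mean():+.3f}")
```

The clipped *measured* mean comes out well below the clipped *intrinsic* mean, which is exactly the way the paper's clipping argument fooled itself.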

The paper by Aguirre on astro-ph, which disputes our limits on grey dust, used the intrinsic supernova dispersion instead of measurement errors to do the analysis that Greg also did. However, that doesn't make sense, since Peter assures us that the intrinsic colors for a given stretch are known better than the intrinsic supernova distribution. But, although his reasons are wrong, his conclusion is right, because of the analysis done by Greg using measurement errors.

We should have realized that we could never get more than 0.01 bluer in E(B-V) than the Hamuy set, since their average E(B-V) is 0.01+/-0.01.

We futzed up. Oh well. Saying that we think the extinction is the same at low and high redshift supernova was probably right. However, in trying to set upper limits, we did screw up.

Saul is wondering about buying back a few hundredths of a magnitude (relative to adding the uncertainties in the two E(B-V) distributions and just calling that the "color statistical error") by doing this analysis, but doing it more right.

Final result: this could shift things by about 0.12 in magnitude. That turns out to shift us by about 0.12 in Omega.

At this point, the only really good thing we can do is just get better data. We have to hope that the HST will help. (It's also useful that Tom York is working on other limits on how much dust (including grey dust) there could be out there.) The dust question is one we still have to keep working on.


Omega/Lambda with Many Fake Supernovae

Robert ran some test Omega/Lambda fits with more fake supernovae at redshifts between 0.3 and 1.5. He put in 500 or 1000 supernovae, and came out with really nice, really tight little confidence limits. Greg notes that we say that things get more interesting if they close in around our current best fit values than if they close in around the flat universe value (in terms of how fast the contours pull away from lambda=0). He suggests that Robert go back and redo it with the fake supernovae centered around the flat universe.
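A minimal sketch of that kind of exercise. The fiducial cosmology, scatter, and grid are assumptions of mine, and the magnitude zero point is ignored since it cancels between the fake data and the model:

```python
import numpy as np

def distmod(z, Om, OL):
    """Distance modulus up to an additive constant, for (Omega_M, Omega_Lambda)."""
    zf = np.linspace(0.0, z.max(), 400)
    Ok = 1.0 - Om - OL
    Ez2 = Om*(1+zf)**3 + Ok*(1+zf)**2 + OL
    integrand = 1.0/np.sqrt(np.clip(Ez2, 1e-12, None))  # clip guards the no-big-bang corner
    chi = np.concatenate(([0.0], np.cumsum(0.5*(integrand[1:]+integrand[:-1])*np.diff(zf))))
    if Ok > 1e-8:                                       # open: sinh curvature correction
        chi = np.sinh(np.sqrt(Ok)*chi)/np.sqrt(Ok)
    elif Ok < -1e-8:                                    # closed: sin curvature correction
        chi = np.sin(np.sqrt(-Ok)*chi)/np.sqrt(-Ok)
    dl = (1.0 + zf) * np.clip(chi, 1e-12, None)         # luminosity distance, units of c/H0
    return np.interp(z, zf, 5.0*np.log10(dl))

# 500 fake supernovae at 0.3 < z < 1.5 around an assumed fiducial cosmology.
rng = np.random.default_rng(42)
z_fake = rng.uniform(0.3, 1.5, 500)
sigma_mu = 0.15  # assumed per-supernova scatter, in magnitudes
mu_obs = distmod(z_fake, 0.3, 0.7) + rng.normal(0.0, sigma_mu, z_fake.size)

# Brute-force chi-square over an (Omega_M, Omega_Lambda) grid.
Oms = np.linspace(0.0, 1.0, 21)
OLs = np.linspace(0.0, 1.4, 29)
chi2 = np.array([[np.sum(((mu_obs - distmod(z_fake, Om, OL))/sigma_mu)**2)
                  for OL in OLs] for Om in Oms])
i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
print(f"best fit: Omega_M = {Oms[i]:.2f}, Omega_Lambda = {OLs[j]:.2f}")
```

Redoing it centered on a different cosmology, as Greg suggests, just means changing the fiducial values handed to distmod when generating mu_obs; the interesting part is how the shape of the chi2 contours changes.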

This leads to some discussion of how you are going to get spectra of 500 supernovae, even if you have a magic space telescope (or something) that can find that many.


B-I and Grey Dust

Peter has what he describes as a cool idea about grey dust. He thinks we could get V-J (using NIRC at Keck) of z=0.3 supernovae. This you could compare to the rest-frame B-I of what supernovae are supposed to be. Peter talks about models, but Greg thinks that we would have to know better how supernovae really behave. He notes that an even longer lever arm, e.g. rest B-K, would work better. Peter says he likes B-I because we can do it, as I-band at z=0.3 (i.e. J band) is something we can do right now, while K at z=0.3 starts to get really nasty. (No surprise there, for anybody who's ever looked at an L-band image.)
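The band bookkeeping here is just (1+z) arithmetic; the central wavelengths below are rough round numbers of mine:

```python
# Approximate central wavelengths of the rest-frame bands, in microns.
rest_bands = {"B": 0.44, "V": 0.55, "I": 0.80, "K": 2.2}
z = 0.3

for name, lam in rest_bands.items():
    obs = lam * (1.0 + z)  # where the band lands for the observer
    print(f"rest {name} ({lam:.2f} um) -> observed {obs:.2f} um")
```

Rest I lands near 1 micron, i.e. roughly observer J, while rest K lands near 2.9 microns, out in thermal-infrared L-band territory; hence the "really nasty."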

Peter thinks that this is the best observational way to kill the dust at the moment. Saul thinks we could do it with three Albinonis. (What is the plural of Albinoni?) Peter says he won't believe Albinoni until either we get H and J (i.e. rest B and V), or until we understand the U band a lot better than we do right now.


Etc.

It's 3:30. We've been here a long time. I'm tired. Let's go home.

Saul says that the LDRD went in for expanding low redshift supernova search.

Meeting ends with large food fight.

(Not really, but we thought about it.)