SCP Meeting Notes, 1997 July 16

Susanna is Growing Old

We start by singing Happy Birthday to Susanna, and cut the, get this, thirty-two dollar (!!!) cake for distribution and consumption. Rob made a mess and spilled it all over himself.

How Dim is SN9784's Host?

Greg talks about setting limits on the surface brightness of the 9784 host. Greg re-coadded all the HST data, with better CR rejection. He says that there is sort of a blip ~1" away, a projected ~70 kpc (using H0=65 km/s/Mpc, q0=0.3). (How sensitive is this size to the cosmology? If H0 goes down and q0 goes up, the effects may cancel. This should be checked if we're really going to say anything about this 70 kpc.) That is far, and there are enough blips in the image that finding one this close by chance seems reasonably probable. If this is the host, then it's really far from the SN. Greg says that there are a couple of known examples of galaxies this big. (Gerson says that the 9621 host was about 0.3" away. Neither number should be considered precise.) In the region right around the SN, the surface brightness limit is low... the actual number is not currently known because of vagaries of things like K-corrections and so forth. (They used the SN K-correction, which is probably too big a correction. Just using a (1+z) correction makes the limit somewhat dimmer.)
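(For reference, a sketch of the conversion that the 70 kpc depends on, so the H0/q0 sensitivity can actually be checked. This assumes the standard Lambda=0 Mattig angular-diameter distance; the SN redshift isn't recorded in these notes, so it's left as a parameter, and the function names are just illustrative.)

    import math

    C_KM_S = 2.998e5  # speed of light, km/s

    def ang_diam_dist_mpc(z, H0=65.0, q0=0.3):
        """Angular diameter distance (Mpc) for a Lambda=0 Friedmann model,
        via the Mattig relation; H0 in km/s/Mpc."""
        d_l = (C_KM_S / (H0 * q0 ** 2)) * (
            q0 * z + (q0 - 1.0) * (math.sqrt(1.0 + 2.0 * q0 * z) - 1.0))
        return d_l / (1.0 + z) ** 2  # D_A = D_L / (1+z)^2

    def projected_kpc(theta_arcsec, z, H0=65.0, q0=0.3):
        """Projected separation (kpc) corresponding to an angular offset."""
        theta_rad = theta_arcsec * math.pi / (180.0 * 3600.0)
        return ang_diam_dist_mpc(z, H0, q0) * 1000.0 * theta_rad

    # Compare, e.g., projected_kpc(1.0, z, H0=65, q0=0.3) against
    # projected_kpc(1.0, z, H0=50, q0=0.5) to see whether lowering H0
    # and raising q0 really do cancel, as suggested above.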

Gerson thinks this smudge may be something that was only there in one frame (one of the april26 frames)... Greg thinks that would be rejected. He will try taking out the april26 data. Greg is currently in the process of trying to subtract an HST PSF from the HST images.

Extinction, Finding with Good Spectra; Spectrum Strategy

Greg is also thinking about the next run and possibilities for getting at the extinction. The hope is that with better flux calibration of the spectra, we might be able to use them to get the extinction. You have to get the calibration right; perhaps you can use nearby field stars to correct for atmospheric dispersion. Of course, you also have to subtract, or at least estimate, the host. Perhaps templates together with photometric host colors will be good enough for that.

One of the biggest uncertainties in our final number for the distance modulus of a high redshift supernova is the extinction. The problem is that the error in the R-I color is not small, and the error in the color gets multiplied by roughly four to convert it to an extinction. Perhaps a good strategy for the next run would be to get enough SNe that we can afford to throw away the ones we think are reddened, and to do careful enough spectrophotometry to identify the reddened ones.
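(A minimal sketch of that factor-of-four error propagation, assuming a standard extinction law with a coefficient of roughly 4, i.e. A_B ~ 4.1 E(B-V); the exact coefficient appropriate to our observed R-I colors would have to come out of the K-corrections.)

    R_B = 4.1  # assumed extinction-law coefficient: A_B ~= R_B * E(B-V)

    def extinction_error(sigma_color, r_coeff=R_B):
        """Propagate a color (reddening) error into an extinction error."""
        return r_coeff * sigma_color

    # e.g. a 0.05 mag color error becomes ~0.2 mag of extinction error:
    #   extinction_error(0.05)  ->  0.205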

To check if we can do this, we have to decide if the nearby templates are good enough for this sort of thing. We need Peter to look at the templates and decide which ones are good, and whether there are good models. Peter says that he's never seen spectrophotometry done to better than a few hundredths; 0.05 in color is somebody doing really well. And that's for 12th-13th mag supernovae... although the error is systematic, we can't do things like take short spectra and then do a bunch of nearby stars. Greg thinks we should consider doing multislit spectroscopy to observe the supernova and nearby stars at the same time. There are problems of getting your reference objects centered the same way as the SN. There is also the problem of being able to make the masks fast enough; we don't have days ahead of time to plan for this. How does it work with LRIS? This warrants investigation.

Peter personally thinks that your best bet is to pick SNe which are far from the core of the host. Carl asks how you know you aren't in a dust lane: well, you don't, but farther from the core means a lower probability of extinction. There is also the idea of trying to do just ellipticals. This may start to be possible in March, when conceivably most (all) of our frames will be fields we've used before. In that case, we can use the previous year's data to decide host colors, and perhaps make some kind of cut that will allow us to increase our Elliptical Fraction.

Greg is going to play around with calculations and simulations to find out how bad this is and how bad it can be, how well we can do, whether it's worth worrying about, what we might do, etc. etc. etc.

Spectrum strategy: the first clear night you get, shoot through as many as you can, to see what you've got. The next two nights, finish off what you didn't get, and go back and nail (with long exposures) the ones you think you want. This might allow us to measure things like extinction.

B-V Color Uncertainty

Gerson has this disturbing plot of B-V vs. stretch for our 95-96 SNe; it is disturbing because the errors are so huge, and the scatter in the data points is not big enough to make those errorbars seem justified. We have to find out whether the peak R and peak I magnitudes that come out of Don's program include the photometric zeropoint errors or not. If so, it could be that the zeropoint errors are all correlated. If this is the case, it means we really need a better measurement of the zeropoint.

The errorbars are still too big even for the Rob-memory version of the zeropoint errors. (This should be checked.) Mostly, we need to track down where these errors are coming from -- Supernova light curve? Zeropoint added in the fit program? Where? If it turns out that we know R-I better than we currently think we do, then we have a better handle on the extinctions than we currently think we do.
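(A toy illustration, with made-up numbers, of the correlated-zeropoint explanation: if the quoted per-point error includes a zeropoint term that is shared by all points, the errorbars can be much bigger than the point-to-point scatter, which is exactly the pattern in Gerson's plot.)

    import random, statistics

    def toy_colors(n=20, sigma_indep=0.03, sigma_zp=0.10, seed=1):
        """Simulate n colors whose quoted errors include a zeropoint term
        that is 100% correlated between points."""
        random.seed(seed)
        zp_offset = random.gauss(0.0, sigma_zp)   # one shared offset for all points
        colors = [random.gauss(0.0, sigma_indep) + zp_offset for _ in range(n)]
        quoted_err = (sigma_indep ** 2 + sigma_zp ** 2) ** 0.5
        return colors, quoted_err

    colors, quoted_err = toy_colors()
    print("point-to-point scatter:", round(statistics.stdev(colors), 3))  # ~0.03
    print("quoted per-point error:", round(quoted_err, 3))                # ~0.10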

Somebody talk to Don and figure out what's going on.

Expected trend. (Let's go through this again.) B-V at Bmax should show a trend with stretch, Peter says. (Is this what he said last week? Or was it somebody else saying that last week?) Well, OK, the very extreme ones show it; how about the rank and file? If you look at the nearby SNe, it's a mess, not a curve... and there's reddening... yipes a ripes. Saul thought that there was evidence that most SNe have B-V at Bmax around 0 for all stretches. Peter asserts the dispersion must be at least 0.05. Saul remembers that there was no trend. (Believe me, the discussion was almost as much of a mess as this paragraph of notes is.)

Alex has the answer, somewhere, for Bmax-Vmax.

Upcoming observing time

Saul says reference run is end of November; CTIO search is December 27-28 (?), with a _40_ day gap. (Too much!!) Keck is Dec 30, 31, and January 1. (Do not consider these dates hard and fast.) Mostly, we're worried about this gap being too much, so that we won't catch things on the way up.

Greg is going to look into how we're going to do the next run. Strategy: try to get lots at z=0.5, or really try to get the z=0.8-0.9 ones? Issues of search efficiency, extinction cuts, ability to follow up from the ground and with HST, and so forth. Are we so sad about the galaxy that we want to try to change our time and push to an early-January/late-January search, instead of our current December search?

Alex's Templates

Alex is trying to build a template light curve with which we can use stretch to determine everything. You have a V-band stretch which is proportional to the B-stretch, which is proportional to peak B, which is proportional to peak V, which is proportional to the national debt. Alex was trying to come up with an errorbar on the template. He started by assuming that the correlated errors are stationary, so the point-to-point error depends only on the time difference; it's like a two-point correlation function. He's checked this, and it looks like it works. He's generated a covariance matrix for the template. He's adding that to the covariance matrix for the Hamuy data, and running the fits. (Adding how?) He thinks everything is running now?
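(A sketch of how a stationary two-point function turns into a template covariance matrix. The exponential form below is purely illustrative, not Alex's measured correlation function.)

    import math

    def template_covariance(epochs, corr_fn):
        """Covariance matrix for template points at the given epochs,
        assuming stationary errors: Cov(i,j) = corr_fn(|t_i - t_j|)."""
        return [[corr_fn(abs(ti - tj)) for tj in epochs] for ti in epochs]

    # Illustrative two-point function: ~0.08 mag errors with a 5-day
    # correlation time (numbers made up for the example).
    def example_corr_fn(dt, sigma=0.08, tau=5.0):
        return sigma * sigma * math.exp(-dt / tau)

    cov_B = template_covariance([-5.0, 0.0, 5.0, 10.0, 10.0], example_corr_fn)
    # Note that the two entries at t=+10 give identical rows/columns:
    # duplicated epochs come out 100% correlated, which matters below
    # when the template matrix is combined with the data matrix.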

BEEP BEEP. There is a public announcement over the LBL PA that there is a gas odor emanating from the campus. However, there is evidently no emergency.

"All they are saying is that the Campus stinks." --G. Goldhaber

(Is it a coincidence that this announcement came while Alex, a graduate of UC Berkeley, was speaking about what he was working on?)

The result is that now you have a template with a matrix of errors on the template. Well, really, a 2-point correlation function from which an error matrix can be made.

Alex isn't far enough along to know if there are trends. E.g., are some epochs better than others for determining properties like stretch? However, it does look like the template has a slightly better sigma in V than in B. The size of the errorband on the template is around 0.05 magnitudes in V (for one point), and 0.08-0.09 in B.

So, the next question is, how are you going to use this error matrix for fitting data? (This sounds hard.) Something is going to have to happen to SNminui; it will need another error matrix read in, and dealt with. Saul and Alex talked to Billinger [sp?]. Can you just add the two matrices (model and data) in quadrature? It's hard to tell. He said you could add the matrices in the most obvious way, but you have to do it right. (Woah.) (There's also the fact that since we tend to have several data points all on the same day, when you make your template, if the matrices are to be the same size, you will have to have multiple copies of effectively the same template point. The errors on these multiple copies would then be 100% correlated.)
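(A sketch of the "most obvious way" of combining the two matrices: add the data and model covariances and use the inverse of the sum in the chi^2. This is not a statement of what SNminui actually does; it's just the straightforward construction.)

    import numpy as np

    def chi2_with_model_cov(residuals, cov_data, cov_model):
        """chi^2 = r^T (C_data + C_model)^-1 r, with the template (model)
        covariance evaluated at the data epochs -- so several points on the
        same day share identical, 100%-correlated model entries."""
        cov_total = np.asarray(cov_data) + np.asarray(cov_model)
        r = np.asarray(residuals)
        return float(r @ np.linalg.solve(cov_total, r))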

This all sounds extremely scary. It's also crucial to get this going right away.... Saul says this week, but this author and Alex seem to think that integrating this into SNminui sounds like a big giant hairy effort that will cause things like the fall of Western Civilization.

If Alex has the final templates (version without errors), we could start trying those on Hamuy SNe and our SNe. We have to make sure to include Alex's equation for Alex's stretches in the corrected magnitude if we do this.

Instrumental Corrections

Peter was talking to Rob and Greg about these things. Peter has started looking at these again. Peter is getting the programs into a situation where he can pass them off. Saul wants this to be done by Alex and Matthew so that everybody knows what is going on, and so that the crosschecks which were done before can be done again on the new data.

Peter will prepare a presentation and tell us about it Friday at 1:30. At least Saul, Matthew, Alex, Peter, Greg, and Rob will be present; also others who are having problems with insomnia and need instrumental corrections to help them sleep. (Presumably, some of this will also be documented in Matthew's thesis?)

Omega/Lambda Fitting

Sebastian: the Omega/Lambda program. He's working on it. Rob will give Sebastian a fake zeropoint correlation matrix so that he can test the program; a real one will come later. (Soon.) For things like redshift, Sebastian is using the output of the minui program. Alex asserts that everything you need comes out of the minui program. It gets it from this text file that Don has, which is the closest thing we have to a SN database. This needs to be integrated, somehow, in some centralized place.
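(For concreteness, a sketch of the kind of fit the Omega/Lambda program is doing, assuming the usual effective-magnitude relation m = M + 5 log10(d_L/Mpc) + 25 and a full covariance matrix so that the correlated zeropoint errors can go in off-diagonal. Function and variable names are made up; M and H0 are degenerate and would be fit or marginalized together.)

    import numpy as np

    def lum_dist_mpc(z, omega_m, omega_l, h0=65.0, nstep=1000):
        """Luminosity distance (Mpc) by numerical integration; handles curvature."""
        c = 2.998e5
        omega_k = 1.0 - omega_m - omega_l
        zs = np.linspace(0.0, z, nstep)
        ez = np.sqrt(omega_m * (1 + zs) ** 3 + omega_k * (1 + zs) ** 2 + omega_l)
        dc = np.trapz(1.0 / ez, zs) * c / h0        # line-of-sight comoving distance
        if omega_k > 1e-8:                          # open: sinh conversion
            k = np.sqrt(omega_k) * h0 / c
            dc = np.sinh(k * dc) / k
        elif omega_k < -1e-8:                       # closed: sin conversion
            k = np.sqrt(-omega_k) * h0 / c
            dc = np.sin(k * dc) / k
        return (1.0 + z) * dc

    def chi2(omega_m, omega_l, z, m_eff, cov, abs_mag):
        """chi^2 over the SNe, with a full covariance matrix (correlated
        zeropoint errors go into the off-diagonal elements)."""
        model = abs_mag + 25.0 + 5.0 * np.log10(
            [lum_dist_mpc(zi, omega_m, omega_l) for zi in z])
        r = np.asarray(m_eff) - model
        return float(r @ np.linalg.solve(np.asarray(cov), r))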

Efficiencies

Sebastien also has all the output of the efficiencies. The magnitude of 50% efficiency for the BTC is R around 24.3 or so. (This is the magnitude from the APM calibration.) It's more or less the same for all the quadrants. That is odd, because one would have thought that the efficiency would be substantially lower for quadrant 1.
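(A sketch of how the 50% point can be read off, assuming the efficiencies come from fractions of fake SNe recovered as a function of magnitude; the fitting form and names here are illustrative, not Sebastien's actual code.)

    import numpy as np
    from scipy.optimize import curve_fit

    def efficiency_model(mag, m50, width):
        """Smooth efficiency curve: ~1 at bright mags, 0.5 at m50, ->0 when faint."""
        return 1.0 / (1.0 + np.exp((mag - m50) / width))

    def fit_m50(mag_bins, frac_recovered):
        """Fit the recovered fraction vs. magnitude; return the 50% magnitude."""
        popt, _ = curve_fit(efficiency_model, mag_bins, frac_recovered, p0=[24.0, 0.3])
        return popt[0]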

R vs. I: it's hard to quote I-band efficiency cutoffs, because the APM stars are R-band calibrators. What we really want is S/N for supernovae of a given redshift, so we know at what redshifts we really want to be looking in R versus I.

There is some vagueness associated with the efficiencies of subtractions with >2 references. It looks like the efficiency goes completely to hell. Sounds like a bug; Sebastien is trying to diagnose where the problem is. Right now he thinks that it may be in the searchscan program.

Time Dilation

Gerson plots Time Dilation Stretch Factor vs. 1+z. It shows that our data are way more consistent with time dilation than with tired light. You get a reduced chi^2 of 1 for cosmology, and a reduced chi^2 of 10 for tired light. This "chi^2" is the sum of the squares of the data residuals divided by the errors (data error plus model error). Gerson is somewhat dubious about his way of measuring the "model error" in cosmology.
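(The two hypotheses being compared, for the record: in an expanding universe the observed light-curve timescale is stretched by (1+z), while tired light predicts no stretching. Below is a sketch of the comparison, under the reading that the model error is added to the data errors in quadrature; the notes above are ambiguous on exactly how it enters.)

    import numpy as np

    def reduced_chi2(z, widths, sigma_data, sigma_model, model="expansion"):
        """Reduced chi^2 of the observed timescale (in units of the rest-frame
        template width, i.e. already stretch-corrected) against each hypothesis."""
        z, widths, sigma_data = map(np.asarray, (z, widths, sigma_data))
        expected = (1.0 + z) if model == "expansion" else np.ones_like(z)
        sigma2 = sigma_data ** 2 + sigma_model ** 2
        return float(np.sum((widths - expected) ** 2 / sigma2) / len(z))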

There was some discussion of trying to measure Omega from Gerson's time dilation plots. We can't do it now with this data set. Alex asserts that in the long run, this is the right way to do it, because you can (via your efficiency calculations) calculate out your Malmquist bias. Gerson and Peter, however, don't like this because you have to assume that your distribution stays the same at all redshifts.

"If you people buy this, the paper should come out yesterday." --G. Goldhaber

There was some concern about extrapolating the models back to redshift z=0. The two models should agree there, and because of how it was done, each model should reproduce the Hamuy distribution quite well.

OK, then the paper's not going to go out tomorrow. --G. Goldhaber

Miscellaneous Reports

Robert is working on more fiducial stuff. He is looking into the ratio between R-band magnitudes of the frames as a function of fiducial color.

Mike is working on making a catalogue of all our fields, the depths of the fields, what colors they are in, and so forth. There are concerns about looking at the grid names, since some physical fields on the sky have multiple names. There is lots of mess. Mike should talk to any number of people about ways of doing this.... Rob and Matthew, anyway, are claiming to have code relevant to this.

Rob says that there will be zeropoints and lightcurves for most of the 1997 Set D supernovae in the next couple of days. The 95 and 96 sets are basically done; Set D will be soon. They can be mixed into our Hubble diagrams.

Carl says that Felip is working on PSF fitting with DAOPhot. We have some evidence, and Carl believes, that the PSF fitting didn't do anything for us before, and just made things more unstable. In any case, we don't want to do this until we've shown that it might actually help.

Saul needs comments on the z=0.83 paper.

Conclusion

The meeting went clear through tea! That's two hours. Way too long, folks.