SCP Collaboration Meeting, 2000 June 14, 10:00

The first thing today is to finish up a little bit more on things which have gone on. (This will still go on this afternoon.) The main thing this morning is the list of things which we don't have assigned to anybody, and to figure out who can work on them.

The other topic is that we found out that all our planning for the HST proposal series, which had to do with NICMOS and ACS, may be for naught. Saul heard from Andy Fruchter that perhaps we won't be able to use those instruments substantially during the next HST proposal cycle. We'll talk more about this later.

Tasks to be Assigned

We're going to work from past to future, including past data we haven't gotten to.

Past HST Data

We heard from what Brenda was doing yesterday. Are there other things that need to be done with this?

Ground-based Data

Lightcurve fitting

Start using the Peter/Greg fitting program instead of snminuit; they've found some issues with snminuit. This means that the new program needs to be documented. (This program is not a fully gory gridsearch. It is grid-based, but is fast.) We will have to figure out how to get errors out of the grid output from this program (if we want to quote them).
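
The minutes don't describe the program's grid output, so the following is only a hedged sketch of one standard way to quote errors from a chi-square grid: take the interval where chi-square rises by no more than 1 above its minimum (appropriate for one interesting parameter). The parabolic chi-square below is made up for illustration.

```python
import numpy as np

def grid_errors(param, chi2, delta=1.0):
    """1-sigma interval from a 1-D chi-square grid: the region where
    chi2 stays within `delta` of its minimum (delta = 1.0 for one
    interesting parameter)."""
    imin = int(np.argmin(chi2))
    best = param[imin]
    inside = param[chi2 <= chi2[imin] + delta]
    return best, best - inside.min(), inside.max() - best

# toy parabolic chi2 centered on 1.0 with sigma = 0.1
p = np.linspace(0.5, 1.5, 1001)
best, lo, hi = grid_errors(p, ((p - 1.0) / 0.1) ** 2)
# recovers best ~ 1.0 with errors ~ 0.1 each side
```

The same thresholding generalizes to two parameters (delta = 2.3 for 68.3% joint contours), which is what the Omega-Lambda fitting below would need.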

Omega-Lambda Fitting

Update the Omega-Lambda fitter. Take the output from the fitting program (probability contours, not just magnitude/error) and use those in Omega-Lambda fitting. Perhaps Ariel has some fitting routines that can be used. (This is perhaps lower priority.)
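
As a hedged illustration of the "probability contours" idea (the actual fitter's interfaces aren't specified in these minutes, and the Gaussian likelihood below is a placeholder): one common approach grids the probability in the (Omega_M, Omega_Lambda) plane and finds the density level enclosing 68.3% of the total.

```python
import numpy as np

def contour_level(prob, frac=0.683):
    """Density level whose enclosed probability is `frac` of the
    total, for drawing confidence contours on a 2-D grid."""
    flat = np.sort(prob.ravel())[::-1]       # densities, high to low
    cum = np.cumsum(flat) / flat.sum()
    return flat[np.searchsorted(cum, frac)]

# toy Gaussian "likelihood" on an (Omega_M, Omega_Lambda) grid
om, ol = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 2, 201))
chi2 = ((om - 0.28) / 0.08) ** 2 + ((ol - 0.72) / 0.25) ** 2
prob = np.exp(-0.5 * chi2)
level = contour_level(prob)
inside = prob >= level                       # cells inside the 68.3% contour
```

Feeding in full probability grids from the lightcurve fitter, rather than a single magnitude and error per supernova, is exactly what would let non-Gaussian fit errors propagate into these contours.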

Documentation Management

Documentation of our procedures and methodology. Not just "how to run this program", but documentation of what we do and how it works. Some of this exists, but it needs to be organized, and put where people can find it. Some of it still doesn't exist. Related issues:

Exposure Time Calculator

For purposes of working up plans and proposals, Greg says that we go around from place to place running their exposure time calculators, but we don't really know what they're doing, or whether they're making any of the same assumptions. For our own comparisons, we need to homogenize all of this, so that we can make reasonable comparisons ourselves. It sounds like we'll need to write our own modelling/exposure-time calculators. Greg thinks that this would be a tractable problem, and would be easier than trying to reverse engineer and compare all the other telescopes' calculators. Greg would also like to see us mate this with our template SN spectra, so that we can run the calculator on what we are really hoping to observe. This might also be able to feed Ariel's Monte Carlo work. It'd be nice to do this for spectroscopy as well as photometry, but that is harder. This would need confidence regions/error bars/etc.

Chris warns that with these calculators, you can get into a lot of detail quickly. For instance, grism efficiency, chip efficiency, etc. Sometimes this information may be unavailable, or hard to get ahold of.
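
As a sketch of what a homogenized calculator might start from (all the rates and instrument numbers below are placeholders, not real telescope parameters), the standard CCD signal-to-noise equation can be written down once and inverted for exposure time, instead of reverse engineering each observatory's tool:

```python
import math

def snr(src_rate, sky_rate, read_noise, npix, t):
    """CCD point-source signal-to-noise for a t-second exposure.
    src_rate:   detected source electrons/s in the aperture
    sky_rate:   sky electrons/s per pixel
    read_noise: electrons RMS per pixel
    npix:       pixels in the photometry aperture"""
    signal = src_rate * t
    noise = math.sqrt(signal + npix * (sky_rate * t + read_noise ** 2))
    return signal / noise

def time_for_snr(target, src_rate, sky_rate, read_noise, npix):
    """Invert snr() by bisection: exposure time needed for `target` S/N
    (snr is monotonically increasing in t, so bisection is safe)."""
    lo, hi = 1e-3, 1e8
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if snr(src_rate, sky_rate, read_noise, npix, mid) < target:
            lo = mid
        else:
            hi = mid
    return hi
```

Mating this with template SN spectra would mean computing `src_rate` by integrating a redshifted template through each instrument's filter and efficiency curves, which is where Chris's warning about grism and chip efficiencies comes in.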


Rates Stuff

Reynald has a rate paper which includes all the SN searches through December 1997. The missing sample is March 1998; that run was a little chaotic with bad weather.

  • For current rates paper, determine if all fields are photometrically calibrated, and turn APM magnitudes into real magnitudes. (Rob and Reynald are looking into this.) Look into bootstrapping.
  • Peter and Saul need to give Reynald comments on the paper.
  • Figure out what can be done with rates for the 1998 March run; figure out what was searched (weather), and worry about not having gotten all the spectra (or did we?). This sounds fairly contentious. This set won't go into the current paper.
  • Do we need to do anything different in our search/observing strategy to be able to handle future sets? How do we get rates for z beyond 1? Is the hit worth it? Calibration of fields: might photometric calibration be necessary later?
  • Look at supernova galaxy position. Andy Howell has worked on this. Isobel says that Richard Ellis looked at this as well. This should be updated.
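
As a schematic of the bookkeeping behind a rate measurement (Reynald's actual analysis is surely more elaborate, with per-field control times, detection efficiencies, and proper units), the zeroth-order estimate is counts divided by effective control time and surveyed volume, with a Poisson error:

```python
import math

def volumetric_rate(n_sne, control_time_yr, volume_mpc3):
    """Schematic volumetric SN rate: supernovae found, divided by the
    search's effective control time and the comoving volume surveyed.
    Returns (rate, poisson_error) in SNe / yr / Mpc^3."""
    denom = control_time_yr * volume_mpc3
    return n_sne / denom, math.sqrt(n_sne) / denom

# made-up numbers for illustration only
rate, err = volumetric_rate(25, 0.05, 2.0e6)
```

The photometric-calibration bullet above matters here because the volume actually surveyed to a given limiting magnitude depends on turning the APM magnitudes into real ones.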

Galaxy Morphology Correlations

Richard Ellis has a paper on this. Greg is working on this from the nearby search (he'll probably do the Galaxy luminosity function).

Multiband Stretch Method Plus

How can we statistically describe the stretch approach, or whatever approach we're using, when fitting in more than one band, so that we really know what the final error is on the stretch-corrected magnitudes? In the past this has been done simply; more sophistication has been suggested. Once upon a time Alex and Greg were working on this, but Greg believes that things need to go back and be done more carefully. This includes determining what the templates are, how to use them statistically, where you start and finish, and how to handle multicolor fitting.
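
As one hedged sketch of multicolor fitting with a shared stretch (the `template` function below is a stand-in Gaussian, not our real s=1 template, and real data would need K-corrections and time dilation): grid the stretch and solve each band's peak flux analytically at each trial value, which keeps the search one-dimensional and yields a chi-square curve that the grid-error machinery above can use.

```python
import numpy as np

def template(phase):
    """Stand-in for the s=1 lightcurve template (flux vs rest-frame
    day); a real fit would use the measured SN Ia template."""
    return np.exp(-0.5 * (phase / 10.0) ** 2)

def fit_stretch(bands):
    """Fit a single stretch s simultaneously to several bands.
    `bands` is a list of (phase, flux, err) arrays, one per band; the
    per-band peak flux is solved analytically at each trial s, so
    only the stretch itself is gridded."""
    s_grid = np.linspace(0.6, 1.4, 401)
    chi2 = np.zeros_like(s_grid)
    for k, s in enumerate(s_grid):
        for phase, flux, err in bands:
            m = template(phase / s)          # time axis scaled by stretch
            amp = np.sum(m * flux / err**2) / np.sum(m**2 / err**2)
            chi2[k] += np.sum(((flux - amp * m) / err) ** 2)
    return s_grid[np.argmin(chi2)], s_grid, chi2

# synthetic two-band test with true stretch 1.1 and no noise
ph = np.linspace(-10.0, 30.0, 15)
bands = [(ph, 2.0 * template(ph / 1.1), np.full_like(ph, 0.01)),
         (ph, 0.8 * template(ph / 1.1), np.full_like(ph, 0.01))]
s_best, s_grid, chi2 = fit_stretch(bands)    # s_best ~ 1.1
```

The statistical question in the paragraph above is precisely what this sketch glosses over: correlated errors between bands, template uncertainty, and the choice of fit window all feed into the final error on the stretch-corrected magnitude.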


U-band
U-band is weak at the moment. There is a paucity of available U-band data, and we depend currently on the kindness of strangers. We have no absolute calibration in U-band, nor do we expect to get any. Publications are being held back, Peter says, putting us at a disadvantage. There is data from the SCP nearby campaign last Spring, which will need to be pushed on to see if it can help with this.

Using Spectra to do Lightcurve Fitting

Chris brings up the idea of just using spectra. Peter says he has a good s=1 template spectrum at many epochs. The worry is that the quality of the spectra is not good enough at high redshift (host-galaxy contamination and how to remove it), and probably never will be. Pushing the spectroscopy to answer that question is probably harder than just doing photometric followup. Peter thinks that for the nearby supernovae, this is something that we might want to flesh out more. The idea is that you still need a photometry point to get the absolute magnitude, but you can get the date and stretch from the spectrum. We still have to figure out how well we can measure s and the date. This can maybe be determined from the nearby campaign, and then maybe from the first year of the SN factory. Far future.

Day 40+ plateau issue

Peter and Greg are working on this. The idea is that the C/O ratio will change the peak/tail ratio in the lightcurve, even after stretch is taken out. The problem is that only about three supernovae have good enough data to answer this question. Peter and Greg think that they have enough nearby SNe from published data to start probing this question.


Dust
How much have we nailed the dust questions? What else can we be doing with it, and writing up our current understanding? This includes variations in the extinction law (from standard red dust to grey dust). What can be done in the future? E.g. a statistical study of how much you might be able to differentiate grey dust from normal dust using blackbody-like Type II supernovae. Decide if dust is still a real concern, or if it's turned into a straw man. (People seem to think that it's not a straw man.)


Tasks include reviewing the literature, asking if we can do anything with our nearby data (or perhaps some well observed high-redshift SNe such as Beethoven), and thinking about future observations we may need to do. Peter has some UV followup of SNe with HST that will happen next Spring.


There are problems which may be interesting to look at, such as what would be a good way to measure w. We should be on top of all the other techniques that are out there, and how our work fits into all of that; e.g., is somebody going to do something better than us before we can do it? Or are the observations complementary? What new strategies might be pursued?

Tools for planning and doing runs

This is related to the "Exposure Time Calculator" topic above.

Communication issues

Grad student thesis topics

Keep track of who's working on what, what their big goal is. Make sure that subjects don't conflict. Also, topics which are good thesis topics that we may be looking for a grad student to work on.