SCP Meeting Notes, 1997 August 26


Multichroic Images

As we come in, Saul is looking at a spec diagram from somebody he talked to who is designing a "multichroic" camera for Subaru (8m telescope on Mauna Kea) which does imaging in 16 bands at once using multiple dichroics. Right now the field of view is 5'. Apparently the design isn't finalized, and Peter is supposed to talk to them about what filters he would ideally like to have for observing SNe. There is some question, though: if we're on an 8m telescope, why not just take a spectrum? One suggested answer is the difficulty of flux calibrating a spectrum.

Saul is fantasizing about taking this camera (or, rather, one we build which has a bigger field of view), doing a reference run and a search run, and being done. You have your date of max, your luminosity class ("stretch" equivalent), your peak magnitude, and even your redshift. Of course, if you can do this, then you can do the same thing by doing a search and getting a very good spectrum....


Saul's Cosmology Meeting Report

Saul mentions other things from the cosmology meeting last week. A huge amount of time was spent on the Arp/Burbidge cosmology, in which some redshifts are not due to expansion of the universe but to objects that don't start with mass, gaining mass as time goes by. All of this is based on statistical studies which show "lots" of "clear" associations between objects of different redshifts. Apparently nobody can say that they have done the statistics and that Arp et al. are wrong, because nobody has done the statistics on the current new set of data.

Jerry Ostriker gave the summary talk, and his conclusion was that Omega is 0.2 and there's a lot of Lambda, but maybe not enough to make the universe flat. (Supernovae were not on his list of what you're allowed to use in determining all of this... supernova results are clearly not useful because metallicity affects supernova brightness, so evolution is convolved in, etc. etc.) Peter says he's started a paper with Ed and a grad student at the University of Oklahoma where they are doing a bunch of models with lots of different metallicities. They are also going to go back and look at the state of the art in galaxy/metallicity evolution as a function of redshift; he thinks that really there's no difference between here and z=1. Saul mentions that on the other side, Peter Höflich says that there is a huge effect out to redshift 1, but also that there should be a big effect on the spectrum, which we can address.

Greg mentions that this may be an argument for doing ellipticals, where metallicity effects should be smaller. Saul says it would be nice if we started keeping track of where studies of this come from - figures, etc. - so that we can reference these things well. We also need to start on the identification of our host galaxies, which we can do based on our spectra (where we have a host spectrum, or where the SN was far enough out that we could extract a good amount of galaxy light not contaminated by SN). Greg is talking about basing it on two eigenvectors: the base elliptical component, and the nebular/emission-line component. (Peter says that E's and S0's are all lumped together as far as finding Ia's is concerned.)
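A minimal sketch of what such a two-eigenvector fit might look like (the function and template inputs here are illustrative, not anything in our codebase):

```python
# Sketch of the two-eigenvector host decomposition Greg describes.
# The eigenvector arrays are hypothetical inputs; in practice they would
# come from a library of galaxy spectra on a common wavelength grid.
import numpy as np

def fit_host(spectrum, elliptical, nebular):
    """Fit spectrum ~ a*elliptical + b*nebular by linear least squares.

    All three arguments are flux arrays on the same wavelength grid.
    The ratio b/a is a crude emission-line strength, which is what
    would separate E/S0 hosts from star-forming ones.
    """
    basis = np.column_stack([elliptical, nebular])
    (a, b), *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
    return a, b
```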

Saul says we should also get the colors of the host galaxies. Mike (maybe with Greg) is going to follow up on this.


Alex and his Templates (and Extinction)

Alex and templates. For some of the SNe where we don't have two-color data, we probably need a specially designed template where you don't do any reddening correction. If there is any mixing between stretch and reddening, and you have a template which assumes you will be taking care of the reddening, then it's not appropriate to take that template, just cut out the reddening part, and use it on a set of SNe where you won't do a reddening correction.

Ways of proceeding: we could do all SNe without any reddening correction, looking for a ridgeline; all you'd use color for is to throw away some reddened objects, to improve your ridgeline. Or you could say that you have to have the color (R-I) measured to better than a certain cutoff before claiming that you have color information on something. Alex says that E(B-V) is less well constrained at some epochs than at others (error bars on the template).

Gerson has taken the current B-V estimates (K-corrected R-I), converted them to a reddening assuming that Bmax-Vmax is supposed to be 0, and multiplied by 4 to turn E(B-V) into AB; he gets too many SNe which, once corrected, are way too bright. Gerson says that 2 would be a more reasonable factor, to keep things in a reasonable range.

Note that this ratio of 4.1 between AB and E(B-V) was derived for stars, and it may be different for SNe. Also, the K-correction will change in the presence of extinction. Also, what you really want is Bmax-Vmax instead of B-V at Bmax. There is debate over whether or not this is a significant difference; Peter says you can easily move a few hundredths of a magnitude in two days, and since you multiply by 4 this can start to become important.
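To make the arithmetic concrete, here is a minimal sketch of the correction under discussion, using the numbers above (the intrinsic Bmax-Vmax of 0 is exactly the assumption Alex's stretch-dependent color will replace):

```python
# Gerson's correction: E(B-V) from the observed color, times R_B = A_B/E(B-V).
# R_B = 4.1 is the stellar value quoted above; 4 and 2 are the other
# factors under debate.
R_B = 4.1

def corrected_B(B_max, V_max, intrinsic_BmV=0.0):
    """Return Bmax corrected for extinction, using A_B = R_B * E(B-V)."""
    E_BmV = (B_max - V_max) - intrinsic_BmV
    return B_max - R_B * E_BmV

# Example: a few hundredths of a magnitude of color error, multiplied
# by ~4, shifts the corrected peak magnitude by over a tenth of a mag.
print(corrected_B(22.50, 22.40))  # E(B-V)=0.10 -> A_B=0.41 -> ~22.09
```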

Peter wants to know what happens when you don't apply a stretch correction but do apply this extinction correction. Gerson doesn't remember, but believes that you get the same result. (The van den Bergh procedure is the same thing: don't do a stretch correction, just do a single extinction correction, which handles stretch and extinction all at once.) All of this, of course, requires further investigation to get to the bottom of. One concern is who can give us the numbers for the predicted Bmax-Vmax (or whatever). Alex can return a Bmax-Vmax for a given stretch now, which he will give to Gerson to use instead of assuming that Bmax-Vmax is supposed to be 0.
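In symbols, the degeneracy under discussion is between a two-term correction and a single color term (schematic only; alpha, beta, and c_0 are generic coefficients, not fitted values):

$$ m_{\rm corr} = m_B + \alpha\,(s-1) - R_B\,E(B{-}V) \qquad \text{vs.} \qquad m_{\rm corr} = m_B - \beta\,\bigl[(B_{\rm max}-V_{\rm max}) - c_0\bigr] $$

The single-term (van den Bergh-style) version can mimic both effects at once if stretch and intrinsic color are correlated; c_0, the assumed intrinsic Bmax-Vmax, is what Alex's stretch-dependent prediction will supply.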

Peter does not believe that extinction effects will be a strong function of time (e.g. the extinction effect on the K-correction). In other words, the extinction won't affect the shape of the lightcurve much.


NICMOS Proposals

Peter and Greg, NICMOS HST proposals. Greg has a number of scenarios, and Peter has some data which will tell him which ones work. The idea is to get a better handle on the reddening with these observations. On filter functions: the HST filters are unlike most ground-based J-band filters. The wide filter is huge, 6000 angstroms wide. There are also narrower filters, which means you lose about a third of the light, but you are no longer overlapping with other filters. There is a third scenario using F110W on the z=0.5 SNe and F140W on the z=0.9 SNe. The filters don't match well with any ground-based filters, but here the z=0.5 measurement matches the z=0.9 measurement.

Peter looked into the lever arm for getting a handle on the extinction. He plotted R, I, J11 (F110W), and J14 (F140W) magnitudes of supernovae as a function of z. (All of this assumes Vega is magnitude 0 for all SNe... but it's somewhat moot, because relative differences are all we care about. We're talking extinction, not S/N.) J11, J14, and I were all fairly close; none did anything wacky.

He then plotted three different colors at two different extinctions: 0 and E(B-V)=0.2. Result: you get twice (or a little more) the magnitude difference due to extinction for R-J11 as for R-I. There is not a whole heap of difference between R-J11 and R-J14. At a redshift of 0.75, you're looking at 0.5 magnitudes at E(B-V)=0.2 for R-J14; it's more like 0.4 magnitudes for R-J11. Slightly better for J14. So the question comes back to the signal-to-noise difference.
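A compact way to state this comparison (my notation, not from the meeting): define the lever arm L as the color change per unit E(B-V); then the extinction constraint from a color measured to precision sigma_color is

$$ \sigma_{E(B-V)} = \frac{\sigma_{\rm color}}{L}, \qquad L \equiv \frac{\Delta(\mathrm{color})}{\Delta E(B{-}V)} $$

From the numbers above, L is about 0.5/0.2 = 2.5 for R-J14 and 0.4/0.2 = 2.0 for R-J11 at z=0.75, versus roughly half that for R-I.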

Note that at a redshift of 0.9, you want to use I-J*, not R-J*, because there B maps to I, not R. So at a redshift of 0.9, you want to think about I-J14. Peter says that R-J11 at 0.5 and I-J14 at 0.9 are both slightly under 0.5 magnitudes.

You also want to compare I-J11 and I-J14 at 0.9; which gives a better handle on extinction? Peter says I-J14 is 0.4, and I-J11 is 0.25, both at z=0.9. So J14 is nearly twice as good. (Note that J11 was in the original NICMOS proposal.)

Greg talks about the S/N for these filters, assuming NICMOS-1. For the wide filters, after the SN has faded 1 mag from peak, J11 gives 3% photometry in 1 orbit (2000s). For a narrower filter (which doesn't match any ground-based filter at these redshifts), you get about 6% photometry. The only reason you'd do this is if the extinction lever doubled. For a redshift of 0.5, Greg asserts that the S/N will be better for F140W than F110W, because the NICMOS efficiency ramps up. Peter says the spectrum is flat there, so it's not ramping down enough to counteract that.
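Plugging in Greg's numbers shows why the lever would have to double for the narrow filter to break even (a sketch; it treats the quoted single-band photometric error as the color error for simplicity, and the doubled lever is the hypothetical break-even condition, not a measurement):

```python
# Constraint on E(B-V) = color error / extinction lever arm (see above).
def sigma_EBV(sigma_color, lever):
    return sigma_color / lever

wide   = sigma_EBV(sigma_color=0.03, lever=0.4 / 0.2)  # 3% photometry, lever from Peter's plot
narrow = sigma_EBV(sigma_color=0.06, lever=0.8 / 0.2)  # 6% photometry, hypothetical doubled lever

print(wide, narrow)  # both ~0.015: the narrow filter only breaks even
```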

For the redshift 0.5 guys, originally we weren't going to do any NICMOS at all. Now the question is: should we do these guys in J11 (F110W)? Does that give us enough better a handle on extinction to make it worth doing? R-I vs. R-J11 at a redshift of 1/2: it's twice as good to use J11 as to use I. Peter says that it's also better to use J14... but that relies on his models out there. He says he has enough data to make him believe his models out there, where the wiggles and bumps are small and the spectrum is very much like a blackbody.

Saul asks, finally: can we come up with a compromise so that the observations are useful even if we don't know the redshift a priori? The conclusions scattered in the previous few paragraphs don't... well, let me correct that (as I write multiple paragraphs in parallel): it seems that F140W is always the better filter to use.

The next question for a redshift-independent method is F547M, which matches U at a redshift of 0.5. If we don't know the redshift, is it worth risking that we have a redshift 0.9 guy and doing this anyway? Can we afford the time? It sounds like this is our plan: in any event, F675M, F814W, and F140W are the filters that we'd use. If we know the redshift ahead of time, we'd also do F547M on a redshift 0.5 guy. (F110W is out the window.)

Then there's also the matter of the J-band deficit, which occurs at 1.2 microns... but even at redshift 0.5, that's outside our filter.


Reynald's CFHT Proposal

Next topic, E-mail from Reynald about doing a search with data taken 1 month apart at CFHT. Saul thinks we could use this as a practice run; we may get some spectra with the WHT 4m.

The proposal for CFHT is due Friday or Saturday, and we have to decide what to put in. This proposal is _not_ going to be separate France and Canada proposals, but some sort of combined proposal. Saul says that the goal should be to get an entire search at CFHT: reference, search, spectroscopy, and followup. Four nights of 8Kx8K for reference/search, and then more time on other instruments for spectroscopy and followup. Saul says to put together the proposal we did before and pump up the time so it does everything (except Keck spectroscopy and HST followup for the very highest redshifts?), and to schedule it at the same time as our CTIO search so that (a) we have lots of backup telescope time, and (b) we have way more to do at once than we have manpower to keep track of. On point (b), this would require the people in France getting more involved, both in going to telescopes and in manning the search. (And, of course, we still need another post-doc.)


Miscellaneous Reports and Such

Saul says that Greg should start working with Alex to learn how his code works... Greg says that they did do it once already..... Errr.... It seems that they talked about SNminuit already, but not the template code yet. Alex says it's really straightforward, but we all know that looking at other people's code is NEVER straightforward, even if it's "hello world."

Sebastien has classified all the candidates we had on the previous run as possible AGN, SN, or something else. He has all the candidates, with their different subtractions and lightcurves, in a gigantic thick binder. Some of them were also found in a 5-day going-down subtraction. The bottom line is that we are not very efficient. Perhaps the problem is that we don't have good enough spectroscopy (i.e., we didn't get spectra of some of them, and some of them had ugly-looking spectra). 9767 is one that had a decent-looking lightcurve, and there are a couple of others. We should talk more about what to make of these guys.

Robert: Padé approximation... he has stopped trying to approximate and is trying to make the old code faster; he's rewriting Alex's code in C. The Padé approximation has been put off, as it looks like it will be a lot more work. Peter warns about exponentially increasing pages worth of math to keep track of all of this.
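For reference, the [m/n] Padé approximant Robert had been pursuing is the rational function whose Taylor expansion matches that of the target function f through order m+n (a standard definition, not a description of his code):

$$ f(x) \;\approx\; \frac{P_m(x)}{Q_n(x)} \;=\; \frac{p_0 + p_1 x + \cdots + p_m x^m}{1 + q_1 x + \cdots + q_n x^n} $$

The m+n+1 coefficients are fixed by those matching conditions, and it is presumably that coefficient bookkeeping which generates the pages of math Peter warns about.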