Deepsearch Meeting Notes, 1999 August 25

Frikkin' meeting is at 2:00. Last week we got yelled at because we spilled over into the 3:30 tea. I had hoped our response would be to keep the meeting down to a more reasonable time, but oh well.


Proposal update

Saul is updating us on proposals. CFHT will be done at the end of this week. Reynald is going to update the last proposal, saying we want a little more time, and saying that we're going to try Keck AO work, so we need more SNe to make sure we get natural guidestars. Saul said that he told Reynald that we want the search to be April/May, but we'll see if that communication happened or not.

The debate is being raised about whether or not we can really get enough supernovae close to natural guidestars to do it. Greg thinks it's impractical for a deep search. Saul says of course we'd really like to get Subaru time to get that kind of rate; apparently, Subaru people told Saul that we could put in proposals that we will get "some" time, but he doesn't want to count it. Saul is estimating that 20% of the area of a search is close enough to natural guidestars.

For HST, we have to decide what we want to do a year from the coming fall. It sounds like we aren't going to do anything other than (a) improve statistics around z=1.2, and (b) add some SNe at z=0.8 to fill in that hole. With HST in particular, though, we're going to say we'll use the I-band magnitude to get stretch and time of max, but that we'll use J-band from Keck and maybe Gemini to get peak magnitudes.

(Saul is leaving town a week from Friday, and will be gone the week before the proposals are due.) ("Why don't you just cancel your trip?" Mike Levy asks.)

Keck proposal will be very much like the last one. Isobel is going to do a first cut at updating it. Saul thinks that the only thing different would be to add in a request for a couple of nights of AO... which, Rob and Greg wonder, we thought was going to be a second proposal. Saul notes that the science goals for both proposals will have so much in common that maybe they should be just one proposal. Saul is suggesting that Brenda should write the AO proposal. The Keck proposal is due the 16th, but Saul will be back the 15th. (He says he will be in places where he can E-mail.)

Isobel is also going to try to update the VLT proposal. Greg thinks it should perhaps focus more on the 0.8's. Greg's argument is that you want to see a continuum of supernova points going along the cosmology, and we will need to fill in that region. There is also the practical issue that it takes way too much time to really do a z=1.2 well on the VLT, while it's not so hard to do a z=0.8 well.

We have decided not to go for an explicit Type II search, although apparently there are some ideas about hiding it inside other problems.

Finally, there's the other HST proposal Peter is working on with Andy Fruchter. Apparently, Andy is mixing in GRBs as well.

Saul is back working on the main HST proposal. He notes that we want to show something related to the reduction of the current HST data, even if it's very preliminary.

Dan Kasen's Satellite Simulations

He's been working on simulations for SNAP/SAT, trying to get our feet on the ground in some of the numbers about it. He talks about why we're simulating this. Reasons have to do with design considerations, the quality of results we might get, and observation strategies. He's looked into several issues.

Dan talks a little about the simulator that he's built. He spent a lot of time making a tool to answer these questions, and trying to make it flexible and useful. SNAP-SIM is the name of the thing he wrote. It's a big C++ program, with classes for the Universe, the Telescope, the Analyzer, and things like Spectrum and Filter.
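
To give a flavor of the layout, something like the following skeleton (this is my own guess at the structure based on the class names he mentioned, not Dan's actual code; everything else in it is made up):

    // Hypothetical skeleton of the SNAP-SIM class layout described above.
    // Only the class names come from the meeting; the members are guesses.
    #include <string>
    #include <vector>

    class Filter {            // transmission curve for one passband (e.g. B, V, R, I)
    public:
        std::string name;
        std::vector<double> wavelength, transmission;
    };

    class Spectrum {          // flux vs. wavelength for a source
    public:
        std::vector<double> wavelength, flux;
        double fluxThrough(const Filter& f) const;   // integrate over a filter
    };

    class Universe {          // makes supernovae, redshifts them, applies a cosmology
    public:
        Spectrum supernovaSpectrum(double z, double daysFromMax) const;
    };

    class Telescope {         // aperture, QE, pixel size, read noise, dark current...
    public:
        double aperture_m, pixelSize_micron, QE, readNoise_e, darkCurrent_e_per_s;
        double signalToNoise(const Spectrum& s, const Filter& f, double exptime_s) const;
    };

    class Analyzer {          // fits lightcurves / classifies the simulated data
    public:
        double fitStretch(const std::vector<double>& epochs,
                          const std::vector<double>& mags) const;
    };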

Dan shows a signal to noise equation; S is just f_obj * t * QE. There are lots of noise terms including Poisson noise, sky noise (mostly due to zodiacal light for space based observations), dark noise, and readout noise. Each instrument you build into the program has its own set of parameters. Many of the noise sources get multiplied by the number of pixels; for 10 micron pixels and a 1.8 meter telescope, he says that Npix < 4.
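
The bookkeeping he's describing is basically the standard CCD signal-to-noise equation. Something like this (my own sketch with made-up parameter names and numbers, not SNAP-SIM itself):

    // Sketch of the S/N bookkeeping described above (illustrative only).
    // S = f_obj * t * QE; the per-pixel noise terms get multiplied by Npix.
    #include <cmath>
    #include <cstdio>

    double signalToNoise(double f_obj,  // object flux at the telescope, photons/s
                         double t,      // exposure time, s
                         double QE,     // detector quantum efficiency
                         double sky,    // sky (mostly zodiacal) background, e-/s/pixel
                         double dark,   // dark current, e-/s/pixel
                         double rn,     // read noise, e- rms per pixel
                         double npix)   // pixels in the aperture (Npix < 4 here)
    {
        double S   = f_obj * t * QE;                         // signal, electrons
        double var = S                                       // Poisson noise on the object
                   + npix * (sky * t + dark * t + rn * rn);  // sky + dark + readout
        return S / std::sqrt(var);
    }

    int main() {
        // Purely illustrative numbers.
        std::printf("S/N = %.1f\n",
                    signalToNoise(0.05, 600.0, 0.8, 0.02, 0.005, 3.0, 4.0));
    }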

He talks about cosmic rays. The cosmic ray rate is uncertain to a factor of five; he's been using 0.005 hits/pixel/second (for what pixel size?). Don points out that this is 3/pixel for a 600 second exposure, so a zero seems to be missing. But even 0.3 in 600 seconds is really, really scary. The two negative effects of this are unclean images (false detections), and pixel inefficiency. You can modify the S numerator of the S/N equation by adding in a sqrt(1-CRR*t)*sqrt(N) factor, where t is the time in each exposure and N is the number of exposures.

The probability of detecting a false SN is (CRR*t)^N, so you can figure out how many images you need. Dan says that the pixel inefficiency is more important. You can figure out that the optimal exposure time is somewhere around 1/2 * 1/CRR (if the readnoise is dominant), which corresponds to a 1000 second exposure.
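
Just to check the arithmetic on the cosmic ray numbers (a toy calculation of my own, using the 0.0005 hits/pixel/second that Don's correction implies):

    // Back-of-the-envelope cosmic ray numbers from the discussion above.
    #include <cmath>
    #include <cstdio>

    int main() {
        double CRR = 0.0005;   // hits/pixel/s (the quoted 0.005 looks like it's missing a zero)
        double t   = 600.0;    // seconds per exposure
        int    N   = 3;        // number of exposures stacked

        double hitsPerPixel = CRR * t;                    // hits per pixel per exposure
        double pFalse       = std::pow(hitsPerPixel, N);  // same pixel hit in all N exposures
        double sFactor      = std::sqrt(1.0 - hitsPerPixel) * std::sqrt(N);  // S numerator factor
        double tOptimal     = 0.5 / CRR;                  // optimal exposure, readnoise-dominated

        std::printf("hits/pixel/exposure   = %.2f\n", hitsPerPixel);  // 0.30
        std::printf("P(false detection)    = %.1e\n", pFalse);        // ~3e-2 for N=3
        std::printf("S numerator factor    = %.2f\n", sFactor);
        std::printf("optimal exposure time = %.0f s\n", tOptimal);    // 1000 s
    }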

For spectra, Dan plots the total time for a given S/N for a readnoise of 3 and several different cosmic ray rates. In 100 hours, with no cosmic rays you're at 26th magnitude; with a CRR of 0.005 (or whatever it really is supposed to be), it drops down to 25th magnitude. It's not quite so severe for lower read noise.

He also mocked up what the spectrum might look like. The first one is an S/N of 15 in every resolution element (i.e. the average flux in a resolution element within a given broad band divided by the sigma in each resolution element). He shows all the spectra, which have varying quality depending on which broad band (UBVRI) you insist must have S/N 15 per resolution element. There is much sound and fury about how even the worst of what Dan plots is too good. Dan also shows S/N 7.5, which saves a factor of 4 in the time. B is starting to look a little scary, but Saul and Greg think it's good enough for ID. Peter wonders if you could really tell a Type Ic from a Type Ia given this.

Dan also shows some spectra with low resolution. Peter says that, for these z=1 supernovae, you can't go worse than 100 Angstroms (which is at about 1 micron, so that corresponds to, if I did the math right, R=100). Peter thinks this is OK for ID, but at the limit. Greg notes that we may not be able to do a resolution better than 200 angstroms (Steeges! When you're in the IR, call it 0.02 microns!). Peter worries that at this resolution, you're missing out on features.
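
For the record, the resolution arithmetic here is just R = lambda / delta-lambda evaluated near 1 micron (a trivial check, not part of SNAP-SIM):

    // R = lambda / delta_lambda near 1 micron (10000 Angstroms).
    #include <cstdio>

    int main() {
        double lambda = 10000.0;                                          // Angstroms, ~1 micron
        std::printf("R = %.0f for 100 Angstrom bins\n", lambda / 100.0);  // R = 100
        std::printf("R = %.0f for 200 Angstrom bins\n", lambda / 200.0);  // R = 50
    }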

One other thing he's looked at is to have an imager going at the same time, trying to get lightcurves while we're doing spectra. Sometimes, doing this, the imager will walk off of the field of view. On average, about half the time you're integrating on a spectrum, the imager is covering a supernova. You can tell the simulator to go around the field and take spectra, and store the images that you get at the same time. You can then plot what the sampling looks like on these "for free" lightcurves. This idea would work if you had an intelligent way of going around and doing the spectra, and making sure to do targeted imaging to fill in the gaps.
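
The bookkeeping for those "for free" lightcurves would look something like the sketch below. This is purely my own toy version; the field geometry, names, and numbers are invented, and SNAP-SIM certainly does it differently:

    // Toy sketch: while the spectrograph sits on each target, record which other
    // SNe happen to fall on the imager, and collect those epochs per supernova.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct SN { double x, y; std::vector<double> epochs; };  // sky position (deg) + free imaging epochs

    bool onImager(const SN& sn, double pointX, double pointY, double fovDeg) {
        return std::abs(sn.x - pointX) < fovDeg / 2 && std::abs(sn.y - pointY) < fovDeg / 2;
    }

    int main() {
        std::vector<SN> sne = {{0.1, 0.1, {}}, {0.5, 0.4, {}}, {1.2, 0.9, {}}};
        double t = 0.0, exptime = 1000.0;         // seconds per spectrum
        for (const SN& target : sne) {            // step through the spectroscopy targets
            for (SN& other : sne)                 // who else lands on the imager meanwhile?
                if (onImager(other, target.x, target.y, 0.7))
                    other.epochs.push_back(t);
            t += exptime;
        }
        for (const SN& sn : sne)
            std::printf("SN at (%.1f,%.1f): %zu free imaging epochs\n",
                        sn.x, sn.y, sn.epochs.size());
    }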

Dan says that he's going to keep working on this and figuring out what questions to ask. Saul points out that Turner's grad student Huterer will be showing up, and may be able to work on all of this.