Deepsearch Collaboration Meeting

1999 June 4, 9:00 AM, probably

Contents:

  Low-Redshift Spring Search
  Intermediate redshift survey in the Spring
  Very Low Redshift Survey
  Instrumentation Plans
  The Future: the Next Low Redshift Search
  Next Semester
  Ariel's Daydream
  Other things (tools) we need that somebody else could work on
  Priorities for the Future


Low-Redshift Spring Search

Greg has passed out a handout all about the nearby search. He tells us it's still going... there's a single supernova, which Peter mentioned yesterday, that we're still following. Greg outlines what the goals had been (improve the low-z portion of the Hubble diagram, get earlier low-z lightcurves, work on stretch-brightness, get a U-band lightcurve, fix K-corrections, figure out color terms, test for abnormal host-galaxy extinction laws). There are also some supernova science goals (intrinsic luminosity function, new relationships between luminosity, lightcurve shapes, spectral diagnostics, etc., host environments, the rate, maybe even finding new types of supernovae, of which we may have one).

Greg shows scary numbers: 100 nights of search telescope time, 100 nights of followup time, 25 telescopes on 4 continents, 250GB of search data processed, nearly 3000 images processed (though Rob asserts that that's low by an order of magnitude), etc. etc. etc. Greg lists the searches (EROS, Mosaic, NEAT, Spacewatch, and QUEST). EROS was the most productive survey, but lots of their candidates were faint, and we only really ended up using half of what they produced. Mosaic had weather problems but still produced 6 Ia's; Mosaic was the one where they were mailing hard disks back and forth. Greg briefly outlines the other searches, noting that QUEST got completely nuked by weather (they had 4 clear hours out of 3 weeks of telescope time).

Greg next shows the list of supernovae that either we discovered, or that we were fed and confirmed. (We got some (ca. 8) from the KAIT Filippenko stuff, from Brian Schmidt, and from a few amateurs who do supernova searches.) The KAIT search provided 3 nice nearby ones. All in all, it's something like 48 supernovae we were involved in. (That includes all types, I believe.)

Greg shows pseudo lightcurves, where the points are plotted assuming perfect photometry and stretch=1 supernovae. These are all faked points, but just show the coverage. We didn't get as early lightcurves as we would have liked, mostly due to various scheduling constraints. Greg also shows histograms of supernova types (Ia vs. non-Ia as a function of redshift, and from which search as a function of redshift). He shows additional histograms. One impressive one is the spectroscopy followup, which shows the number of spectra as a function of epoch. We've pretty much doubled the number of spectra of nearby supernovae.

Greg also shows the list of followup telescopes. There were nearly 300 equivalent nights of 1m followup time, between photometry and spectroscopy. He says that we will probably never be able to put that much telescope time together again, unless we own our own telescope.

Greg notes for the future, he'd like to stay below z=0.06. That makes the supernovae easier to follow, and you can follow them through the moon. The handout he passed out indicates what we need to do with the data in hand for the nearby search. There are some complicated political entanglements involved with who gets which pieces of what data.

Greg notes that if there are people who are looking for things to do, they should look in his handout, as Greg has a list of all sorts of things we need to be working on with that.


Intermediate redshift survey in the Spring

Reynald tells us about the intermediate search. He says it's not the highest priority stuff, but it is going. He's working with the Institute in Cambridge (Richard Ellis, et al.), as well as Nick Walton. They have a wide-field survey with the INT 2.5m telescope. The telescope is working pretty well. One problem is that there's a 3 minute readout time, which is why you can't really do a good nearby search. The "intermediate" search, though, is going to about z=0.4. They're doing 10-minute search exposures, which is very close to what the SCP did during the first INT searches 7 years ago. They did this in March and April and got 80% bad weather. They surveyed 10 (?) square degrees. They had a handful of candidates (17, of which 10 they thought were "good"), and got two spectrally confirmed Ia's (out of 5 tried, with mostly too-short exposures). Reynald doesn't think there will be much science out of this run. There will be another search in August and September with an improved 30s readout time, with followup on the INT, WHT, and JKT. Reynald says that this search will last another year, with two other runs. The goal is to get something like 20 SNe between z=0.2 and 0.3, to fill the gap in the Hubble diagram and to establish the rates there.

Saul questions what the supernovae in the "gap" (z=0.2-0.3) do in terms of actual Science goals, rather than just aesthetically filling the gap. I think that hey, you don't know until you go and look, even if higher z supernovae are more useful. Ariel suggests that those supernovae may be useful for quintessence. Reynald claims that the rates will be useful.

Reynald notes that supernova discoveries aren't the main priority of this project, so they're sort of second down on the totem pole. This is something he thinks will end up diluting the quality of the data. In order to have it really work, you have to be able to set the followup time and so forth. There were clashes between this and the nearby search as well, and Reynald thinks we should coordinate in the future.

Reynald says that one thing they want to do in Paris is to have an automatic search running at the telescope. They had the software running at the telescope on a PC, and that it worked. As a backup, they had Sebastien running the search in Paris using the Berkeley software. They haven't yet done a systematic comparison of the efficiency.

Their idea is to have software running at the INT and at CFHT, and have them send the candidate list in. They don't want to have to do any scanning. I, personally, am skeptical, unless they aren't trying to squeeze the last drop out of their data. If they are going to higher sigma limits, then sure, automatic scanning in principle ought to work, even if it hasn't yet. But if you're looking for deep supernovae at the limit of your data.... You may consider me biased.

Sebastien talks a little about the software the FROGS group was using. He also lists the names of the FROGS. Their software is designed to be small and not require a lot of other packages.

There was a discussion about software. This one could get ugly.


Very Low Redshift Survey

Susana tells us a little about what's going on. Whenever this telescope ends up on a useful site, it will probably end up being more valuable for following up supernovae than for searching for them. Saul says that it would be good to find more supernovae in galaxies for which there are (or could be) Cepheid calibrations. Greg thinks it will almost always be more efficient to do "one-offs" (the sort of thing the automated search will do) for the very nearest supernovae.

Susana says they've picked out 10,000 galaxies which are at z<0.6. She and Shawn estimated that they could look at each of these every 3 to 10 days, and you could probably find some 20-30 supernovae per year (a rough check is below). Given that KAIT already does this, there is a question of whether it makes more sense to really do a search, or to just devote this telescope to followup of the other nearby supernovae we find. Greg hesitates, and says that once we see it in action we will know. (It sounds like a few things would have to be done - autoguider, friction drive replaced - in order to use the thing as a good followup telescope.)
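As a rough sanity check on the 20-30 per year estimate (the per-galaxy rate below is just a round number I'm assuming, roughly one Ia per galaxy every few centuries, not anything quoted at the meeting):

    N_{\rm SN} \sim 10^4~{\rm galaxies} \times \frac{1~{\rm SN\,Ia}}{\sim 300~{\rm yr~per~galaxy}} \approx 30~{\rm yr}^{-1},

which is the same ballpark, before folding in any detection efficiency.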

The ostensible site is Chew's Ridge, which has good seeing in the summer (sub-arcsecond) and sucky seeing in the winter. The problem is that the site is a Forest Service site, and there is a Native American (one specific 1/64th NA, in fact) who is giving trouble.

With the current CCD, Susana predicts a limiting magnitude of 19 (in R?) in 60 seconds, once it's at Chew's Ridge with 1" seeing and the mirror is cleaned.


Instrumentation Plans

Don pulls a manila envelope out of his shirt.

He's giving a summary. It's a "fully depleted" CCD. They've done a couple of fabrication runs. The first one did 200x200 CCDs, the second one did 2048x2048 CCDs. Fab run #3 was submitted to the Mitel corporation and was (we are told) done last week. It could produce 0-50 CCDs. They are being leaned on by various people to produce 2k by 4k CCDs.

By way of introduction, Don shows that between the atmospheric cutoff and the silicon band gap, the absorption length changes by 4 orders of magnitude. Particularly in the infrared, as you cool the device the absorption length moves over (gets longer), and you lose red sensitivity.

Don shows the difference between front-illuminated thick CCDs (the sort used in home video cameras) and the CCDs used for astronomy, where you can improve the blue response by "thinning": you remove the substrate and expose the naked back of the CCD. This is the kind of CCD that has all of the fringing and everything else.

Don shows fringing using LRIS (both an older and a newer CCD). He calculates the fringing, and shows a calculation for the LBL CCDs, which have much less fringing.
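(My gloss, not Don's actual calculation: the fringes are just thin-film interference from red light bouncing around inside the nearly transparent silicon, so the constructive-interference condition and fringe spacing go roughly as

    2\, n_{\rm Si}\, d = m\,\lambda, \qquad \Delta\lambda_{\rm fringe} \approx \frac{\lambda^2}{2\, n_{\rm Si}\, d},

and a ~300 micron thick fully depleted device both squeezes the fringe spacing way down and absorbs the red light before it can make many passes, which is why the LBL calculation comes out with much less fringing than a ~15-20 micron thinned device.)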

The LBL bright idea is to use n-doped silicon instead of p-doped silicon. You keep it clean and high-resistivity, which is easier to do with n-type. This means that you collect holes instead of electrons. Additionally, you don't thin: you put a transparent window on the back, which is also an electrode. You put 20-50 volts across it, which totally depletes it. The front end (the gate and all that) is made basically the same way.

It's thick enough that it's sensitive way out into the infrared. The collection of holes instead of electrons helps things on the blue end. Not thinning cuts way down on the price. (These CCDs ought to be a couple of orders of magnitude cheaper to make.)

Don says they've taken pictures of things with the first of these CCDs. They've even taken pictures with 1 micron light, at about the same efficiency as in the I band.

One thing they noticed was that there was a lot of lateral diffusion with a 10V substrate bias. Things sharpened up as they turned it up to 25V; in the latter case the device is totally depleted, so there isn't much lateral diffusion. (One reason not to go too thick (they're at 300 microns) is that the bias voltage needed goes up quadratically with the thickness; see below.)
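(Again my gloss on the "quadratic" comment: for a uniformly doped substrate the textbook depletion-depth relation is

    x_d = \sqrt{\frac{2\,\epsilon_{\rm Si}\,V}{q\,N_D}} \qquad\Longrightarrow\qquad V \propto x_d^2,

so fully depleting twice the thickness at the same resistivity takes four times the substrate bias.)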

They've tested 3 2048 CCDs. The first one had some sort of soft short and couldn't be run at full voltage. The second one had two bad columns. The third one had no flaws. All three imaged.

More details.

Discussion

Mike Levy wants to have an open discussion about instrumentation plans. Specifically, what kinds of cameras do we want to build, for LBL, and France, and so forth? This includes some discussion of the putative satellite. Even without that, what would be a good venue for putting a large camera together, especially in terms of what would be most advantageous for the Supernova Cosmology Project?

Already we're committed to manufacturing a chip for ESI (an upcoming spectrograph for Keck). Saul thinks that with regard to a Mosaic, it'd be worth building some readout instrumentation as well.

If we build a Mosaic, where can we put it?

Greg notes that at some point, we will find enough supernovae, and the problem will be getting enough spectral followup. Perhaps one thing we should be doing is getting our chips into as many spectrographs as possible so that we have a lot of guaranteed spectroscopy time.

Saul brings up the issue of the Space Station telescope that Carl's working on. That might be a shorter timescale than the satellite, and might be a good first/test/development sort of deal.


Lunch

Pound.


The Future: the Next Low Redshift Search

Greg says that if we kept going in the mode we were last spring, we would probably never get anything else done. Anyway, we'd probably never get that much followup again.

Lessons learned are that we don't really want to work out to as high a redshift as we had thought previously. If you pull the redshift limit in, you can do a much better job with the followup. One telescope which covers a lot of the sky and works to the limits we want to go to now is NEAT, the Near Earth Asteroid Tracking telescope. It's an Air Force telescope, and the NEAT team is at JPL. Right now they run 6 nights per month on the semiautomated Air Force telescope normally used to track satellites. There is a 4096x4096 thermoelectrically cooled chip, with 1.4" pixels and a 1.6 degree field. Typical images have 2-3" FWHM. They take three sets of 20 second exposures, spaced such that asteroids are easy to find for them and easy to reject for us. The link between Haleakala and NERSC is very fast, such that we can bring the data over in real time; this is because the Air Force site is hooked into the Maui supercomputing center. We can do all the searching at LBL.
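(Quick arithmetic check that those NEAT numbers hang together:

    4096~{\rm pix} \times 1.4''/{\rm pix} \approx 5730'' \approx 1.6^\circ,

so the quoted 1.6 degree field is just the 4096-pixel side at 1.4 arcsec per pixel.)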

With NEAT this spring, we just used a few nights of their data. We went through their March 98 data to find the best references we liked. During Feb 18-22, they only had two clear nights. We brought something like 50GB of raw data over here and crunched it through the cluster of 16 Pentium II's. It was a little scary because we would only find out hours before it started that they were going to observe. In the future, they are supposed to move to another similarly sized telescope, where they'd have 18 nights a month.

This particular run, they had focus gradient problems, as well as weather problems. We could probably help them improve the focus gradient problems.

For the future, NEAT is fairly interested, but things are uncertain. One of the latest uncertainties is that in April, they developed problems with their camera that are as yet unresolved.

Greg is proposing that in the future, instead of cobbling together a number of different searches, we should just concentrate on using the NEAT data. Even if we get that working, there is the equally onerous task of scheduling the followup telescope. This time around, for the most part we got our full request at most telescopes. That won't be so easy in the future....

Ideally, we'd like them to work with filters, to make the data more useful for us. This would probably be correlated with their cooling their CCD (with LN2), so that they don't lose any effective sensitivity.

There's another asteroid search, LINEAR, which is the biggest, baddest asteroid search. Apparently they're ramping up to search the whole sky every two nights or something. When Greg first contacted them, there was no response; after IAU circulars with supernovae from Spacewatch and NEAT started coming out, Greg started hearing from LINEAR. LINEAR has a telescope in New Mexico at White Sands. One thing about LINEAR is that they are paranoid, and we might have to put a computer farm out there. LINEAR is actually run by the Air Force, whereas NEAT just uses an Air Force facility.

If NEAT runs for 18 nights a month, Greg says we'd probably ask them to look at a set of fields for 9 nights, and then go back the next 9 nights. That'd give a short baseline for early supernovae. Even throwing in weather factors, they'd produce SNe at a rate of approximately 1 a night.

One of the biggest problems is how to handle the spectroscopy. This time around, we had people go and sit at telescopes.

Of course one other thing we'd have to do is reduce the scanning effort greatly. Ideally, we'd like to get it down to the point where out of 400 fields a night, we only have to actually look at 10 or so of them. We also would like to streamline and automate the followup reduction.
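Nobody wrote down an algorithm for this, but what Greg is describing amounts to automatic candidate vetting with hard cuts, so that a human only ever sees the fields with surviving candidates. Here is a minimal sketch of the idea in Python; the score names and thresholds are invented for illustration and are not from our actual search code:

    # Toy automatic-scanning pass: keep only fields whose candidates survive
    # a set of hard cuts, so humans scan ~10 fields instead of ~400.
    # All field names, scores, and thresholds here are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        field: str           # which search field the detection came from
        significance: float  # detection S/N on the subtracted image
        fwhm_ratio: float    # candidate FWHM / field PSF FWHM (~1 for real point sources)
        motion_arcsec: float # apparent motion between the repeated exposures

    def passes_cuts(c: Candidate) -> bool:
        """Hard cuts: bright enough, PSF-like, and not moving (rejects asteroids)."""
        return (c.significance >= 5.0
                and 0.8 <= c.fwhm_ratio <= 1.3
                and c.motion_arcsec < 0.5)

    def fields_to_scan(candidates: list[Candidate]) -> list[str]:
        """Return the (hopefully short) list of fields a human still has to look at."""
        return sorted({c.field for c in candidates if passes_cuts(c)})

    if __name__ == "__main__":
        demo = [
            Candidate("field_017", 8.2, 1.05, 0.1),  # looks like a real transient
            Candidate("field_121", 4.1, 1.00, 0.0),  # too faint, auto-rejected
            Candidate("field_203", 9.0, 1.02, 3.2),  # moving: asteroid, rejected
        ]
        print(fields_to_scan(demo))  # -> ['field_017']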

All of this is preliminary; Greg thinks we're probably at least a year away from anything close to actually doing this. The point is that we have figured out a way to find nearby supernovae; we don't have to build a small widefield telescope for this purpose.


Next Semester

Nearby?

Reynald asks if there is going to be a nearby search; Greg says that he's not going to run a nearby search next semester. In any event, he's not going to go back to the same observatories and ask for time until we've reduced and analyzed the data we have. Chris Smith and Lou Strolger are going to continue searching; they will search again in October. If they ask us if we will search their data, we will have to decide when it happens. (Me, I can already predict....) Isobel points out that we did apply for this VLT time, but Greg thinks our getting that is a big "if".

Reynald raises the issue of the analysis of the nearby data, and who is going to work on what. Greg just says that there isn't a master plan in place; it took 200% of our resources to execute the campaign. He did circulate a document that describes a lot of the issues involved in the data reduction. So far, we've only gone so far as figuring out who is going to be doing the basic imaging and spectroscopic reduction (respectively, the two Portuguese students and Susana).

Intermediate z Search

In August and September there will be a FROGS search at INT for intermediate z supernovae. There is WHT, JKT, and French followup. Reynald thinks that this will cover something like 20 square degrees.

Very Distant Search

The FROGS have 6 half-nights on the CFHT. The time hasn't been decided yet; Saul thinks it might be September/October, but Reynald still wanted to do the search in September. Our Keck nights are currently in October, but that has the problem of HST being refurbished right after that (mid October through mid November), so they won't put us on the schedule. So, there are all sorts of problems. Reynald thinks he may be able to get references in July using some sort of discretionary time, which could then be used for searching in September. The idea would be to observe three fields (almost a square degree) during three half-nights. If we want to use the VLT with this, we can't use the VLT past the end of September. At any rate, Reynald thinks he will know more when he's at CFHT next week.

Greg worries about the idea of getting the references from somebody else.... Reynald is going to try to get this guy to commit to getting us July references, so that it won't be so chancy/dicey. Ideally. The search, Reynald says, is going to be near September 10.

Peter points out that with a 2-month gap, you're probably going to have something like 20 candidates which are in the right magnitude range but are late-time z=0.8-0.9 supernovae. This is going to be a major screening problem, even though 2 months is an OK gap for z=1.2 supernovae. Peter suggests that if you do both R and I, you might be able to do a photometric redshift from the host galaxy (which Greg points out we don't have the expertise to do). This sounds like a real problem.

Saul suggests that if we had something like a 9-day baseline, we could screen to see which ones are on the rise. Peter asserts that with that, the ones that are on the way down are going to be dropping like a rock in the rest U-band.

All of this is thorny.

There is some thought of maybe doing a September/October search, with trying to get a HST point right before the refurbishing mission, and then additional points after the refurbishing mission. Of course, in that case, you gamble that HST refurbishing happens when it is supposed to happen.

It's possible to use the VLT time for spectroscopic screening. Greg points out that we're grade B, so we're not likely to be able to get the kind of high priority supernovae we need.

Much discussion. I didn't get it all down here. There is, at the moment, no consensus on the best way to proceed. There is slight momentum in the direction of a September/October search, but HST and VLT are issues. We are grade B in VLT's queue, and some doubt has been expressed that we'll ever be able to get much of that time anyway. Right now we're trying to decide if we can live with the gap.

Perhaps we can't solve it here. (But there is more discussion.) Some calculations should be done in order to figure out just how bad the 2-month gap is.

The other aspect of the search is the whole business of where the analysis is going to be done. Saul raises the issue of training a Japanese student (!), who is apparently interested in coming here to get trained in supernova searching. The idea with this would be building up ties to Subaru for the long run.

Saul is wondering about efficiency studies for the FROGS software versus the SCP software.... Reynald says they have already planned to do it with their CFHT data. He also asserts that they already tested this in the spring. Saul wants to know where the two searches would be run, though it's not clear if Sebastien and Reynald agree that both sets of software are necessary.

Break.

Peter and Greg talked. Greg points out that the original plan for HST was that we were going to get a U-band lightcurve with HST, and a B-band (observed in J) point for colors. Since AO isn't online at Keck, we simply can't get the color. As such, Greg argues that we ought to just postpone the HST time until we can combine it with J-band ground-based (AO) data, and not blow it on getting just a U-band lightcurve, which by itself isn't all that useful. In other words, for this CFHT search, we shouldn't even consider HST. If we can't get AO J-band in time, we just use the HST for final references, which is probably more scientifically useful than a single-color lightcurve in a non-optimal color.

Saul doesn't fully agree, as it may depend on what we find out about the U-band from the low redshift search. Also, in terms of ruling out grey dust, he says we'd want to be sure that applying an extinction correction couldn't end up letting grey dust back in.

Greg argues that for this particular semester, it's a no-brainer: HST complicates things, and it's possible we'll have AO for the future. Greg suggests taking HST out of the equation for this semester. Saul doesn't immediately buy it, and wants to see some other numbers - for instance, can you really do a good enough job with J-band AO? Greg responds that we worked out numbers for the HST proposal that we didn't think were lies at the time, and argues that we can't make a better estimate of real AO performance now than what we did in the proposal.

In terms of the HST use, Greg points out that if we use HST time to get final references, we don't maximize science. However, because it saves us some software development, it helps us get the results out a little faster.

Ariel suggests that maybe we should be finding several at z=1, so that we can work in the rest-B band with observed I band. Greg responds that z=0.85 is the highest you can really do that. So, perhaps that should be the target of the CFHT run, but there is debate over whether you really want to do that and blow HST on it, or if you'd rather use HST for final refs or save it for useful z>1.


Ariel's Daydream

He's talking about precision measurements with SNe Ia. First, he wants to talk about how good we can get on various parameters. Second, he wants to ask if our data can shed any light on the nature of dark matter.

How good can we get

Ariel has been trying to characterize the science potential with supernovae at various different redshifts. He is assuming that in the not-too-distant future we'll have a supernova factory where Nsne(z<=2) goes to infinity. (This must assume the satellite.) This will leave us with some irreducible systematic uncertainty, which he optimistically assumes will be 0.01-0.1 magnitudes.

Ariel shows 1-sigma Omega_M vs. Omega_Lambda confidence bands for each individual redshift (combined with an assumed well-measured set of nearby supernovae, or equivalently a known script-M), using a fundamental uncertainty of 0.05 magnitudes (with a large (infinite) number of supernovae found). Two things happen as you go to higher redshift. One, the width of the band decreases, i.e. you get lower errors on Omega_M and Omega_Lambda. Two, the slope of the band increases. How do you combine this to get the best limit?
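For anyone trying to reconstruct these bands later: the quantity being fit is the usual effective-magnitude relation (the standard SCP formulation, not anything new of Ariel's),

    m_B^{\rm eff}(z) = \mathcal{M} + 5\log_{10}\!\big[H_0\, d_L(z;\,\Omega_M,\Omega_\Lambda)\big],
    \qquad \mathcal{M} \equiv M_B - 5\log_{10}H_0 + 25,

so perfectly measured magnitudes at a single redshift pin down one combination of Omega_M and Omega_Lambda (a band in the plane), and because that degenerate combination changes with z, bands at different redshifts cross and close the contour.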

Ariel starts integrating bands, starting at z=0.5 and integrating out to various redshifts. This lets you estimate how your science improves as you go to higher and higher redshifts. He shows results for 0.05 and 0.01 magnitudes fundamental (systematic) uncertainties. If you can do the latter, you can eventually measure the universe to better than what people are promising from CMB anisotropy measurements. One bottom line Ariel shows is that going much beyond z=1.5 doesn't buy you very much unless you have a very good handle on the systematics.

Ariel next talks about quintessence, or "X-matter," which has a density which may vary with redshift according to some parameter w (equation not transcribed; see below). For further discussion, he's assuming a flat universe.
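The equation I failed to transcribe is presumably just the standard constant-w scaling for the X-matter density,

    \rho_X(z) = \rho_{X,0}\,(1+z)^{3(1+w)}, \qquad p_X = w\,\rho_X,

with w = -1 reducing to a cosmological constant and w = 0 behaving like ordinary matter.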

On the Omega_M vs. alpha_x (some sort of value for this parameter related to quintessence), the bands don't immediately cross as they do with Omega_M and Omega_L. However, as you start combining data at different redshifts, you do beat the region down... with 0.01 systematic uncertainties. With 0.05 systematic uncertainties, it isn't quite as nice, and you'd require an independent measurement of Omega_M. These are the sort of plots we should think about as a target for a satellite.

Somebody should probably beat these plots out of Ariel, and get them published or at least get one of these mythical internal group memos written.

Part II: What is the nature of dark matter?

Will SNAP/SAT be able to say anything about the nature of dark matter? There are things like supersymmetric particles, astronomical point objects, etc. It turns out that the lensing probability of a high-z supernova depends very much on how you distribute your dark matter - i.e., does the light go through a smooth distribution of dark matter, or does it mostly travel down an empty beam and occasionally pass a point source that does high magnification? The scatter about the mean is different in these cases.

He and collaborators have set up a package for Monte Carlos, for simulation and analysis of supernovae. It can produce "any" SN lightcurve (Ia, IIn, IIP, IIL, Ib/c)... and other things I wasn't fast enough to get down. But it does other things with lensing and so forth. You can feed in any cosmology, and any SNR(z) (I assume that means rate, not signal-to-noise ratio). It generates lots and lots of supernovae; you can get all the data you want, including points on lightcurves vs. date, etc. etc. etc. They use a Monte Carlo ray-tracing method to account for the effects of lensing. The package isn't complete; they are just assembling it, but Ariel thinks it will be very useful.
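To make the "scatter about the mean" point concrete, here is a toy version of the kind of Monte Carlo being described, comparing a smooth-beam universe with one where a small fraction of lines of sight pass near a point lens. This is my own illustrative toy, not their package; all the numbers (0.1 mag intrinsic scatter, a 2% lensing probability, a 0.5 mag boost) are invented, and it ignores flux conservation (in a real calculation the unlensed majority come out slightly demagnified to compensate):

    # Toy Monte Carlo: magnitude residuals of standard candles for two
    # dark-matter pictures -- smooth dark matter vs. rare point lenses along
    # mostly empty beams.  All parameters are invented for illustration.

    import random
    import statistics

    def simulate_residuals(n_sne, lensed_fraction, lensing_boost_mag,
                           intrinsic_sigma=0.10, seed=42):
        """Return magnitude residuals (observed minus mean) for n_sne supernovae."""
        rng = random.Random(seed)
        residuals = []
        for _ in range(n_sne):
            dm = rng.gauss(0.0, intrinsic_sigma)   # intrinsic + measurement scatter
            if rng.random() < lensed_fraction:     # rare passage near a point lens
                dm -= lensing_boost_mag            # magnification -> brighter (more negative)
            residuals.append(dm)
        return residuals

    if __name__ == "__main__":
        smooth = simulate_residuals(5000, lensed_fraction=0.00, lensing_boost_mag=0.0)
        clumpy = simulate_residuals(5000, lensed_fraction=0.02, lensing_boost_mag=0.5)
        print("smooth beam   sigma = %.3f mag" % statistics.pstdev(smooth))
        print("point lenses  sigma = %.3f mag" % statistics.pstdev(clumpy))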

He shows, for a specific dark matter model, a 3d plot of z vs. Mbeff vs. scatter delta_M about the average magnitude. At higher redshift, there is higher scatter, as your probability of being affected by a lens goes up.

According to one of these calculations, at 0.5

Ariel also shows the case for point lenses. He then did a bunch of Monte Carlos to calculate the probability of assigning the wrong lens profile (isothermal vs. point lenses) given the data.

Ariel is also looking into the probability of getting multiple images (though perhaps not resolved), and what the time delay would be. As you go to thousands and thousands of SNe, you might actually see some of these. He says that if most of the mass is on galaxy scales, the gap is something like a month. The question is, is the secondary lightcurve bright enough to see? Peter cautions that the same lightcurve shape has been observed from a reflection off of a background dust lane some 50 light-days away.

Ariel says they are still working on figuring out how many square degrees you have to cover in order to get one of these.

Steeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeege.


Other things (tools) we need that somebody else could work on.

Reynald mentions having a spectral database for everybody to access. This would be all the spectra of previously published Ias that have been observed. Isobel says that it should come with a table of stretches. Reynald would like a program to generate synthetic spectra; Peter says that exists on the web at UO, but Reynald says that it doesn't work.
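Nobody specified what form the spectral database should take; as a strawman, something as simple as two tables -- one of published spectra keyed by supernova name and epoch, one of stretches as Isobel wants -- would already be useful. A minimal sketch (all field names invented):

    # Strawman layout for a shared nearby-SN Ia spectral database:
    # one table of published spectra, one table of lightcurve stretches.
    # Field names are invented for illustration.

    import sqlite3

    SCHEMA = """
    CREATE TABLE IF NOT EXISTS spectra (
        sn_name     TEXT,   -- e.g. 'SN1989B'
        epoch_days  REAL,   -- rest-frame days from B maximum
        filename    TEXT,   -- path to the calibrated spectrum file
        reference   TEXT    -- where the spectrum was published
    );
    CREATE TABLE IF NOT EXISTS stretches (
        sn_name      TEXT PRIMARY KEY,
        stretch      REAL,  -- lightcurve stretch s
        stretch_err  REAL
    );
    """

    if __name__ == "__main__":
        db = sqlite3.connect("nearby_sn_spectra.db")
        db.executescript(SCHEMA)
        db.commit()
        db.close()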

Saul questions whether our detection efficiencies are integrated in. This raises some questions and discussion about how you do efficiencies.

etc.


Priorities for the Future

We are touching on all of the priorities we have at the moment. We seem to be giving lip service (again) to making getting all of our current data analyzed the highest priority, over more proposals for observing. Of course, we'll be working on the SNAP/SAT proposal... Saul asserts that it won't suck down a huge amount of time, but it's something that could be going on "in the background." We will have to be careful to keep that from sucking down all of our time. Also, if we get lots of money from the DOE for the nearby search, that will be ramped up. Don notes that we have a funded but unfilled postdoc position for the instrumentation side of things.

Saul wonders how much we could speed up analysis of things like the Reynald Rate paper, the Isobel Spectrum paper, the Rob Photometry paper... (see collaboration meeting notes from the last two years).

Ariel is also saying that he'll probably be hiring two new grad students. There was some thought about them coming for a couple of months to learn how things work in Berkeley, or something like that.

Are we forgetting any big science priorities that we need people to worry about? Saul reminds us of the two big loopholes he started with yesterday, dust and whatever the other one was.

There's also the question of critical path reduction. How do we rank our latest high-z supernovae versus the nearsearch spectra, etc. Isobel wants to know what the priorities on all of these things are.

We're doing a priority list of what key things need to be done. The most important is the HST data, and Greg handed out a thing about this yesterday. Hand in hand with that is getting correct z's and ID's from first looks on the set F and set G supernovae.

The next thing is the completion of results from the 42 supernovae: the 42-spectra paper, the mythical photometry paper (which will have 42+F+G), the 42 rate paper, the <=42 composite/stretch paper, the 42 host-galaxy paper (Richard Ellis (though Peter asserts that he (Peter) is in fact the HST contact for this)/morphology/HST snapshot/or something), the K-correction/reddening paper (almost done, Peter says), the multicolor stretch/statistics paper (nearby, including the 22 Riess published supernovae, or something), and the Albinoni paper (that Greg wants to get out before somebody finds another high-z supernova). All of this is our traditional list of papers that will be published in the next month, just like the list we made at the last two collaboration meetings in 1998 and 1997.

Reynald suggests that there may be a type II paper/study.

There are also more cosmology papers, such as from HST +40+Albinoni. (Or is that part of the aforementioned HST paper?) Another paper is perhaps Alex Lewin, comparison of different techniques.

Next there's the list of priorities for the low-z stuff, which (it seems) all comes as a lower priority than all of the high-z papers. Note that the reductions, though, are proceeding in parallel with the high-z data reduction. Peter does want to get SN1999as published, as it was a bizarre Ic which was 1.5 mag brighter than a Ia at the same z. This is something that nobody has seen before, so Peter thinks we really ought to get it out.

This whole discussion is just the usual tangled mess.

Oh, yeah, the 9571 zoo-on paper as well. And the GRB rate paper that Sebastien was once upon a time working on with Bruce. These are lower priority. There are also things like white dwarf searching, which Greg was doing with Mike Moyer.

Now we're talking about "GRB" (rapidly fading object) rates and whether or not we have lots of these we found that we threw out. We probably don't have time to work on this, but if somebody wants to go back and plum the data....

Now we're talking about timescales for papers and such. We talk about this a lot. Little was archived.

Proposals

Greg notes that we need to get final reference spectra and images for the nearby campaign. The former on CTIO 4m, the latter on that and YALO and CTIO 1.5m. 20 spectra, 20 images in 5 bands. Greg thinks that those three telescopes will be enough. This will be a proposal written in September for Spring 2000.

There's an HST proposal that we might put in in the Fall: nearby UV spectra (oops... another group (SINS or somebody who sounds like that) does it, so we can't, Peter says) and photometry of very high z supernovae.

I need to go.