Deepsearch Collaboration Meeting

1999 June 3, 9:00 AM.

Contents:


Introductory folderol and stuff about our 42 SNe

We're waiting on introductions because Isobel and Ana are not here.

Saul notes that the paper was published on June 1 in ApJ. Yes, this is the paper that we were going to submit at the end of the meeting a year ago... oh well. He shows the Omega/Lambda confidence region, along with a scorecard of what we believe. He thinks that the statistical error is well in hand, and that the statistical error bar is good. He thinks we have some aspects of dust that reddens in hand (with some caveats). He thinks one of the two biggest loopholes is the grey dust story, mostly because of the paper on the web by Aguirre. Aguirre points out a statistical error in our handling of dust that reddens. Saul asserts that this particular complaint isn't right, but that we in fact do not constrain greyer dust as much as we had claimed. See the meeting notes from a few weeks back.

The other question mark is supernova evolution, which Saul asserts is one of the key drivers of the nearby search. Saul shows an overhead with ages of people, indicating that the problem perhaps isn't so much that the supernovae are intrinsically changing, but that the demographics are changing. The goal is to find as many subpopulations as possible at low redshift.



More detail on problems and question marks.

(We're already jumping about in the putative agenda.)


Dust

We're going to talk about dust. What are the tools we can use to get at the dust? Saul introduces Tom York, who's looking into this. Tom says that he's done a couple of things so far. He's looked at the paper by Aguirre (astro-ph/9904319), who estimates that there might be enough dust, expelled from galaxies and fairly uniformly spread through intergalactic space, to give a few tenths of a magnitude of extinction. One possibility is to argue that there isn't enough mass in metals to do this, but Tom hasn't done this yet. Mostly, Tom has looked at the question: if there were dust and it's grey, what would the extinction look like as a function of z, for some plausible guess about the dust density as a function of z, integrated along the line of sight?

Saul and Tom show an overhead that shows our residuals, with (Omega_M, Omega_Lambda)=(0.28, 0.72), our best flat-universe fit, as the horizontal line. The line for (0.2, 0) with grey dust stays close to that line out to z=0.7 or thereabouts. Several supernovae out in the z=1.0-1.4 range ought to be able to differentiate between (0.3, 0.7) and (0.2, 0) with grey dust (as plotted here, under whatever assumptions went into that). The difference between the lines at z=1.2 is about 0.3 magnitudes, which is probably comparable to what Albinoni's error bar will be. Note that the amount of dust he plotted is an arbitrary amount... he chose it to make a difference at z=0.5 or something. Tom notes that in order to get the fit to work with (1,0) and dust, it requires an amount of dust that's starting to get implausible.
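
(Note-taker's aside: here's a rough sketch, mine rather than Tom's actual calculation, of how you'd compare the distance moduli of the two cosmologies on that plot. The grey-dust dimming would be an extra magnitude term added on top of the (0.2, 0) curve, with whatever dust density Tom assumed.)

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light, km/s

def lum_dist_mpc(z, omega_m, omega_l, h0=65.0):
    """Luminosity distance in Mpc for an (Omega_M, Omega_Lambda) cosmology."""
    omega_k = 1.0 - omega_m - omega_l
    def inv_E(zp):
        return 1.0 / np.sqrt(omega_m * (1 + zp)**3
                             + omega_k * (1 + zp)**2
                             + omega_l)
    integral, _ = quad(inv_E, 0.0, z)
    dh = C_KM_S / h0                       # Hubble distance, Mpc
    if omega_k > 0:                        # open: sinh curvature term
        dc = dh / np.sqrt(omega_k) * np.sinh(np.sqrt(omega_k) * integral)
    elif omega_k < 0:                      # closed: sin curvature term
        dc = dh / np.sqrt(-omega_k) * np.sin(np.sqrt(-omega_k) * integral)
    else:                                  # flat
        dc = dh * integral
    return (1 + z) * dc

def dist_mod(z, omega_m, omega_l):
    return 5.0 * np.log10(lum_dist_mpc(z, omega_m, omega_l) * 1e6 / 10.0)

# Difference between the flat Lambda fit and an open, no-Lambda universe.
# (H0 cancels in the difference; a grey-dust term would be added to (0.2, 0).)
for z in (0.5, 0.7, 1.2):
    print(z, dist_mod(z, 0.28, 0.72) - dist_mod(z, 0.2, 0.0))
```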

Saul also notes that grey dust isn't really grey; by the time you get to the near infrared, it's already showing signs of reddening. (Note that his model for grey dust is dust where the smallest grains get destroyed, e.g. by sputtering, when they are ejected from the galaxies.) Tom shows plots of extinction coefficients (or absorption or something) as a function of wavelength. There are several models that are flat through the optical bands, but start to drop off in the J (1.1 micron) and K (2.2 micron) bands. Tom also plots A(V-lambda) as a function of A_V for grey dust with a minimum grain size of 100 nm. I spaced out for a little while, but Saul concludes that from the HST data we have in hand, we won't be able to say much about grey dust. So far, of what we have, only Albinoni will tell us.

Ariel asks about the dispersion question: if there were grey dust, we should see a higher dispersion about our best fit. Greg notes that Aguirre claims that out to any of our redshifts, you go through enough pools of the dust that the dispersion will wash out.

Ariel notes that there is a forthcoming paper by a Danish group, who have been studying the dust grain size segregation issue. The bottom line is that they think it's very unlikely that you'd get any sort of segregation between dust sizes; with radiation pressure, etc., all the dust grains get the same velocities. Ariel isn't sure what sputtering would do to this. Tom York notes that if they all do have the same velocity, then it makes the sputtering calculation easier. As the dust goes through hot gas, statistically every dust grain would then lose the same amount of radius. If you plot N vs. r, Tom asserts that you get N goes as r^-3.5; after sputtering, N goes as (r-r0)^-3.5, where r0 is the amount of radius lost.
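
(A little numerical illustration of that argument, mine and not Tom's code: draw grain radii from an MRN-like r^-3.5 distribution, shave the same r0 off every grain, and look at what survives. The size limits and r0 below are made-up numbers.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw grain radii from an MRN-like power law, N(r) dr ~ r^-3.5 dr,
# between r_min and r_max (microns), via inverse-transform sampling.
r_min, r_max, alpha = 0.005, 0.25, -3.5
u = rng.uniform(size=200_000)
a = alpha + 1.0
radii = (r_min**a + u * (r_max**a - r_min**a))**(1.0 / a)

# Sputtering in hot gas: every grain loses the same radius r0
# (the assumption quoted above); grains smaller than r0 are destroyed.
r0 = 0.02
sputtered = radii - r0
sputtered = sputtered[sputtered > 0]

# Compare the size histograms before and after.
bins = np.linspace(0.0, r_max, 40)
before, _ = np.histogram(radii, bins=bins)
after, _ = np.histogram(sputtered, bins=bins)
for lo, n_b, n_a in zip(bins[:-1], before, after):
    print(f"r ~ {lo:5.3f} um: before {n_b:7d}  after {n_a:7d}")
```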

I had a computer hang (for reasons unknown; I don't see that much), so I missed some discussion. Gerson asked whether there are other astrophysical effects of this dust, and whether anybody has looked for it. Saul suspects that Aguirre has done this literature search, but it might be something worth somebody doing.

The discussion has moved on to dust in clusters... if the dust is localized in clusters, can we see the difference between supernovae where we're looking through lots of clusters, and where we aren't? Peter says that Adam Riess has done something with MLCS, looking at supernovae which are behind clusters, and has seen no evidence for it. Peter says there will be a kink in the diagram for supernovae behind clusters versus in front of clusters... using well known nearby clusters like Fornax, Coma, Virgo. If you're looking for a 0.4 mag effect, you only need to know the distances to a couple of tenths. Greg notes that in the dust literature, nobody's using limits based on SNe Ia seen behind some of these clusters. Greg is pretty sure that there is a detection of dust in clusters; Peter asserts that it's not 0.4 mag, but it might be 0.05 mag. Greg says that you go through, on average, less than one cluster on a random path length to z=0.5.

Using SNe II to measure dust.

Saul notes that it's probably worth some student, or advisor with student, working on this, as there are some interesting questions. Saul wants to mention one other method, which Peter and Greg came up with. Peter talks about it. Peter notes that Type II SNe, when they first go off, are incredibly hot blackbodies; they are blue and mostly featureless. When they are so hot, it doesn't matter if they are 25,000 or 45,000 K; they are in the Rayleigh-Jeans regime, where changing the temperature by 10,000 K changes the colors by less than 1%. This has been used for many years to judge the extinction to young SNe II (young meaning up to a week after max). You can then make multicolor observations from B to z. You know what the color should be, since it's blue and featureless; you measure the difference, and that tells you the reddening. The problem with Type II supernovae is that they tend to occur within dusty areas of galaxies. However, if you do enough colors, you can differentiate between the extinction from the Milky Way, the host galaxy (red dust), and intergalactic red dust. To do this, you'd have to stage a search for SNe II that has a very short baseline, maybe a week separation on a 4m telescope. You're not looking for huge increases, and you have to survey a large part of the sky (say 10 square degrees), down to 25th magnitude.
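
(Another note-taker sketch, not Peter's calculation: compare the synthetic colors of a hot blackbody at 25,000 K and 45,000 K. The band wavelengths below are just representative effective wavelengths; the real method uses broad filters and redshifted bands.)

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI: Planck, c, Boltzmann

def planck(lam_m, t_k):
    """Blackbody B_lambda, arbitrary normalization."""
    return lam_m**-5 / np.expm1(H * C / (lam_m * KB * t_k))

def color(t_k, lam1_m, lam2_m):
    """Synthetic narrow-band color (mag) between wavelengths lam1 < lam2."""
    return -2.5 * np.log10(planck(lam1_m, t_k) / planck(lam2_m, t_k))

# How much does the color of a hot photosphere move between 25 kK and 45 kK?
bands = {"B": 440e-9, "V": 550e-9, "R": 650e-9, "I": 800e-9, "z": 900e-9}
pairs = [("B", "V"), ("V", "I"), ("I", "z")]
for b1, b2 in pairs:
    dc = color(45_000, bands[b1], bands[b2]) - color(25_000, bands[b1], bands[b2])
    print(f"{b1}-{b2}: color shift {dc:+.3f} mag between 25 kK and 45 kK")
# The hotter the photosphere, the less the colors depend on temperature
# (the Rayleigh-Jeans limit), so a measured offset from the expected
# hot-blackbody color can be attributed to reddening.
```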

Ariel wants to know how conclusive all of this would be. Peter says you'd have to try to do 1-2 percent photometry on 23-24 mag objects (21st in J). It's not easy, but it's potentially powerful. Peter and Greg estimate that you'd get at least a dozen in one night of searching. You'd do the search in the B band (or maybe V band). (That's the rest-frame U, which will select against finding SNe Ia.) Ariel also wants to know how much spectroscopy you'd have to do to know that you're featureless out into the IR. This is harder than just screening (Type II vs. Type Ia). Peter says if you do what we already do at Keck (0.5-1 microns), that's good enough. The first feature you'll see is the hydrogen; if you don't see that, then you're probably OK.

Saul asks about AO. Greg says that 1% photometry would take about 1/2 hour if you have 0.2-0.3" seeing.

Saul thinks that, hey, you might be able to use this to cross calibrate between the filters. Greg says you have to know a priori that you don't have dust to do that, and if you know that... never mind.

Peter also notes that you could go back a month later and get spectroscopy at Keck, and photometry elsewhere, and get EPM (Expanding Photosphere Method) distances to the supernovae.

Aside: Gerson wants us to start writing internal memos as we do studies of things, so that everybody else knows about them.


Organization

We now go back to the introduction and introduce everybody and discuss the goals for today. Saul wants to spend "just a couple minutes" on distant future goals.

We go around the room and introduce everybody. Details not archived. We spent nearly 20 minutes on it. At the end, Saul pointed out the part of the agenda that lists the goals for the meeting. Presumably they'll be on the web somewhere near where this ends up.

Short break.


Inspirational Talk: Pie in the Sky, or the Satellite

This is supposed to be the point where you get all excited. But first we pause, while we try to get everybody back in here.

Saul thinks that supernovae are still the most direct cosmological tool out there. He thinks it is not out of line to try to push this to its extreme, and measure the cosmological parameters as well as possible with just supernovae. In this scheme, the supernovae would then become the benchmark to which the other methods would have to compare (for those parameters which SNe can measure). A second goal would be to explore what the dark energy (cosmological constant or something else) might be.

SNAP/SAT (Saul even has a picture!) is the satellite. I didn't write down the acronym expansion fast enough. We're talking a 1.8m aperture (monolithic mirror) telescope, with a 1 square degree field. This would be a 36k by 36k mosaic CCD in total, built from 2k by 4k 10-micron CCDs (0.1"/pixel). The 1.8m was chosen to be the smallest telescope that could still do the spectroscopy. These would be LBL CCDs, with their good efficiency out to about 1 micron.
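
(Quick back-of-the-envelope check of those mosaic numbers, with 2k and 4k taken as exactly 2000 and 4000 pixels for simplicity; the talk didn't give exact device dimensions.)

```python
# Back-of-the-envelope check of the SNAP mosaic numbers quoted above.
pixels_per_side = 36_000          # 36k x 36k mosaic
pixel_scale_arcsec = 0.1          # 0.1 arcsec per pixel

side_arcsec = pixels_per_side * pixel_scale_arcsec
side_deg = side_arcsec / 3600.0
print(f"field side: {side_deg:.1f} deg -> area {side_deg**2:.1f} sq. deg.")

# Tiling the mosaic with 2k x 4k devices (2000 x 4000 assumed here).
n_ccds = (pixels_per_side / 2000) * (pixels_per_side / 4000)
print(f"roughly {n_ccds:.0f} 2k x 4k CCDs to tile the focal plane")
```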

The spectrograph would have a pickoff mirror and a 3-channel spectrograph; blue, red, and IR (using a HgCdTe chip). The goal is to have the spectrograph photometrically calibrated so you can read off synthetic photometry....

The concept is to get a full sample of supernovae between z=0.3 and 1.7, with a subsample of 200 SNe at z<0.15 (discovered from the ground, followed from space). This would include nearly continuous monitoring of some fields: 2 sq. deg. to mag_AB=28.5, 10 sq. deg. to 25, 100 sq. deg. to 24. The 200 nearby ones would be followed very fully in spectroscopy, allowing you to construct very good K-corrections for our specific filters (whatever they turn out to be).

The science goal is to measure Omega and Lambda to 1% under the flat-universe assumption. Without constraining curvature, you can measure Omega_M to 0.02, Omega_Lambda to 0.05, and even the curvature to 0.05. With regard to the "dark energy", we could constrain the equation of state (w) to 5% (for Omega_M=0.3 and a constant equation of state). You can also start plotting w as a function of z; the best place to study this is between z=0.3 and 0.8, where we'd have huge quantities of supernovae.
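
(To see where the w leverage comes from, here's a sketch under simple assumptions, not the actual SNAP error forecast: for a flat universe with constant w, the dark energy enters the expansion rate as Omega_DE(1+z)^(3(1+w)), and you can watch how the Hubble-diagram difference between nearby values of w builds up with redshift.)

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458

def dist_mod(z, omega_m, omega_de, w, h0=65.0):
    """Distance modulus for a flat universe with constant dark-energy w."""
    def inv_E(zp):
        return 1.0 / np.sqrt(omega_m * (1 + zp)**3
                             + omega_de * (1 + zp)**(3 * (1 + w)))
    integral, _ = quad(inv_E, 0.0, z)
    d_l_mpc = (1 + z) * (C_KM_S / h0) * integral   # flat universe
    return 5.0 * np.log10(d_l_mpc * 1e6 / 10.0)

# Compare w = -0.9 against w = -1 (cosmological constant) at fixed
# (Omega_M, Omega_DE) = (0.3, 0.7). Most of the difference accumulates
# below z ~ 1, where the dark energy is dynamically important.
for z in (0.1, 0.3, 0.5, 0.8, 1.2, 1.7):
    dmu = dist_mod(z, 0.3, 0.7, -0.9) - dist_mod(z, 0.3, 0.7, -1.0)
    print(f"z = {z:4.1f}: delta mu (w=-0.9 vs -1) = {dmu:+.4f} mag")
```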

There was a discussion. Much later, Ariel came back to try to show that the Turner & student limits were wrong... specifically, he thinks you should be able to strongly rule out positive w at large z.

Saul notes that there are other cosmological parameters one could play with, plus things like weak lensing, galaxy clustering, strong lensing statistics, EPM from Type II, blah blah blah. There is other science you can do with this, such as GRB optical counterparts, MACHO optical counterparts and proper motion, target selector for NGST, blah blah blah.

With regard to our systematics: dust and extinction would become something measured rather than a systematic error bound. The key issue, and the one which is hardest to tell people about, is evolution. We'll have to look at all the different subpopulations of environments and supernova types. Metallicity, progenitor age, etc., blah blah blah. And, finally, measurement systematics.

Saul thinks that this is all most doable because of the development of the LBNL CCDs, which he says we will hear about later.


Lunch

We all went to lunch.


Evolution

My headers aren't consistent. Since we're jumping around in topics, it's probably hard to get it right. We've even touched on this topic before....

Peter on Progenitor and Metallicity Models

Evolution is the other big question mark (in addition to dust) in our current results. Peter tries to explain what some theorists have seen. There are two different progenitor models. One is a double-degenerate model, which perhaps could be affected by evolution in terms of the masses of the two white dwarfs. If the two coalesce, you won't always have two guys that just barely make a Chandrasekhar mass. Sometimes you might get more, and you might see differences between spirals and ellipticals (with spirals more likely to have pairs that sum to more than a Chandrasekhar mass). If this is the dominant source of SNe Ia, it should affect the rates as you go to higher z as well, since you have to have a universe that's been around long enough to statistically make the two WDs.

The other progenitor scenario is the hydrogen accreters. A WD accretes hydrogen, which gets converted to He and then C and O, and builds up until you hit the Chandrasekhar mass. One thing that can vary is how many metals there are on the donor star. From that star, metals can build up on the outside of the accretion disk and the white dwarf. The other question is metals down inside the white dwarf; how does that affect the explosion mechanism? It does, in various ways, roughly by the amounts we see in the stretch variations. (Basically, you make different amounts of nickel.) Peter Hoeflich then took the step of assuming a linear correlation between metallicity and redshift.

That's complete hogwash, but he's a theorist.

--P. Nugent

Hoeflich found that as you go further out, to higher redshift, you see fainter and fainter, narrower and narrower (lower-stretch) supernovae. This doesn't match what we see, though, because we see a range of stretches out at z=0.5. (And, anyway, we correct for stretch.) Peter says the quick, easy answer is that the range of metallicities we see nearby is so large, and there clearly isn't any bias at z=0.5, that this has to be moot at some level.

Peter shows a paper by Lentz et al. (including himself). Eric (Lentz) took supernovae, varied the metallicity from 1/30 to 10 times solar, and synthesized spectra (using Peter's code). It's based on a solar-abundance atmosphere made by Ken Nomoto, and converted (something) to carbon and oxygen. The luminosity was frozen, and only the metallicity on the outside was varied. The biggest differences are in the UV. Higher-metallicity models are greatly depressed relative to the others. (More iron in the atmosphere; iron and cobalt lines in the UV depress the spectrum.)

In the optical region, the silicon feature changes with metallicity. (Both temperature and metals can affect this, Peter says.) The amount of iron will also change where you measure the trough. For higher metallicities, the trough moves to the red (by about 100 angstroms over the full tested range of metallicity). (Line blanketing.) The same happens to a couple of iron features. Peter wants to look at the spectra we have, in the hope that we caught enough of them at a similar phase (especially with our recent nearby campaign), and see if we can see these sorts of variations. (And, then, use them to compare to brightness.)

Rise/Fall Time

Saul mentions that Peter Hoeflich thinks that evolution would change the rise time versus the fall time. If you can constrain that, Hoeflich says you can constrain evolution.

Along the lines of all this evolution stuff, another task is to have somebody figure out how to break down our galaxy environments and such, to determine what subsets of supernovae we want for all these sorts of tests.

This leads us to...

Gerson's composite lightcurve stuff

(Most of this is probably in previous normal Wednesday meeting notes. If fatigue allows, I will attempt to get all the information here again.)

Gerson first plots a width distribution, which is the product of the stretch and (1+z). Most of the difference in the distribution between our data and the Calan/Tololo supernovae is explained by the (1+z). Taking out (1+z), they mostly agree; Calan/Tololo has a couple of very low stretch supernovae, and we have one very high stretch supernova. Except for those, most of the supernovae are between a stretch of 0.8 and 1.2.

He next plots lightcurve width (s(1+z)) vs. 1+z. It falls very nicely along a 1+z line. The scatter is mostly the effect of the stretch. He then divides out 1+z, and plots stretch vs. (1+z).

What we do in the Minuit program is to fit each supernova to the template lightcurve. A supernova measured in the red is fit to a blue lightcurve, with a K-correction applied. Gerson uses the maximum time from these fits, and then normalizes the peak of each curve to unity. He then plots all 35 of our supernovae (those between z=0.3 and 0.7, where R maps to B), and all the points from the Hamuy supernovae. These are all effective blue points (i.e. K-corrected). You notice that our data sit outside the Hamuy data, but that is mostly due to (1+z). Gerson plots both all individual points, and then data where all points within one day are averaged (one set at a time). There it's clear that our data sit outside the Hamuy data.

Gerson then divides the timescale by 1+z, and now the Hamuy and SCP points fall on top of each other, although there is still a considerable amount of scatter.

Isobel notes that there's a circular argument here, in that the K-corrections were applied using the (1+z) time dilation... so the conclusions are perhaps not that conclusive when you then divide by (1+z). Somebody should think about this. Peter says this could make some difference.

Gerson next plots individual points and averaged points where the timescale is divided by s(1+z). In doing so, he found that there were measurable deviations from the Leibundgut template. As a result, Gerson and Don have come up with a new template lightcurve. That done, you can iterate... refit, look again, etc. Probably, as it turns out, a single iteration is enough.
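
(A minimal sketch of the composite-lightcurve construction described above; the supernova numbers below are invented placeholders, and the real fits use Minuit and the K-corrected photometry.)

```python
import numpy as np

def composite_epochs(obs_dates, t_max, stretch, z):
    """Map observed dates onto the rest-frame, stretch-corrected timescale
    used for the composite lightcurve: (t - t_max) / (s * (1 + z))."""
    return (np.asarray(obs_dates) - t_max) / (stretch * (1.0 + z))

def normalize_fluxes(fluxes, peak_flux):
    """Normalize each supernova's lightcurve so its fitted peak is 1."""
    return np.asarray(fluxes) / peak_flux

# Hypothetical example: one high-z and one nearby supernova collapse onto
# the same composite curve once time dilation and stretch are divided out.
sn_far = dict(dates=[50.0, 60.0, 75.0], flux=[0.8, 1.0, 0.55],
              t_max=60.0, peak=1.0, s=0.95, z=0.55)
sn_near = dict(dates=[10.0, 15.0, 25.0], flux=[1.6, 2.0, 1.1],
               t_max=15.0, peak=2.0, s=1.05, z=0.02)

for sn in (sn_far, sn_near):
    t = composite_epochs(sn["dates"], sn["t_max"], sn["s"], sn["z"])
    f = normalize_fluxes(sn["flux"], sn["peak"])
    print(np.round(t, 2), np.round(f, 2))
# Binning these (t, f) points from all supernovae (e.g. one-day bins) and
# comparing to the template is how deviations from the template show up;
# one can then refit the template and iterate.
```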

The other thing this does is allow us to find the day of the explosion, which is 17.5+/-0.4 days times the stretch before the day of max. Gerson says they will also give the value with respect to half height. (The problem with doing it with respect to max is that max isn't really that well constrained; day 0 on our template is, but that may not really be exactly max.)

Gerson also plots residual as a function of stretch for ten different time intervals. The residuals all look pretty good and flat. This shows that, on the whole, there are no regions of the curve where the shape is stretch dependent.

Gerson has run a fit to the entire bunch (stretch and 1+z corrected). This gives a stretch of 0.998 (which is circular). He's also fit just the first half (time before max up to just over max), and the second half (just before max to the end) of the data. With just the first half, he gets a stretch of 0.997; with the second half, he gets a stretch of 0.996. Both are good to about 0.01. In other words, the stretch works equally well for the early data and the late data. Saul notes that the theorists are surprised; they didn't expect that the stretch would be the same on the rise and fall, because they had thought those were determined by two different parts of the physics problem. This implies that the two must be correlated.

Drell

Saul points out that there was a 45-page astro-ph paper by Drell et al. that argued that our cosmological constant was due entirely to evolution. The argument is basically that since you see differences between supernovae at different redshifts, evolution explains everything. The first difference they claim is that with the low-z supernovae you see an improvement in the Hubble line residuals if you apply the stretch correction, whereas with the high-z supernovae you don't. However, we even noted in our paper that the measurement errors on the high-z supernovae are large enough that you aren't surprised not to see any statistical improvement. The second claimed difference is that the three analysis techniques (delta-m15, stretch, and MLCS) don't give the same corrections for the supernovae, and since the differences increase with z, there must be evolution. Alex Lewin has been looking into this, and Saul argues that we understand where the differences are coming from.

Drell et al. take these differences and say you must assume there is evolution, and put in an arbitrary equation for how the magnitude changes with evolution. They then do the fit and, surprise surprise, find that you don't need any cosmological constant, merely evolution.

Alex Lewin shows us some of what she's done about all of this. She plots corrected magnitudes as a function of z when the supernovae are fit with different methods. These are the data for the 10 supernovae from the Dark Side's data. She has points for MLCS, our stretch fitting, and "template fitting" (which is the delta-m15 fit thingy). It seems that MLCS was getting bigger corrections than the other two methods. The fit apparent magnitudes all agreed pretty well, but the corrections disagreed. The MLCS method generally finds the supernovae wider, so they get corrected to be fainter, which is one reason why the MLCS method would be biased towards fainter supernovae. Reynald asks why they got the same Omega/Lambda as we did; Greg notes that that would happen if there were a similar bias in the Hamuy supernovae. The two biases would cancel out.

Alex also plots the 10 Dark Side supernovae on a Hubble diagram, based on our fits to their data. Their points appear to the eye to fall consistently with our points.

This is an ongoing thing, but it looks like there are answers that will be able to address the points that the Drell paper raises.


Nap time

But that's always true.


Upcoming Analysis Problems

Saul wants to state these. These are topics that we will discuss later. These are the remaining weak points in the way we present our analysis and measurement steps.

The K-corrections still have this problem that we have to figure out the steps of all this. ("Energy" Ariel says. I agree. It's pretty simple. The whole filter was defined, way back when, in terms of Energy.)

The next point is the zero points of the magnitude systems. Saul would like a group memo on how well we know the absolute flux of the magnitude zero of the B-band and R-band magnitude scales. How do you know what the actual energy of B=0 is? Greg points to the Hayes and Latham paper, which is where it was done. Greg is not worried. Saul is less convinced.

Intrinsic color versus stretch and time. That isn't very well pinned down. Some of this will come out of the nearby and HST datasets, hopefully.

U-band lightcurve; not very well known at all. We only have something like 2 supernovae.

Papers:

Saul asserts that all of these things are important so that we'll have a final clean analysis chain.

I'm not sure where this fits in, but Sebastien points out a paper by (somebody) who claims that we can get big differences from weak lensing. Saul notes that ages ago we put a nail in the whole weak lensing thing... what's the deal? Unclear.


Overview of Current Projects

High redshift, very high redshift, intermediate redshift, low redshift, and very low redshift supernovae.

High-Redshift Supernovae

Our traditional range, z=0.4-0.8 or thereabouts.

Greg shows Omega/Lambda plots, with simulations. He shows how much things should crunch down with the 11 HST supernovae. This does not include the additional ground-only supernovae (though those won't make that big a difference). These are supernovae from Sets F and G, December 1997 through March 1998.

The most pressing thing is to get all of the photometry reduced, so we can fit it and see if it confirms what we've already published. Rob thinks that the ground-based lightcurves could plausibly be completed by September, which everybody thinks sounds too late, but Rob is trying to be realistic. Isobel says nothing has been done on the spectra from Sets F and G since we were at the telescope. We have rough values from the telescope, but nothing more; that needs to be done.

Which leads us to...

Spectra in General

The spectroscopy paper hasn't changed in the last year. We have to decide what is going to go into it. The first thing is presenting the data, which is important. The other thing to decide is what analysis to do. Regarding which data: do we want to present all 42, or do we just want to present a subset of the good ones? There is a sentiment to write a data paper that just presents the data for all of the supernovae, with only small amounts of analysis on the best ones. A heavier analysis based on the best supernovae ought to be a separate paper, done later. There are some issues with Filippenko, because he was going to write up the supernovae from the first two sets.

The question remains as to specifically which analysis we can do. Possibilities are: show that the spectral dating works within the errors, show that you get a better match to Ia than to Ic or II, show that the stretch determination from the spectra matches what you get from the lightcurve. So far, Isobel's been composing a sort of time sequence, comparing 92A to a whole bunch of distant supernovae plotted according to their epoch. Qualitatively the features seem to vary in roughly the same way.

Isobel also shows a plot of lightcurve date versus spectrum date (done crudely). There is definitely a correlation, but it's not perfect. There are no error bars on the plot at the moment.

Greg wants to know if there's anything more to discuss with regard to theoretical interpretation. The first thing Peter wants to do is take the 10 best spectra with host galaxy redshifts, and compare the velocities of the features to those of nearby spectra from SNe of the same stretch. The other thing is what he's working on with Kirsten: developing metrics for doing feature-based fits to day of maximum and stretch. The latter project is something Peter sees as being done by December; the former is, as Peter describes it, "quick and dirty." Peter also wants to look at the things we have spectra of which aren't Ia, to see if we can do anything with them.

HST

We haven't gotten very far with the HST data, mostly because we've been distracted by many other things. Greg has a memo on the web which gives an ad hoc reduction plan and describes some of the problems we are having or may have to deal with. Refer to that document....

The biggest thing, it sounds like, that differentiates this from plug-and-play is dealing with the HST corrections: the long/short thing, the charge transfer, etc. All of this may be a thorny issue. (Geometrical transformations are ugly too.) Gerson notes that Kirshner said he thinks he has somebody at Harvard who's figured all of this out... but Saul notes that we said the same thing with our last HST paper, and suspects that Kirshner may just not know about all of these problems. Wendy Freedman's Hubble Key Project team, as of two weeks ago, still don't think they have it fully worked out, and it is still their biggest systematic error.

Shane has gotten through a lot of the NICMOS data. He shows a summary of what we've got for Sets F and G. There's a total of six supernovae, and we have references for all of them. Some of the images have problems, especially the Set F images. There's a "pedestaling" (what I used to call "quadrant floating" or "bias floating" back in my IR Caltech grad school days) problem, but there is a standard correction for that these days. There are also problems with persistent cosmic rays. (I know from my grad school days that persistence is a general problem.) Then there's shading, which happens in images taken in "accum" mode. You get a bias that changes nonlinearly across a single quadrant. That's nasty... there are techniques.

Most of Set G wasn't zapped by most of these problems... plus, we went longer and we had more exposures of each supernova. Set F will be more trouble.

Shane gives a list of what he's done. He says anomalies are understood and corrections are applied. He's currently building reference images. Photometry is next, and he thinks he will have something by the middle of July.


Very High Redshift

This constitutes one supernova right now: Albinoni. References in September at Keck, search in October. 1/2 total exposure, decent seeing (0.6-0.7"). Two weeks after the search, we had a spectroscopy run. We looked at several objects, Albinoni first, with seven 1/2-hour spectra. By the end of the night we knew we had an [O II] 3727 emission line at z=1.2. We also found Brahms at z=0.86, in a beautiful cD galaxy (thus probably unextinguished). Then there's Strauss, at a redshift of about 0.1, found by Peter blinking at the telescope....

Greg shows Albinoni's spectrum, which looks just like noise. He also shows the final spectrum, which involved Greg spending a week doing nothing but reducing data, utterly concentrating, more than 10 hours a day. He plots it over SN 1981B, so it's undoubtedly a Ia. After the discovery, we were granted some Keck director's time. We have another Keck (I) point. We also got an additional 10 HST orbits from the director's discretionary pool. We have four HST I-band points, and one good HST J-band point.

I'm mentally drifting. It's nearly 5:30.

Greg wants to have something submitted on this one before the other group does their deepsearch. Probably there will be a discovery and data presentation paper, together with a rate calculation, but no cosmology. For cosmology (to be done later), there will be the issue of having to figure out a rest frame U-band lightcurve; in theory that's on disk from the nearby search, but a lot of analysis must be done.

Rates

Reynald a couple of months ago had draft 0 of the rates paper. This is something he did without discussing it with anybody. He wants to do the rate per unit volume as the primary measurement. He's using the 1995, 1996, and 1997 data, including Set F (1997 November/December). The redshifts he used are from the paper, or from Isobel's spectrum page (for Set F). The magnitudes are from the lightcurves (from the paper?), except for Set F, where he used the APS magnitudes from searchscan, but he may have applied some correction which is not really clear. Distance to host he took from the searchscan software... Robert Quimby says that he's done this for real, so we should get hold of his data.

Total area covered is 25 square degrees.

He plots the number of supernovae as a function of redshift, together with a Monte Carlo histogram calculation for a given cosmology (H0=100, Omega_M=0.3, Omega_Lambda=0.7), assuming no evolution (i.e. volume and efficiency are all that go into it). Isobel and others note that they want to see the rates as number of supernovae per unit volume as a function of redshift.
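
(For context, a schematic of the kind of no-evolution expectation behind that histogram, my sketch rather than Reynald's code; the rate, efficiency function, and control time below are placeholders. The expected count in a redshift bin is the rest-frame rate times the comoving volume of the bin, the surveyed solid angle, the detection efficiency, and the control time divided by (1+z) for time dilation.)

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458

def comoving_distance_mpc(z, omega_m, omega_l, h0=100.0):
    integrand = lambda zp: 1.0 / np.sqrt(omega_m * (1 + zp)**3 + omega_l)
    integral, _ = quad(integrand, 0.0, z)
    return (C_KM_S / h0) * integral  # flat universe assumed here

def expected_counts(z_edges, rate_per_mpc3_yr, area_sq_deg,
                    control_time_yr, efficiency, omega_m=0.3, omega_l=0.7):
    """Expected SN counts per redshift bin, assuming no rate evolution."""
    frac_sky = area_sq_deg / 41_253.0        # fraction of the full sky
    counts = []
    for z_lo, z_hi in zip(z_edges[:-1], z_edges[1:]):
        d_lo = comoving_distance_mpc(z_lo, omega_m, omega_l)
        d_hi = comoving_distance_mpc(z_hi, omega_m, omega_l)
        shell_vol = 4.0 / 3.0 * np.pi * (d_hi**3 - d_lo**3) * frac_sky
        z_mid = 0.5 * (z_lo + z_hi)
        # (1+z) stretches the observer-frame control time relative to the
        # supernova rest frame, hence the division.
        counts.append(rate_per_mpc3_yr * shell_vol * efficiency(z_mid)
                      * control_time_yr / (1.0 + z_mid))
    return np.array(counts)

# Hypothetical numbers purely for illustration.
z_edges = np.arange(0.2, 1.0, 0.1)
eff = lambda z: np.clip(1.2 - z, 0.0, 1.0)     # placeholder efficiency
print(expected_counts(z_edges, 2e-5, 25.0, 0.1, eff))
```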

Reynald then does a few fits to figure out the rate. He gives an expected mean z, an observed mean z, and the overall rate per unit volume per year (in the rest frame). He also has the rates he gets when he assumes a power-law dependence on (1+z); there, the coefficients typically have errors of 1.5, so it is not very constraining. However, the rate of change appears to be quite small.

Reynald notes that as long as you stay at z<1 and within "reasonable" values of Omega_M and Omega_Lambda, the volume goes approximately as (Omega_M-Omega_Lambda). Assuming that the rate is constant, and fitting Omega_M-Omega_Lambda and the rate, he gets a limit on both. Omega_M-Omega_Lambda comes out to -0.5 +1.5/-0.4, and the rate comes out at 1.50 +2.31/-0.65 (in whatever those units are... he told me, but I wasn't quick enough to type it in). He's got a confidence region for this fit, which he shows as well. Note that all of this is under the assumption that the rate is constant; Isobel objects that this isn't very meaningful, because there isn't any reason to assume that the rate is constant. (In contrast to our Omega_M/Omega_Lambda fit, which is based on the assumption that SNe Ia are standard candles, or at least calibratable to same.)
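
(One could test that claimed Omega_M-Omega_Lambda degeneracy of the volume numerically with something like this sketch; the grid and the z<1 cutoff are arbitrary, and this is mine, not Reynald's fit.)

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458

def comoving_volume_to_z(z_max, omega_m, omega_l, h0=100.0):
    """Comoving volume out to z_max per steradian, allowing curvature."""
    omega_k = 1.0 - omega_m - omega_l
    inv_E = lambda z: 1.0 / np.sqrt(omega_m * (1 + z)**3
                                    + omega_k * (1 + z)**2 + omega_l)
    dh = C_KM_S / h0
    def d_m(z):
        integral, _ = quad(inv_E, 0.0, z)
        if omega_k > 0:
            return dh * np.sinh(np.sqrt(omega_k) * integral) / np.sqrt(omega_k)
        if omega_k < 0:
            return dh * np.sin(np.sqrt(-omega_k) * integral) / np.sqrt(-omega_k)
        return dh * integral
    # Differential comoving volume per steradian: dV/dz = D_H * D_M^2 / E(z).
    integrand = lambda z: d_m(z)**2 * dh * inv_E(z)
    vol, _ = quad(integrand, 0.0, z_max)
    return vol

# Scan pairs with the same Omega_M - Omega_Lambda and see how much the
# volume to z = 1 actually varies along that direction.
for om, ol in [(0.2, 0.5), (0.3, 0.6), (0.4, 0.7), (0.5, 0.8)]:
    print(om, ol, om - ol, f"{comoving_volume_to_z(1.0, om, ol):.3e}")
```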

Reynald also tries to do the rate as a function of luminosity, and he says he does not see any real luminosity dependence... but now he's mumbling something about not really being able to deduce anything from that. Isobel's objecting that there's some model in this as well, but I'm drifting a bit, so I'm no longer really sure what's going on.
