This was one of the SNe we sent to HST. Greg has looked at the Jan 5 and Jan 11 HST images and cannot find the SN. This was one which was in what looks like a tidal tail between two interacting galaxies. The spectrum was whelming at best, more likely underwhelming. The ground-based lightcurve is ugly: the one point we have is consistent with mag 24.8, and the bright side of the errorbar (combining four points from two days at CTIO) is at magnitude 24.3.
The issue is what to do with the HST time on 97236. At this point, the only orbits we can change are the last scheduled point. We can get one HST point in the 8-hour field. The consensus seems to be that we should just put it on the other 8-hour SN rather than getting a single HST point on a gratuitous SN 70 days after discovery.
The other SNe from HST look good; all are well separated from their hosts. 97226 has a very weak host. 97201 has a clear host, but the SN is far from it (in the HST image).
Saul says you can think of it as having wasted 10 orbits... OR as having done very well by using over 3/4 of your orbits correctly.
For this interacting system, if we have somebody interested in this kind of thing, the data may even be useful to them: we'll have great colors on that system. Saul wonders if we can do the Neta Bahcall odds based on this group of galaxies.
When Rob redid the lightcurves on several SNe, the reddened peak went away; now it looks like a histogram peaked at zero. This is a first look. Rob's theory (he hasn't checked this) is that the peak we saw before was due to a missing airmass term in the I-band calibration of the 95 SNe. Note that it is primarily the colors which changed.
Instrumental correction is becoming a big deal again. Apparently CTIO keeps a log of color corrections; we need to try to find it. Greg says that based on past experience he's been able to determine color terms of ~0.03 at the several-sigma level, whereas our calibration data is nowhere near good enough to determine these. The ones Rob has seen tend to be positive, which agrees with what Greg has seen. Peter's theoretical curves have a negative slope.
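(For concreteness, a minimal sketch in Python of the kind of fit involved, not our actual calibration pipeline; the model dm = zp + k*X + c*color and all of the names are illustrative.)

    import numpy as np

    # Toy standard-star fit: dm is instrumental-minus-catalog magnitude,
    # X is airmass, color is catalog B-V.  All inputs are illustrative.
    def fit_color_term(dm, X, color):
        # Model: dm = zp + k*X + c*color, solved by linear least squares.
        # c is the color term; k is the airmass (extinction) term.
        A = np.column_stack([np.ones_like(X), X, color])
        (zp, k, c), *_ = np.linalg.lstsq(A, dm, rcond=None)
        return zp, k, c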
Do we propagate it through even if the term is less than 0.04? Saul proposes that for the first paper we spot check; if it's small, say so and don't apply it. The final version can put these things in.
Peter will check with CTIO to find out its color terms, and Rob will check with WIYN, both to figure out what the "assumed" values of the color terms are.
Issue of B-V vs. stretch. We have some SNe outside the range of nearby-SN stretches, meaning we don't really know their unextincted color. There are some new nearby ones, but they belong to CfA/CTIO, and there is a question of whether we can get ahold of that data.
Saul remarks that we do have to redo our whole study throwing out any SNe for which we don't have any nearby calibration data, including the 10 or so whose stretches are broader than anything seen nearby. Then there's also the issue of treating them as standard candles with an intrinsic dispersion, i.e. pretending we don't have any clue what the correlation between lightcurves and luminosity is.
New issue: Peter has been doing his own version of the Omega-Lambda fits. He has been doing bootstrap resampling using our set of 38 SNe which give a decent chi-square. From the set of 38, you randomly pick 38 SNe, allowing repeats; you calculate the chi-square minimum for that sample and plot it as a point. Do that 10,000 times, and fill up your Omega-Lambda grid. Peter compares this to our chi-square contour for all of the SNe done normally. What's interesting is that the resampling gives almost exactly the same result as the normal chi-square fitting. This would indicate that the distribution of the SNe is consistent with what their errorbars indicate, which is comforting.
(That he did this indicates that Peter is the group's bean counter.)
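(A minimal sketch of the resampling procedure as described, not Peter's actual code: the distance model is a crude low-redshift expansion, and H0, the grid, and the fake 38-SN sample are all illustrative.)

    import numpy as np

    rng = np.random.default_rng(0)
    C_OVER_H0 = 2.998e5 / 70.0          # c/H0 in Mpc (illustrative H0)

    def mu_model(z, om, ol):
        # Low-z expansion of the luminosity distance, with q0 = om/2 - ol;
        # a stand-in for the full integral.
        q0 = om / 2.0 - ol
        dl = C_OVER_H0 * z * (1 + (1 - q0) * z / 2.0)
        return 5 * np.log10(dl) + 25

    def chi2_min(z, mu, sig, grid):
        # Brute-force grid search for the chi-square minimum.
        return min(grid, key=lambda p: np.sum(((mu - mu_model(z, *p)) / sig) ** 2))

    def bootstrap(z, mu, sig, grid, n_trials=10_000):
        pts = []
        for _ in range(n_trials):
            i = rng.integers(0, len(z), size=len(z))   # resample, repeats allowed
            pts.append(chi2_min(z[i], mu[i], sig[i], grid))
        return np.array(pts)   # the minima fill up the Omega-Lambda grid

    # Fake 38-SN sample, purely to make the sketch runnable:
    grid = [(om, ol) for om in np.linspace(0, 2, 21) for ol in np.linspace(0, 2, 21)]
    z = rng.uniform(0.3, 0.7, 38)
    mu = mu_model(z, 0.3, 0.7) + rng.normal(0, 0.2, 38)
    points = bootstrap(z, mu, np.full(38, 0.2), grid, n_trials=1000)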
Peter is now working on trying to fit absolute chi (as opposed to chi-square). Right now he's having the problem that there are two huge spikes containing ~1/6 of all the points; he's still trying to track this down. The reason to consider this is the two obvious outliers we have.
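(A toy illustration of why absolute chi is attractive here: for a constant model, the chi-square (L2) estimate is the mean and the absolute-chi (L1) estimate is the median, so L1 barely notices a couple of outliers. The numbers are made up.)

    import numpy as np

    residuals = np.array([0.10, -0.20, 0.00, 0.15, -0.10, 3.0, 3.5])  # two outliers
    print(np.mean(residuals))    # L2 answer, dragged toward the outliers (~0.92)
    print(np.median(residuals))  # L1 answer, robust to them (0.10)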
There is a pileup on the Omega=0 axis right now. Peter says that the fits which would have gone to negative Omega piled up there; he asserts that the formula for distance doesn't work in that region. Others object that, mathematically, the region should continue. Don points out that there is a difference between the physical quantity and your best estimate of the physical quantity.
The really confusing thing is that Sebastien has been able to plot for negative Omega_M before, but Peter asserts that the program blows up. So what's the deal here? Peter is using a program which (who?) wrote to give the distance for a given Omega and Lambda. The question people are asking is whether some bias toward a physical universe was programmed into it.
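(To make the domain question concrete, here is a sketch of the textbook FRW luminosity distance, not the program in question. For Omega_M < 0 the quantity under the square root in E(z) eventually goes negative, so a naive implementation really can blow up there even though the plane continues mathematically. H0 and the integration grid are illustrative.)

    import numpy as np

    def lum_dist(z, om, ol, h0=70.0, n=1000):
        c_h0 = 2.998e5 / h0                 # c/H0 in Mpc
        zs = np.linspace(0.0, z, n)
        e2 = om*(1+zs)**3 + (1 - om - ol)*(1+zs)**2 + ol
        if np.any(e2 <= 0):
            # With om < 0 the (1+z)^3 term eventually dominates and E^2
            # goes negative: the integrand turns imaginary, which is
            # presumably where Peter's program gives up.
            raise ValueError("E^2(z) <= 0 along the line of sight")
        integrand = 1.0 / np.sqrt(e2)
        dz = zs[1] - zs[0]
        dc = c_h0 * dz * (integrand.sum() - 0.5*(integrand[0] + integrand[-1]))
        ok = 1 - om - ol
        if ok > 1e-8:      # open: sinh
            dm = c_h0/np.sqrt(ok) * np.sinh(np.sqrt(ok) * dc/c_h0)
        elif ok < -1e-8:   # closed: sin
            dm = c_h0/np.sqrt(-ok) * np.sin(np.sqrt(-ok) * dc/c_h0)
        else:              # flat
            dm = dc
        return (1 + z) * dm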
Another discussion is whether the region of Lambda~0.5, Omega<0 is a bouncing universe. Saul says he thought it wasn't; Peter says that it is. This goes into a mathematical discussion of imaginary numbers and units and cosh's and cos's and etc. etc.
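(Rather than reproduce the cos/cosh algebra, a numeric version of the criterion: a model bounces, i.e. has no Big Bang, if E^2(z) = H^2(z)/H0^2 reaches zero at some finite redshift, so that the scale factor had a nonzero minimum in the past. Parameter values below are illustrative; for what it's worth, this test comes out on Peter's side for that corner of the plane.)

    import numpy as np

    def has_bounce(om, ol, z_max=1e4, n=100_000):
        z = np.linspace(0.0, z_max, n)
        e2 = om*(1+z)**3 + (1 - om - ol)*(1+z)**2 + ol
        return bool(np.any(e2 <= 0))   # E^2 hits zero: can't trace back to a=0

    print(has_bounce(-0.5, 0.5))  # the Lambda~0.5, Omega<0 region: True, it bounces
    print(has_bounce(0.3, 0.7))   # an ordinary flat model: False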
Another issue: fitting in flux vs. magnitude vs. distance space. Depending on where you're fitting, you get a different answer. Don's assertion is that the measurements are made in flux space, and that the errors are more Gaussian there, so there is no good reason to fit in magnitude space.
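(A quick numerical illustration of Don's point, with made-up numbers: Gaussian scatter in flux becomes skewed, biased scatter in magnitude.)

    import numpy as np

    rng = np.random.default_rng(1)
    f = rng.normal(1.0, 0.2, 100_000)   # Gaussian errors around a true flux of 1.0
    f = f[f > 0]                        # magnitudes are undefined for f <= 0
    m = -2.5 * np.log10(f)              # the same data in magnitude space
    print(m.mean(), np.median(m))       # true value is 0; the mean comes out biased faint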
Outliers: the 38-SN and 40-SN sets (the extra 2 being our big outliers, 9733 and SN1) give very different answers with chi-square fitting, but (it seems) not with absolute value of chi.
Rob asks: is the zeropoint error included in the m_R which Don's fitting program returns?
Somehow we segued into a gigantic discussion about our current set of data and Rob's sadness over the fact that the fiducials measured by the lightcurve software don't look good. They are such that most of the INT and WIYN data from the beginning of January would be thrown out right now... this needs to be tracked down. Is it a problem in the lightcurve software?
There is also an issue of whether this could have ramifications for our search. We tried to push a search out to z = 0.8-1.0, and we kinda failed. The other group was trying to push a z = 0.5-0.6 search out to 0.8 by doubling exposure times, and they failed too; their SNe were all at 0.55.
Sebastien is working on why we didn't go as deep as we thought. He wants to find out whether our noise is what we expect from Poisson statistics.
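(Presumably the comparison is against something like the standard CCD error model; a sketch with illustrative numbers:)

    import numpy as np

    def expected_pixel_sigma(sky_e, read_noise_e, dark_e=0.0):
        # Poisson noise from sky and dark current, plus Gaussian read noise,
        # all in electrons per pixel; compare against the measured sky RMS.
        return np.sqrt(sky_e + dark_e + read_noise_e**2)

    print(expected_pixel_sigma(sky_e=400.0, read_noise_e=5.0))   # ~20.6 e-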
Intermediate search, redshift less than 0.25. Some small discussion; Greg wants to have a meeting just on that. He says that the followup time necessary goes as the fourth power of how far out (in redshift) you go.
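(A quick way to see that scaling, assuming background-limited imaging and low-redshift distances:)

    f \propto d^{-2} \propto z^{-2}, \qquad S/N \propto f\sqrt{t}
    \quad\Longrightarrow\quad t \propto f^{-2} \propto z^{4}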
Lots of collaborators.
Spectroscopic followup: the general idea is a backbone from the CTIO 4.0m and the CTIO 3.6m. We also have additional time at WHT and Lick, and maybe some in Tucson as well.
Which telescope to search with? Greg says the 0.9m at Kitt Peak has a 1 square degree field with 0.4" pixels. This would get us lots of z ~ 0.1 supernovae which we could then follow up with fewer resources. Greg is proposing a search in October at Kitt Peak, asking for two five-night blocks separated by a week in which to find SNe. There are some things to follow up on. This is predicated on having science-grade CCDs for the whole camera.