SCP Meeting Notes, 1997 July 30


Fits to Models with Errors

A statistician named Louis (I think) was talking to Saul before the meeting, so as we started we were discussing what to do with a fit where you have a covariance matrix on the model as well as on the data. Louis says you can just add them, so long as the model uncertainties are uncorrelated with the data uncertainties. (There was some talk about ill-conditioned covariance matrices which we didn't fully understand... but it seemed to have something to do with big correlated errors, which would in turn lead to there being fewer degrees of freedom than you had thought you had.) Don is worried that (a) we don't have a good way of representing the data uncertainties, and (b) as we do the fit, it affects the parameters and what goes where in the model and the model covariance matrix. It was decided that further discussion of this issue would happen offline.
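
(For the record, a minimal sketch of what "just add them" means for a linear model; the names here are illustrative, not our actual fitting code.)

    import numpy as np

    # Chi-square fit of a linear model y = A @ p when both the data and the
    # model carry covariance matrices. Per the statistician's advice, the two
    # covariances simply add, provided they are uncorrelated with each other.
    def fit_with_model_errors(A, y, C_data, C_model):
        C_total = C_data + C_model
        C_inv = np.linalg.inv(C_total)          # beware ill-conditioned C_total
        cov_p = np.linalg.inv(A.T @ C_inv @ A)  # covariance of fitted parameters
        p = cov_p @ (A.T @ C_inv @ y)           # generalized least squares
        resid = y - A @ p
        chisq = float(resid @ C_inv @ resid)
        return p, cov_p, chisq / (len(y) - A.shape[1])

An ill-conditioned C_total is exactly where the inversion above goes bad, which may be what the degrees-of-freedom remark was about.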

This led us into a discussion of the fact that fits to the new set of data pretty much all give a chi-square per degree of freedom of about 1/2. Don says there are two reasons why this might happen: one is that the errors were genuinely overestimated; the other is that there is a correlation which was not taken into account correctly. Some talk of Singular Value Decomposition to figure out how many significantly different data points you have. How many degrees of freedom DO you have? Don's position is that once we've put the diagonal elements on the matrix, we've taken care of that sort of thing. (The problem remains that our errors are too big.)
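
(A crude sketch of the SVD idea, assuming the question is how many effectively independent points a correlated covariance matrix leaves you; this is one possible measure, not an agreed procedure.)

    import numpy as np

    # Count "effective" independent data points from the singular values of
    # the correlation matrix: N for uncorrelated data, approaching 1 when a
    # single correlated mode dominates everything.
    def effective_ndof(C):
        sigma = np.sqrt(np.diag(C))
        R = C / np.outer(sigma, sigma)         # correlation matrix
        s = np.linalg.svd(R, compute_uv=False)
        return s.sum() ** 2 / (s ** 2).sum()   # participation ratio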


Wobbling Camera Concept Test Run Upcoming and CCD Update

Don's about to go to Lick (tomorrow) to do a check on the Steve Holland CCDs. They're going to do a continuous readout on a star to measure how fast it wanders about. This could be useful information for the eventual design of a "wobbling camera" which could correct some of the PSF using the equivalent of a tip-tilt correction (although it would probably actually move the CCD around... Don seems to say). Greg pinpointed Don's biggest worry: that the field of view over which you will be able to correct is too small. The wobbling probably can't help atmospheric seeing at optical wavelengths, given the small size of the isoplanatic patch, but it can do things like correct telescope wobble, tracking errors, etc.
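
(A sketch of the measurement itself, assuming the continuous readout gets chopped into a stack of short exposures; the array names are hypothetical.)

    import numpy as np

    # Track the star's flux-weighted centroid frame by frame and report the
    # RMS wander per axis, which is the number the tip-tilt / CCD-wobbling
    # design needs. `frames` is a hypothetical (n_frames, ny, nx) stack of
    # sky-subtracted cutouts.
    def centroid_wander(frames, pixel_scale=1.0):
        ys, xs = np.indices(frames[0].shape)
        cx = np.array([(f * xs).sum() / f.sum() for f in frames])
        cy = np.array([(f * ys).sum() / f.sum() for f in frames])
        # in arcsec if pixel_scale is arcsec/pixel
        return pixel_scale * cx.std(), pixel_scale * cy.std()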

There is also some talk about how we're going to have to come up with a plan for where the CCDs will go once we eventually have them: how many will Lick get, how many will go somewhere else? The cameras themselves will be telescope-specific, of course, so we can't just build one camera and bring it everywhere.


Our 1m Telescope on the Space Station

Saul and Carl say that they (we) (who?) are talking about putting a 1m telescope on the (putative) space station. It would have the same resolution as WFPC on the HST, and the CCD would be more efficient than WFPC2's, so the 1m telescope would do "as well as" the 2.4m HST (within about 50%). (Although, to me, this sounds like "sufficiently advanced technology.") Carl claims he will be head of the TAC.

We'll have to kiss your butt in a few years.

--P. Nugent to C. Pennypacker


SN 9784 HST Points and Errors

Gerson says that Greg is working on the errors on the HST points, which may change; we will then need to refit the 9784 lightcurve to reflect the new information. Apparently the readout noise had been ignored before: only the Poisson noise from the SN and sky itself, plus the Charge Transfer Efficiency (CTE) errors, were included. Peter seems to remember Gerson saying that the read-noise error was insignificant in comparison to CTE... but Greg is worrying about this now. In other words, it sounds like nobody fully remembers or agrees on what was done to get the errors on the HST points, and Greg is starting over from scratch. Current procedure: 2x2 box, aperture correction for a 1/2" circle, then apply the CTE correction for a small aperture (Greg will check to make sure he knows how big it is, but it may be a 2-pixel radius). At ANY rate, we believe that the error on the background is big enough that we want to use as small a radius as possible, which corresponds to a 4-box (2x2 pixels), and we have to compare that to the 12-box fit (which, if you do the math, is close to a 2-pixel radius), which is apparently what we have the CTE numbers for.
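
(To pin down what "current procedure" amounts to, a sketch of the 4-box photometry with the two corrections applied in that order; the correction factors, sky level, and noise terms are placeholders, not the real calibration.)

    import numpy as np

    # 2x2-box (4-box) photometry at the SN position: sum the box, subtract
    # sky, then apply the aperture correction to the 0.5" circle and the
    # small-aperture CTE correction. The error budget includes read noise
    # alongside the Poisson noise, the piece apparently dropped before.
    def sn_flux_4box(img, x0, y0, aper_corr, cte_corr, sky, read_noise, gain=1.0):
        box = img[y0:y0 + 2, x0:x0 + 2]
        flux = (box.sum() - 4 * sky) * aper_corr * cte_corr
        var = box.sum() / gain + 4 * read_noise ** 2  # Poisson(src+sky) + read
        return flux, np.sqrt(var) * aper_corr * cte_corr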

There is also talk of finding a single position for the SN to use each time, treating the other objects as offset stars. Greg thinks he can find the position of the thing to 1/3 of a pixel.

For the R band, where the image is more peaked (i.e. lots of flux in one pixel), it might be worth starting to add half-weighted pixels outside the central pixel, etc. Using the offset from the other "stars" (objects) in the frame might not be safe because the scale may be slightly different.... The R-band data look noisy, and will probably come out good only to 0.15 or 0.2 magnitudes. The single HST R data point will probably not add much in this case, because we seem to have done better (and, certainly, more often) from the ground.

New numbers will hopefully be in tomorrow, which Peter will then run through, so that Saul can have them for (a) the paper and (b) the talk he has to give next week.

Greg says that the way the "sky" program in IDL estimates the sky is biased, due to an overall undulation on top of the sky. Saul says they noticed this earlier and accounted for it in the estimate of the sky background. There is also a quantization problem, because the readout noise is less than one data number.
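
(One way to dodge that bias, sketched under the assumption that the undulation is smooth enough to fit with a low-order surface; this is an illustration, not what the IDL "sky" program actually does.)

    import numpy as np

    # Fit and remove a planar trend before taking a robust sky estimate, so a
    # smooth undulation on top of the sky doesn't bias the result. The real
    # undulation may need a higher-order surface than a plane.
    def local_sky(img):
        ny, nx = img.shape
        ys, xs = np.indices((ny, nx))
        A = np.column_stack([np.ones(img.size), xs.ravel(), ys.ravel()])
        coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
        plane = (A @ coeffs).reshape(ny, nx)
        return plane + np.median(img - plane)  # sky surface with robust offset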


Lightcurve and Cosmology Status

Lightcurves, keeping them updated. There are some 14 or 15 (not counting 1994) which we have in two colors and whose data we believe well enough to be willing to plot on the Hubble diagram. There are still little problems in the light curves, and in figuring out what is going on with them. Work in progress; we still don't have our final list, but presumably will sometime before too long. B&H (Burstein & Heiles, Galactic) extinction has been put in, but not color/host extinction.

Gerson shows the Hubble diagram. Peter wants to see the color information on this... e.g. plot effective q0 as a function of B-V. Perhaps it will be "obvious" that some points are more affected by extinction. With Lambda=0, you still get lots of points with Omega_Mass<0. If we believe this, it seems to say that we need some nonzero value of Lambda. However, we don't yet want to go out on any limbs of interpretation. It is, of course, still consistent with a little bit of extinction with an envelope. We don't have nearly enough data to really see this, however.

Gerson also has a histogram of Omega_M for all our SNe, both "good" and "all", "corrected" and "uncorrected". (Peter says a zeroth-order extinction correction is [(R-I)-(KR-KI)]*4.1; subtract that from M_B. That would give a rough idea of the host-galaxy extinction.) You can also try doing this and _not_ correcting for stretch, the idea being that the stretch/magnitude correction goes the same way as extinction, so that you can "pretend" the whole thing is extinction and only correct for that (thereby doing the whole thing). Peter says to call this the "van den Bergh" correction.
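
(Writing Peter's zeroth-order correction out as code; reading KR and KI as the R- and I-band K-corrections is my assumption, not spelled out in the meeting.)

    # A_B ~ [(R - I) - (K_R - K_I)] * 4.1, subtracted from M_B; the 4.1
    # converts the color-excess estimate to B-band extinction. K_R and K_I
    # are read here as the K-corrections in each band (an assumption).
    def zeroth_order_corrected_MB(M_B, R, I, K_R, K_I):
        A_B = ((R - I) - (K_R - K_I)) * 4.1
        return M_B - A_B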

We also need to put in a cut based on the spectra. Craig has these numbers, which came from Isobel.


Miscellaneous Updates and Status

Peter is working on a talk and on the paper. He says that Mark Phillips is going to be here on Friday afternoon; his flight gets into Oakland at 11. Saul says we should talk to him about getting him on board with the intermediate-redshift search. (He's already in with Kirshner on the distant search.)

Greg also wants to mention that he was talking to a friend involved in Spacewatch, with the thought of modifying it to find intermediate-z supernovae. Apparently they get to 21st magnitude in some quasi-R color over a 1/2 degree by 1/2 degree field which they driftscan. It might be possible to scan the same area the following year, so that in the second year you could find supernovae (and AGN) using the first year as a reference. The resolution is about 1"/pixel. Of course, we seem to be able to find lots of SNe now, and our problem is getting enough followup.

Arjun Dey at Kitt Peak is doing a survey for clusters, and will be working in exactly the mode we'd be working in to find SNe. They hadn't thought about SNe at all... the data will come in in October or sometime like that. We might analyze their data. Of course, we wouldn't have any followup.

Apparently, Arjun Dey is also working with the Kuiper Belt people at Kitt Peak (whoever they are). There is some mention of offering them our data to mine for Kuiper Belt objects.

Speaking of which, the next CFHT proposals are due in a month. Just so we can all look forward to being ready to panic two days before they are due.

Two master's students from Portugal are coming in the fall, and will be here for a year. This will be a problem because we have no space or computers left.

Robert put a couple of pages on the web: first, a tutorial on flatfielding; second, a program which can do fiducial residuals. They're under the "analysis" section. The flatfielding page is intended as a tutorial for somebody who's never done any cleaning before. He's also adapting the residual program to work with standards as well as fiducials; he's pretty much done with that. He just has to make the graphs look a little prettier.

Sebastien: Saul says we want the efficiency curves for the fields on which the 40 SNe have been found. For those curves, out to what redshift is there no Malmquist bias? (I.e., for which ones are we ~100% efficient for a supernova up to 1 magnitude dimmer.) That way we can flag the ones we think are right near the limits.
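
(What the question amounts to, in a sketch; the efficiency-curve arrays are hypothetical stand-ins for Sebastien's actual curves.)

    import numpy as np

    # Given a field's detection efficiency vs. magnitude, find the faintest
    # magnitude at which we are still ~100% efficient, then ask whether a SN
    # one magnitude dimmer than observed would still have been caught
    # (i.e. no Malmquist bias for that field).
    def faint_limit(mags, eff, threshold=0.99):
        good = eff >= threshold
        return mags[good].max() if good.any() else None

    def malmquist_safe(sn_mag, mags, eff, margin=1.0):
        limit = faint_limit(mags, eff)
        return limit is not None and sn_mag + margin <= limit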

Sebastien has also worked on GRB stuff, and has now searched the R band over a 2.7 square degree field; the I band is being subtracted now. He's found two objects which could look like GRB optical counterparts. One of them looks like an asteroid; the other he's still subtracting to see if it shows up elsewhere.