SCP Meeting Notes, 1997 August 13

Pelar La Puente is here at this meeting; she is at Santa Barbara for the next couple of months.


Don's 'Wobbling Camera' Concept Test Run Results

I only come to listen to myself

--Don Groom

Don: talking about his run with Richard Stover [sp?] where he kept reading out the CCD to track the low-frequency wandering of a star. The scope was not on the autoguider but was tracking. The goal was to look at the star trails, to see how much low-frequency stuff there is, and to see how it's correlated across the array.

Features: the centroid wiggles back and forth; there are bright spots in the track. (What is the timescale of the wiggles?) The wiggles of the two stars appear to be fairly well correlated. The stars are 3' apart, which is much farther than you would expect for wavefront correlations. (The conclusion is that the slow wiggling is due to telescope tracking errors and such.) Plotting correlations (correlation vs. column lag), the cross-correlation between the two stars is very similar to the autocorrelation of each star with itself. The correlation between the two is about 75%.
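
A minimal sketch of this kind of check, assuming the two star trails have already been reduced to per-readout centroid series (the arrays c1 and c2 below are made-up stand-ins, not the real data):

    import numpy as np

    def lag_correlation(a, b, max_lag):
        """Correlation coefficient between series a and b as a function of lag (in readouts)."""
        lags = np.arange(-max_lag, max_lag + 1)
        corr = [np.corrcoef(a[max(0, -k):len(a) - max(0, k)],
                            b[max(0, k):len(b) - max(0, -k)])[0, 1] for k in lags]
        return lags, np.array(corr)

    # c1, c2: hypothetical per-readout centroids (arcsec) of the two stars;
    # random walks stand in for the slow wander seen in the real star trails.
    rng = np.random.default_rng(1)
    c1 = np.cumsum(rng.normal(0, 0.02, 500))
    c2 = 0.75 * c1 + np.cumsum(rng.normal(0, 0.01, 500))

    lags, cross = lag_correlation(c1, c2, 50)
    _, auto = lag_correlation(c1, c1, 50)
    print("cross-correlation at zero lag:", cross[lags == 0][0])
    print("autocorrelation   at zero lag:", auto[lags == 0][0])   # = 1 by construction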

Unfortunately, the sigma of the wiggling about of the centroid (the correlated part) is of order 0.1", which would not be much in comparison to a >=1" seeing disk when you subtract in quadrature. In short, a wobble correction doesn't seem to do much. This is a befuddling result, because empirically a tip-tilt system improves things, which is what Don thinks he should have been measuring.
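
To put a rough number on it (my arithmetic, assuming a 1.0" seeing disk and perfect removal of the 0.1" wobble):

    \sqrt{\sigma_{\rm seeing}^2 - \sigma_{\rm wobble}^2} = \sqrt{(1.0'')^2 - (0.1'')^2} \approx 0.995''

so removing the wobble entirely would shrink a 1" seeing disk by only about half a percent.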

Saul says that he understands tip-tilt systems to work between 10 and 100 Hz, which is faster than Don was reading out the chip. (There were electronic reasons why Don could not read the chip out any faster.) Susanna mentioned that she thinks the seeing at Lick is a lot better if you get outside the dome; she recommends talking to Merle Walker, as he's been doing seeing studies "for centuries."


Don's 'Projected Image' Tip-Tilt Detector System Concept Evaluation Calculations

Next topic: Don was talking about making a CCD camera with two small CCDs on either side, which you read out amazingly fast, effectively recording a projection of the field on the side CCDs. You can then use this projection as a signal for correction/guiding-type stuff.

Don took one of our images and made a projection along the column direction. He Monte Carloed it down to the short exposure time, and shook it around a bit (as part of the Monte Carlo). (It's all going so fast that it's hard to get it down here reasonably understandably; if I remember, I will try to clean it up later.) He then ran a correlation between one of his Monte Carlo realizations and his template (the original projection). For 1 ms samples, the centroid correlates with a sigma of 0.06 pixels. In other words, without any messing around hunting for bright stars (which AO typically needs), you can locate the centroid exceptionally well. With 10 ms sampling, the sigma was 0.019 pixels (which is related to the previous result by the square root of 10...). All of this was with a fairly rich starfield.
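
A rough sketch of the procedure as I understood it (the star field, noise level, and sub-pixel refinement below are my assumptions, not necessarily what Don did): collapse an image into a 1-D column projection, make a noisy, shifted short-exposure realization of it, and recover the shift by cross-correlating against the original projection.

    import numpy as np

    rng = np.random.default_rng(0)

    def recover_shift(template, sample, max_lag=20):
        """Shift of `sample` relative to `template` (in pixels), from the peak of their
        cross-correlation, refined with a parabolic fit around the peak."""
        t = template - template.mean()
        s = sample - sample.mean()
        lags = np.arange(-max_lag, max_lag + 1)
        cc = np.array([np.dot(t[max(0, -k):len(t) - max(0, k)],
                              s[max(0, k):len(s) - max(0, -k)]) for k in lags])
        i = int(np.argmax(cc))
        if 0 < i < len(cc) - 1:   # sub-pixel refinement
            i = i + 0.5 * (cc[i - 1] - cc[i + 1]) / (cc[i - 1] - 2 * cc[i] + cc[i + 1])
        return np.interp(i, np.arange(len(lags)), lags)

    # A hypothetical star field (the real test used one of our images)
    image = rng.poisson(100.0, size=(500, 500)).astype(float)                 # sky background
    image[rng.integers(0, 500, 30), rng.integers(0, 500, 30)] += 5.0e4        # point-like "stars"

    template = image.sum(axis=0)                        # projection along the column direction

    true_shift = 3
    short = np.roll(template, true_shift) / 1000.0                # scale down to a short exposure
    short = rng.poisson(np.clip(short, 0, None)).astype(float)    # photon noise in the short exposure

    print("recovered shift:", recover_shift(template, short), "(true:", true_shift, ")")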

He took another image, one that was very sparse and didn't have many bright stars at all (from 9760). The biggest S/N spike in the projection of this image is 0.33 (the N counts the sky noise Don put back in). With 10 ms sampling, the correlation gave a sigma of 0.22 pixels. (Don asserts that you can read out the number of rows you need in 10 ms, so this is a realistic test. Note that you don't have to worry about charge transfer efficiency, since you're projecting anyway; if you lose a few electrons to adjacent buckets, you get them back later.) Don concludes that this is the way you should make autoguiders, or detectors for tip-tilt.

You might want to patent this.

--C. Pennypacker

[Dismissive gesture.]

--D. Groom


Lightcurve and Cosmology Update

Gerson: has been going through the '97 data for which Rob has produced curves. He had difficulty with 9785; it came out crazy, a terrible fit. He rescued it by hardcoding the stretch to 1. The points from the reference run on this SN were already 1/3 of the way up. Saul says that Gerson should re-try the unconstrained s fit with the initial values being the parameters produced by the fit with s fixed at 1, as it sure looks like the fitter got caught in some local minimum.
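
A sketch of the refit strategy Saul suggested, with a deliberately simplified, hypothetical lightcurve model (the real fitter uses the template lightcurve, not the function below): fit once with the stretch frozen at 1, then release it, starting from the constrained fit's parameters.

    import numpy as np
    from scipy.optimize import curve_fit

    def lightcurve(t, t_max, peak, s):
        """Hypothetical stand-in for the template lightcurve, evaluated at (t - t_max)/s."""
        x = (t - t_max) / s
        return peak * np.exp(-0.5 * (x / 10.0) ** 2)

    # t, flux, flux_err: hypothetical photometry for the problem supernova
    t = np.linspace(-15, 60, 20)
    flux = lightcurve(t, 2.0, 1.0, 0.9) + np.random.normal(0, 0.03, t.size)
    flux_err = np.full(t.size, 0.03)

    # Stage 1: stretch fixed at 1
    p_fixed, _ = curve_fit(lambda t, t_max, peak: lightcurve(t, t_max, peak, 1.0),
                           t, flux, p0=[0.0, 1.0], sigma=flux_err)

    # Stage 2: free stretch, starting from the constrained solution
    p_free, _ = curve_fit(lightcurve, t, flux,
                          p0=[p_fixed[0], p_fixed[1], 1.0], sigma=flux_err)
    print("constrained fit (t_max, peak):", p_fixed)
    print("free fit (t_max, peak, s):    ", p_free)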

Gerson then went on to look at the Omega distribution for these guys. He did the whole set of the ones which Saul showed at the S.B. conference with the stretch fixed at 1; the results weren't much different. He then tried forcing all s>1.1 to 1.1 and all s<0.8 to 0.8 for the sake of the correction. Again, the difference isn't huge; it just made the peak slightly peakier, with the rest staying the same.
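
The clamping Gerson described, in the obvious form (the stretches, magnitudes, and the linear correction coefficient alpha below are placeholders, not the values or the exact correction actually used):

    import numpy as np

    s_fit = np.array([0.72, 0.95, 1.04, 1.23])    # hypothetical fitted stretches
    m_eff = np.array([22.1, 22.8, 23.0, 22.5])    # hypothetical peak magnitudes
    alpha = 1.0                                   # placeholder correction coefficient

    s_clipped = np.clip(s_fit, 0.8, 1.1)          # force s > 1.1 -> 1.1 and s < 0.8 -> 0.8
    m_corr = m_eff + alpha * (s_clipped - 1.0)    # apply the correction with the clipped stretch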


Padé Approximation to the Omega/Lambda Calculation

Robert was doing some studies suggested by Mike Turner on using a Padé approximation to approximate the Hubble curves with a ratio of polynomials (rather than using a big numerical integral), for any Omega and Lambda. (I.e., predicted magnitude as a function of z.) Robert didn't completely reproduce the curves in Turner's paper. However, he said that the approximation reproduces the integral very well up to z=1 (1% error in r(z) at z=1). The error (as in wrong, not as in uncertainty) gets up to 10% at z=3. (Alex points out that the error in the answer, the magnitude/flux, is twice the error in r, since the flux goes as the distance squared. While we should notice this, for most of the range it doesn't seem to be a really important issue.) Robert says that the case he showed us was one of the better ones, and that some others will be worse. He will have more information later about how wrong the approximation is for various combinations of Lambda, Omega, and z.
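
A sketch of this kind of comparison (not Robert's code or Turner's coefficients, and a flat model for simplicity so the curvature sinh/sin factor can be left out): compute the comoving-distance integral numerically, fit a low-order rational (Padé-form) approximation over 0 < z <= 1, and check the fractional error at z=1 and z=3.

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import curve_fit

    def E(z, om, ol):
        """Dimensionless Hubble parameter (curvature term included for generality)."""
        ok = 1.0 - om - ol
        return np.sqrt(om * (1 + z) ** 3 + ok * (1 + z) ** 2 + ol)

    def r_exact(z, om, ol):
        """Dimensionless comoving distance (units of c/H0) from the numerical integral."""
        return quad(lambda zp: 1.0 / E(zp, om, ol), 0.0, z)[0]

    def r_pade(z, a1, a2, b1, b2):
        """[2/2] rational form r(z) = (a1 z + a2 z^2) / (1 + b1 z + b2 z^2)."""
        return (a1 * z + a2 * z ** 2) / (1 + b1 * z + b2 * z ** 2)

    om, ol = 0.3, 0.7                        # an example flat model
    zf = np.linspace(0.01, 1.0, 50)          # fit the rational form over 0 < z <= 1
    rf = np.array([r_exact(z, om, ol) for z in zf])
    coeffs, _ = curve_fit(r_pade, zf, rf, p0=[1.0, 0.0, 0.0, 0.0])

    for z in (1.0, 3.0):
        exact, approx = r_exact(z, om, ol), r_pade(z, *coeffs)
        print(f"z={z}: fractional error in r(z) = {abs(approx - exact) / exact:.3%}")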

The reason for this approximation is so that for things like fitting programs we can get fast answers rather than having to wait hours and hours for the computer to do a long integral at every step of the fitting.

This segued into a discussion of the histograms and of plotting two different definitions of the x-axis, one for Lambda=0 and one for a flat universe, and the fact that the simple transformation between them isn't exactly correct but is a good approximation. This makes it look like there's a degeneracy that means we can't separate Omega and Lambda; however, with high-redshift supernovae we should be able to start to break the degeneracy. Saul makes the point that, yes, you get a lot of overlap for the regions we're considering, but you are throwing out a lot of other parameter space which isn't thrown out by SNe at just a single low redshift. Ultimately, though, it looks like we may have trouble separating a flat universe from a Lambda=0 universe.


Gamma Ray Burster Update

Sebastien: finally started writing the GRB "false alarm rate" paper, following Bruce's outline. He had a few false-alarm candidates, which he got rid of because they were too faint, just above threshold, too low S/N (3.5 was the highest one). He covered 2.75 deg^2 with a 5-day difference and found nothing which could correspond to what has been seen as a GRB.

There was one object that he originally thought might be something but has since thrown out. The extra light was right at the center of a galaxy. It's a 19% increase, at 4.4 sigma. It was a signal that seemed to be there on March 6 and 7 but decreased and was less on March 10 and 11. One thing Sebastien noted was that the signal in the sum subtraction was less than in the individual subtractions.... There is also the fact that the March 11 seeing was worse than the seeing on March 6 and 7 (people seem to recall this from memory). Seeing mismatches can give rise to bad subtractions. Sebastien also did a subtraction with February 10 references. Doing Reylight with February 10 as the reference, it looks like only one March 6 image is not consistent with 0 (where 0 has been defined as the February 10 flux value). In the end, Sebastien wants somebody else to look at it and decide whether it's real or not.

Side note: when he does subtractions other than the first one he found the thing in, he has to find the candidate himself, because searchscan wasn't popping it up automatically. Matthew asserts that the reason is the BTC warp, where the RA and Dec are all over the place.

I think that what he has against it is that if he includes it it makes the paper a lot harder to write.

--A. Kim


Weak Lensing

Mike, working with Gordon again. They've been working on the weak-lensing analysis of MS1054. He thinks there might be a problem with the addimages program, in that it may not align the images perfectly. When you use Gordon's image-adding software, you get a residual ellipticity which you don't get when you use our software. (There is some debate about the conclusion that there is something wrong with our software: is it that our software is smearing out ellipticity that is really there due to the telescope, or that his software is introducing ellipticity that isn't really there?) One thing to check is to look at the original images and see whether you can measure the ellipticity which Gordon is seeing in his summed images.
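
A sketch of that check using a standard weighted-second-moment ellipticity estimator (this is not necessarily what Gordon's software computes): measure e1, e2 for stars in the original, un-added images and see whether the residual ellipticity is already there.

    import numpy as np

    def ellipticity(stamp):
        """Ellipticity components (e1, e2) from flux-weighted second moments of a star cutout."""
        y, x = np.indices(stamp.shape, dtype=float)
        w = np.clip(stamp, 0, None)
        xbar = (w * x).sum() / w.sum()
        ybar = (w * y).sum() / w.sum()
        qxx = (w * (x - xbar) ** 2).sum() / w.sum()
        qyy = (w * (y - ybar) ** 2).sum() / w.sum()
        qxy = (w * (x - xbar) * (y - ybar)).sum() / w.sum()
        denom = qxx + qyy
        return (qxx - qyy) / denom, 2 * qxy / denom

    # Example: a slightly elongated Gaussian "star" (hypothetical stand-in for a real cutout)
    y, x = np.indices((31, 31), dtype=float)
    star = np.exp(-(((x - 15) / 3.0) ** 2 + ((y - 15) / 2.5) ** 2) / 2)
    print(ellipticity(star))   # nonzero e1, near-zero e2 for this orientation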

Mike has documented a program which goes through all of our fields and estimates, for every little patch (square arcminute?), how deep our deepest image is and what the total exposure from all of our images is on that spot. You get two plots out of it: number of square arcseconds as a function of deepest exposure, and number of square arcseconds as a function of total exposure. There is a little more than 20 square degrees with a total of more than 900 s (what telescope size?). There is something less than 8 square degrees with a single image of 600 s or more.
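
A sketch of the bookkeeping (the image footprints below are hypothetical, and the cos(dec) factor on cell areas is ignored): bin the sky into small cells, track the deepest single exposure and the total exposure hitting each cell, then count area against depth.

    import numpy as np

    # Hypothetical image footprints: (ra_min, ra_max, dec_min, dec_max, exposure_s)
    images = [(10.0, 10.5, -1.0, -0.5, 600.0),
              (10.2, 10.7, -0.9, -0.4, 300.0),
              (10.2, 10.7, -0.9, -0.4, 900.0)]

    cell = 1.0 / 60.0                              # 1 arcmin cells (in degrees)
    ra = np.arange(9.5, 11.0, cell)
    dec = np.arange(-1.5, 0.0, cell)
    deepest = np.zeros((dec.size, ra.size))        # deepest single exposure per cell
    total = np.zeros_like(deepest)                 # total exposure per cell

    for ra0, ra1, dec0, dec1, exp in images:
        mask = (ra[None, :] >= ra0) & (ra[None, :] < ra1) & \
               (dec[:, None] >= dec0) & (dec[:, None] < dec1)
        deepest[mask] = np.maximum(deepest[mask], exp)
        total[mask] += exp

    cell_area = cell ** 2                          # square degrees per cell
    print("area with a single image of >= 600 s:", (deepest >= 600).sum() * cell_area, "deg^2")
    print("area with total exposure   >= 900 s:", (total >= 900).sum() * cell_area, "deg^2")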


Matthew's Description of Photometric Reduction Procedures

Matthew has been trying to write up our data reduction procedures. He's got a first draft of a photometry writeup. He went through the thing on the board (referring to lines in the draft which we didn't all have...), which shows the basic idea of the photometry. First step: measure the number of counts on each supernova image. Subtract the counts in an aperture on the convolved reference from the counts in an aperture of the same size on the new image. Each night can be on a different telescope, under different conditions. The nights are then ratioed using stars to go to the "primary reference".
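
Those two steps with made-up numbers (all the values, and the use of a median for combining the star ratios, are my assumptions): aperture counts on the new image minus the same aperture on the convolved reference, then scaled to the primary-reference system using field stars measured on both.

    import numpy as np

    # Hypothetical aperture counts at the supernova position on one night
    counts_new = 15400.0          # aperture on the new image
    counts_conv_ref = 12100.0     # same aperture on the reference, convolved to match the seeing
    sn_counts = counts_new - counts_conv_ref

    # Hypothetical field-star counts measured on this night and on the primary reference
    stars_night = np.array([52000.0, 31000.0, 8800.0])
    stars_primary = np.array([104500.0, 61800.0, 17900.0])
    ratio = np.median(stars_primary / stars_night)     # per-night scaling to the primary system

    sn_counts_primary = sn_counts * ratio
    print(sn_counts, ratio, sn_counts_primary)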

Calibration: look at fields of Landolt stars to calibrate a single night. Each such night can calibrate an image taken on that night, which you can then bootstrap to calibrate the primary reference. Basically, you measure the magnitudes of stars in the SN field (secondary standard stars), which you in turn use to calibrate the primary reference, or to calibrate each night.
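
A sketch of a single-night calibration of this kind, assuming a simple zeropoint-plus-color-term model (the numbers are invented, and whether an extinction term is also fit is not addressed here):

    import numpy as np

    # Hypothetical Landolt standards on the night: catalog R, catalog R-I, instrumental magnitude
    R_cat = np.array([12.31, 13.05, 14.22, 11.87])
    RI_cat = np.array([0.35, 0.52, 0.11, 0.78])
    m_inst = np.array([-8.42, -7.66, -6.55, -8.83])    # -2.5*log10(counts/exptime)

    # Fit R_cat - m_inst = zp + c*(R-I) by least squares
    A = np.vstack([np.ones_like(RI_cat), RI_cat]).T
    zp, c = np.linalg.lstsq(A, R_cat - m_inst, rcond=None)[0]

    # Apply to a secondary (field) star measured the same night
    m_inst_star, RI_star = -6.10, 0.45
    R_star = m_inst_star + zp + c * RI_star
    print(f"zeropoint={zp:.3f}, color term={c:.3f}, secondary star R={R_star:.2f}")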

The last correction ("number 4") is the supernova correction (the instrumental correction). Everything done up to now has implicitly assumed a stellar spectrum for a given color. The instrumental correction should correct for the small photometry errors which arise due to the difference in spectrum between a supernova and a star of the same color. In other words, the color term might be different for a sample of supernovae at different colors than for a sample of main-sequence stars at different colors. The instrumental correction is the difference between the R magnitude of a supernova and the R magnitude of a star, where the two have the same number of counts and the same R-I color. This is a function of date since maximum light.
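
Written as a formula (my notation; the constraints are as Matthew stated them), the correction at epoch t is

    \Delta R_{\rm instr}(t) = R_{\rm SN}(t) - R_{\rm star},
    \quad {\rm with}\ {\rm counts}_{\rm SN} = {\rm counts}_{\rm star}
    \ {\rm and}\ (R-I)_{\rm SN} = (R-I)_{\rm star}.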

Note that there is a section on color correction. What we have been doing up to now is slightly wrong, because we've been ignoring color terms in the ratio between a given night's secondary standards and the secondary standards measured on the primary reference.

Finally, at the end, there is a K-correction. (Though my understanding is that we apply this to the template to figure out what the template would look like at the appropriate z.)
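
(Adding the standard single-filter form here for reference; the cross-filter version used for high-z work has the same structure with two different passbands. F is the rest-frame spectrum, S_R the filter transmission.)

    K_R(z) = 2.5\log_{10}(1+z)
             + 2.5\log_{10}\!\left[\frac{\int F(\lambda)\,S_R(\lambda)\,d\lambda}
                                        {\int F(\lambda/(1+z))\,S_R(\lambda)\,d\lambda}\right]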


Alex's Stretch and Covariance Matrix Scariness

Alex: trying to implement the "stretch stuff" into the lightcurve fitting program. The biggest issue is including the covariance matrix due to the template. Since this matrix depends on the epoch of the supernova, it has to be included in the chi-square program. This will take some extra coding. Alex is going to try to do it with one iteration: determine the time of maximum without using the template covariance matrix, and then include the template covariance matrix in the next iteration, making sure that the time of maximum doesn't change very much. There is also the issue that for the highest-redshift supernovae we don't have templates; we need rest-frame U for the highest-z supernovae we've observed. Peter is going to talk to Adam to see if we can get a few more nearby U-band lightcurves. We also have to find out whether the U stretch agrees with the B stretch and the V stretch (the latter two agree with each other).
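
A sketch of that iteration, with hypothetical stand-ins for both the lightcurve model and the template covariance (neither is the real template or its error model): fit once ignoring the template covariance, then rebuild the covariance at the fitted epoch and refit from there.

    import numpy as np
    from scipy.optimize import minimize

    def model(t, t_max, peak, s):
        """Hypothetical stand-in for the template lightcurve evaluated at (t - t_max)/s."""
        x = (t - t_max) / s
        return peak * np.exp(-0.5 * (x / 10.0) ** 2)

    def template_cov(t, t_max, s, frac=0.02):
        """Hypothetical template covariance: correlated errors that depend on the epoch."""
        m = model(t, t_max, 1.0, s)
        return frac ** 2 * np.outer(m, m)

    def chi2(params, t, flux, meas_cov, use_template_cov):
        t_max, peak, s = params
        r = flux - model(t, t_max, peak, s)
        cov = meas_cov + (template_cov(t, t_max, s) if use_template_cov else 0.0)
        return r @ np.linalg.solve(cov, r)

    # Hypothetical photometry
    t = np.linspace(-10, 50, 15)
    flux = model(t, 1.5, 1.0, 0.95) + np.random.normal(0, 0.03, t.size)
    meas_cov = np.diag(np.full(t.size, 0.03 ** 2))

    # Iteration 1: no template covariance
    fit1 = minimize(chi2, x0=[0.0, 1.0, 1.0], args=(t, flux, meas_cov, False))
    # Iteration 2: include the template covariance, starting from the first fit
    fit2 = minimize(chi2, x0=fit1.x, args=(t, flux, meas_cov, True))
    print("t_max moved by", abs(fit2.x[0] - fit1.x[0]), "days")   # check that this is small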