Deepsearch Meeting Notes 1999 July 21

Greg isn't here yet.

This may be the beginning of a new meeting format. There will be short presentations from people, but first Saul is soliciting any news items.


Miscellaneous

Jason LeGaspi and Michelle went to Argonne National Lab, where there was a conference aimed at getting more undergrads involved with the lab. There were high-level DOE people there, like Martha Krebs. Jason says that he and Michelle gave an overview of what the SCP is, along with their role in the very low redshift portion of things.

Saul notes that on Tuesday, Gene Lowe [spelling?] from the NSF will be in town. (Sounds like a cable modem day to me.) He will be here to talk about things like satellites.

Saul introduces Brenda, who is now back from a long vacation. She graduated on May 20 and left for the Middle East (as in near the Mediterranean Sea). Brenda will be working on code for the whole PSF-fitting machinery we're supposed to get from Stetson.

Saul mentions before we break that there will be a goodbye get-together for Robert Quimby, who's leaving after today. Greg is in charge. We're leaving for Jupiter at 4:30. Saul is also talking about a small meeting on the early rise time, but that won't involve all of us.


Greg's Conference Report: SN Cosmology, Rise Time, PSF Fitting

Greg went to a conference last week in Victoria. The subject of the conference was cosmic flows. Mostly it has to do with comparing the velocities of galaxies to their distribution to find Omega_mass. The idea is to figure out how far out you have to go to account for all of the mass that is moving us relative to the CMB. There is debate about all of this sort of thing.

There was a session on Type Ia supernovae, led by Kirshner, with Schmidt and Adam Riess present. (Three to one.) Adam talked about using some 43 nearby supernovae to measure bulk flows. Greg was the only one who talked about using SNe Ia for cosmological purposes. Our result seems to be fairly accepted at this point; there were few what-ifs. Greg talked a little about SNAP/SAT as well, and about testing dust and evolution. He also talked a little bit more about the nearby survey, and our hope to eventually generate one SN Ia per day.

The question of the rise time difference did come up (van den Bergh asked it). Adam showed the results from his paper. Greg put up a flux plot showing where the disagreement was, and indicated that he'd done some preliminary work suggesting there is enough covariance between time of maximum and stretch that Adam's result might go down from 5.8 sigma to 2 sigma. In this particular community, it seems that people aren't particularly worried about that.
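
As a toy illustration of why that covariance matters (all numbers below are invented, and this is not the actual light-curve fit): if the rise time comes out of a fit along with time of maximum and stretch, quoting its error with those parameters effectively held fixed understates the uncertainty; marginalizing over the correlated parameters inflates the error, and the significance of any discrepancy drops accordingly.

    import numpy as np

    # Hypothetical 3-parameter covariance matrix from a light-curve fit,
    # parameters ordered as (t_rise, t_max, stretch).  Every number here is
    # invented purely to show how correlations inflate the marginalized error.
    C = np.array([[0.45,  0.60,  0.024],
                  [0.60,  1.00,  0.034],
                  [0.024, 0.034, 0.0016]])

    # Error on t_rise if t_max and stretch are (wrongly) treated as fixed:
    sigma_fixed = np.sqrt(1.0 / np.linalg.inv(C)[0, 0])
    # Error on t_rise once the covariance with t_max and stretch is included:
    sigma_marg = np.sqrt(C[0, 0])

    delta = 2.0  # hypothetical rise-time discrepancy, in days
    print(f"nuisances fixed : {delta / sigma_fixed:.1f} sigma")
    print(f"marginalized    : {delta / sigma_marg:.1f} sigma")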

The other thing Brian claimed was with regard to PSF fitting in their photometry. He claimed that his photometry was a factor of two better than plain aperture photometry, when done on a subtracted image. The question comes up of our lore that it doesn't help much; Saul says there is a memo that Julia Smith wrote (she was the last one who did it). Saul claims that it was better, but only marginally, so the hit in systematics and robustness wasn't worth the tradeoff.
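
As a rough sketch of why PSF-weighted photometry can beat a plain aperture sum in the background-limited case (a toy Gaussian PSF stands in for the real one, and nothing here speaks to the factor of two either group actually gets):

    import numpy as np

    # Toy comparison of photometric noise: plain aperture sum vs. PSF-weighted
    # ("optimal") extraction, for a background-limited point source.
    fwhm = 4.0                          # pixels (illustrative)
    sigma_pix = fwhm / 2.3548           # Gaussian sigma of the toy PSF
    bg_rms = 10.0                       # per-pixel background noise (counts)

    y, x = np.mgrid[-10:11, -10:11]
    psf = np.exp(-(x**2 + y**2) / (2 * sigma_pix**2))
    psf /= psf.sum()                    # normalized PSF model

    # Aperture photometry: sum pixels inside a radius ~ one FWHM.
    aper = (x**2 + y**2) <= fwhm**2
    noise_aper = bg_rms * np.sqrt(aper.sum())
    frac_in_aper = psf[aper].sum()      # fraction of the flux the aperture catches

    # PSF-weighted flux estimate F = sum(psf * data) / sum(psf**2);
    # its noise for uniform background is bg_rms / sqrt(sum(psf**2)).
    noise_psf = bg_rms / np.sqrt((psf**2).sum())

    print(f"aperture noise (for {frac_in_aper:.0%} of the flux): {noise_aper:.1f}")
    print(f"PSF-weighted noise (for all of the flux): {noise_psf:.1f}")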


HST Photometry

Greg also learned that Brian Schmidt's group just hasn't been concerned at all with these low light level effects in the HST. Peter Stetson, working on the Key Project, has spent 18 months working on solving these problems. He thinks he's parameterized it down to an accuracy of 3%, and that 3% is a systematic. Greg says that Peter will send him some coefficients. Apparently the effect is better known on the WF chips than on the PC. His parameterization is based entirely on a charge transfer model; the leftover errors do seem to indicate something going on in addition to CTE.
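
For orientation only, here is the general shape such charge-transfer corrections take: the loss grows with the number of row transfers and is worse for faint stars on low backgrounds. The functional form and every coefficient below are placeholders, not Stetson's (or Whitmore's) actual parameterization.

    import numpy as np

    def cte_correction_factor(y_row, counts, background,
                              a=0.04, b=0.3, c=0.2, y_max=800.0):
        """Toy multiplicative CTE correction: loss increases with the number
        of parallel transfers (row number) and is larger for faint stars on
        low backgrounds.  The form and the coefficients a, b, c are
        placeholders, NOT the actual published parameterization."""
        transfer_frac = y_row / y_max                  # fraction of rows traversed
        loss = a * transfer_frac / (1.0 + b * np.log10(counts)
                                    + c * np.log10(1.0 + background))
        return 1.0 + loss                              # multiply measured counts by this

    # Hypothetical example: a faint star high on the chip needs more correction.
    print(cte_correction_factor(y_row=700, counts=200.0, background=2.0))
    print(cte_correction_factor(y_row=100, counts=5000.0, background=30.0))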

There are also other questions, such as whether there is a time dependence. There may not really be data for all of this, as HST doesn't take as much calibration data as it used to.

Stetson's prescription applies to PSF fitting using Stetson's point spread function. Greg's going to get Stetson's PSF library.

The other group is either completely ignoring all of this, or is using Whitmore's stuff. They are not doing the Casertano thing, which sounds really extreme (it would mean adding 50% to our dimmest lightcurve points). Brian Schmidt claims that they would have seen this by now, but Greg doesn't consider this a very strong claim.

Greg says that Stetson was interested in explicitly asking in Cycle 9 for more calibration data to test these models. Greg and Stetson feel that any outstanding photometry questions that affect both the Key Project and the supernova work would be important enough to be worth putting in a proposal.


Michael W-V's Subtraction/Scanning Investigations

Michael Wood-Vasey is going to talk about what he's been doing going over the subtraction and scanning software. He wants to figure out how to improve them so that "obviously" bad stuff gets thrown out, and more may be done automatically in the nearby search. He's going to show us some of the things that we need to address.

The first things he shows us are stars that were poorly subtracted, resulting in donuts. One image had something like 100 of these. This comes from bad focusing of the NEAT telescope. The suggestion is that if we get an image where a part of it is out of focus, we can just throw out that portion of the image.
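
A sketch of the kind of cut this suggests, assuming we already have a per-image catalog of detected objects with measured widths; the tile grid and thresholds are made up:

    import numpy as np

    def flag_bad_tiles(x, y, fwhm, image_shape, ntiles=4, max_ratio=1.5):
        """Flag image tiles whose detected objects are systematically broader
        than the image as a whole, e.g. donut-shaped stars from bad focus.
        x, y, fwhm are arrays for detected objects; thresholds are guesses."""
        ny, nx = image_shape
        global_fwhm = np.median(fwhm)
        bad = []
        for ty in range(ntiles):
            for tx in range(ntiles):
                in_tile = ((x >= tx * nx / ntiles) & (x < (tx + 1) * nx / ntiles) &
                           (y >= ty * ny / ntiles) & (y < (ty + 1) * ny / ntiles))
                if in_tile.sum() < 5:          # too few objects to judge
                    continue
                if np.median(fwhm[in_tile]) > max_ratio * global_fwhm:
                    bad.append((tx, ty))       # drop candidates from this tile
        return bad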

Another issue has to do with the masking. Bright stars got masked right near the bright portion, but parts of the star spilled out around the mask. The danger with masking is masking too much, e.g. masking out an entire galaxy. Greg suggests trying to use the spikes that come out from bright saturated stars as a way of identifying them.
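
One way to catch the light that spills past the current masks is to simply grow the saturated-pixel mask outward; the saturation level and growth radius below are arbitrary placeholders, not tuned NEAT values:

    import numpy as np
    from scipy.ndimage import binary_dilation

    def grow_saturation_mask(image, sat_level=50000.0, grow_pix=15):
        """Mask saturated stars plus a margin around them, so halo light and
        bleed trails don't survive into the subtraction.  sat_level and
        grow_pix are placeholder values."""
        saturated = image > sat_level
        # Dilate the saturated footprint by grow_pix pixels in every direction.
        structure = np.ones((2 * grow_pix + 1, 2 * grow_pix + 1), dtype=bool)
        return binary_dilation(saturated, structure=structure)

    # Usage: mask = grow_saturation_mask(new_image) | grow_saturation_mask(ref_image)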

He shows us another case with a cosmic ray, where there was enough spillover light in the other image that it was kept as a candidate.

Peter suggests that we should throw out supernovae too close to a very bright star, because it will be too painful to follow.

The last thing is bad columns that didn't get masked out. Sometimes they went away, sometimes they didn't; they probably have something to do with masking problems. There were also the famous interpolation artifacts that we all got used to with swatch and others during the nearby search.


Searching for Eclipsing White Dwarfs in NEAT Data

Related note: Greg was on the phone with (somebody) at Goddard who is interested in finding eclipsing white dwarfs in the NEAT data set. They may be interested in working on this. Tom Marsh may have a grad student who is interested in working on this question. These guys want to run on DEC Alphas, and Greg told them to talk to Nan Ellman.


Alex Conley's HST Update

Alex is going to talk about what he's been doing with the HST. We've got 8 supernovae in Set G and 3 supernovae in Set F. The Set G supernovae weren't followed very well from the ground. He's trying to figure out a good way to put down apertures on the supernovae; you have to place the aperture very accurately. You can perhaps find them when they are close to max, but it's hard to find them precisely when they are dim. So, you need to figure out the transformation between the HST images so that you can use the position found on the near-max image to figure out the position to use on the dimmer points.

This is, of course, what we do from the ground. There, we have hundreds or even thousands of other objects in the field in order to find a very good transformation. In the typical HST PC image, there are way too few objects to easily do the transformations as we've done them.

Alex mentions Drizzle, which was developed for the HDF. That was done to get resolution of better than a pixel by using subpixel dithers, which means they need good alignment. Unfortunately, the Drizzle stuff won't work terribly well for us. It doesn't handle big rotations, and (relatedly) it doesn't handle things moving between chips. Even Andy Fruchter, the Drizzle maven, told Alex that Drizzle probably wouldn't work for what we're doing.

Alex shows the full WFPC field, which includes the three bigger WF chips. He wants to be able to take advantage of the other chips in order to figure out the transformations for just the PC.

Greg suggested using the RA and Dec solution for the HST, and matching the RA and Dec of all found objects in a couple of frames. False/junk detections are a problem here. What Alex does is plot dRA and dDec for all the pairings, and you see a peak (not exactly at 0,0, because we don't know our pointing very well). If you throw out the junk and keep the peak (which only amounts to maybe some 10% or thereabouts of the total number of matches), then maybe you've got good matching.
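
A sketch of that matching step: compute all pairwise offsets between the two object lists, histogram them, and keep only the pairs near the peak. The arcsecond units, bin sizes, and tolerances below are placeholder choices, not Alex's actual values.

    import numpy as np

    def match_by_offset_peak(ra1, dec1, ra2, dec2,
                             search_radius=5.0, bin_size=0.2, keep_radius=0.3):
        """Match two object lists whose relative pointing is only roughly known.
        Coordinates and tolerances are in arcseconds for simplicity."""
        # All pairwise offsets within the search window.
        dra = ra2[None, :] - ra1[:, None]
        ddec = dec2[None, :] - dec1[:, None]
        close = (np.abs(dra) < search_radius) & (np.abs(ddec) < search_radius)
        dra_c, ddec_c = dra[close], ddec[close]

        # The true pointing offset shows up as a peak in the 2-D offset
        # histogram; junk detections scatter roughly uniformly.
        bins = np.arange(-search_radius, search_radius + bin_size, bin_size)
        hist, ra_edges, dec_edges = np.histogram2d(dra_c, ddec_c, bins=[bins, bins])
        i, j = np.unravel_index(np.argmax(hist), hist.shape)
        peak_dra = 0.5 * (ra_edges[i] + ra_edges[i + 1])
        peak_ddec = 0.5 * (dec_edges[j] + dec_edges[j + 1])

        # Keep only the pairs whose offset sits near the peak.
        good = close & (np.abs(dra - peak_dra) < keep_radius) \
                     & (np.abs(ddec - peak_ddec) < keep_radius)
        idx1, idx2 = np.where(good)
        return idx1, idx2, (peak_dra, peak_ddec)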

This all ties in with trying to map the geometric distortions of the HST. People have put effort into trying to fix this geometric distortion, which has produced a global solution for the whole camera at once. By taking advantage of this, he can map between the global coordinate systems and use objects on all the chips at once. When Alex first tried this, he looked at the median residual of the fit, and was getting residuals of something like 0.6 or 0.8 pixels. Greg interjects that the algorithm for deciding real matches worked, but that the transformation was having problems. However, if he used a third order transformation instead of just a first order transformation, he was able to improve things. This indicated that the geometric distortions hadn't been taken care of by the values from the Holtzman et al. paper. Eventually, Alex figured out that there were in fact problems with the values in the Holtzman paper... and he found out that the HST people knew this. He got the HST folks to send him some better coefficients, and the problem went away (first order did as well as third order).
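
A sketch of the kind of polynomial transformation being fit between matched positions: a straight least-squares fit of all terms x^i y^j with i + j up to the chosen order. This is not Alex's actual code, just the general idea of first order versus third order.

    import numpy as np

    def poly_terms(x, y, order):
        """Design matrix with all monomials x**i * y**j for i + j <= order."""
        cols = [x**i * y**j for i in range(order + 1)
                            for j in range(order + 1 - i)]
        return np.column_stack(cols)

    def fit_transform(x1, y1, x2, y2, order=3):
        """Least-squares polynomial mapping (x1, y1) -> (x2, y2).
        order=1 is a plain linear transformation; order=3 can soak up
        residual geometric distortion."""
        A = poly_terms(x1, y1, order)
        cx, *_ = np.linalg.lstsq(A, x2, rcond=None)
        cy, *_ = np.linalg.lstsq(A, y2, rcond=None)
        return cx, cy

    def apply_transform(x, y, cx, cy, order=3):
        A = poly_terms(x, y, order)
        return A @ cx, A @ cy

The median of the residuals, hypot(x2 - x2_predicted, y2 - y2_predicted), is the figure of merit being quoted.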

With the new coefficients, the median residual got down to something like 0.4 pixels. Alex is trying to justify dividing that by the square root of the number of points.
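
The arithmetic he presumably has in mind (the number of matches here is made up): if 0.4 pixels is the per-object scatter, the uncertainty on the fitted frame-to-frame solution should shrink roughly like one over the square root of the number of matched objects.

    import numpy as np

    per_object_scatter = 0.4      # pixels, the median residual quoted above
    n_matches = 25                # hypothetical number of matched objects
    offset_error = per_object_scatter / np.sqrt(n_matches)
    print(f"~{offset_error:.2f} pixel uncertainty on the frame-to-frame solution")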

Alex put a bunch of effort into developing something to generate fake HST images, using the geometric transformation to make them. When he wasn't able to transform those to each other, he decided that the centroiding software he'd been using (one of ours) was having problems. He wrote something to take advantage of the HST point spread function, and was able to get it to work perfectly on the fake images, but it still didn't work on the real images. So he's still fighting. (When he put noise on his fake images, he found that for a signal-to-noise of 3-4, a typical value, he could get residuals down to 0.3 pixel.)
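
For concreteness, here is a sketch of PSF-based centroiding of the sort described, using a Gaussian as a stand-in for the real HST PSF and scipy's curve_fit; none of this is Alex's actual code.

    import numpy as np
    from scipy.optimize import curve_fit

    def psf_model(coords, flux, x0, y0, sky, sigma=1.0):
        """Gaussian stand-in for the HST PSF; sigma would come from the real PSF."""
        x, y = coords
        return (sky + flux * np.exp(-((x - x0)**2 + (y - y0)**2)
                                    / (2 * sigma**2))).ravel()

    def centroid_stamp(stamp, x_guess, y_guess):
        """Fit flux, position, and sky to a small postage stamp around the object."""
        y, x = np.mgrid[0:stamp.shape[0], 0:stamp.shape[1]]
        p0 = [stamp.max() - np.median(stamp), x_guess, y_guess, np.median(stamp)]
        popt, pcov = curve_fit(psf_model, (x, y), stamp.ravel(), p0=p0)
        flux, x0, y0, sky = popt
        return x0, y0, flux

Positions from something like this would then feed the offset matching and the transformation fit described above.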

So he still fights with it. Saul thinks, though, that we are maybe not too far from what we are supposed to get.