Deepsearch Meeting Notes 1999 August 4

Greg and Peter aren't here yet.


Saul's News

First news item: there was a competition for the next generation of Science and Technology Centers. (CfPA was a previous version of this sort of thing.) There were a couple in the running that we were at least vaguely interested in. Five centers were chosen, and one of them is one we're interested in: a center for adaptive optics at Santa Cruz. Their goal is eventually to have adaptive optics with a laser guidestar; the starting point will be something using a natural guidestar. The idea will be to develop all the techniques (to use anywhere, I presume) for adaptive optics. One of the things they're going to do is talk to an ophthalmologist who does AO, looking into an eye while shooting a laser into it or some such.

One of the things that James Graham wants to do is build an integral field spectrograph for use with the AO system.

Apparently, they're getting fully funded (4 million... is that per year?), but they won't be able to ramp up fast enough to fully spend it. So, if anybody can think of anything that could fit under their aegis, we might be able to get them to help fund it.

The second, minor bit of news is that the satellite proposal was presented to Martha Krebs last week. She tells us we should go talk to Ernest Moniz, the #2 person at DOE (her boss). Moniz is the only person below Richardson, who is in turn a political person who doesn't understand the science.


Dylan and Aperture Photometry Code

Dylan's been working on aperture photometry. He summarizes what aperture photometry is. There are two parts to what he's doing. The first part, which was actually really hard, was to understand what Ivan was doing. The next thing he wanted to do was add a routine that would do background subtraction.

He shows an overall summary of the IDL programs we run: readimage to read an image, reduceimage to pick out the bright objects, and finally faperture to do the aperture photometry. There was a side discussion about isophotal finding.

He shows the C functions which faperture.pro calls. The main one is apercent, which calls a centroiding function; the latter calculates the new center of mass. Dylan has added an annulus function, which implements the preliminary background subtraction technique: subtract the average in an annulus around the object.
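
To make sure I followed, here's a rough sketch (my own whole-pixel version, not Dylan's actual code; the function name is made up) of what the annulus background estimate amounts to:

    #include <math.h>

    /* Sketch of an annulus background estimate (hypothetical; the real C code
     * works on subpixels).  Averages the pixels whose centers fall between
     * r_in and r_out from the object center (xc, yc). */
    double annulus_mean(const float *image, int nx, int ny,
                        double xc, double yc, double r_in, double r_out)
    {
        double sum = 0.0;
        long   npix = 0;

        for (int y = 0; y < ny; y++) {
            for (int x = 0; x < nx; x++) {
                double r = hypot(x - xc, y - yc);
                if (r >= r_in && r < r_out) {
                    sum += image[y * nx + x];
                    npix++;
                }
            }
        }
        return npix > 0 ? sum / npix : 0.0;   /* per-pixel background level */
    }

The per-pixel background level it returns would then be subtracted from each pixel inside the aperture.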

Ivan's centroiding algorithm is your standard center-of-mass calculation. The photometry is a simple sum; it's not doing any sort of PSF fitting. It's a little complicated because he does it by subpixels, but conceptually it's simple.
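
A whole-pixel sketch of that centroid-plus-sum step (again my own rendering, not Ivan's code, which does it all on subpixels):

    #include <math.h>

    /* Whole-pixel sketch of the center-of-mass centroid and the simple
     * aperture sum.  The names and signature are my own. */
    void centroid_and_sum(const float *image, int nx, int ny,
                          double xc, double yc, double radius,
                          double *xnew, double *ynew, double *flux)
    {
        double sumf = 0.0, sumx = 0.0, sumy = 0.0;

        for (int y = 0; y < ny; y++) {
            for (int x = 0; x < nx; x++) {
                if (hypot(x - xc, y - yc) <= radius) {
                    double f = image[y * nx + x];
                    sumf += f;            /* simple sum: the photometry      */
                    sumx += f * x;        /* flux-weighted moments ...       */
                    sumy += f * y;        /* ... give the new center of mass */
                }
            }
        }
        *flux = sumf;
        *xnew = sumf != 0.0 ? sumx / sumf : xc;
        *ynew = sumf != 0.0 ? sumy / sumf : yc;
    }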

The noise calculation for the photometry adds the background fluctuations, which are passed in from outside, plus a Poisson noise term. There is error in the centroiding as well (that part was actually written by Rob), but Dylan hasn't figured all that out yet.
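
My reading of the error term, as a sketch with assumed conventions (per-pixel background sigma passed in from outside, and a detector gain to convert counts to photons):

    #include <math.h>

    /* Sketch of the photometric error: background fluctuations added in
     * quadrature with Poisson noise on the object counts.  The gain term
     * (electrons per ADU) is my assumption. */
    double photometry_error(double flux_adu, double npix_aperture,
                            double sigma_bkg, double gain)
    {
        double var_bkg     = npix_aperture * sigma_bkg * sigma_bkg;
        double var_poisson = flux_adu > 0.0 ? flux_adu / gain : 0.0;
        return sqrt(var_bkg + var_poisson);
    }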

Dylan describes how he divides the pixels into subpixels, and adds strips of subpixels.
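
Roughly what the subpixel business amounts to, I think (a sketch; the real code adds strips of subpixels rather than testing them one by one, but the result should be the same):

    #include <math.h>

    /* Sketch of a subpixel aperture sum.  Each pixel is divided into
     * nsub x nsub subpixels; a pixel contributes the fraction of its
     * subpixel centers that fall inside the aperture radius. */
    double aperture_sum_subpix(const float *image, int nx, int ny,
                               double xc, double yc, double radius, int nsub)
    {
        double sum = 0.0;

        for (int y = 0; y < ny; y++) {
            for (int x = 0; x < nx; x++) {
                int inside = 0;
                for (int j = 0; j < nsub; j++) {
                    for (int i = 0; i < nsub; i++) {
                        double sx = x - 0.5 + (i + 0.5) / nsub;
                        double sy = y - 0.5 + (j + 0.5) / nsub;
                        if (hypot(sx - xc, sy - yc) <= radius)
                            inside++;
                    }
                }
                sum += image[y * nx + x] * (double)inside / (nsub * nsub);
            }
        }
        return sum;
    }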

Dylan is now trying to do the same thing with an annulus. He's just stealing Ivan's technique. Right now, he doesn't quite have it running. He also needs to talk to Rob about interfacing it with our IDL code.

Lots of talking about using a weighting function, rather than simple aperture photometry. I didn't get all the discussion down.

Dylan is leaving tomorrow, so he will have to make sure to show Rob where he is. He will be TA'ing a lot this fall, so he won't be able to do much, but he does want to stay vaguely connected.


Kirsten: Matching Spectra

She's been trying to figure out a better way to match spectra: given a supernova spectrum, automatically figure out its redshift, how much reddening it has, its epoch, etc.

Last summer, she wrote a program that compared the data, value by value, to something like thirty template spectra, and spit out a chi-square for each. It was really slow. This summer, she's been looking at doing wavelet transforms to see if she can get something that's faster and better.
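
For reference, the point-by-point comparison from last summer's program is conceptually just this (a sketch, assuming the template has already been put on the data's wavelength grid and the per-point errors are known):

    /* Sketch of the point-by-point chi-square of the data against one
     * template spectrum.  Repeat over the ~30 templates (and over trial
     * redshifts, reddenings, epochs), keeping the smallest value. */
    double spectrum_chisq(const double *data, const double *template_flux,
                          const double *sigma, int npts)
    {
        double chisq = 0.0;
        for (int i = 0; i < npts; i++) {
            double d = data[i] - template_flux[i];
            chisq += (d * d) / (sigma[i] * sigma[i]);
        }
        return chisq;
    }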

Fourier transforms are localized in frequency. You approximate your signal with a series of sines and cosines. The output you get is a bunch of frequencies and amplitude coefficients. It doesn't tell you anything about when those frequencies are applicable. An analogy has been made to music: if you Fourier transform a piece of music, you get a list of all the notes to play, but not when to play them. A wavelet transform tries to tell you the notes and when to play them.

A wavelet transform starts with a "mother" or "starting" function, called a wavelet. It's got some shape, and zero area. The one that everybody talks about is the Mexican hat, the second derivative of a Gaussian. There is a set of discrete wavelet transforms, each defined by a set of such functions.
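
For the record, the Mexican hat is (up to an overall normalization) minus the second derivative of a Gaussian; with unit width it looks like:

    #include <math.h>

    /* The Mexican hat wavelet: (minus) the second derivative of a Gaussian,
     * here with unit width and without the usual normalization constant.
     * It has zero area: the central bump cancels the negative wings. */
    double mexican_hat(double t)
    {
        return (1.0 - t * t) * exp(-0.5 * t * t);
    }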

Kirsten says the way to think of the transform is to think of it as a matrix. You multiply your data by this matrix, and somehow you get a vector of coefficients that alternates "smooth" and "detail" coefficients. You then rearrange everything so that the smooth ones are on top and the detail ones are on the bottom. You then apply this again, and again, until you only have two left. Those last two are your "mother function" coefficients.
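
Here's a sketch of that picture using the simplest possible wavelet, the Haar wavelet (which may well not be the one Kirsten is using): each pass turns pairs of values into a smooth (average) coefficient and a detail (difference) coefficient, puts the smooth half on top, and repeats on that half until only two values are left.

    #include <stdlib.h>
    #include <string.h>
    #include <math.h>

    /* Sketch of the pyramidal discrete wavelet transform with the Haar
     * wavelet.  Each pass replaces the first 'len' values with len/2 smooth
     * coefficients followed by len/2 detail coefficients, then repeats on
     * the smooth half.  n must be a power of two. */
    void haar_transform(double *data, int n)
    {
        double *tmp = malloc(n * sizeof(double));

        for (int len = n; len >= 4; len /= 2) {
            for (int i = 0; i < len / 2; i++) {
                tmp[i]         = (data[2*i] + data[2*i + 1]) / sqrt(2.0); /* smooth */
                tmp[len/2 + i] = (data[2*i] - data[2*i + 1]) / sqrt(2.0); /* detail */
            }
            memcpy(data, tmp, len * sizeof(double));  /* smooth on top, detail below */
        }
        free(tmp);
    }

The inverse transform just runs the same passes in the opposite order.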

At the end, most of the coefficients are really small (e.g. 10^-4), whereas the big ones are like 40. In the program, you can just set a cut. You can check it by doing an inverse transform to go backwards and recreate your original data vector, but using less information (which is the whole idea of the compression).
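
The compression cut is then just zeroing everything below threshold before inverting (a sketch; the threshold value is whatever you set in the program):

    #include <math.h>

    /* Sketch of the compression cut: zero every coefficient whose magnitude
     * is below the threshold, keeping only the handful of big ones, then run
     * the inverse transform to see how well the spectrum is recreated. */
    int threshold_coefficients(double *coeff, int n, double cut)
    {
        int kept = 0;
        for (int i = 0; i < n; i++) {
            if (fabs(coeff[i]) < cut)
                coeff[i] = 0.0;
            else
                kept++;
        }
        return kept;   /* how many coefficients survive the cut */
    }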

She tested it by constructing a spectrum by adding three Gaussian profiles, and adding Gaussian noise (at the 1% and 5% levels). She did the transform, and showed us a plot of the coefficients; it was hard to see the second one given the first one. She also showed the difference between coefficients for the 5% and 1% curves, and the significant ones differed by less than 1%.

(Saul and Greg are asking for the catalog of transforms... what does a noiseless Gaussian look like when run through this transform?)

She next wanted to figure out which coefficients kept which information about which sort of Gaussians. She did see that they were all linear in the flux: regardless of the flux, the coefficients had a similar shape. When she tried different widths, it wasn't linear; there doesn't seem to be a direct, simple relationship, so this one will be harder to pull out.

Next she tried changing the offset... except for some weird stuff in the very lowest coefficients, there seemed to be a corresponding offset of the pattern in the coefficients.

Talking about differential stretching, Saul suggests that it might be a good idea to take a log of the wavelength axis before doing the transformation and comparisons.
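
The point being that on a log-wavelength axis a redshift becomes a uniform shift, since log(lambda_obs) = log(lambda_rest) + log(1+z), so comparing at different trial redshifts is a translation rather than a stretch. A sketch of rebinning onto a log grid (the function name and the linear interpolation are my assumptions):

    #include <math.h>

    /* Sketch of rebinning a spectrum onto a uniform grid in log(wavelength)
     * using linear interpolation.  On this grid a redshift z is a constant
     * shift of log(1+z), so template comparisons at different redshifts
     * become translations.  Assumes lambda is monotonically increasing. */
    void rebin_log_lambda(const double *lambda, const double *flux, int n,
                          double *logflux, int nout)
    {
        double loglo = log(lambda[0]);
        double loghi = log(lambda[n - 1]);
        double dlog  = (loghi - loglo) / (nout - 1);

        int j = 0;
        for (int i = 0; i < nout; i++) {
            double lam = exp(loglo + i * dlog);        /* target wavelength   */
            while (j < n - 2 && lambda[j + 1] < lam)   /* find bracketing bin */
                j++;
            double frac = (lam - lambda[j]) / (lambda[j + 1] - lambda[j]);
            logflux[i] = flux[j] + frac * (flux[j + 1] - flux[j]);
        }
    }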