SCP Collaboration Meeting, 2000 June 13, 14:30 PDT


Reynald and Delphine French Group Overview

Reynald is quickly going to talk a little about what the French group is doing. He says that the other people in the group aren't here because yesterday was a holiday in France.

He says that they are mostly working on software development: search software and lightcurve software. There are two new grad students: one is Julien, and the other is Farid (I failed to get the last names). At the end of the year, a colleague of Ariel's, a just-graduated student, will join the French group for three months.

They are mostly in discussion with Greg and Saul about building an instrument (together with the people from Lyon) for the SN Factory. The group in Paris is mostly working on the software. They did a run at the CFHT last fall and another this spring. They're also working on the intermediate-z search with the Cambridge group and with Ariel.

There was a side discussion of MegaCAM and proposals for longterm projects and longterm proposals, which I didn't archive because my attention was drifting. (It's been more or less 5 hours of solid meetings, with long no-break sessions, so I'm kinda spacing out at this point.)

Reynald says that on the political side, the current VLT proposal has a lot of French names on it. This was done to increase the chance of getting the time; he asserts that it does not mean that they are all going to sign the necessary papers, or that they are now necessarily members of the SCP.... He says that if there are rules about who joins the SCP, these folks will have to go through them. The only non-French names on the proposal are Chris Lidman, Isobel Hook, and "Saul Perlmutter et al.". Saul and Chris mention that we should feed back to the VLT to convince them that the French and Swedish portions of the SCP are in fact central and influential parts. Saul adds that we should let the VLT know that the European parts of the SCP are having trouble establishing themselves with telescope time, and ask them why. Reynald says that there are also problems with them being physicists and getting respect from astronomers, and that there is pressure for them to join with the Italian supernova group.


Delphine on French Software Development

Delphine is going to tell us about the TOADS photometry software, but will also give a more general talk about TOADS: "TOols for Analysis and Detection of Supernovae". The developers are Pierre Astier, Sebastien Fabbro, Delphine Hardin, and Kyan Schahmaneche; Reynald et al. are the beta-testers. The software builds on cfitsio, SExtractor, some of the EROS software (PEIDA, used e.g. for PSF fitting), the CERNLIB linear system solver, and DAOPHOT. SExtractor and DAOPHOT have been wrapped so that they can be called from C++.

There are several main toolboxes: FITS I/O, FITS I/O for partial images, image handling (mean, standard deviation, etc.), source catalog creation (SExtractor and a home-made detector), catalog association ("matching"), geometric transformations, convolution kernel fitting, and a crude database. The main routines (e.g. flatfielding, subtraction) are built on these toolboxes.

Delphine will focus on the TOADS photometry software. The problem is measuring the flux of a point-like source (the SN) sitting on an object of unknown shape (the galaxy) at the same location, on a time sequence of images from various telescopes. (This is in contrast to the detection software.) It does borrow some tools from the detection/subtraction software; new tools have been built to go with it. It was mostly Sebastien Fabbro who did this work.

Tools designed for detection/subtraction include SExtractor, geometric alignment between two catalogs and between two images (coming from the same telescope), and PSF alignment between two geometrically aligned images (Image = BEST_PSF_Image (x) K, where (x) denotes convolution). In spring 1999, for the INT, they had a first-order geometric alignment and the PSF-matching convolution kernel was Gaussian (I believe she said). For the CFHT runs, the new version uses Pierre's PSF-matching software, which fits a kernel that varies smoothly across the image, based on the Alard 1999 paper.

The key is to minimize the chi-square calculated from the difference between one image and a second image convolved with a kernel. The kernel is built on a set of basis functions, so the fit is linear. The coefficients are allowed to vary slowly across the image. This method does not assume that the PSF is Gaussian; the kernel is also not symmetric, so you can account for slight distortions in the subtractions. (Note that I [Rob] believe it still doesn't handle diagonal stretching very well, but I'm not sure that this is still current.) She shows us a sample subtraction, which subtracts out very well.
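
To make the method concrete, here is a minimal Python sketch of an Alard-style linear kernel fit, with constant (rather than spatially varying) coefficients and made-up function names; it is not TOADS code.

    # Illustrative sketch (not TOADS code) of the Alard-style linear kernel fit:
    # minimize chi^2 = sum over pixels of [new - sum_i a_i * (ref (x) B_i)]^2 / sigma^2,
    # where the B_i are fixed kernel basis functions, so the fit is linear in the a_i.
    import numpy as np
    from scipy.signal import fftconvolve

    def gaussian_basis(half_size, sigmas):
        """Simple Gaussian kernel basis (Alard 1999 uses Gaussians times polynomials)."""
        y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
        return [np.exp(-(x**2 + y**2) / (2.0 * s**2)) for s in sigmas]

    def fit_kernel(ref, new, noise, basis):
        """Least-squares kernel coefficients.  Here they are constant across the
        image, whereas the real code lets them vary slowly with position."""
        # Convolve the reference with each basis function once.
        conv = [fftconvolve(ref, B, mode="same") for B in basis]
        A = np.array([[np.sum(ci * cj / noise**2) for cj in conv] for ci in conv])
        b = np.array([np.sum(ci * new / noise**2) for ci in conv])
        coeffs = np.linalg.solve(A, b)
        kernel = sum(a * B for a, B in zip(coeffs, basis))
        return kernel, coeffs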

She shows a plot of residuals: for one object, she takes all the pixels around the object in the subtraction and divides each by the same pixel in the reference. She did this for lots of objects and stacked the histograms. At the centers of objects there is less than 1% residual. As you go farther away the scatter goes up, naturally, since the reference value goes to zero.

She goes on to the new tools written for the lightcurve measuring software. These include DAOPHOT and a tabulated PSF being integrated into TOADS; production of a DAOPHOT catalog; geometric alignment of images from different telescopes; and fitting of the SN flux and the galaxy pixels on a time sequence of geometrically aligned postage stamps. (I.e., a step up from just doing reference subtraction.)

The time sequence postage stamp fitting models each image as:

   image_k = bckgrd_k + F_k * PSF_k(x0,y0) + galaxy (x) PSF_k

This is a linear fit using information on the galaxy from all the images (whereas reference subtraction just measures the galaxy from the reference image). The resolution is limited by the best PSF of the sequence: PSF_k = K_k (x) PSF_best. Then

   image_k = bckgrd_k + F_k * PSF_k(x0,y0) + galaxy (x) PSF_best (x) K_k

bckgrd_k, PSF_best, K_k, (x0,y0), and hence PSF_k are known (estimated in the subtraction software?). What is fit is F_k and the best-seeing galaxy pixels (G_ij_best).

The total number of parameters in the fit is the number of images (~15) plus the number of galaxy pixels in the stamp (say 50^2).
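
As a concrete (if highly simplified) illustration of that linear fit, here is a Python sketch; it is not the TOADS implementation, all names are made up, it ignores noise weighting, and it assumes the per-image PSF stamps are already shifted to the SN position (x0,y0) and that the backgrounds and kernels K_k come from the subtraction stage, as stated above.

    # Illustrative sketch of the simultaneous linear fit described above:
    #   image_k = bckgrd_k + F_k * PSF_k(x0,y0) + galaxy (x) PSF_best (x) K_k
    # Unknowns: one flux F_k per image plus one value per galaxy pixel in the stamp.
    import numpy as np
    from scipy.signal import fftconvolve

    def build_design_matrix(psf_stamps, kernels, psf_best, stamp_shape):
        """Columns: one SN-flux column per image, then one column per galaxy pixel
        (that unit pixel seen through PSF_best (x) K_k on each image)."""
        n_img = len(psf_stamps)
        n_pix = stamp_shape[0] * stamp_shape[1]
        cols = []
        # SN flux columns: the PSF on image k at the SN position, zero elsewhere.
        for k in range(n_img):
            col = np.zeros((n_img, n_pix))
            col[k] = psf_stamps[k].ravel()
            cols.append(col.ravel())
        # Galaxy pixel columns: the same galaxy pixel on every image.
        for j in range(n_pix):
            unit = np.zeros(n_pix)
            unit[j] = 1.0
            unit = unit.reshape(stamp_shape)
            col = np.array([fftconvolve(unit, fftconvolve(psf_best, K, mode="same"),
                                        mode="same").ravel() for K in kernels])
            cols.append(col.ravel())
        return np.column_stack(cols)

    def fit_fluxes_and_galaxy(images, backgrounds, psf_stamps, kernels, psf_best):
        """Unweighted linear least-squares solution for [F_1..F_N, galaxy pixels]."""
        stamp_shape = images[0].shape
        data = np.concatenate([(im - b).ravel() for im, b in zip(images, backgrounds)])
        A = build_design_matrix(psf_stamps, kernels, psf_best, stamp_shape)
        solution, *_ = np.linalg.lstsq(A, data, rcond=None)
        n_img = len(images)
        return solution[:n_img], solution[n_img:].reshape(stamp_shape)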

The key then is to build the software using the tools. Procedure:

What remains to be done:


Stuff going on at LBL

Brenda, HST Data from 1998 SNe

The trick is to figure out how to do precision photometry with the HST. The total dataset is 13 SNe observed in two different bands, at up to five different post-peak epochs each.

The approach is to generate Tiny Tim profiles in order to model the data; this is to minimize the noise and the CTE effects. Tiny Tim is a black-box code from STScI that generates model point spread functions given the chip, the type of source, etc., anywhere on WFPC2 (or at least on the PC). Just last month there was a revision of Tiny Tim. The models are generated and translated to the position of the centroid of the supernova (good to about 1/10 pixel accuracy). The model is normalized to the amplitude of the supernova, and that gives you photometry.

She does a grid search (in x, y, and amplitude) and finds the best fit by marginalizing a chi-square cube. From there: generate lightcurves, combine with the ground-based data, and publish.
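
Here is a rough Python sketch of that grid-search-plus-marginalization step; it is not the actual code, and psf_model() stands in for the real Tiny Tim machinery (a shifted, unit-normalized model PSF).

    # Rough sketch of a chi-square grid search over sub-pixel position and
    # amplitude, followed by marginalization of the chi-square cube.
    import numpy as np

    def chisq_cube(data, noise, psf_model, x_grid, y_grid, amp_grid):
        """psf_model(dx, dy) is assumed to return a unit-normalized model PSF
        shifted by (dx, dy); it stands in for the Tiny Tim machinery."""
        cube = np.empty((len(x_grid), len(y_grid), len(amp_grid)))
        for i, dx in enumerate(x_grid):
            for j, dy in enumerate(y_grid):
                model = psf_model(dx, dy)
                for k, amp in enumerate(amp_grid):
                    cube[i, j, k] = np.sum(((data - amp * model) / noise) ** 2)
        return cube

    def marginalize(cube, axis_grids):
        """Convert chi^2 to a likelihood and marginalize onto each axis."""
        like = np.exp(-0.5 * (cube - cube.min()))
        marginals = []
        for axis, grid in enumerate(axis_grids):
            other = tuple(a for a in range(cube.ndim) if a != axis)
            p = like.sum(axis=other)
            p /= p.sum()                      # normalize
            marginals.append((grid, p, grid[np.argmax(p)]))
        return marginals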

First topic: the coordinate system. Tiny Tim always returns an even-sized array and centers the PSF on the center of a pixel, so there's a systematic offset up and to the right from the center of the array. This is fixed by shaving off an extra row and column to make the PSF centered on the array. With that done, she figured out where the centroids were, starting from what looked like the centroid pixel. She shows us a scatter plot of where the centroid was found relative to the nearest pixel; by eye it looks like a pretty even distribution. (This was a check for systematic errors; you'd expect a random distribution.)
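
A tiny sketch of the even-array fix, under the assumption that the PSF peak sits one pixel above and to the right of the geometric center, so the first row and column are the ones to drop:

    # Sketch of the even-array fix: Tiny Tim returns an even-sized array with the
    # PSF centered on a pixel, so the peak is offset from the array center;
    # trimming one row and one column recenters it.  Which row/column to drop
    # depends on the convention; this assumes the first of each.
    import numpy as np

    def recenter_even_psf(psf):
        trimmed = psf[1:, 1:] if psf.shape[0] % 2 == 0 else psf
        return trimmed / trimmed.sum()   # keep unit normalization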

She shows us a sample fit, with a 3D surface plot and a slice plot. She also shows the marginalized x, y, and amplitude distributions, where we can see the fit values and the uncertainties. This example ran off the grid in Y, so she has to extend the code to allow the fit to spill over into neighboring bins. (The fit considers 25 pixels, i.e. +-2 pixels from the central pixel.)

She shows 2D marginal plots, which are how you can check for correlated errors between the different fit parameters. In the example she shows us, the ellipses don't look tilted, which suggests that there aren't any really strong correlations.

She goes through the time series of 9819.

Right now, she fits the position of the SN in each image, rather than using the transformations. This is a problem once the supernovae get so faint that you can't get accurate centroids. Alex Conley is working on geometric alignment of the HST images, but right now you can't transform between images to good enough precision. Alex hopes to get the transformations down to an uncertainty of 0.1 pixel. This work is still in progress.

There are still some things to sort out; she's still not getting reduced chi2's of 1.

She mentions the CTE (charge transfer) effect. It was first discovered in December of 1993 that there was a 10% ramp in the Y direction (along the chip). The first response was to lower the temperature of the chip and then recommend a linear correction along that direction. In 1996, studies of Omega Cen started, doing short-exposure aperture photometry in several bands. In 1998, Stetson did a ground-based study of Omega Cen to compare to the WFPC2 results. He found a smaller dependence than the other work, which is attributed to the fact that he had to use brighter stars. Another paper, by Sarajedini (1999), looked at the HDF over a two-year period and found results in agreement with the other work.

The most recent results show that the effect can be 40% for faint stars on low background with short exposures near the top of the chip. More typical is 10%, which can be corrected down to 1% or so. (Aside: for 9784, there may be a systematic error of a couple of percent from getting the CTE correction wrong, which is small compared to the other errors.) "Each trap demands a tribute!"

The CTE losses depend on time, position, background, filter, exposure time, flux, aperture size, morphology, and also the recent exposure history of the chip.... Greg notes that the reason it depends on aperture size is that the CTE is measured discretely; people don't want to extrapolate, so they only quote values for measurements done in the way they did their measurement.
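
Purely as a schematic of the simple ramp-style correction described above, here is what a linear row-dependent CTE correction looks like; the 4% coefficient is an illustrative placeholder, not a published value, and real corrections depend on all the quantities in the list above.

    # Schematic linear CTE ramp correction along the Y (row) direction.
    # The ramp_fraction of 4% is a made-up placeholder, NOT a published number;
    # n_rows=800 assumes the 800-row WFPC2 chip format.
    def cte_correct(flux, y_row, ramp_fraction=0.04, n_rows=800):
        """Boost the measured flux by a fraction that grows linearly with row
        number (charge transferred over more rows loses more signal to traps)."""
        return flux * (1.0 + ramp_fraction * (y_row / n_rows))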

(Stetson is working on figuring out CTE effects within certain PSF profiles. If he doesn't come through (he keeps saying that he's not yet ready to release it), we may have to go back to doing our photometry in predefined apertures for which we have quoted correction values that we can use.)

Finally she shows a sample lightcurve fit of 1998as (SN98122).

Saul mentions a potential bias with PSF fitting; with dim objects, the fit will tend to slide towards upwards fluctuations. This is something that probably ought to be Monte Carloed.
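
A sketch of the kind of Monte Carlo Saul is suggesting: inject a faint source of known flux into noise, refit with the position left free, and look for a positive bias in the recovered flux. The Gaussian PSF and all the numbers here are placeholders.

    # Monte Carlo sketch: does a free-position PSF fit of a dim source bias the
    # recovered flux high (by sliding onto upward noise fluctuations)?
    import numpy as np

    rng = np.random.default_rng(42)

    def gaussian_psf(shape, x0, y0, sigma=1.5):
        y, x = np.indices(shape)
        psf = np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))
        return psf / psf.sum()

    def flux_bias(true_flux=20.0, noise_sigma=1.0, n_trials=2000, size=15):
        center = size // 2
        psf = gaussian_psf((size, size), center, center)
        biases = []
        for _ in range(n_trials):
            image = true_flux * psf + rng.normal(0.0, noise_sigma, (size, size))
            # Free-position fit: try nearby centers, keep the best chi^2.
            best = None
            for dx in range(-2, 3):
                for dy in range(-2, 3):
                    model = gaussian_psf((size, size), center + dx, center + dy)
                    amp = np.sum(image * model) / np.sum(model * model)  # linear amp fit
                    chi2 = np.sum((image - amp * model) ** 2)
                    if best is None or chi2 < best[0]:
                        best = (chi2, amp)
            biases.append(best[1] - true_flux)
        return np.mean(biases), np.std(biases) / np.sqrt(n_trials)

    # A significantly positive mean bias would confirm the effect Saul describes.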

Alex Conley on HST Transformations

He's been working on trying to do the geometric alignment of HST images. The point is that we want to be able to find our SNe when they're faint. We've been doing these alignments of ground based data for a long time, and they work well. The problem with the HST PC is that we don't have enough objects on the chip itself in order to come up with good transformations. The PC is very small in area, and frequently we only have three or four objects (perhaps dim and extended) on the PC with the supernova. What Alex has tried to do is use objects in the three WF chips. In order to do that, you have to put all four chips on a common frame.

Alex says that there are a couple of solutions out there for the geometric distortions of WFPC2, which should give the information he needs. None of these solutions works very well so far; he's not fully sure why, but he is investigating where the problem might be. Alex shows us some examples of the transformation not working: he shows four epochs of 988, with a circle put down by hand on one epoch and then moved to the other images using the transformation. Alas, the circle moves relative to the supernova by as much as a pixel or two. So, there are still problems.
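
For reference, the fitting step itself is simple once the objects are matched and the distortion solution has been applied; here is a generic affine-transformation fit in Python (not Alex's code), under the assumption that the residual image-to-image mapping is roughly linear.

    # Generic least-squares affine transformation between matched object lists
    # from two images, once the chips have been put on a common frame.
    import numpy as np

    def fit_affine(x1, y1, x2, y2):
        """Fit (x2, y2) ~= A @ (x1, y1) + b by linear least squares."""
        A = np.column_stack([x1, y1, np.ones_like(x1)])
        coef_x, *_ = np.linalg.lstsq(A, x2, rcond=None)
        coef_y, *_ = np.linalg.lstsq(A, y2, rcond=None)
        return coef_x, coef_y

    def apply_affine(coef_x, coef_y, x, y):
        A = np.column_stack([x, y, np.ones_like(x)])
        return A @ coef_x, A @ coef_y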

Alex says that he's gone back to some of the original Omega Cen fields used to derive the global solution. He says that one thing you see by plotting the residuals is that there are clearly two distributions, a big fat one and a narrow spike. He says it's not immediately obvious what's causing that.

Michael Wood-Vasey on Subtraction Software

He's working on the same sort of stuff that Delphine was describing. He's adapting the same Alard algorithm with a continuously variable kernel, specifically for subtraction. He says that he still has some nagging issues having to do with normalization of the kernel.
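
One common sanity check on the normalization (offered here as a generic illustration, not as Michael's approach) is that the sum of the matching kernel should equal the photometric flux ratio between the two images, which can be measured independently from bright, isolated stars.

    # The integral (sum) of the matching kernel sets the photometric scaling
    # between the two images, so compare it with the flux ratio of matched stars.
    import numpy as np

    def check_kernel_normalization(kernel, star_fluxes_new, star_fluxes_ref):
        kernel_sum = kernel.sum()
        flux_ratio = np.median(np.asarray(star_fluxes_new) /
                               np.asarray(star_fluxes_ref))
        return kernel_sum, flux_ratio, kernel_sum / flux_ratio - 1.0  # fractional mismatch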

He's doing this with the eventual goal of getting automated scanning working for the SN Factory.

Dan Kasen on Spectrum Modelling

He's been doing more theoretically related work. He's been working on using wavelets as a way of, hopefully, objectively analyzing supernova spectra. Wavelet transforms are like Fourier transforms but use different basis functions: localized wiggles, which means, qualitatively, that the transform can isolate individual features.

The important point is that this extracts the important structural information. Dan shows a sample SN spectrum (a very high S/N one) and the model that comes from the wavelet transform. The model pretty much reproduces all the important features. The model is based on only the forty largest coefficients, but all the features are still there; this represents a huge compression of the information.
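
A minimal Python sketch of that compression step, using the PyWavelets package; the choice of wavelet and the helper name are illustrative, not what Dan actually uses.

    # Keep only the N largest wavelet coefficients of a spectrum and reconstruct.
    import numpy as np
    import pywt

    def wavelet_compress(spectrum, n_keep=40, wavelet="db4"):
        coeffs = pywt.wavedec(spectrum, wavelet)
        arr, slices = pywt.coeffs_to_array(coeffs)
        # Zero out everything except the n_keep largest-magnitude coefficients.
        threshold = np.sort(np.abs(arr))[-n_keep]
        arr_kept = np.where(np.abs(arr) >= threshold, arr, 0.0)
        model = pywt.waverec(
            pywt.array_to_coeffs(arr_kept, slices, output_format="wavedec"), wavelet)
        return model[:len(spectrum)], arr_kept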

He shows plots, as a function of epoch, of a couple of specific coefficients, which appear to track two different features.

Dan says he's written code which fits an observed spectrum to a template in wavelet space, which may be an effective way of figuring out what day the spectrum is at. These wavelet fits may also be a way of saying how well a spectrum fits a model, or what the variance between supernova spectra is.
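
A sketch of what such a fit might look like, using the hypothetical wavelet_compress() helper from the previous sketch: compare the retained wavelet coefficients of the observed spectrum to those of epoch templates and take the best chi-square. This is only an illustration of the idea, not Dan's code.

    # Match an observed spectrum against epoch templates in wavelet-coefficient
    # space; the template with the smallest chi^2 gives the best-fit epoch.
    import numpy as np

    def best_epoch(observed, templates, coeff_sigma=1.0):
        """templates: dict mapping epoch (days) -> template spectrum, assumed to
        be on the same wavelength grid as the observed spectrum."""
        _, obs_coeffs = wavelet_compress(observed)
        scores = {}
        for epoch, spectrum in templates.items():
            _, tpl_coeffs = wavelet_compress(spectrum)
            scores[epoch] = np.sum(((obs_coeffs - tpl_coeffs) / coeff_sigma) ** 2)
        return min(scores, key=scores.get), scores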

Dan says that for very nice nearby spectra, it can match the day to within a day or so. For our noisier distant supernovae, matching redshift and day, it gets to within maybe five or six days. Peter notes that for the high-z supernovae you have to worry about the host galaxy as well; it may be that a couple of wavelets end up representing the host galaxy, but that study still has to be done.

Dan says he hasn't yet done a study of type discrimination.

Dan says that he's also working with some theoretical models to help understand the physics behind the spectra. This involves Peter's radiative transfer models.

Dan, brainwashed by Peter, is putting in his vote for going after Type II supernovae in the future.


Catch up on SNAP

Saul is going to tell us informally where SNAP is. Mostly so far it's been science funding politics. Mike Levi, who is co-PI along with Saul, has been rushing around madly putting together teams of engineers, getting calculations done, and talking with SSL people (who've put up satellites before). A big monster proposal was submitted to DOE and NSF for their first review. However, to get that review at all, there were something like five trips just to do the groundwork to get them to even talk; this meant working up from lower-level to higher-level people. Back in December there was a pre-review for SAGENAP (Scientific Assessment Group for Experiments in Non-Accelerator Physics), which is for particle-physics-like experiments that don't use accelerators.

SNAP is seen as a sort of particle physics project (i.e., what is the dark energy?). The pre-review gave SNAP enough money to go for the actual review at the end of March. A number of SCP people flew out for that review. They presented for two and a half hours, and later that evening were given a list of questions, which the team worked on late into the night. Answers and transparencies were prepared for the next morning, and questions were answered for several hours the next day.

Some of the questions were science questions. One typical question is why various parts of the project can't be done from the ground; we keep getting asked this even though Greg has come up with a good answer. There are also political questions, such as who has access to the data and whether it will be released to everybody. There were also questions about why $17 million is needed to do the preparation study; that question will be asked again and again.

The current status is that we haven't heard the formal response from the committee members; instead we have informal responses which indicate that people like the thing. Probably we'll be asked back for another review in January. The good news is that the DOE released $400,000 of end-of-year money to keep the studies going, and they seem to have written us into budgets for next year. NSF has also asked us to submit a formal proposal. Then, we also got a visit from somebody from NASA Goddard. So far NASA had been left out of it, because in the past NASA had the idea that if they did anything but launch the satellite, they had to run the whole thing, and this was to be a DOE project.... However, it looks like this time it may be possible to get NASA scientists involved without them taking over. It is unclear at this point whether that will work; we are waiting for the highest levels at DOE to go talk to the highest levels at NASA.

The complaints have mostly to do with details, but on the whole people seem to like the project. At this point, all the different elements brought to bear have been positive, and it sounds like a natural project to fly. This would be a wide-field imager up at about the same time as, or a little before, the NGST, and it might find a lot of targets that the NGST would want to follow up.

Richard Ellis is working on using the same data from SNAP for weak lensing.

Beyond that, we're at the organizational stage of things. There was a first set of signatories for the proposal. Now we're going to have to really get down to the nitty gritty of what is needed, who's really going to build each piece, and which people will really end up being in the final collaboration. There is a lot of management involved as well.

There have been presentations made often enough now that there are lots of transparencies (see snap.lbl.gov). That web page is basically public; there are a few things we should probably be careful about, but it probably just means that we ought to take some stuff off of it. We should feel free to go around telling everybody about this web page.

Carl raises the issue of divorcing the SCP from SNAP, which Mike Levi is supporting (and so do I). Carl thinks that scientifically that's nuts. Saul mentions that probably a lot of the group will end up getting involved in SNAP, but the grouping will probably not be the same as with the SCP. We don't really know yet whether DOE or NASA or NSF is going to set up any rules or restrictions.


Tomorrow morning there will be more breakout sessions. The right time to begin the main meeting will be about 10:00 AM. The "executive committee" will meet at ca. 8:30.