Deepsearch Meeting Notes, 1999 November 17


Killing Time With an Opposition Poll

So, we were all here sitting around shortly after 2:00. Saul wasn't here. We sent people to find him. Word came back that Peter was supposed to "stall" for 10 minutes. I left. It's now 2:25, and Saul is here. I think that in the meantime Peter took a poll as to the highest-redshift Type Ia the other team has found. The results:

Who            Value
Gerson         1.25
Alex Conley    1.35
Michael        1.31
Greg           1.21
Dan            1.1
Peter          1.23
Brenda         1.27
Don            1.15
Ana            1.18
Kirsten        1.28
Maria          1.20
Rob            AGN
Carl           1.36
Saul           1.25
Alex Lewin     1.16

So, while I was taking down that table, Greg was talking about (I think) the French design of an integral field spectrometer.


Carl's Announcements

Carl says that on December 3rd he's hosting a working group to try to establish a language for controlling robotic telescopes over the network. The model is a lot of users and a lot of telescopes, all of it coordinated centrally through some sort of central database. He's gotten some support from the NSF education folks. He welcomes any and all, although space is limited. It will happen in Perseverance Hall.

He also announces that the Leonids may be peaking at 6:00 tonight. He may be taking some physics students out to Inspiration Point. In principle (in theory, in other words), this year's Leonids are supposed to be among the best in 30 years.

He also mentions the space station telescope. He thinks he has a good trajectory for getting this funded, one which won't interfere with SNAP. The Japanese are apparently very interested in having a UV imager; you can put some sort of wavelength shifter on top of the CCD in order to make the thing image in the UV.


Late-Time Supernova Templates

Peter tells us that he and Greg will be working on something after the current SNAP proposal is done. The recently accepted AJ paper noted that some supernovae have different late-time templates. Peter and Greg think that we have enough nearby supernovae to fit various templates, and to try to determine a second parameter, beyond stretch, that accounts for things falling above and below the Leibundgut template at late times.
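
To make the second-parameter idea concrete, here is a minimal sketch (my own illustration, not Peter and Greg's actual method) of a two-parameter light-curve fit: stretch plus a late-time amplitude multiplying some tail shape. The template and tail functions are placeholders; a real fit would interpolate a tabulated template.

    import numpy as np
    from scipy.optimize import curve_fit

    # Placeholder template: exponential rise to peak at day 0, exponential tail.
    def template(t):
        return np.where(t < 0, np.exp(t / 5.0), np.exp(-t / 20.0))

    # Placeholder late-time shape: zero before ~day 25, growing afterwards.
    def tail_shape(t):
        return np.clip((t - 25.0) / 25.0, 0.0, None)

    def model(t, s, a):
        # s = stretch; a = the hypothetical second parameter scaling the
        # late-time excess above or below the template
        return template(t / s) + a * tail_shape(t)

    # Fake photometry, for illustration only
    rng = np.random.default_rng(0)
    t_obs = np.linspace(-10.0, 60.0, 30)
    flux = model(t_obs, 0.95, 0.02) + rng.normal(0.0, 0.005, t_obs.size)

    (s_fit, a_fit), _ = curve_fit(model, t_obs, flux, p0=(1.0, 0.0))
    print(f"stretch = {s_fit:.3f}, late-time parameter = {a_fit:.4f}")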

One of the parameters that Peter Hoeflich has been talking about recently is the C/O ratio in the progenitor. As C/O goes down, you have less energy to put into the explosion. One of the differences you get with different C/O ratios, Hoeflich says, is a different peak-to-tail ratio.

Potentially, the C/O ratio could correlate with different galaxy environments. There is anecdotal evidence (i.e. nothing real) that supernovae in different environments show these different tail templates.

They will take the 60 or so nearby supernova templates that they have. This includes all of the Hamuy supernovae, another 20 from Adam Riess, and about 20 good ones closer than Coma (or Virgo) published before Hamuy. It sounds like they aren't counting on the ones from our nearby campaign yet. There seems to be enough data already analyzed to do this project: to look for this second parameter and see whether it correlates with host galaxy or anything else.

Gerson says that he's looked at the 18 Hamuy supernovae out to day 50, and asserts that they all seem to fall on the same curve. Saul notes that we're talking about a 2-4% deviation out at 50 or 60 days; this will be tough to measure on all but the best supernovae. Greg says that he won't be surprised if most fall on the Leibundgut template, with occasional outliers.

Alex Kim wonders if some of what Peter's talking about in 1986G and 1994D could have to do with extinction. Peter says he's put in a special reddening correction for 86G. He also notes that it could be reflection off a dust cloud (a light echo) causing the light to come up a little later on.


Hung Chung's Final Word on K-Corrections

Hung Chung gives an oversimplified view of what we do to our data. We start with the instrumental reading and do a calibration. That calibration transforms things to the standard (Landolt) system for objects with stellar spectra. However, supernovae have spectra different from stars', so we do a "star to supernova correction." This correction turns out to be small. He shows the integrals that go into it; there are two that mostly cancel each other out. The correction is usually of order 0.005 magnitudes. (In fact, we don't really do this calculation.)
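
Schematically (my reconstruction, so take it as a sketch rather than Hung Chung's exact expression), the star to supernova correction compares the supernova and stellar spectra through the instrumental and standard passbands:

    Delta_m = 2.5 log[ Int lambda F_SN(lambda) S_inst(lambda) dlambda / Int lambda F_SN(lambda) S_std(lambda) dlambda ]
            - 2.5 log[ Int lambda F_star(lambda) S_inst(lambda) dlambda / Int lambda F_star(lambda) S_std(lambda) dlambda ]

When the instrumental passband S_inst is close to the standard passband S_std, the two bracketed ratios are nearly equal and the correction nearly vanishes, consistent with the ~0.005 magnitude figure.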

So, now that we have the magnitude on the standard system, we have to do the K-correction. The K-correction tells us how to get the correct luminosity distance from the magnitude, now on the Johnson et al. system. Hung Chung shows an overhead qualitatively illustrating what a cross-filter K-correction does, and includes the equation:

m = M + 5 log10(D_L) + 25 + K_BR

Here m is our observed magnitude (R), M is the known supernova magnitude (B), and D_L is the luminosity distance (in Mpc) that we fit with our cosmology, as a function of z, Omega_M, and Omega_Lambda. This is effectively the definition of the K-correction.

He shows some equations with the definitions of m and M in terms of filter functions, spectral energy distributions, and the standard (zeropoint) spectral energy distribution (i.e., Vega). When he plugs all of these things in, you do indeed get the K_BR equation that Alex Kim derived in his K-correction paper. Hung Chung writes it a couple of ways.
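
For what it's worth, here is a minimal numerical sketch of the count-based cross-filter K-correction computed straight from the definition above: the synthetic R magnitude of the redshifted SED minus the synthetic B magnitude of the rest-frame SED, both zeropointed to a Vega-like spectrum. All of the SEDs and filter curves are made-up placeholders; a real calculation would use tabulated Bessell filters, a Vega spectrum, and an SN Ia spectral template.

    import numpy as np

    def syn_mag_counts(wave, flux, f_wave, f_trans, zp_wave, zp_flux):
        # Count-based synthetic magnitude relative to a zeropoint SED.
        # The extra factor of wave converts energy flux to photon counts.
        T = np.interp(wave, f_wave, f_trans, left=0.0, right=0.0)
        Tz = np.interp(zp_wave, f_wave, f_trans, left=0.0, right=0.0)
        num = np.trapz(wave * flux * T, wave)
        den = np.trapz(zp_wave * zp_flux * Tz, zp_wave)
        return -2.5 * np.log10(num / den)

    # Placeholder inputs (not real data)
    wave = np.linspace(2000.0, 12000.0, 2000)                   # Angstroms
    sn_flux = np.exp(-0.5 * ((wave - 4000.0) / 1500.0) ** 2)    # fake SN SED
    vega_flux = np.ones_like(wave)                              # fake zeropoint SED
    B_wave = np.array([3600.0, 4400.0, 5200.0])
    B_trans = np.array([0.0, 1.0, 0.0])                         # fake B filter
    R_wave = np.array([5500.0, 6500.0, 8000.0])
    R_trans = np.array([0.0, 1.0, 0.0])                         # fake R filter

    z = 0.48
    # Redshifting stretches wavelengths by (1+z); the 1/(1+z) dilution of
    # f_lambda is what becomes the 2.5 log10(1+z) term in the analytic formula.
    m_R_obs = syn_mag_counts(wave * (1.0 + z), sn_flux / (1.0 + z),
                             R_wave, R_trans, wave, vega_flux)
    m_B_rest = syn_mag_counts(wave, sn_flux, B_wave, B_trans, wave, vega_flux)
    K_BR = m_R_obs - m_B_rest
    print(f"K_BR(z={z}) = {K_BR:.3f} mag (placeholder SEDs; illustrative only)")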

There were two big issues. One: should the integral be done over counts or over energy? He tracked back some references, which gave integrals over various things (counts or energy) and sometimes ill-defined sensitivity functions.... Schneider, Gunn, and Hoessel (1983) said that it was photon counts, not energy. What Hung Chung then tried to figure out was how the magnitude system itself is defined, since that's what we're really using. Sterken and Manfroid (1992) ("Astronomical Photometry - A Guide") had an equation for something called E_m, which is what you take the log of in order to get a magnitude; this turned out to be proportional to a number of photons. Golay, 1974, "Introduction to Astronomical Photometry," again gave something which is effectively photons. After all of this, he was pretty sure that magnitudes are defined in photons, but the literature was confusing because it pretty much always talks about flux, and sometimes gives things in Watts.
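
In other words (my paraphrase of where he ended up), the count-based synthetic magnitude looks like

    m = -2.5 log[ Int lambda F(lambda) S(lambda) dlambda / Int lambda F_Vega(lambda) S(lambda) dlambda ]

while the energy-based version simply drops the factor of lambda from both integrands.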

Eventually, Hung Chung contacted Landolt and proposed to him a thought experiment originally posed by Rob: suppose you have two stars of different colors, but with the same R magnitude. Is the number of photons the same, or is the energy flux the same? Landolt, apparently, emailed back saying: aren't those two the same? Oh, well, so much for that. Landolt did give Hung Chung some references, though. One was Straizys, 1992, "Multicolor Stellar Photometry." This reference also indicated that magnitudes are based on photon counts.

Landolt also referred Hung Chung to Prof. Harold Weaver in the Astronomy department here, who is about 3.5 times Hung Chung's age. They talked for about an hour and a half. When Hung Chung asked the question, Weaver's first response was the same as Landolt's, but eventually Hung Chung communicated what he was trying to ask, and Weaver pretty much told him that it was the number of photons in each dlambda bin. Now, why should Hung Chung believe him? He had only asked Weaver because Landolt told him to. But the filters we're using right now are basically the Johnson filters, Johnson basically established the magnitude system, and Johnson, Hung Chung tells us, was Weaver's grad student.

Hung Chung found one other reference which did say energy.... But apparently there was another statement in there which was obviously wrong, so he's decided that we should discredit this guy.

Saul is suggesting to Hung Chung that it might be worth writing this up for PASP. It would point out that nobody's really thought too hard about this because it didn't matter for what people have done before, but that with what's coming up people might start to care. This could matter for things like SNAP.

Now that Hung Chung believes it's counts, the next question he asks is: why does this make such a small difference to our K-corrections? He showed that the difference between the energy and counts versions of our K-corrections is small. If you use a wrong filter match it gets pretty bad, but as long as you use the right filter match the difference is always less than 0.05 magnitudes, and usually less than 0.025.
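
Here is a toy that shows where the convention enters (placeholder SED and filter, so the printed number means nothing by itself). The counts and energy conventions give different synthetic magnitudes for any one SED; a K-correction is a difference of two such magnitudes, so presumably most of the lambda weighting cancels when the filter match is good, which would be why the residual is so small.

    import numpy as np

    def syn_mag(wave, flux, trans, zp_flux, counts=True):
        # counts=True weights by lambda (photon counting); False is energy-weighted
        w = wave if counts else np.ones_like(wave)
        return -2.5 * np.log10(np.trapz(w * flux * trans, wave) /
                               np.trapz(w * zp_flux * trans, wave))

    # Placeholder SED and filter (illustration only; not real data)
    wave = np.linspace(3000.0, 9000.0, 1000)
    sn = np.exp(-0.5 * ((wave - 5000.0) / 1200.0) ** 2)
    vega = np.ones_like(wave)
    filt = np.exp(-0.5 * ((wave - 6500.0) / 500.0) ** 2)

    dm = (syn_mag(wave, sn, filt, vega, counts=True)
          - syn_mag(wave, sn, filt, vega, counts=False))
    print(f"counts-vs-energy difference for one magnitude: {dm:+.4f} mag")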

Thinking about bolometers versus photon counters, he says that it would seem that the signal returned by a photon counter should go down by (1+z), since the photon counter doesn't care what the energy of the photons is. Naively, that factor would show up as a 0.5 or 0.6 magnitude difference between counts and energy. So why don't you see it?

Hung Chung talked to Alex Kim, and came up with this explanation. Think about a spectrum which is flat in energy. A bolometer measures the integral of I(lambda) dlambda, while the photon counter measures the integral of n(lambda) dlambda or, equivalently (up to constants), of lambda I(lambda) dlambda. If you redshift the I(lambda) spectrum, the photon counter's signal goes down by a factor of (1+z) less than the bolometer's does; the lambda weighting of the integral is what does this. What happens if you put a filter on top of it? He puts a QE curve on top of this. ...He lost me. And he seems to have opened a can of worms: we're getting back into our usual K-correction argument, where everybody has their own way of thinking about it which doesn't really do much to further understanding, but just gets us all more confused. (At least Hung Chung settled the counts vs. energy issue.)

I think the point that Hung Chung is making is that the lambda weighting shifts, and that matters for an integral which has a lambda in it....
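
One way to make that explicit on paper (a substitution identity, not necessarily the clean argument we're after): substitute lambda' = lambda/(1+z) in the two integrals of a redshifted spectrum through a fixed filter S,

    Int I(lambda/(1+z)) S(lambda) dlambda        = (1+z)   Int I(lambda') S((1+z) lambda') dlambda'
    Int lambda I(lambda/(1+z)) S(lambda) dlambda = (1+z)^2 Int lambda' I(lambda') S((1+z) lambda') dlambda'

so the count integral, with its lambda weight, picks up exactly one more power of (1+z) than the energy integral does. How that interacts with the filter and the zeropoint integrals is the part that still needs to be spelled out.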

A clean argument still needs to be made. I haven't heard something which is clear and unambiguous about this factor of 1+z.

I'm trying to get the group to table this discussion.