Deepsearch Meeting Notes, 1999 August 11


Hung Chung and K-corrections

Hung Chung has been working on trying to figure out how we're supposed to do K-corrections. The statement Alex Kim makes in his paper that "it should be counts" isn't really very satisfying.

Hung Chung spent a few weeks convincing himself that he knew nothing by getting books out from the library and reading on how photometry and K-corrections work. More recently, he's been trying to write some programs and rip off some programs, to figure out if we did K-corrections wrong, how wrong they are.

The main thing is to figure out how to state the problem correctly.

He shows a simplified view of the reduction process. You start with an instrumental reading. You then calibrate it, so that you put your magnitudes on the standard system (in our case, the Landolt system). This is the "initial data product." Next, in principle, we ought to do a star-to-supernova correction. The reason for this is that if we do a perfect calibration, we only get the magnitudes for all the stars right. Since supernovae have different spectra, we may need to do an additional correction. For our last paper, we ignored this correction, in the hope and belief that it was small.
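As a rough schematic of that chain (the function and variable names, and the sign conventions, are mine, not Hung Chung's):

def reduce_to_restframe_mag(m_instrumental, zeropoint, color_term, color,
                            star_to_sn_corr, k_corr):
    m_standard = m_instrumental + zeropoint + color_term * color   # calibration onto the Landolt system
    m_sn_standard = m_standard + star_to_sn_corr                   # star-to-supernova correction (hopefully small)
    return m_sn_standard - k_corr                                  # remove the (cross-filter) K-correction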

Finally, there is the K-correction, which is substantial, and can be as large as 0.5 magnitude or more at some redshifts.

Greg points out that the K-correction has two parts, the cosmology part (all the 1+z's, which we aren't even sure we're doing completely right), and then the part due to all of the details of spectra and integration and such.

For the zeroth part, the calibration, Hung Chung shows that we plot m_J-C - m_I (J-C = Johnson-Cousins, I = instrumental) as a function of color for standard stars. Then, whenever we measure a star of a given color, we know how far our system is off of the standard system, so we can recover a real magnitude. In practice, we just use a linear fit to do this.
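A minimal sketch of that linear fit, with entirely made-up numbers standing in for the standard-star measurements:

import numpy as np

color  = np.array([0.2, 0.5, 0.8, 1.1, 1.4])            # e.g. B-V of the Landolt standards
m_jc   = np.array([14.10, 15.02, 13.55, 16.20, 14.80])  # catalog (Johnson-Cousins) magnitudes
m_inst = np.array([14.04, 14.99, 13.56, 16.25, 14.89])  # our instrumental magnitudes

# Linear fit of (standard - instrumental) vs. color: the slope is the color term,
# the intercept is the zero point.
slope, zeropoint = np.polyfit(color, m_jc - m_inst, 1)

def calibrate(m_instrumental, star_color):
    # Put a measurement of a star of known color onto the standard (Landolt) system.
    return m_instrumental + zeropoint + slope * star_color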

Hung Chung shows some integrals which I will try to get here, but he shows them right now without saying whether he's doing the integrals over energy or over counts.

The first step is the star-supernova correction. This is a "difference of a difference": a correction between your filter and the Johnson-Cousins filter for a supernova and for a star of the same color as the supernova. (The calibration was already done for stars, including the difference between your filter and the standard filter.) The final product of this is the supernova magnitude in the standard system.
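Written out, the "difference of a difference" is something like this (my notation; whether it gets added or subtracted depends on the sign convention):

\Delta_{\mathrm{star}\to\mathrm{SN}} = \left[ m_{JC}(\mathrm{SN}) - m_{I}(\mathrm{SN}) \right] - \left[ m_{JC}(\mathrm{star}) - m_{I}(\mathrm{star}) \right]

evaluated for a star of the same color as the supernova, with each magnitude computed synthetically from a spectrum through the relevant filter.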

The second step is the cross-filter K-correction. Hung Chung shows a spectrum and a redshifted spectrum. The nearby (z=0) supernovae get observed in the B filter, and the distant supernovae get observed in the red filter. More or less the same part of the spectrum is observed, but not exactly the same part gets enclosed by the filters. The K-correction is the correction necessary to make sure we are comparing apples to apples.

Hung Chung shows some corrections copied from Alex Kim's paper. There are two integrals which are zeropoints. These are integrals over Z, an idealized star spectrum that has magnitude 0 in all filters. Then there are two more terms that are integrals over supernova spectra: the first is through a B filter at zero redshift, the other is through a blueshifted R filter. Then there are factors of 1+z. (He's not going to talk about the factors of 1+z right now.)
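As best I can reconstruct the expression he showed (deliberately not saying whether the integrands are energy or counts, and with the deferred factors of 1+z left schematic):

K_{BR}(z) \simeq 2.5\log_{10}\frac{\int \mathcal{Z}(\lambda)\,S_R(\lambda)\,d\lambda}{\int \mathcal{Z}(\lambda)\,S_B(\lambda)\,d\lambda} + 2.5\log_{10}\frac{\int F(\lambda)\,S_B(\lambda)\,d\lambda}{\int F(\lambda)\,S_R\!\left(\lambda(1+z)\right)\,d\lambda} + (\text{deferred factors of } 1+z)

where F is the supernova spectrum at zero redshift, Z is the idealized zero-magnitude spectrum, and S_B and S_R are the filter sensitivity functions (the blueshifted R filter appearing as S_R evaluated at lambda*(1+z)).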

Hung Chung has done some plots of the cross-filter corrections. He's done it for both photon-count and energy integrals. There is a slight difference. The K-correction itself varies between -0.7 and 0.7 magnitudes over a range of redshifts. The B-I correction gets even bigger in places. All of this is done for a supernova at max.

The difference between the energy and count versions tends to be less than 0.05 magnitudes for the relevant K-corrections. That is how wrong we could be.
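A sketch of the kind of integrals being compared, with the counts-versus-energy choice reduced to a single switch; the filter and spectrum arrays are hypothetical inputs, and the form of the K-correction is my reconstruction of the one above:

import numpy as np

def band_integral(wave, flux, filt_wave, filt_trans, counts=True):
    # Integrate a spectrum (f_lambda) through a filter, weighting either by photon
    # counts (an extra factor of lambda; h and c cancel in ratios) or by energy.
    trans = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    weight = wave if counts else np.ones_like(wave)
    return np.trapz(flux * trans * weight, wave)

def cross_filter_k(wave, flux, zwave, zflux, bw, bt, rw, rt, z, counts=True):
    # Schematic B -> R cross-filter K-correction, with the deferred (1+z) factors left out.
    # (wave, flux): rest-frame SN spectrum; (zwave, zflux): zero-magnitude spectrum Z;
    # (bw, bt) and (rw, rt): the B and R filter curves.
    sn_term = 2.5 * np.log10(
        band_integral(wave, flux, bw, bt, counts) /
        band_integral(wave, flux, rw / (1.0 + z), rt, counts))   # "blueshifted" R filter
    zp_term = 2.5 * np.log10(
        band_integral(zwave, zflux, rw, rt, counts) /
        band_integral(zwave, zflux, bw, bt, counts))
    return sn_term + zp_term

# The counts-vs-energy question is then just the difference
#   cross_filter_k(..., counts=True) - cross_filter_k(..., counts=False)
# which is the <~ 0.05 magnitude number quoted above.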

The real question is, for the star-supernova and K-corrections, should the integrals be over flux or over counts? There are ten or twelve papers Hung Chung has found talking about single-filter K-corrections. All of these reference three papers: Humason, Mayall, and Sandage, 1956, AJ, 61:97; Oke & Sandage, 1968, ApJ, 154:21; Schneider, Gunn, and Hoessel, 1983, ApJ, 264:337.

The first two papers show an integral over the spectral energy distribution (in ergs/whatevers), times the "sensitivity function," which alas isn't very well defined. Is it the ratio of output to input signal? Is it a QE? Does it have units?

People don't define their quantities at all. They just think that we know what a sensitivity function is.

--Hung Chung Phang

Schneider, Gunn, and Hoessel very clearly do an integral over photon counts. Hung Chung thinks that this is the place where Alex got the idea that you use photon counts.

The free-for-all counts/energy yell-debate is starting. We cut it a bit short so that Hung Chung can go on.

Hung Chung shows another place where Schneider defines his magnitude system in terms of an integral over counts. Hung Chung questions whether we are using the same system. Greg points out that these folks defined the AB system, which is a different system from the Johnson/Landolt system. So, this may be a completely different way of defining magnitudes and standards than what we have.

If the standard system is a flux system, then those are the sorts of corrections we want to do, in order to stay on the standard system. Hung Chung says that he's convinced himself a lot of times that it was a certain way, but then thereafter has convinced himself that he's wrong.

He tells us his current belief. The star-supernova correction is a mix of counts and flux integrals, assuming that the standard system is a flux system. The integrals over our instrumental filters should be count integrals, while the integrals over the standard filters should be integrals of whatever the standard system is. Once we've done that, we should be thoroughly on the standard system, and everything thereafter should go according to how the standard system is defined.
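Assuming the standard system really is a flux system (the open question), a sketch of what that mixed prescription would mean for the star-to-supernova correction; this reuses band_integral from the K-correction sketch above, and the function name and argument layout are made up:

def star_to_sn_correction(sn_spec, star_spec, inst_filt, std_filt):
    # sn_spec, star_spec: (wave, flux) for the SN and for a star of the same color;
    # inst_filt, std_filt: (wave, transmission) for our filter and the standard filter.
    # Instrumental-filter integrals in photon counts, standard-filter integrals in energy.
    def mag(spec, filt, counts):
        wave, flux = spec
        fw, ft = filt
        return -2.5 * np.log10(band_integral(wave, flux, fw, ft, counts=counts))
    diff_sn   = mag(sn_spec,   std_filt, counts=False) - mag(sn_spec,   inst_filt, counts=True)
    diff_star = mag(star_spec, std_filt, counts=False) - mag(star_spec, inst_filt, counts=True)
    return diff_sn - diff_star   # the "difference of a difference" from earlier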

What Hung Chung is going to try to do next is find some reasoning to back all this up. He wants to find some way of completely convincing himself that the Johnson system really is (or is not) a flux system, with evidence and references. He wants to write a memo to get all of that written down and documented for all time. He says it's becoming a personal vendetta, since these K-corrections are so frustrating.

Second, he wants to see how big the standard star to supernova correction is. He says that right now, corrections derived using different standard stars of the same color vary by as much as 30% of the correction. This would mean that there must be some dispersion about the calibration. The general size of these corrections so far tends to be <=0.01 magnitude; sometimes more, usually less (usually more like 0.006 or 0.007).

Greg notes that the thing which is worrisome is the cosmology part, in that whether you are working in flux or photons, there are different factors of (1+z). Apparently getting this wrong could change our answer from what we currently have to Omega_M=1, Lambda=0. So, the next thing to do will be to make sure our cosmological factors are correct.
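For reference, the standard bookkeeping (not something derived in the meeting): per unit observed wavelength, and with the common geometric distance factor pulled out of both,

f_\lambda^{\mathrm{energy}}(\lambda_{\mathrm{obs}}) \propto \frac{L_\lambda(\lambda_{\mathrm{obs}}/(1+z))}{(1+z)^3} \qquad\qquad f_\lambda^{\mathrm{photons}}(\lambda_{\mathrm{obs}}) \propto \frac{N_\lambda(\lambda_{\mathrm{obs}}/(1+z))}{(1+z)^2}

so an energy-based and a count-based magnitude of the same redshifted object differ by one net factor of (1+z) before any filter integrals are even done. Which of these factors end up in the K-correction and which in the definition of the luminosity distance is exactly the bookkeeping that has to be kept self-consistent.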

Matthew, who's been reading some papers about what the standard system is, says he thinks it's a poorly defined mixture of everything. People have built on others' work. They all use slightly different filters, with corrections, but then later some people say the corrections are wrong, and etc. etc. etc. During all of this, the counts vs. energy gets all mixed together. Matthew thinks that nobody really knows what the standard system is.

Don's assertion is that photometric systems could never have been an energy system, since people always had photon count detectors - phototubes.

We concluded with a big group yell session about the cosmological portion of the K-correction, with some (e.g. Rob) asserting that it just didn't make sense to get differences as huge as were claimed (Omega_M=0.7) for K-corrections done differently, if they were done internally self-consistently. You can't get something to be a lot dimmer just because you're thinking in different units....


Topic two: Adam Riess Rise-Time Stuff

What do we know about all of this, and what are people working on?

Adam Riess spoiled our 4th of July weekend by claiming that there was a 6-sigma difference between the rise times for his nearby supernovae and the rise time that Don and Gerson determined from their composite lightcurve.

Peter and Greg, in early July, started looking at the lightcurve. At early times, there's a parabolic rise (a t^2) from t_exp, the time of explosion, which later at some point joins up to something like a standard Leibundgut lightcurve.
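A sketch of the kind of composite template being described; matching the parabola to the template at t_join (continuity) is my assumption about how the pieces are joined, not necessarily how Peter and Greg actually built theirs:

import numpy as np

def composite_template(t, t_exp, t_join, template):
    # Flux vs. time (days relative to B max): zero before t_exp, a (t - t_exp)^2
    # rise up to t_join, then a standard template (e.g. Leibundgut-like) afterward.
    t = np.asarray(t, dtype=float)
    amp = template(t_join) / (t_join - t_exp) ** 2   # match the template at the join
    rise = amp * np.clip(t - t_exp, 0.0, None) ** 2
    return np.where(t < t_join, rise, template(t))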

Gerson and Don got a t_exp by putting all of our supernovae on the same system (stretch, z). They then fit a parabola to the early times. Greg says, though, that there are problems with correlated errors.

Peter and Greg constructed templates which were specified by t_exp and t_join (where the two types of templates join). They did it for a t_join of 10 days, and looked at how chisquare behaved as they moved the explosion day. You could then look at how it compared to Adam Riess. (They collected the data on Adam Riess's early supernovae.) The evidence was that you had to go out to about 2 sigma from our best-fit explosion day to get to Adam's best-fit day.

The next question was to decide on the day of t_join. Greg found that the chisquare got dramatically better as you went to t_join days closer and closer to maximum. He draws on the board chisquare contours for t_exp of 12 through 22 days before max, and t_join of 4 through 16 days before max. The minimum was down at a join day very close to 4. Adam Riess had a minimum at higher t_join and higher t_exp.
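And a sketch of the kind of (t_exp, t_join) chisquare grid Greg drew on the board, reusing composite_template from the sketch above; the data points and the template are entirely made up, just to show the scan:

# Made-up lightcurve points (days from max, peak-normalized flux) and a stand-in template:
t_data = np.array([-15.0, -10.0, -5.0, 0.0, 5.0, 20.0, 40.0])
f_data = np.array([ 0.05,  0.30,  0.75, 1.00, 0.90, 0.45, 0.12])
f_err  = np.full_like(f_data, 0.03)
template = lambda t: np.exp(-0.5 * (np.asarray(t) / 12.0) ** 2)   # stand-in, NOT a real Ia template

def chisq(t_exp, t_join):
    if t_join <= t_exp:
        return np.inf
    model = composite_template(t_data, t_exp, t_join, template)
    return np.sum(((f_data - model) / f_err) ** 2)

# Contours like the ones on the board: t_exp from -22 to -12, t_join from -16 to -4 days.
t_exp_grid  = np.arange(-22.0, -11.9, 0.5)
t_join_grid = np.arange(-16.0, -3.9, 0.5)
grid = np.array([[chisq(te, tj) for tj in t_join_grid] for te in t_exp_grid])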

Greg said that this was bizarre, to have the parabolic fit go all the way up near max. Greg looked at the supernovae that were driving this. A few of them had high points out at +40 or +60 days or something like that. One of these might actually have been a SN II; the others were probably bad data. These things were feeding back into the chisquare of the t_join/t_exp fit.

Greg and Peter have been looking at our sampling: how well can you constrain a lightcurve as a function of day? The sigma is typically 0.1 magnitudes from 20 days before max until just before +40 days. At +40 days, there's a very strong peak in our sampling. (Hence the leverage.)

Greg and Peter questioned whether the tail after the bend ought to be stretched or not. They looked at extremes of supernovae, and found that right at about that time there were different "joins" (between the top hump and the "radioactive decay" tail). So our best S/N is at the point where we don't know the template very well. With data sampled the way ours is sampled, by changing the template within reason here, we can shift the explosion day around by a day or so.

Since all of this feeds back into t_join, it affects how different we are from Adam's result. Greg says that at a t_join of 11 days, we're within 1 sigma of Adam's result. Greg thinks that the bottom line is that we will be able to make a strong statement that we can't make a strong statement about the rise time (and hence neither can Adam).

Peter wants to briefly point out what the systematic is. I spaced a bit, though. He also points out that the SNe Adam used were a subclass of Ia that we don't see very often, very close to the superluminous 91T-type supernovae; this is particularly true of the two which had any data that constrained anything worth anything before -15 days.

Gerson talks about how there is a "scrunch" parameter, which allows the lightcurve to stretch off in different directions before 10 days (before max or after explosion? I'm not sure). He plots chisquare as a function of scrunch, though he's using the "time of explosion" (the time of crossing some really dim value) to parametrize the scrunch. The chisquare he plots has a minimum somewhere near -17.5 days or so. He says that he can completely reproduce Adam's result with a certain value of scrunch, and it's at an earlier time.

Gerson says that our data moves based on what we assume the lightcurve is. Changing the scrunch changes the stretch and the time of maximum. The overall event fits are affected by all this.

Saul asks Don to talk about how he did an assumption-free something-or-other. Don shows how we have a lot of points, and says that Gerson's scrunch analysis shows that the earlier points have a lot of pull. We are really depending, Gerson showed (Don says), on the assumptions of the fit that get built in along with the early points.

Don tried throwing out the early points (-25 to -5 days), doing the fit, and then plotting the data to see where they fall. Some of the supernovae are poorly constrained in stretch without the early points, so he fit t0 and Imax, and assumed stretch=1. (He only threw out one supernova in doing this.) He found that the datapoints were spread out more on the left. He saw that they were nicely distributed about the template in the order you would have expected according to the stretch (as measured in the 40 SNe paper). This was the first evidence Don had seen that stretch really applies (qualitatively, not quantitatively) at early times.
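A sketch of Don's exercise as I understood it, reusing the made-up data and stand-in template from the grid sketch above: hold stretch at 1, fit only t0 and Imax with the early points cut, then look at where the cut points fall.

from scipy.optimize import curve_fit

def stretch1_model(t, t0, imax):
    return imax * template(t - t0)   # stretch fixed to 1 by construction

early = (t_data >= -25.0) & (t_data <= -5.0)          # the points Don threw out
popt, pcov = curve_fit(stretch1_model, t_data[~early], f_data[~early],
                       sigma=f_err[~early], p0=[0.0, 1.0])
t0_fit, imax_fit = popt
# ...then plot the excluded early points against the fitted curve to see where they land.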

Don then took these data, fit as they had just been, and then applied the stretch to them. They then came down to the -17.6 day explosion time very well.

Saul tries to summarize by saying that we can say with good confidence that our supernovae don't give a good enough constraint to pin the rise time from explosion to max down within the range of 17.5 to 19.0 days. Don quotes 17.5+-0.4+-1.0 days. The second number is a systematic error due to template uncertainty.

Saul says that we are close to having a couple of drafts of different papers. Gerson has one that accounts for this. Then, there's a draft that Peter is writing (with Greg) having to do with their analysis.

Gerson asks Greg what his best t_join is. Greg says he gets -4 days for all supernovae. If he takes out the ones with special problems and the ones that aren't well-confirmed Ia's, he gets join times close to -10 days.

I'm sick, and I'm ready to pass out.