Deepsearch Meeting Notes 1999 October 6


Don: Early Supernova Lightcurves

Don is going to talk about the work he's done on the early supernova lightcurve. First, though, there's some prehistory.

Way back when, in January 1998 or some such, the first thing he did was take the averaged data he got through Peter and Gerson and fit a parabola to the beginning of the data. He got an explosion time of about -19 days.

On 1999 January 9, he gave a talk at the AAS meeting, where he fit a parabola to the -20 to -10 day interval and got -17.5+-0.4 days, times stretch. He also tried varying the upper limit of the interval (the -10 days); of course, if you push it close enough to zero, the parabola shouldn't work very well any more.

On August 6, 1999, Don gave Saul (?) a draft of a paper, and apparently nearly nothing has happened since then (Don says).

Don shows an excerpt from the (introduction?): "...any errors in the assumed early behavior appear as systematic correlated errors in the explosion time and stretch." Don says that they investigated this in two ways.

First, he tried killing everything in the "grey region" (something like up to 5 days before max). The fits were used to determine t0 and maximum intensity. He tried fitting with stretch set to one and with stretch set to what it is in the 42 SNe paper. Putting in the stretch essentially took out all of the dispersion in the early parts of the light curve. However, it's still yucky because there are big horizontal error bars due to errors in t0. (For comparison, he also shows things stacked using our normal fits.)

Gerson did a deal where he varied the slope of the early part of the curve, made the fits to the subset of 34 (or whatever) objects, and calculated the total chisquare. (I'm getting lost.) He ends up with something that has a +-1.2 day spread, which he says is the systematic error that he assigns to anything that's fitted. (I lost track of what he was fitting here, which is why this paragraph makes no sense as it is.)

He shows the naive theory, where the parabolic dependence comes out of free expansion: L(t) goes as R(t)^2 * Te^4 = v^2*(t-t0)^2*Te^4. Don then writes L(t) as L(t) = t^2 * f_B(T(t)), with t measured from the explosion.

His first assumption was that f_B(T(t)) is essentially constant, which gives you parabolic early behavior. He says that they consider fits in the interval [t1,t2], with t1 before the explosion time and t2 variable. He shows one of the fits, for [-20,-9]: he gets t0 = -17.53+-0.26 (formal error), and he enlarges the error by a factor of sqrt(2). (His chisquare is too good, but Rob suspects that may be partly because correlated errors between the data points haven't been considered.)
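
For concreteness, here's a rough sketch of what such a parabolic fit might look like (my reconstruction, not Don's actual code; the epoch/flux/flux_err arrays and starting values are placeholders):

    import numpy as np
    from scipy.optimize import curve_fit

    def parabola(epoch, t0, A):
        # Naive free-expansion model: flux rises as (epoch - t0)^2 after the
        # explosion epoch t0 and is zero before it.
        return A * np.clip(epoch - t0, 0.0, None) ** 2

    def fit_explosion_time(epoch, flux, flux_err, t1=-20.0, t2=-9.0):
        # epoch, flux, flux_err: the stretch- and (1+z)-corrected composite
        # lightcurve (placeholders; the notes don't specify the data format).
        sel = (epoch >= t1) & (epoch <= t2)
        popt, pcov = curve_fit(parabola, epoch[sel], flux[sel],
                               p0=(-18.0, 1e-3), sigma=flux_err[sel],
                               absolute_sigma=True)
        t0, t0_err = popt[0], np.sqrt(pcov[0, 0])
        # Don enlarges the formal error by sqrt(2) (the reason comes up below).
        return t0, np.sqrt(2.0) * t0_err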

He plots texp against t2 (t2 on the x-axis). There is a central region of t2 where texp has relatively small errors and is stable; outside it, texp goes a little nuts. So there's a fairly wide region in the center where the choice of t2 doesn't matter much. Because the scatter across that region is about the size of one standard deviation, Don multiplies his texp error by sqrt(2), since he doesn't know where in this central region he wants to be. The other thing he plots is chisquare vs. t2; in the same central region, the chisquare stays stable. In other words, it's an exceedingly robust fit, and it doesn't depend very much on whether you fit out to -10 days or wherever. The size of the central region of t2 is about 3-5 days, Don says from memory.
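
A sketch of the kind of t2 stability scan being described (again my reconstruction; the grid of cutoffs and the data arrays are placeholders):

    import numpy as np
    from scipy.optimize import curve_fit

    def parabola(epoch, t0, A):
        return A * np.clip(epoch - t0, 0.0, None) ** 2

    def scan_upper_cutoff(epoch, flux, flux_err, t1=-20.0,
                          t2_values=np.arange(-13.0, -4.5, 0.5)):
        # Repeat the parabolic fit for a grid of upper cutoffs t2 and record
        # the fitted explosion time, its formal error, and the reduced
        # chisquare, to look for the stable central region described above.
        rows = []
        for t2 in t2_values:
            sel = (epoch >= t1) & (epoch <= t2)
            if sel.sum() < 4:
                continue
            popt, pcov = curve_fit(parabola, epoch[sel], flux[sel],
                                   p0=(-18.0, 1e-3), sigma=flux_err[sel],
                                   absolute_sigma=True)
            resid = (flux[sel] - parabola(epoch[sel], *popt)) / flux_err[sel]
            rows.append((t2, popt[0], np.sqrt(pcov[0, 0]),
                         np.sum(resid**2) / (sel.sum() - 2)))
        return np.array(rows)   # columns: t2, texp, formal error, reduced chi2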

Note that for all of these fits, the date of max has been lined up, the stretch has been taken out, and (1+z) has been taken out.
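
Presumably the time-axis correction is something like this (variable names are mine):

    def rest_frame_epoch(t_obs, t_max_obs, stretch, z):
        # Convert an observed date into a stretch-corrected rest-frame epoch
        # relative to maximum, as assumed for all of the fits above: subtract
        # the date of max, then divide out time dilation (1+z) and stretch.
        return (t_obs - t_max_obs) / ((1.0 + z) * stretch)

    # e.g. 10 observer days after maximum for a z = 0.5, s = 1.1 supernova is
    # about +6.1 stretch-corrected rest-frame days:
    print(rest_frame_epoch(51010.0, 51000.0, 1.1, 0.5))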

Answer: t0 = -17.5 +- 0.4 +- 1.2, times stretch.

Fit 2 is based on a 1982 paper by Arnett. From this, he handwaves out a form of... something. There are two models (A and C), which assume 1 and 1/2 solar masses of nickel, respectively. The temperature dropoff in these two models doesn't change very much, and in both cases it is very close to linear.

In order to convert these temperatures to f_B, Don assumes that the supernova is a blackbody, hoping that that's a reasonable assumption for a broad filter in the early days of the lightcurve. He integrates the blackbody curve over the B filter.

He first plotted f_B(T)/kT (the division by kT was by accident) as a function of T. Interestingly, he found that it was very close to linear over a vast region (including the region of interest). As a result, he ended up making a quadratic approximation to the thing: f_B(T) = a + b*kT + c*(kT)^2. His parametrization is good to within 2% of the real integral over the range 10,000 to nearly 20,000 degrees.
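
A rough sketch of the blackbody B-band integral and the quadratic approximation (I've idealized the passband as a 3900-4900 Angstrom top hat, which is not Don's actual B filter, so the coefficients and accuracy are only illustrative):

    import numpy as np

    h = 6.626e-27    # erg s
    c = 2.998e10     # cm/s
    k = 1.381e-16    # erg/K
    k_eV = 8.617e-5  # eV/K

    def planck_lambda(wav, T):
        # Planck function B_lambda(T) in cgs; wav in cm, T in K.
        return (2.0 * h * c**2 / wav**5) / np.expm1(h * c / (wav * k * T))

    def f_B(T, wav_min=3.9e-5, wav_max=4.9e-5, n=2000):
        # Blackbody flux integrated over the idealized top-hat B passband
        # (a crude rectangle-rule integration is fine for a sketch).
        wav = np.linspace(wav_min, wav_max, n)
        T = np.atleast_1d(T)[:, None]
        return np.sum(planck_lambda(wav, T), axis=-1) * (wav[1] - wav[0])

    # Quadratic approximation f_B(T) ~ a + b*kT + c*(kT)^2 over 10,000-20,000 K,
    # analogous to Don's parametrization (kT here in eV for convenience).
    T_grid = np.linspace(1.0e4, 2.0e4, 200)
    kT = k_eV * T_grid
    exact = f_B(T_grid)
    coeffs = np.polyfit(kT, exact, 2)        # returns [c, b, a]
    approx = np.polyval(coeffs, kT)
    print("worst fractional error:", np.abs(approx / exact - 1.0).max())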

He puts this function back on Arnett's straight lines to get f_B(t). This gives him I(t) = a*t^2*(1 + b*t + c*t^2). Somewhere in there is the linear temperature dropoff, T(t) = T0 + S*t. Note that if you require I to be 1 at maximum, a is not a fitted parameter. This whole thing is shaped like the leading edge you expect for a supernova.
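
In other words, the fitting function is presumably along these lines (a sketch under the conventions above; epoch is the stretch-corrected rest-frame time relative to maximum):

    import numpy as np

    def early_lightcurve(epoch, t0, b, c):
        # Arnett-motivated early-time model I = a*t^2*(1 + b*t + c*t^2), where
        # t is the time since explosion (epoch - t0, zero before the explosion)
        # and a is fixed, not fitted, by requiring I = 1 at maximum (epoch 0).
        t = np.clip(epoch - t0, 0.0, None)
        tm = -t0                      # time from explosion to maximum
        a = 1.0 / (tm**2 * (1.0 + b * tm + c * tm**2))
        return a * t**2 * (1.0 + b * t + c * t**2)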

One thing he found was that with t1 (see above) going all the way from -10 days to 0 days, texp only changed by a few tenths of a day. Chisquare, however, didn't have a flat region. What's more, the temperature he fit wasn't actually a physical parameter: the fit was asking for a temperature of 100,000 degrees or more, or something like that.

Next, he did a purely phenomenological version where b and c were fitted parameters, i.e. ignore the temperature and slope business and just fit I(t) = a*t^2*(1 + b*t + c*t^2). He finds that the fit is happiest when b and c coalesce (b -> -2*sqrt(c)), so you end up with I(t) = a*t^2*(1 - sqrt(c)*t)^2. (Remember, this is a one-parameter fit, the c above; a is not a parameter, but is fixed by setting I=1 at maximum.)
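
A sketch of that one-parameter form (same conventions as above; the commented-out fit call uses placeholder arrays):

    import numpy as np
    from scipy.optimize import curve_fit

    def one_param_model(epoch, t0, c):
        # Phenomenological form I = a*t^2*(1 - sqrt(c)*t)^2, with t the time
        # since explosion; a is again fixed by requiring I = 1 at maximum
        # (epoch 0), so c is the single shape parameter the notes refer to,
        # and t0 is the explosion epoch being solved for.
        t = np.clip(epoch - t0, 0.0, None)
        tm = -t0
        a = 1.0 / (tm**2 * (1.0 - np.sqrt(c) * tm)**2)
        return a * t**2 * (1.0 - np.sqrt(c) * t)**2

    # With placeholder arrays epoch, flux, flux_err and a window [t1, 0]:
    # popt, pcov = curve_fit(one_param_model, epoch[sel], flux[sel],
    #                        p0=(-16.0, 1e-3), sigma=flux_err[sel],
    #                        absolute_sigma=True)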

This time, it fit the data very well all the way up to max (at which point there's a cusp and a double inflection point, and it goes nuts and no longer makes any sense). Now the value of t0 and the reduced chisquare are stable with t1 from about -11 days all the way to 0 days. Again, Don says he can't attach physics to this, even though it's physically motivated. The explosion time comes down to -16.4 days this way. However, Don cautions that whenever you choose a functional form (be it a parabola or this thing or whatever), you're biasing the explosion day you're going to find.


CFHT Search Update

Saul says that there will be a Washington Post reporter coming by at the end of this week as we look at the CFHT candidates. It seems a little odd, since we aren't going to be doing that much around here.

The two CFHT nights seemed to be going well. When Saul spoke to Reynald, they were well into their third field. There was enough time, but there were patchy clouds. They definitely got two fields, and maybe three. By now, they are supposed to have candidates for Brad to look at at WIYN.


Pseudoscience: Black Holes, Proton Absorption, and Cosmology

Gerson is telling us about a Glashow theory that supposes that black holes preferentially absorb protons (leaving behind a greater electron density). What's more, if you suppose photons with (very small) mass, then the positive charge of the black holes can't get out. This gives a residual charge with an energy density, which gives you an Omega_Q term whose z-dependence goes as (1+z)^6 - (1+z)^2. (There was some dispute over the sign.) Glashow's challenge was: can we put any limits on Omega_Q, given such an extreme z-dependence?
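
To make the challenge concrete, such a component would presumably enter the expansion rate as an extra term, roughly as below (a sketch only; the fiducial Omega_M and Omega_L values are my assumptions, and the sign and exact combination are exactly what was being disputed):

    def E_squared(z, Omega_M=0.3, Omega_L=0.7, Omega_Q=0.0):
        # (H(z)/H0)^2 with an extra component whose energy density goes as
        # (1+z)^6 - (1+z)^2, as quoted in the meeting; illustrative only.
        zp1 = 1.0 + z
        return Omega_M * zp1**3 + Omega_L + Omega_Q * (zp1**6 - zp1**2)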

Possible Friday lunch speaker? There's a collaborator at Caltech, Gerson says.