SCP Collaboration Meeting, 2000 June 13, 10:30

Saul says that in between executive committee meetings, there will be updates on what's going on in the different groups.


Ariel on Cosmology at FYSIKUM, SU

Ariel introduces his group. Last year, he said he would hire two people. In fact, he has three students here: Gaston Folatelli, Gabriele Garavini, and Serena Nobili. These people are working with the SCP. Meanwhile, there's been an explosion of work on the Supernova Observation Calculator. He's working with Lars Bergstrom (a theorist) on this, and there are a couple of graduate students working on it, plus several other students. Other collaborators at the Stockholm Observatory are the astronomers Claes Fransson and Thoma... (I missed the name), working on rates and photometric redshifts.

Ariel tells us a little more about the SNOC (SuperNova Observation Calculator). He's trying to write a single Monte Carlo package that addresses all of the issues related to these observations. It's a simulation which does ray-tracing back to the desired source redshift, with inhomogeneities parametrized as spherical cells containing SIS, NFW, point-mass, or uniform mass distributions. It tries to model gravitational lensing effects throughout the universe. It takes as input cosmological parameters, filters, galaxy halo profiles, mass distribution, foreground clusters, SN type, lightcurve options, stretch corrections, etc.
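(To make that concrete, here's a toy Python sketch of this kind of line-of-sight Monte Carlo, accumulating SIS magnifications cell by cell. The Einstein radius, cell count, and impact-parameter range are made-up numbers, not SNOC's parametrization, and unlike a real calculation this toy doesn't conserve flux on average.)

```python
import numpy as np

rng = np.random.default_rng(0)

def sis_magnification(theta, theta_e):
    """Primary-image magnification of a point source at image-plane
    offset theta from a singular isothermal sphere with Einstein
    radius theta_e (valid for theta > theta_e)."""
    return theta / (theta - theta_e)

# Toy Monte Carlo: each "cell" along the line of sight hosts one SIS
# halo; draw a random impact parameter and accumulate the (weak)
# magnifications.  All numbers here are hypothetical.
n_lines, n_cells = 10000, 20
theta_e = 0.1                                     # arcsec, hypothetical
mags = np.ones(n_lines)
for _ in range(n_cells):
    theta = rng.uniform(2*theta_e, 10.0, n_lines) # stay outside theta_E
    mags *= sis_magnification(theta, theta_e)

dmag = -2.5 * np.log10(mags)                      # lensing effect in magnitudes
print("mean dm = %.4f, rms = %.4f" % (dmag.mean(), dmag.std()))
```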

Ariel says that the skeleton is there. He says that the bad news is that it's a FORTRAN program. He shows us a typical input file for the program, where you can specify all the fun parameters. There are parameters for supernovae, cosmology, and gravitational lenses. He's also recently added parameters for search conditions, e.g. putting in a time gap between the references and the search images. Finally, there's the ability to put in a very massive foreground cluster. Saul asks if there is the ability to put in a confirmation image (a point a week after the discovery to screen on). Ariel says that that isn't in there right now, but that it would be easy to add.
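(Purely for flavor, a hypothetical namelist-style sketch of what such an input file might look like; every name and value here is invented, and the real SNOC format surely differs.)

```
 &snoc_input
   omega_m      = 0.3,  omega_lambda = 0.7,   ! cosmology
   sn_type      = 'Ia', stretch_corr = .true.,
   halo_profile = 'SIS',                      ! or 'NFW', 'point', 'uniform'
   z_source_max = 1.2,
   ref_gap_days = 21.0,                       ! time gap: reference vs. search
   fg_cluster   = .false.                     ! massive foreground cluster
 /
```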

Ariel says that there are no telescope parameters in the program right now. He says that if you want to put that in, you look at the output and set a threshold on it. Reynald notes that the threshold isn't a real threshold, but a slope. Ariel says that there are a lot of parameters you can set to control various supernova rate models.

Aside: Greg and Peter are telling us that the ACS is 202" on a side, or 11.33 square arcminutes.
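(Checking the arithmetic: 202" = 3.37', and 3.37'^2 = 11.3 square arcminutes, so the numbers are consistent.)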

The output produces a number of things. It can give you a lightcurve, and the magnification due to lensing. He shows a plot of z vs. effective m_B vs. delta-m. There is dispersion, which comes from the 0.16 intrinsic dispersion plus gravitational lensing. It can also plot the redshift distribution you would get assuming a given limiting magnitude, the requirement that the second-epoch magnitude be at least 0.2 mag brighter than the original, and a constant volume rate. He shows a distribution for I<26, and then a cut with also R>25.
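(A toy Python version of that last plot, under stated assumptions: constant comoving volume rate, a flat Omega_M=0.3 cosmology, M_I ~ -19, and no K-corrections. None of these are necessarily SNOC's defaults.)

```python
import numpy as np

# Toy redshift distribution: SNe at a constant comoving volume rate,
# with a cut at I < 26 on peak magnitude.  M_I and the cosmology are
# illustrative assumptions; K-corrections are ignored.
c_km, H0, om, ol = 3e5, 70.0, 0.3, 0.7

def lum_dist(z, n=512):                        # luminosity distance in Mpc
    zz = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(om*(1+zz)**3 + ol)
    dc = c_km/H0 * np.sum(0.5*(inv_e[1:]+inv_e[:-1])*np.diff(zz))
    return (1+z) * dc

zs = np.linspace(0.05, 2.0, 40)
dl = np.array([lum_dist(z) for z in zs])
dc = dl / (1+zs)
dvdz = dc**2 / np.sqrt(om*(1+zs)**3 + ol)      # comoving volume per unit z
m_i = -19.0 + 5*np.log10(dl*1e5)               # peak I mag, no K-correction
dn = dvdz * (m_i < 26.0)                       # constant-rate dist. after cut
print("the I<26 cut reaches z ~ %.2f" % zs[dn > 0].max())
```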

He shows histograms of the smearing of peak magnitudes due to inhomogeneities along the beam (i.e. due to lensing), where all galaxies are modeled as isothermal spheres.

Some of the work in progress is: putting in quintessence models, large extra dimensions (string theory!), time delays, Type II SNe (?), and extinction. Re: the string theory thing, string theorists claim that there are 10 dimensions; we always thought they were on the Planck scale. Ariel says, though, that in the last couple of years it has been suggested that some of those dimensions may be very large, with sizes up to 1cm. Gravitons (though not much else) may be able to move in those other dimensions, so if these dimensions are large, it may affect how gravity works. It might also affect gravitational lensing... it could add a color term! (I.e. a term that couples to energy.) Ariel suspects that what he'll end up doing with these studies is rule things out.

Re: SNe II, he is putting in things related to velocities for EPM simulations and so forth.

Most recently, Ariel has an undergrad working on running the code and fitting the parameters. They did an ML/chi-squared fit of the cosmological parameters (using Davidon minimization) rather than a grid search (as grid searches take forever if there are too many parameters). Peter warns about local minima. Ariel says that most of the work the student's been doing is comparing to grid searches. He says that one result is that you get into trouble with low statistics... with huge SNAP-like statistics (say 2000 SNe), minimization and grid searching seem to be pretty equivalent.
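(A minimal sketch of this kind of fit, on fake data, using scipy's BFGS, a quasi-Newton relative of Davidon's method. The cosmology, dispersion, and sample are all invented for illustration.)

```python
import numpy as np
from scipy.optimize import minimize

# Toy chi-squared fit of (Omega_M, Omega_Lambda) to a fake Hubble
# diagram.  BFGS stands in for Davidon's quasi-Newton method; the
# transverse curvature (sinn) correction is omitted for brevity.
c_km, H0, M = 3e5, 70.0, -19.0

def dist_mod(z, om, ol, n=256):
    zz = np.linspace(0.0, z, n)
    e2 = om*(1+zz)**3 + ol + (1-om-ol)*(1+zz)**2
    inv_e = 1.0/np.sqrt(np.maximum(e2, 1e-8))
    dc = c_km/H0 * np.sum(0.5*(inv_e[1:]+inv_e[:-1])*np.diff(zz))
    return 5*np.log10((1+z)*dc*1e5)            # (1+z)*D_C in units of 10 pc

rng = np.random.default_rng(1)
z_obs = rng.uniform(0.05, 0.9, 40)
sig = 0.16                                     # intrinsic dispersion (mag)
m_obs = np.array([M + dist_mod(z, 0.3, 0.7) for z in z_obs]) \
        + rng.normal(0, sig, 40)

def chi2(p):
    om, ol = p
    model = np.array([M + dist_mod(z, om, ol) for z in z_obs])
    return np.sum(((m_obs - model)/sig)**2)

res = minimize(chi2, x0=[0.5, 0.5], method="BFGS")
print(res.x, res.fun)   # sanity-check against a coarse grid for local minima
```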

Still to do: working with correlated uncertainties (which is harder if you don't do chi-squared fitting), and lightcurve fitting.
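(For reference, "correlated uncertainties" in a chi-squared fit just means replacing the diagonal sum by r^T C^-1 r with the full covariance matrix C; a minimal sketch with a made-up covariance:)

```python
import numpy as np

# Chi-squared with correlated uncertainties: chi2 = r^T C^{-1} r,
# where C is the full covariance of the residuals.  The covariance
# below (intrinsic scatter plus a common-mode systematic) is made up.
r = np.array([0.10, -0.05, 0.02, 0.08, -0.03])   # residuals (mag)
C = 0.16**2 * np.eye(r.size) + 0.02**2           # diagonal + correlated term
chi2 = r @ np.linalg.solve(C, r)
print("chi2 = %.3f for %d points" % (chi2, r.size))
```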

He shows a confidence region for a "what if" based on whether you have SNAP and the SN Factory, or just SNAP. He did this assuming a constant volume rate, and a constant depth for SNAP. He also shows some investigation of systematics. E.g., with SNAP, as you push to very small error bars on Lambda, you actually have to start worrying about the biases due to gravitational lensing, as they become a relatively significant effect.

Greg points out that SNAP will have a great measurement of weak lensing of galaxy haloes, and that that will give us something which can be applied to the SNe. Ariel says that there's also the fact that if you can fit the cosmological results very well, you might feed back with some information on galaxy halo shapes.

Ariel says that this code will be released by October at the latest, so that others in the group can run it. He says that before they do that, they want to document it and get it into a more stable state. (I can understand that! We have so much software I'm embarrassed to have anybody else use because of the undocumented and messy state it's in.)

Gerson asks if Ariel tried mimicking the 42-SNe paper, to see what lensing does to it. Ariel says that the effects are negligible, as we had also calculated here before (using the extreme case).

Greg asks if Ariel does spatial demagnification, i.e. maps square degrees onto real volume when looking behind a cluster. Ariel says that he doesn't do it exactly right at the moment, in terms of specifying search areas (i.e. he works in space rather than area on the sky). It's not clear whether at the moment he takes into account the fact that behind a massive cluster, magnification means that you are effectively looking at a lot less volume.
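(For the record: magnification mu maps an image-plane solid angle dOmega into a source-plane solid angle dOmega/mu, so a field behind a cluster with mean magnification mu surveys roughly 1/mu as much source volume.)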

Things people want added to Ariel's Software


Gabriel on the Stockholm group

Gabriel is the student Ariel got at Stockholm for free. He's going to tell us what they're doing out there. He says that primarily they're working on the intermediate-z search at the NOT. They're also working on the NOT spectroscopy from the Spring '99 data. (The nearby campaign?)

He shows that there is a huge amount of data from the Spring '99 nearby search, from several telescopes, and he lists the number of nights at each telescope (spectra and photometry). In total, it's 161 nights of data, over 35GB.

He lists the 10 SNe Ia, the 4 SNe II, and the Ic. The lowest Ia redshift is z=0.009 (SN1998ac); the highest is z=0.149 (SN1999bq=nr245.7). He shows the original estimates from the first spectra (those from the telegram), and compares them to the estimates they have made in Stockholm from their reduced spectra.

He says that they started with SN1999aa, because that's the only one for which they have three nights. (Is that three NOT nights?) (This is the KAIT supernova.) This one turned out to be a very peculiar Ia. (Peter says it's a 91T-like SN, but not quite, because it has a huge Ca H&K trough.) He says that this forced them to go deeply into the reduction scheme and understand it very well.

He shows some of the images of some of the supernovae.

He says that some of the spectra are very difficult to extract, especially the SNe II, because they were either too faint or lost in the galaxy. Greg notes that it should be possible to extract them, and also that there are final reference spectra that we obtained during the spring.

He shows plots of some of the spectra.

He says that they have a web page. For each night, they list the files that were reduced, and the parameters of the extraction and the calibration. This will be useful later for going back to figure out what happened.

Plans for the future involve reducing the data from the other telescopes. He says that they will also put a database of the reduced data on the website.

The URL of their webpage is www.physto.se/~snova. Gabriel says that this is not password protected; Greg and Peter note that probably it ought to be, because we've had web pages raided for data in the past.

Reynald asks what they hope to do with the data once they've reduced it. Ariel notes that there are at least 8 science topics that one could do with them. There will be discussions, presumably, about who does what.


Gaston on NOT SN Spectral Reduction

Gaston is another of Ariel's students. They've done the data reduction with IRAF packages. He shows a plot of the QE of the detector; it's a Ford-Loral CCD. It's a 15um 2048x2048 pixel detector with 0.188"/pix, about 6 e- read noise, and about 1 e-/ADU. He also plots the response of the two grisms (Grism #4 is blue, Grism #5 is red).

The steps are the standard ones: bias, flatfield, spectrum extraction, wavelength calibration, flux calibration, and combination. He says that they haven't done any fringing correction yet, but that may be something that's useful to do with grism #5. (There is some side discussion about extracting a SN which is blended with the galaxy. Some spectral wonks mention that you can trace on the galaxy, with an offset for the SN. Presumably this will be discussed offline.)

He shows us what is done with the flatfields. They are normalized along the dispersion axis: a high-order function is fit along that axis, which takes out the wavelength response of the lamp used to make the flatfield. He shows an example from SN1999aa, and the steps involved with that. The trace pulls off the spectrum of the SN. (As a function of dispersion, it wanders in space, due to intrinsic detector variations as well as differential atmospheric refraction.) Greg says that if you worry about spectrophotometry, you might want to spatially rectify the spectra based on traces of bright objects.
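(A minimal numpy sketch of the normalization step, assuming dispersion runs along the second axis; the fit order and fake lamp shape are illustrative, and this is not the IRAF task they actually used.)

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch of flatfield normalization: fit a high-order Legendre
# polynomial to the mean lamp spectrum along the dispersion axis and
# divide it out, leaving only the pixel-to-pixel response.
def normalize_flat(flat, order=15):
    lamp = flat.mean(axis=0)                     # mean lamp spectrum
    x = np.linspace(-1.0, 1.0, lamp.size)
    coef = legendre.legfit(x, lamp, order)
    return flat / legendre.legval(x, coef)[np.newaxis, :]

rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 2048)
lamp_shape = 1000.0 * np.exp(-0.5*(x/0.6)**2)    # fake lamp response
flat = rng.normal(1.0, 0.01, (100, 2048)) * lamp_shape
norm = normalize_flat(flat)
print("normalized flat mean = %.3f" % norm.mean())   # ~1 by construction
```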

To calibrate, you follow the same trace on an arc-lamp exposure, with narrow lines of known wavelength. He shows a plot of residuals; the RMS dispersion is 1.5 Angstroms. This is approximately half a pixel, which sounds high to Greg. Apply the solution to the spectrum, and then you have a wavelength-calibrated spectrum.
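(A sketch of the idea, with a made-up line list; a real one comes from the arc atlas.)

```python
import numpy as np

# Sketch of the wavelength solution: fit pixel -> wavelength for the
# identified arc lines and inspect the residual RMS.  The positions,
# dispersion (~3 A/pix), and scatter here are all invented.
pix = np.array([150., 420., 700., 1010., 1330., 1650., 1900.])
lam = 3600. + 3.05*pix + 2e-5*pix**2 \
      + np.random.default_rng(5).normal(0, 1.5, pix.size)  # "measured" lines
coef = np.polyfit(pix, lam, 2)
resid = lam - np.polyval(coef, pix)
print("RMS = %.2f Angstroms" % np.sqrt(np.mean(resid**2)))
# at ~3 A/pix, an RMS of 1.5 A is indeed about half a pixel
```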

Greg asks about cosmic rays; Gaston says that they've done nothing with them. Gabriel notes that they were removed by hand. Greg says that he would do it by hand in the 2-D frames: if you're well sampled spatially, you can do a very good job using imedit in IRAF. Gaston says that in cases where there were multiple spectra of the same object, you can do it during the combination.

The sensitivity function is fit to observations of standard stars: the observed counts are compared to the known flux distributions of the stars, and a function is fit to represent the sensitivity. Apply this to the supernova, and then you have a flux-calibrated spectrum. The bluest part of the spectrum gets very noisy because of the response of the grism.
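(A sketch of the calibration step on synthetic arrays; the fitting function and band shapes are assumptions, not the internals of the IRAF tasks they used.)

```python
import numpy as np

# Sketch of flux calibration: the sensitivity function is the ratio of
# observed standard-star counts to its tabulated flux, fit with a
# smooth function and then applied to the SN.  All arrays are fake.
wave = np.linspace(3800., 7500., 500)
counts = 1e4 * np.exp(-0.5*((wave-5500.)/1200.)**2)   # observed counts/s
f_table = 1e-15 * (wave/5500.)**-2                    # tabulated F_lambda
sens = counts / f_table

xw = (wave - wave.mean()) / (wave.max() - wave.min()) * 2   # rescale for fit
coef = np.polyfit(xw, np.log10(sens), 6)              # fit in log space
sens_fit = 10**np.polyval(coef, xw)

sn_counts = 500. * np.exp(-0.5*((wave-6000.)/900.)**2)  # the SN spectrum
sn_flux = sn_counts / sens_fit                           # flux-calibrated
print("peak SN flux ~ %.2e" % sn_flux.max())
```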

There's a discussion about using the tables of standard stars and how far the known spectra go to the blue... sometimes, evidently, it's not far enough.

Finally, he shows a combination of two spectra from SN1999aa. He notes that they can apply an extinction correction.

He also shows us SN1999ak. This was a much fainter SN that was discovered too late (something like 13 days after maximum).


Serena on Systematic Effects in Reduction and Problems

Serena is Ariel's third student. She's talking about some problems they had and their solutions, and about checking systematic effects. She will also mention some still-open problems. She says that the simplest way to check was to use the standard stars. Ideally, you have more than one, but some nights have only one star. To do this, use the standard star to calculate the sensitivity function, then use that function to recalibrate the star (or another star), and see if you reproduce the table spectrum.

One case was 13feb99 at the NOT, with standards GD71 and HZ44. She shows the sensitivity function and residuals using an order-6 and an order-15 Legendre polynomial. The RMS was about the same in these two cases. However, she says that they discovered that with the order-15 function there was a problem in the blue; with order 6, the problem was better (though not completely gone). What they learned from the fits is that you have to take a range of wavelength where the sensitivity function looks more or less flat, then delete the points due to sky features, and finally choose a low order for the fit. (Greg notes that there's no reason to use low order, and that he usually uses higher order to take out all the systematics; for that reason, he likes to use very well-sampled standard stars (i.e. well sampled in the table). Ariel said that there were scary, perhaps unphysical wiggles showing up in the residuals with the higher-order fits, and that it was probably safer to just stick with a lower order.) Serena shows an example of the difference between the 15th and 6th order. Greg points at one of the places of difference, and says that right there there is very weak but very spread-out water absorption.
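(A synthetic illustration of the order-6 vs. order-15 tradeoff: comparable global residual RMS, but the higher order can chase noise near the spectrum edges. The "sensitivity" curve here is invented.)

```python
import numpy as np
from numpy.polynomial import legendre

# Synthetic order comparison: fit the same noisy "sensitivity" curve
# with two Legendre orders and compare both the residual RMS and the
# fit-vs-truth error at the blue edge.
rng = np.random.default_rng(3)
x = np.linspace(-1.0, 1.0, 300)            # rescaled wavelength
truth = 1.0 + 0.5*x - 0.3*x**2             # smooth "true" sensitivity
data = truth + rng.normal(0.0, 0.01, x.size)
for order in (6, 15):
    coef = legendre.legfit(x, data, order)
    fit = legendre.legval(x, coef)
    print("order %2d: resid RMS = %.4f, blue-edge error = %.4f"
          % (order, (data-fit).std(), np.abs(fit-truth)[:15].max()))
```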

Serena shows how the systematic errors were estimated. She built a histogram of the residuals, and calculated the standard deviation from that. Note that these residuals look like broad wiggles, rather than just pixel-to-pixel noise. For Grism 4, GD71, she got 0.0... (I didn't get it). For Grism 5, it was 0.06. This is compared to the statistical errors (assuming Poisson errors). She plots the Poisson uncertainty as a function of wavelength. In the range 4000-7000 Angstroms, the statistical error is 0.005 to 0.007. Hence, the standard deviation calculated is a measurement of the systematics in the standard star residuals.
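(A sketch of the comparison: the standard deviation of the broad residual wiggles versus the Poisson level expected from the counts. The numbers are chosen only to be of the same order as Serena's.)

```python
import numpy as np

# Sketch of the systematics estimate: compare the standard deviation
# of broad calibration residuals to the Poisson expectation from the
# counts.  Counts and wiggle amplitude are illustrative.
rng = np.random.default_rng(4)
counts = rng.poisson(30000, 1000).astype(float)   # e- per resolution element
poisson_frac = 1.0/np.sqrt(counts)                # fractional, ~0.006
resid = 0.08*np.sin(np.linspace(0.0, 20.0, 1000)) \
        + rng.normal(0.0, poisson_frac)           # broad wiggles + noise
print("measured std  = %.3f" % resid.std())       # dominated by the wiggles
print("Poisson level = %.3f" % poisson_frac.mean())
```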

That was based entirely on using one star to calibrate itself. Next, they tried using GD71 to calibrate HZ44. Greg notes that at Keck, he seems to remember seeing that HZ44 has a companion, which he doesn't remember having seen reported anywhere. So, what you observe may depend on slit angles and so forth. (Greg says that the most obvious thing to check is whether you were at the parallactic angle.) Here, the systematic deviation (standard deviation) is 0.05, in between the two stars. (There is also a wavelength calibration problem here; Greg notes that that can be due to a shift within the slit.)

Chris points out that if the seeing varies between two observations, that can also cause systematics in the blue.

Peter says that in the end, what he does after getting a spectrum is to compare it to the photometry, and tilt or curve the thing to make the colors match.
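(A sketch of what that tilt might look like in practice, with boxcar bandpasses and a linear tilt as simplifying assumptions; this is not necessarily Peter's actual procedure.)

```python
import numpy as np
from scipy.optimize import brentq

# Sketch of "tilt to match photometry": scale the spectrum by a linear
# function of wavelength chosen so that a crude synthetic B-V matches
# the photometric color.  Spectrum, bands, and target color are fake.
wave = np.linspace(3800., 7000., 400)
flux = 1e-15 * np.exp(-0.5*((wave-5000.)/800.)**2)   # placeholder spectrum

def synth_mag(f, lo, hi):
    m = (wave > lo) & (wave < hi)
    return -2.5*np.log10(np.mean(f[m]))

def color_mismatch(s, target=0.10):                  # target B-V, hypothetical
    tilted = flux * (1.0 + s*(wave-5400.)/1000.)
    return (synth_mag(tilted, 3900., 4900.)
            - synth_mag(tilted, 5000., 6000.) - target)

s = brentq(color_mismatch, -0.5, 0.5)                # solve for the tilt slope
flux_tilted = flux * (1.0 + s*(wave-5400.)/1000.)
print("tilt slope per 1000 A: %.3f" % s)
```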


Planning Issues

Saul wants to pick up on the planning issues we were talking about yesterday. He wants one more brainstorm on what we might want to address, to write down a few ideas on what sorts of plots and Monte Carlos we want to do, and to decide when we should next meet to discuss them.

First, Greg has some followups on what we discussed yesterday. He points out that the ACS has two modes: a wide-field mode, 202" on a side (similar in resolution to the PC, but with a much larger field), and an even higher-resolution (29"x26") mode. Greg thinks that we're more interested in the wide-field mode, since we don't need to resolve our SNe any better, and we need the area to line up the observations with each other. He shows a page that Alex Conley prepared with the information about the ACS, and how it compares to the WFPC.

NOTE: Next fall, should we be putting our SNe in the WF, not the PC?

Saul writes up a list of questions for the HST, and questions for the Spring 2001 run.

Questions for HST (proposal due this fall):

We need to aim to settle a lot of this within a month and a half.

Questions for the Spring Run:

Meet again here at 2:38 to pick up on what people are working on.