Spring 1999 Nearby Campaign: Data Organization and Reduction

Data organization:

Assemble all observing logs and data tapes. All raw data needs to be put on HPSS. Look at all the observing runs given by LINCAL and make sure there is data for each night. As part of this process, the database should be updated to reflect the outcome of each night of observing; it may be necessary to consult the observing logs. Conversely, we must make sure we have tapes of all data on HPSS. There are several instances (observations at Lick or service observing at the INT) where the only copies of the data are on HPSS.

All raw data needs to be logged electronically in a standardized way (TBD) so that Nearby Campaign collaborators can retrieve the data to which they are entitled. The standard SCP database is not appropriate, since this is for the dissemination of *raw* data to non-SCP collaborators. All data must have header information indicating where it was obtained, by whom, and when the proprietary period (if any) expires. This is needed to keep straight which data are proprietary and who should be included on author lists, and it should be added before the data are disseminated to collaborators. Basically, we must make sure that EROS does not get Smith et al. data, and that Smith et al. do not get EROS follow-up data. If this were to happen there could be unfortunate political repercussions (there is no problem if we took the observations ourselves). The HPSS headers should be further standardized (see the example standard keywords for spectroscopy, below) so that when these data are sent to outside collaborators we do not have to explain which keywords mean what on which telescopes (like translating the filter encoder numbers from the Lick 1-m!).

Data reduction:

Photometry

Here we have the special circumstance that final reference images are not available. The SNe can be divided into three classes: those relatively free of host contamination, those with modest host contamination, and those with serious host contamination. For the photometry I think the standard SCP photometry pipeline can be used, but with some upgrades and modifications.

Upgrades:

- We need the option to obtain photometry using PSF fitting. This effort could be developed in parallel with the HST PSF-fitting photometry code.
- The package which determines the photometric solution from standard-star observations taken on photometric nights should be able to fit nightly zeropoint and extinction terms, but a single set of telescope-wide color terms. It should also allow for, or at least check on, second-order color terms and any temporal variations. I have some C code which can do some of this; a fitting sketch is given below.
- I would like the sky surface brightness and the seeing to be determined and recorded for each image. This is needed to provide feedback to the exposure-time prediction routines (which will be used in some form for future nearby searches).
- There should be a measure of the flat-fielding quality. We should expect larger variations than for our deep searches.

Modifications:

Extra care will be needed during the surfacing phase. Since some host galaxies are large, any high-order surfacing will compromise our ability to obtain host-galaxy photometry. I am also concerned that surfacing will provide inconsistent sky estimates near the SN. The best approach will be to keep the surfacing to very low order (depending on the size of the host) and to make sure the same spatial scale is used for all images. Some testing will be required to determine the best approach.
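As an illustration of the photometric-solution fit described under Upgrades, here is a minimal least-squares sketch in Python/numpy. It is not the existing C code; the function name, the data layout, and the simple linear model (nightly zeropoints and extinction coefficients plus one telescope-wide color term) are assumptions made for the example. Second-order color terms or temporal drifts could be checked by adding extra columns (e.g. color squared, or color times airmass) and examining the change in the residuals.

```python
import numpy as np

def fit_photometric_solution(m_inst, m_std, color, airmass, night_id):
    """Fit m_std - m_inst = zp[night] + k[night]*airmass + c*color.

    m_inst, m_std, color, airmass : 1-D arrays over all standard-star
        observations taken on photometric nights.
    night_id : integer array mapping each observation to its night.
    Returns (zp, k, c): per-night zeropoints and extinction terms,
    plus a single telescope-wide color term.
    """
    nights = np.unique(night_id)
    nobs, nnight = len(m_inst), len(nights)

    # Design matrix: one zeropoint column and one extinction column per
    # night, plus a single color column shared by all nights.
    A = np.zeros((nobs, 2 * nnight + 1))
    for j, n in enumerate(nights):
        sel = night_id == n
        A[sel, j] = 1.0                    # nightly zeropoint
        A[sel, nnight + j] = airmass[sel]  # nightly extinction
    A[:, -1] = color                       # telescope-wide color term

    y = m_std - m_inst
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

    zp = coeffs[:nnight]
    k = coeffs[nnight:2 * nnight]
    c = coeffs[-1]
    return zp, k, c
```

The RMS of the fit residuals, per night and overall, is the natural measure of the internal photometric error and of whether a night really was photometric.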
Note that the surfacing question ties in with how the flat-fielding quality will be determined; on a moonless night the sky should be flat after flat-fielding.

Spectroscopy

Here we again face the question of host-galaxy contamination. I think the situation is slightly worse, since the data can only be interpolated in one dimension. Unfortunately, due to late-time nebular emission we may have to wait a while to obtain "final reference spectra". In the meantime, we should reduce all the SNe to flux-calibrated 2D spectra; further analysis can proceed in cases where the host-galaxy contamination is minor. Each reduction step should generate a keyword recording what reduction step was done, what reference files were used, and at what time. Many IRAF tasks do this already.

Reduction steps:

Standardization. I would like there to be a set of keywords which will be in the headers of all the spectra. Here is a list that I can think of: RA, DEC, UT, DATE-OBS, JD, LST, AIRMASS, HA, exposure time (EXPTIME), FILTER, filter description, slit position angle, slit width, spatial scale, grating, dispersion axis (DISPAXIS), observatory (OBSERVAT), etc. I think there should also be some provision for storing derived parameters in the headers, such as the SN heliocentric redshift (and uncertainty) and the host heliocentric redshift (and uncertainty). More such keywords could be considered (such as the strengths of various spectral features in the SN or host spectrum).

Overscan subtraction.

2D bias subtraction. In deciding whether to subtract the 2D superbias, measure its RMS and determine whether that RMS is larger than expected from the readout noise; if so, subtracting it will improve the data. A better test is to subtract the superbias from all the individual bias images and see whether the noise level decreases.

Dark subtraction. The issues here are the amount of dark current and the pixel-to-pixel variation in the dark current. If the pixel-to-pixel variation is large, then subtracting the dark will improve the data. Both bias and dark subtraction are more important issues for spectroscopy than for imaging, due to the lower background typical of spectroscopy.

Flat fielding. This is the first step in which problems can occur. Here one is trying to take out the pixel-to-pixel response of the CCD, as well as the wavelength response of the CCD, the grating, and the optics. Since the flat-field lamp itself has a wavelength response, there is no way at this stage to decouple the wavelength responses on scales of thousands of Angstroms. However, if one assumes that the output of the flat-field lamp is a smooth function of wavelength, then any undulations in wavelength can be ascribed to the instrument. By fitting a smooth polynomial to the flat in the wavelength direction, one is really trying to estimate how the flat lamp, plus the smooth component of the CCD and grating response, are behaving; this smooth response is then corrected for during flux calibration (a sketch of this approach is given below). A variant of this method is to divide by the 2D flat-field spectrum itself. The main problem with that approach is that the number of photo-electrons per ADU will then vary strongly with wavelength. The solution there is to keep the scaling relation (basically the flat) around for later use. Since flux calibration introduces the same issue, it may make sense to make it a generic feature that the scaling between photo-electrons and array values be tracked for each spectrum.
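To make the first approach concrete, here is one way the smooth-fit flat normalization could be implemented, as a minimal numpy sketch. The axis convention (dispersion along axis 0, slit along axis 1), the polynomial order, and the function name are assumptions for illustration; the point is that the smooth lamp-plus-instrument shape is divided out and saved, leaving only pixel-to-pixel structure in the flat.

```python
import numpy as np

def normalize_flat(flat, order=7):
    """flat : 2-D flat-field frame, dispersion along axis 0, slit along axis 1.

    Returns (pixflat, shape), where pixflat carries only pixel-to-pixel
    structure and shape is the smooth lamp+instrument response that was
    divided out (worth saving, so the photo-electrons-per-ADU scaling can
    be recovered later).
    """
    nwave = flat.shape[0]
    x = np.arange(nwave)

    # Collapse along the slit to get a high-S/N lamp spectrum, then fit a
    # smooth polynomial to it in the wavelength direction.
    lamp = flat.mean(axis=1)
    coeffs = np.polynomial.polynomial.polyfit(x, lamp, order)
    shape = np.polynomial.polynomial.polyval(x, coeffs)

    # Divide the smooth shape out of every slit position.
    pixflat = flat / shape[:, np.newaxis]
    return pixflat, shape
```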
An alternate school of thought is that the flat field should be fit as closely as possible along the wavelength direction, even to the point of using a median filter rather than an analytic fit, and that all of the response should be taken out during flux calibration by dividing by the spectrum of a calibration star. Since at this stage one can be left with a completely non-analytic response, one must have very high S/N spectra of many standard stars (since all stars have real features), and these standard stars must have closely spaced calibration points to allow differentiation between response variations and calibration errors. It is difficult to tell which technique is better. I have always used the first approach, or its variant, but I can see why some might think the second approach is better. The only way to find out is to do both! In fact, I think this will be a valuable internal check on our flux calibration.

Illumination correction. This process corrects for illumination differences between the flat field (taken using an internal lamp or a lamp shining on the telescope dome) and the night sky. Usually a twilight spectrum is taken to represent the night-sky illumination. The illumination correction is generally very smooth; however, there are cases where nicks on the slit are illuminated differently by internal or dome flats than by the night sky. The most sophisticated way of performing an illumination correction is to block average the twilight spectrum in the spectral direction, normalize each spatial column to 1, block expand back to the original size, and then fit along the spectral direction for each spatial column (see the sketch below). Note that this is not what the IRAF illumination-correction task does; it only corrects for very low-order illumination variations. The IRAF approach is probably OK for all but very faint (high-z SNe) targets.
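Here is one possible reading of that block-average procedure, as a numpy sketch. The axis convention (wavelength along axis 0, slit position along axis 1), the number of blocks, and the smoothing order are assumptions, and the block-expand-then-fit step is folded into a single smooth fit through the block centers.

```python
import numpy as np

def illumination_correction(twilight, nblock=16, order=3):
    """Build a 2-D illumination correction from a (flat-fielded) twilight frame.

    Returns an image of the slit illumination, normalized to unit mean at
    each wavelength, by which object frames can be divided.
    """
    nwave, nspat = twilight.shape
    x = np.arange(nwave)

    # Block average in the spectral direction to build up S/N.
    edges = np.linspace(0, nwave, nblock + 1).astype(int)
    centers = 0.5 * (edges[:-1] + edges[1:])
    blocks = np.array([twilight[lo:hi].mean(axis=0)
                       for lo, hi in zip(edges[:-1], edges[1:])])

    # Normalize each block's spatial profile to unit mean; this removes
    # the twilight spectrum itself and keeps only the slit illumination.
    blocks /= blocks.mean(axis=1, keepdims=True)

    # Recover the full wavelength sampling by fitting a smooth function
    # along the spectral direction at every spatial position.
    illum = np.empty((nwave, nspat))
    for j in range(nspat):
        coeffs = np.polynomial.polynomial.polyfit(centers, blocks[:, j], order)
        illum[:, j] = np.polynomial.polynomial.polyval(x, coeffs)
    return illum
```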
Wavelength calibration. This simply involves identifying comparison-lamp emission lines and determining the relation between pixel position and wavelength. Usually a 5th-order polynomial will give a good wavelength solution. Note that the wavelength solution usually shifts in the spatial direction, so it is necessary to trace the emission lines in the spatial direction; the IRAF tasks IDENTIFY and REIDENTIFY help with these chores. Try to obtain a reliable calibration out to the ends of the spectrum. Some extrapolation is always necessary, but check the extrapolation carefully to see whether it is believable; for instance, if the polynomial fit takes a turn outside the region where there are lines, beware. Errors in the wavelength calibration will not only result in a slightly wrong redshift; if the errors are non-linear they will compress or stretch the SN flux, creating bogus spectral features. Since most spectrographs have flexure, the wavelength zeropoint should later be adjusted for each spectrum using night-sky emission lines.

Cosmic-ray rejection. I think it is important to eliminate cosmic-ray hits from the 2D spectra before the wavelength calibration is applied, and certainly before extraction into a 1D spectrum; this minimizes the amount of data that is ruined. One difficulty is that CR rejection from a single spectrum (the typical case for most nearby spectroscopy) involves interpolation, yet at this stage the spectrum still has a sky background which can be strongly structured in the wavelength direction due to night-sky emission lines. I have a routine which helps do this as follows: a) the sky is subtracted; b) CR-like objects are detected and replaced; c) the user can override any replacements; d) the user can then edit any missed CRs. Usually this routine does a pretty good job of cleaning CRs in the sky, but leaves those on the spectrum, where they can be edited by hand. The editing job can be tedious, and I find I can only really do a good job on 10 to 15 spectra a day. However, I think this whole approach is much better than the currently available automated methods. One addition that would be required is to retain a mask of the edited regions (the program produces this, but doesn't save it).

Image rectification. This step resamples the 2D spectrum onto a uniform grid in wavelength using the wavelength calibration determined above, plus any zero-point shift. If possible the spectrum should also be rectified in the spatial direction. This is usually only possible using a spectrum of a linear array of pinholes, something that was not available at all observatories. Note that such spatial rectification only corrects for the instrumental distortion in the spatial direction; a spectrum can still be curved due to differential atmospheric refraction.

Flux calibration. At this point the spectra of the standard stars should be extracted and the system response determined. Note that the correction for atmospheric extinction is usually included at this step; it is very important to determine whether the "standard" extinction curve really is applicable, especially in the blue. The internal consistency of the flux standards is a vital measure of the internal errors, and should be propagated into the error budget. To the extent possible, the sensitivity functions for the different spectrographs should be kept in some standardized form for intercomparison and for use with the exposure-time calculator.

Spectral extraction. Here one has to decide how wide to make the extraction aperture, and how to interpolate the background (composed of sky plus host galaxy). We should try to be uniform in these choices. One issue involving spectral extraction that is not handled (to my knowledge) by the available routines is that the width of the spectrum can change with wavelength due to seeing variations or focus variations in the spectrograph. The result of using a fixed-width aperture in these cases is that the spectrum is modulated by the fraction of light falling within the aperture. Using a large aperture is undesirable, as it greatly complicates the removal of the host-galaxy light. Weighting the extraction can help, since then the pixels at the edge of the aperture receive little weight; all extractions should use variance weighting (see the sketch below). Again, I would like to have a sky spectrum in magnitudes per square arcsec.
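As an illustration of variance-weighted extraction with a wavelength-dependent profile, here is a minimal sketch for a single wavelength column. The Gaussian profile model, the aperture half-width, and the assumption that the sky has already been removed are illustrative choices, not a description of any existing extraction routine.

```python
import numpy as np

def extract_column(data, var, center, sigma, halfwidth=3.0):
    """data, var : 1-D spatial cuts at one wavelength (sky already removed).
    center, sigma : traced spectrum position and width at this wavelength,
        so that seeing or focus changes are followed rather than clipped.
    Returns the variance-weighted flux estimate and its uncertainty.
    """
    y = np.arange(len(data))
    in_ap = np.abs(y - center) < halfwidth * sigma

    # Normalized spatial profile; pixels near the aperture edge get little
    # weight, so small profile changes do not modulate the extracted flux.
    profile = np.exp(-0.5 * ((y - center) / sigma) ** 2)
    profile /= profile.sum()

    w = profile[in_ap] / var[in_ap]
    flux = np.sum(w * data[in_ap]) / np.sum(w * profile[in_ap])
    flux_err = np.sqrt(1.0 / np.sum(w * profile[in_ap]))
    return flux, flux_err
```

Applying this column by column, with center and sigma taken from a smooth trace of the spectrum, addresses the fixed-aperture modulation problem described above while keeping the effective aperture narrow.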