PFU+WFI - Commissioning Plan 
and Status Report

Moon observed at AAT in U with WFI

U-band image of the Moon acquired on the AAT on 2 Feb 2001 by Chris Tinney, Fraser Clarke and Gordon Schafer. The exposure was 0.1s, acquired about 40 minutes after sunset. Copyright AAO.


Scope

The following is a combined commissioning plan and status report for WFI and PFU on the AAT. It will be continually updated.

In the HTML version of this document, the report sections and plan sections are highlighted in different colours.

History

Version 1.0  - Plan Only - 27 July 2000. Chris Tinney, Gary Da Costa.
Version 2.0  - Plan & Report - 31 October 2000. Chris Tinney
Version 2.1 - Updated Report - 25 January 2001. Chris Tinney. Especially for WARM 183K operation, and sensitivities. Direct Imaging Calculator updated.


Shortcuts to Results.

WFI
PFU

Introduction to WFI

General descriptions of WFI and PFU can be found at http://www.aao.gov.au/local/www/cgt/wfi/wfi_pfu.html

The following figure shows the layout of the WFI CCDs in the mosaic's focal plane. On this page we only refer to CCDs by their 1-8 numbers (some locations in the CICADA operating system use C-based 0-7 counting). The PFU shutter opens and closes in the EAST-WEST direction (or equivalently the top-bottom direction of the mosaic as seen in the layout below, and as usually seen on image displays).

WFI CCD Layout

 

What's New

This page was updated in January 2001 with the results of commissioning and test data acquired 23-27 December 2000. The major change from earlier operation is the switch to a much warmer 183K operating temperature for the focal plane, in order to improve CTE. The penalty for this is increased dark current, which is now definitely not flat. Dark exposures are essential for WFI observers, as are bias/zero exposures, as the bias is now not flat either. Sensitivity estimates have been measured for several filters, and for each CCD in the mosaic.
 

1. WFI - Science CCD Performance

1.1 For each CCD in the mosaic, and for all available readout speeds  we must measure the following parameters.
1.1.1   Read Noise in operation on the AAT. Long-term records of read-noises and gains in various binnings are available.
1.1.2   Gain in operation on the AAT. Long-term records of read-noises and gains in various binnings are available.
Only one speed (FAST) is available.

Read noises and gains were determined using the MSCFINDGAIN task in IRAF for two sets of usable data on the nights of 25 and 26 December. The focal plane was operated at 183K for these tests.

Results of 2 observations over 2 nights - Unbinned. CCD Section=[800:1000,1400:2000]



CCD      Gain (e/adu)     ReadNoise (e)
1        1.45,1.46        5.2,5.2
2        1.70,1.72        5.9,13.5?
3        1.94,1.97        5.7,6.7
4        1.73,1.72        4.9,4.9
5        2.00,2.04        4.7,4.6
6        1.67,1.67        4.7,4.5
7        1.88,1.88        4.2,3.9
8        1.68,1.71        4.0,4.6
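MSCFINDGAIN implements the standard photon-transfer technique on a pair of flats and a pair of biases. The statistic can be sketched in numpy as follows (the function name and array shapes are illustrative, not the IRAF implementation):

```python
import numpy as np

def find_gain(flat1, flat2, bias1, bias2):
    """Photon-transfer estimate of gain (e-/adu) and read noise (e-)
    from a flat-field pair and a bias pair over the same CCD section."""
    flat_diff = flat1.astype(float) - flat2
    bias_diff = bias1.astype(float) - bias2
    # Gain: (sum of flat means - sum of bias means) over
    # (flat difference variance - bias difference variance)
    gain = ((flat1.mean() + flat2.mean()) - (bias1.mean() + bias2.mean())) \
           / (flat_diff.var() - bias_diff.var())
    # The bias difference variance is twice the per-frame read variance
    read_noise = gain * bias_diff.std() / np.sqrt(2)
    return gain, read_noise
```

Applied to e.g. the [800:1000,1400:2000] section of two flats and two biases, this yields statistics of the kind tabulated above.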

1.1.3   Linearity. This should result in a polynomial correction which can be applied to bias corrected images.
December 2000 & February 2001 - Linearity data was acquired with the dome flat lamp calibrated as a function of time using repeated 5s and 10s exposures. The resulting 2nd order polynomial calibration has lamp fluctuation residuals of +-0.2%.

We determine a mean count rate for the linear part of the linearity curve (or at least for the middle part of the linearity curve), from which we can predict the 'true' counts at all times. By plotting the measured counts Nm against the true counts Nt we can derive a linearity correction alpha.

Nm = Nt ( 1+alpha*Nt )
or equivalently if alpha << 1,
Nt = Nm ( 1-alpha*Nm )
The non-linearity at a count level Nt~Nm is then just alpha*Nm, and when alpha is positive you need to SUBTRACT counts from the measured signal to get a linear signal.
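As a sketch, the correction in that single-alpha approximation looks like this (hypothetical helper name; alpha values are tabulated below in units of 1e-6):

```python
def alpha_correct(nm, alpha):
    """Single-parameter linearity correction: Nt = Nm*(1 - alpha*Nm).
    nm is the bias-subtracted measured signal in adu."""
    return nm * (1.0 - alpha * nm)
```

For example, with the CCD1 value alpha = -0.121e-6, a measured 30,000 adu corrects upward by about 109 adu.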

Unfortunately, plots of Nm vs Nt are incredibly difficult to analyse - the deviations of interest are tiny, and so invisible on a plot. Much more useful is to examine Nm/Nt vs Nm. When this is done you can (a) see whether particular data points are outliers from the general trend and should be deleted, and (b) actually examine different parametrisations to see which works best.

When we do that we find that the 'standard' parametrisation above in terms of a single 'alpha' term is in fact pretty lousy. It will get you an approximate linearity correction, but not a very good one. For CCDs like CCD4 and CCD7, whose non-linearity curves are themselves strongly count-dependent, it is especially poor. For comparison with other AAO CCDs we therefore provide the alpha number, but I can't recommend you actually use it.

To see examples of how bad the alpha parametrisation is, view the following Postscript files: CCD1, CCD2, CCD3, CCD4, CCD5, CCD6, CCD7, CCD8. The general trends in behaviour are similar to those seen in other MITLL CCDs. CCD4 has the linearity profile which is most discrepant from that of the other CCDs. It is also the only Phase I device in the mosaic.


Linearity measurements and Rough alpha (Linear) parametrisations.

Do not use these to correct data - use the polynomials below.
      24AUG00, V filter, 1024x1024 central window, 3% `lamp' stability.
      25DEC00, g filter, 1024x1024 central window, 0.2% 'lamp' stability
      02FEB01, i filter, 1024x1024 central window, 0.2% 'lamp' stability
      CCD  Useful range      AUG00            DEC00       FEB01
           (adu above bias)  Alpha*10^6       Alpha*10^6  Alpha*10^6
      1    0-53,100    -0.24 (+-0.05)   -0.154        -0.121
      2    0-54,000    -0.05      "     -0.090        -0.098
      3    0-49,500    -0.29      "     -0.203        -0.217
      4    0-52,400    +0.53      "     +0.534        +0.160
      5    0-53,000    -0.38      "     -0.145        -0.099
      6    0-56,350    -0.47      "     -0.347        -0.219
      7    0-53,100    -0.38      "     -0.235        -0.189
      8    0-52,800    -0.47      "     -0.256        -0.157


 

Linearity Measurements and Polynomial Parametrisations


Much better fits are obtained with a polynomial parametrisation
 

Nt/Nm = A0 + A1*Nm + A2*Nm*Nm + A3*Nm*Nm*Nm .....
The term A0 is usually fixed at 1, and A1 is usually fixed at 0.0. CCD7 requires higher order terms than A2. CCD4 is the most unusual device, as its non-linearity does not follow the general trend (which is to be asymptotically more linear at lower counts, and to deviate at high counts). CCD4 requires a second order fit, with all parameters being free. With the exception of the zeroth order term (which essentially just adjusts the CCD gain) we derived identical fits in December and February for CCD4. The remaining CCDs all also showed very similar fits on both tests.

We have some confidence therefore that these polynomial calibrations can be generally applied.

To apply these corrections, your data reduction procedure would be

  1. Overscan subtract and trim each image
  2. Bias (or zero) subtract each image
  3. Run a linearity correction program to replace the value in each pixel (Nm(x,y)) with
     Nm*(A0 + A1*Nm + A2*Nm*Nm + A3*Nm*Nm*Nm .....)
  4. Proceed with dark subtraction, flat fielding, etc as per usual.
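The linearity-correction step might look like the following numpy sketch (the coefficient dictionary shows CCDs 1 and 4 only as examples; take the full A0-A3 set for all eight CCDs from the table of recommended corrections):

```python
import numpy as np

# Example A0-A3 coefficients (CCD1 and CCD4 only; see the table for all eight)
COEFFS = {
    1: (1.0,     0.0,        3.86e-12,   0.0),
    4: (1.02753, -1.6455e-6, 1.7791e-11, 0.0),
}

def linearity_correct(nm, ccd):
    """Apply Nt = Nm*(A0 + A1*Nm + A2*Nm^2 + A3*Nm^3) to
    bias-subtracted counts nm (scalar or array) for the given CCD."""
    a0, a1, a2, a3 = COEFFS[ccd]
    nm = np.asarray(nm, dtype=float)
    return nm * (a0 + a1 * nm + a2 * nm ** 2 + a3 * nm ** 3)
```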
Recommended Polynomial Linearity Corrections for WFI.
(A log of past measurements is available here.)

CCD   A0        A1           A2             A3            Residuals  Max ADU      Uncorrected non-linearity
                                                            (%)      above bias   <0.5% below (adu above bias)
1     1          0            3.86e-12                      0.4       52,000       36,000    Plot
2     1          0            3.18e-12                      1.0       54,000       35,000    Plot
3     1          0            8.43e-12                      0.9       42,000       24,000    Plot
4     1.02753   -1.6455e-6    1.7791e-11                    0.8       52,000       <10,000   Plot
5     1          0            4.26e-12                      1.3       44,000       32,000    Plot
6     1          0            5.93e-12                      1.3       57,000       28,000    Plot
7     1          0           -5.50957e-12   2.39334e-16     0.6       54,000       36,000    Plot
8     1          0            5.37e-12                      0.4       53,000       30,000    Plot

Note that for most CCDs these non-linearities are quite significant. CCD4, for example, is non-linear at all count levels! Non-linearity corrections are therefore strongly recommended for WFI reductions.

The non-linearities observed are not dissimilar to (though in some cases more extreme than) those seen in the AAO's MITLL2a and MITLL3 detectors.

As a rule of thumb, try to always keep your targets peaking at < 30,000 adu. Non-linearity will then be at most a few per cent (except in CCD4) without correction.


For most observations however, linearity correction is recommended. Applying the above corrections should get the linearity <0.2% below the MAX ADU number above in each CCD.
 
See section 3.3.3 below for measurements of the 'non-linearity' at low count levels, which in this case is almost certainly due to timing errors in the PFU's shutter.
1.1.4   Full well depth and/or A-D saturation.
December 2000 & February 2001 (NB: These results updated 16 Feb 2001 - previous results were in error)

Linearity calibrations were acquired in small 1024x1024 pixel windows in the centre of each CCD using a flat field lamp over a range of exposure times. 5s and 10s exposures over the course of the sequence were used to calibrate the lamp to ~0.2%. From these data we made plots of 'Calibrated RAW ADU above bias' versus 'Exposure Time' for each CCD. From these we can derive where the full well limits lie in raw adu, averaged over the centre of the 1024x1024 window in each CCD.

IN  ALL CASES SATURATION IS DUE TO REACHING FULL WELL, RATHER THAN A/D CLIPPING. This means data obtained NEAR the saturation limit may be subject to severe non-linearity. Observers are advised to always stay several thousand ADU below these limits, and if linearity is critical to their science requirements, to stay below 30,000adu.

The December and February results are consistent with each other, giving some confidence that these results will be generally applicable to WFI observations. Look at the plots to see how I derived these full-well limits. All CCDs have full wells over 80kph.



December 2000 & February 2001 - Saturation Results
CCD  Satur'n in      Bias   Satur'n  Gain   Full Well
   (adu above bias)  (adu)  (raw adu)(ph/adu)  (kph)
1    56,000          3450    59,450  1.45    81.2    Plot
2    53,000          4100    57,100  1.71    90.6    Plot
3    42,000          3992    45,992  1.96    82.3    Plot
4    52,000          4549    56,549  1.72    89.4    Plot
5    44,000          2848    46,848  2.02    88.9    Plot
6    56,000          3368    59,368  1.67    93.5    Plot
7    54,000          4002    61,000  1.88    101.5   Plot
8    55,000          4514    59,514  1.69    92.9    Plot
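The last column is just the saturation level above bias multiplied by the gain; a one-line sketch of the conversion (hypothetical function name):

```python
def full_well_ke(sat_adu_above_bias, gain_e_per_adu):
    """Full well in thousands of photoelectrons (kph) from the
    saturation level (adu above bias) and the gain (e-/adu)."""
    return sat_adu_above_bias * gain_e_per_adu / 1000.0
```

E.g. CCD1: 56,000 adu x 1.45 e/adu gives 81.2 kph, matching the table.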


 
1.1.5   Impact of Charge Transfer Efficiency on astronomical images on the AAT for a variety of filters.
Data was obtained on sky in December 2000, but has not been analysed yet.

What we have been able to do is compare the trailing of cosmic rays on dark images from both the August (cold) and December (warm) operating temperatures. These are shown below - sorry the images are so large, but they are necessary. At the penalty of poorer dark current performance, the SCTE has been greatly improved and is now acceptable in all CCDs.
 

CHARGE TRANSFER

CCD1  GOOD     CCD2  OK       CCD3  GOOD     CCD4  GOOD
CCD5  OK       CCD6  GOOD     CCD7  GOOD     CCD8  GOOD

1.1.6 Timing - Determine times to read out full mosaic, and commonly used subwindows.
 
Improved performance of the AAT WFI computer has significantly reduced the overhead in transferring images after they are read, from the 56-75s (depending on load) seen in Aug 2000, to 56-59s (depending on load). The remaining timings below have not changed significantly since August.

  Binning   Size      Time to read     Time to read+transfer
Full mosaic windows
    1x1     2098x4136x8     53s          56s-59s depending on load
    2x2     1049x2068x8     23s          23s-??s depending on load
    3x3      700x1379x8     12s          ~15s
Centre of mosaic windows
    1x1     2048x2048x4     26s          26s
    1x1      256x256 x4      3s           3s
Centre of each detector windows
    1x1     1024x1024x8     11s          11s
    1x1      512x512 x8      6s           6s
One whole single detector
    1x1    2098x4138        53s          53s
Two whole detectors (one per controller)
    1x1    2098x4138        53s          53s

Notes
  • x4 sizes are windows in the centre of the mosaic (eg.256x256x4 is the central 512x512 pixels of the mosaic).
  • It takes the same time to read 1 detector or 8. The only time saving is in transferring data, and once the AAT WFI computer's CPU and DISKs are optimised even this will be negligible.
1.1.7   Effects of windowing - does windowing produce any noticeable impact on any of the other CCD parameters (other than read time of course)?
 
Presently windowing can produce small 'edge' effects on up to 50-70 pixels near the top/bottom of the window boundary (near the top of CCDs read through the upper controller, near the bottom of CCDs read through the bottom controller). These are not always present, and not always present on both controllers, but are most commonly seen as a ~10% flux deficit in the affected rows.

Eg. you can look at GIF images of 1024x1024 pixel windows in the center of each chip illuminated by a flat field lamp and the V filter. On the 24aug0055 frame, CCDs 1, 2, 3, and 4 all show a bottom edge effect. CCDs 5, 6, 7 and 8 do not show any effect. On the other hand when the exposure time was increased for 24aug0060, edge effects were seen at the top of CCDs 5, 6, 7 and 8 , but not CCDs 1, 2, 3, and 4.

 
Don't trust the bottom/top 70 pixels of windowed data.
This effect is presumably caused by saturation of the readout register by the rows which are not being used, and the readout register requiring time to recover once rows stop being skipped and readout of the window starts.

The major problem with windowing is that it does not, at present, provide an overscan. This complicates reduction. Also the reduction pipeline based on IRAF cannot handle data sets based on several windows - they have to be reduced separately. This means it is HIGHLY ADVISABLE TO TAKE ALL DATA IN ONE WINDOW. The seconds saved at the telescope in read-time do not justify the pain in reduction.

1.1.8   Effects of binning - does binning produce any noticeable impact on any of the other CCD parameters(other than read time of course).
No noticeable effect on gain and read-noise. The comments above on windowing and data processing overheads apply also to binning. Seconds saved in read-out will rarely justify the extra pain in data reduction.
1.1.9   Quantum efficiency - these measurements will have to be performed in the lab. Standard star measurements should indicate whether we obtain results consistent with lab measurements on the telescope.
RSAA were unable to make any in-lab measurements of the WFI CCD QEs. No data obtained or obtainable in the August run. The QEs of the CCDs are approximately known from measurements made by UCO/Lick and GL Scientific. However these were not necessarily made at the temperatures at which the devices are being operated in WFI, so are not completely reliable.
1.1.10  Evaluate times for readouts in various windowings and binnings, including (a) Time to read-out mosaic, (b) Time to display, (c) time after which next exposure can be started, (d) time to write to disk, (e) time to write to tape, (f) time to display already obtained image from disk, and interaction with a current CCD read.
See report on 1.1.6 above. Data is displayed in real time (f) as it is transferred. A new exposure can be started (c) as soon as the read+transfer is completed. Data was not written to tape (e) during the night, so we did not test the impact of writing data to tape as you go on performance.
1.1.11  Evaluate dark count rates for each CCD in a central window [800:1000,1400:2000]
December 2000 : WFI is now run at a warm 183K. The dark current rates below reflect this warmer temperature, with dark counts of ~30e per 1800s (a noise contribution of about 5.5e-). This means that in typical 300-600s exposures, dark current will contribute about half as much noise again as read noise.
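The dark-current noise contribution is just shot noise on the accumulated dark counts; the scaling with exposure time can be sketched as (hypothetical helper):

```python
import math

def dark_noise_e(rate_e_per_1800s, exptime_s):
    """Shot noise (e-) from dark current accumulated over exptime_s
    seconds, given a dark rate quoted in electrons per 1800s."""
    return math.sqrt(rate_e_per_1800s * exptime_s / 1800.0)
```

At ~30e/1800s this gives ~5.5e in 1800s, dropping to ~2.2-3.2e in 300-600s exposures, compared with the ~4-5e read noise.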

At least one of the exposures below showed anomalously high dark currents (25DEC0035). The cause is not known.

 
All exposures were taken with the focal plane at 183K. Columns give ADU above bias and e/1800s for each exposure.

      26DEC0017 1800s   27DEC0051 1800s   25DEC0035 1800s   25DEC0037 1800s   02feb0155/156
      Guider CCDs on    Guider CCDs OFF   Guide CCDs on     Guide CCDs on     Guide CCDs on
CCD   ADU   e/1800s     ADU   e/1800s     ADU   e/1800s     ADU   e/1800s     ADU   e/1800s
1     17    25          18    26          56    -           23    -           20    28
2     16    27          10    17          37    -           18    -           20    34
3     10    20          11    21          30    -           13    -           17    33
4     29    50          30    52          72    -           37    -           33    54
5     12    24           7    14          26    -           15    -           19    38
6     10    17           7    12          20    -           14    -           20    34
7     11    21           9    17          32    -           15    -           21    39
8     14    24           9    15          36    -           18    -           22    37
The dark levels also have considerable structure. This means dark frames must be acquired and subtracted from all data.
The following images compare a 1800s dark frame taken with the guide CCDs turned on (after zero-subtraction, trimming and overscan subtraction), with an 1800s dark frame taken with the guide CCDs turned off (after zero-subtraction, trimming and overscan subtraction), and with a zero (or bias) frame (trimmed and overscan subtracted).

CCD4 shows considerable structure on very large scales (as well as the worst dark current). CCD1 shows a bright region on its right edge. CCDs 1,2,5,7 and 8 all show "warm blobs" where dark currents are elevated by 5-50 adu per pixel over several hundred pixels.

Turning the Guide CCDs on produces  four bright spots at the edges of the mosaic peaking at 100-200adu/hr, and slightly elevated overall dark currents.
 
 
26dec0017 1800s dark - Guide CCDs on. Stretch=-100 to 200
27dec0051 1800s dark - Guide CCDs off. Stretch=-100 to 200
Zero frame (7 frames medianned). Guide CCDs on. -100 to 200
 
 

 
1.2 BIAS Performance - for each CCD in the mosaic we must evaluate BIAS performance, including.
1.2.0 Determine rough differences between bias levels of different CCDs.
Averaging over 16 bias frames taken over 3 nights (24-26dec00) with the focal plane at 183K.
Examination of the bias levels in 16 frames taken over 3 nights shows that bias levels fluctuate up and down by +-2adu on all timescales - between subsequent exposures, over hours, and between nights. Overscan subtraction is important.

CCD     Mean     Standard Deviation
im1     3451.6   +- 1.8
im2     4080.1   +- 1.6
im3     3992.2   +- 2.4
im4     4549.6   +- 3.1
im5     2848.4   +- 2.0
im6     3368.8   +- 1.8
im7     4002.0   +- 0.6
im8     4514.0   +- 0.4
Averaging over 42 bias frames taken over 3 nights (23-25aug00)

CCD     Mean     Standard Deviation
im1     3457.9   +- 1.5
im2     4110.4   +- 0.9
im3     4028.4   +- 2.4
im4     4611.4   +- 2.9
im5     2879.9   +- 1.7
im6     3366.7   +- 1.5  (except for the last 5 biases, which jumped to 3440.6 +- 0.8)
im7     4046.2   +- 0.8
im8     4537.7   +- 0.6

1.2.1   Is the BIAS flat - ie. well approximated by a single number for the entire CCD?
December 2000 : A significant change in WFI operation made between August and December 2000 was the change to a warm 183K operating temperature, to improve CTE. The main penalty for this is increased dark current, which is quite non-uniform. A number of 'spots' appear, which while not bright, do have increased dark current. As a result, the creation of both bias and dark frames is now essential. Most of the other August comments still hold.

August 2000 : Most of the CCDs show one of the following: bad columns, trapping sites, small LEDs. In general the BIAS frames seemed quite flat away from these regions. Bad columns and trapping sites generally don't subtract well with bias frames (or as IRAF calls them, zero frames). Nonetheless, with so much data in each frame, it will make sense to obtain and subtract BIAS frames always, so that observers don't have to rely on the detectors always being flat, since it will be hard to ensure they are!

By the 3rd-5th nights of the run (23-25aug, after tweaking on the 3rd night by Mark Downing), the biases away from bad columns, trapping sites etc were very flat, and the subtraction of a constant value as determined from the overscan seems to be a good approximation.
 

1.2.2   If not, does the BIAS have a characterisable shape (eg. a repeatable profile in X or Y directions).
August 2000 : The only 'repeatable' structure seen in the bias/overscan frames was a Y-direction droop/rise at the bottom of a few CCDs on a few occasions. However, in almost every case, trying to allow a polynomial or spline fit to the overscan region actually introduced more 'false' bias structure than it fixed the small structure which was present. Correction with a single constant fitted to the overscan for each CCD will probably be the safest procedure in the near future.
1.2.3   Does the BIAS have cosmetic defects which subtract from images to produce useful data. If not these defects must be recorded to create a bad-pixel mask.
Most of the bad pixels and/or bad columns don't really subtract well.
Data to construct these masks was obtained.
1.2.4   Does the BIAS show pick-up noise? If so, at what level and what frequency does the noise appear. How many BIAS frames must be medianned to remove this pick-up noise. What is the impact of the pick-up noise if just treated as extra read noise?
The bias frames do show evidence for pickup noise, though not at a level significantly higher than the read-noise. Given the most common high background applications in which WFI will be used, this is probably not worth worrying about.
1.2.5   Is the BIAS variable? By how much does it change in a typical 1 hour and 12 hour observing periods.
Overscan levels seem to vary by up to +-2 adu over the course of a night. These variations can be on very short time-scales - even between subsequent bias frames.
1.2.6   The above information will be used to determine the optimum technique to be used for BIAS subtraction in the data processing pipeline, and the best overscan or BIAS frame information needed to implement that technique.
December 2000 : Because of the high dark current, it is now important to create a bias frame (essentially a bias plus the dark current accumulated during read-out). Because of the +-2ADU bias instability it is also important to do over-scan subtraction. The recipe below, therefore still holds - do overscan correction with a constant, then make a bias/zero frame and subtract that from all images.

August 2000 : At present fitting a constant to the overscan region and subtracting is recommended, followed by the subtraction of a zero/bias frame made up from 9-15 biases taken during your run.

The recommended parameters for the MSCRED version of ccdproc (related to overscan subtraction) are

ccdtype="" interactive=no function=legendre order=1 sample=* naverage=1 niterate=2 low_reject=3.0 high_reject=3.0  grow=0.0

Overscan correction can also be done in PIPELINE, where you should be able to see the same parameters set in the /opt/cicada/config/iraf_table file.

You should follow this by doing a zerocombine using the following parameters in IRAF.

cl> zerocombine input=@zerofiles.lis output=yourzerofile combine=median reject=sigclip lsigma=5 hsigma=5 mclip=yes scale=none

You can then use the file yourzerofile.fits for zero correction of your data in either PIPELINE, or directly in IRAF.
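For observers reducing outside IRAF, the combine step can be approximated in numpy. This is a rough sketch of a median combine with per-pixel sigma clipping, under the same spirit as zerocombine above, not a drop-in replacement for it:

```python
import numpy as np

def combine_biases(bias_stack, sigma=5.0):
    """Median-combine a stack of bias frames (shape: nframes x ny x nx),
    rejecting pixels more than sigma standard deviations from the
    per-pixel median before taking the final median."""
    stack = np.asarray(bias_stack, dtype=float)
    med = np.median(stack, axis=0)
    std = stack.std(axis=0)
    # Mask outliers (e.g. cosmic-ray hits) with NaN, then median the rest
    clipped = np.where(np.abs(stack - med) > sigma * std, np.nan, stack)
    return np.nanmedian(clipped, axis=0)
```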

1.3 FLAT FIELDS - again the information described below must be obtained for all CCDs in the mosaic.
1.3.1   Obtain example flat-fields for the mosaic in each of the UBVRgriz filters.

1.3.2   Determine whether there are pixels which will not flat-field, for entering into a bad-pixel mask.

1.3.4   Determine optimum flat-field creation technique. This will require exploring a number of options for each filter.

  • Dome flats (especially try dome flats with lights turned on only when shutter is open).
  • Twilight flats.
  • Dark sky flats.
  • In each case, a useful prescription for actually obtaining these flats (and how long they will take), and how to process images into flat fields must be provided for visiting astronomers.

    1.3.5   Determine whether full mosaic, unbinned images can be used to create useful flat fields for binned and/or windowed images, or whether separate flat fields are needed for every binning/windowing.

    1.3.6   Determine utility of flat fields from one night for flattening data from another night. (The aim here is to see whether we can create libraries of flat fields). It would be useful to unbolt WFI/PFU during the day in the middle of the run, and re-attach it to simulate a 'new run' to test this.

    1.3.7   Obtain standard star observations rastered over the entire field, so we can determine whether flat-fielding creates a truly flat field.
     

    Flat fielding with WFI cannot create a truly flat field, as outer pixels are larger on the sky than inner ones (due to astrometric distortion). So you can either produce a 'FLAT SKY' flat field, which makes the sky look uniform across the mosaic, or a 'PHOTOMETRIC' flat field, which has a constant sensitivity across the mosaic but a non-uniform sky.

    2. WFI Guider Performance


    The most significant comment on the performance of the WFI guider CCDs is that they contain some component which is injecting light pollution into the edges of the WFI mosaic. The effects can be clearly seen in a pair of 1800s darks acquired on 24aug (24aug0004,5) - click on the images to see larger versions. This is shown above for the December warm operating temperature, and below for the slightly colder August 2000 operating temperature.
     
     
    Guider CCDs OFF during 1800s dark
     
    CCD5 expanded
    Guider CCDs ON during 1800s dark
    CCD8 expanded
    CCD4 expanded
    CCD1 expanded

     

    Examination of these images shows that when the guide CCDs were turned off the dark current levels for cold operation (prior to December 2000) were typically 1.5-3 adu/1800s, but when the guider CCDs are turned on the general dark current level increases to 5-10adu/1800s, and in the bright regions at the edges of CCDs 1, 4, 5 and 8 the contamination peaks at 50-100adu/1800s.

    These effects will be much smaller in the typical exposures expected for AAT broad band imaging (5-10min), but nonetheless dark exposures will be critical if guiding is to be used.
     


     
    2.0 Make it work.
    We were unable to test the guider with the telescope in anger, though we were able to confirm that the guider will drive the telescope, that the guider PC does talk to the guider CCDs, and that it can all be run from the control room.
    2.1 Evaluate WFI Guide CCDs - Though detailed knowledge of the read-noises and gains of the WFI guide CCDs is probably not essential for everyday operation, it would be useful to measure them periodically to ensure their continued operation.
    2.1.1 - Read noise, dark current and gain for each guide CCD.
     
    Each guide CCD delivers an image of 320x240 10um pixels, corresponding to a field of view of 48x32" on the sky at a scale of 0.15"/pix. Orientation on the sky is currently unknown.


    CCD   Gain      Read Noise  Bias   Satur'n   Dark Counts
         (e/adu)      (e)       (adu)  (adu)      (e/300s)
    1     7.967     135.233     383
    2     6.178     129.074     216              124
    3     DEAD
    4     8.13      154.188     258              154
    5    11.287     163.332     339
    6     6.89      128.802      83
    7     7.226     127.832     194
    8     6.966     124.919     121              111


    2.1.2 - Determine whether there are particular guide CCDs which are better than others.
    2.1.3 - Examine guide CCD cosmetics. Are there CCDs or CCD regions which should be avoided?
     

    The images below show the 7 working guide CCDs, illuminated by a flat field lamp through an r filter. All are pretty good, with a few noticeably bad spots, but most of the area being quite usable. The results above show none to have particularly bad read noise. Dark current is equal to or less than read noise for all conceivable guiding exposures.
    G8    G1
    G7    G2
    G6    DEAD
    G5    G4
    It may also be useful to look at the following pair of images which show what you see when the guider CCD full wells are reached. The left image is a 4s image with G8 of a dome flat field lamp (~5400adu). The right image is a 6s exposure with the same lamp, and we see that at 8500adu full well has been reached.
    G8: 4s,5400adu 
    G8:  6s,8500adu
     
    2.2  - Evaluate WFI Guide CCD Performance.
    2.2.1 - Are all guide CCDs confocal with mosaic?
    No data could be obtained.
    2.2.2 - Determine how faint a star can be reliably guided on in each passband, by placing Landolt standard stars in the guide CCDs.
  • What is the fastest rate at which guiding can be done? What is the brightest star which can be guided on at this rate?
  • What is the slowest rate at which guiding can be usefully carried out? What is the faintest star which can be guided on at this rate?
  • No data could be obtained.
    2.3 Determine the best operational modes of the guider CCDs. That is what's the best way to acquire a field, open shutter, acquire guide star, close shutter, start exposure, start guiding. How to interact dithering of a field with re-acquiring guide stars?
     
    No data could be obtained.

     

    3. PFU Performance

     
    3.1 SCATTERED LIGHT / LIGHT LEAKAGE. Ensure light leaking through both the front and back of PFU has been reduced from the unacceptable level seen in January 2000 commissioning.
     
    Light leakage was tested as follows, using a U filter. A series of 180s exposures were taken - several with the shutter open and dome lamps on, followed by a dark with the dome lamps left on. The lights were enough to produce 12000-14000 adu in 180s with the shutter open. When the shutter was left closed, we could not detect any light signal in a 180s exposure at the few adu level. Leakage through rear/front of PFU < 4.0e-5 = 0.004%.

    Another test was done with a 60s dark + 60s read being performed with lights on in the dome, to record < 3adu leakage. A 5s exposure with the same lights on produced 2060adu/s above bias. Assuming a 90s effective 'dark exposure time' we have a leakage limit < 0.002%. Leakage through front/rear of PFU < 2e-5=0.002%
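The leakage limits above are simple ratios of the closed-shutter signal limit to the open-shutter signal; as a sketch (hypothetical helper name):

```python
def leakage_fraction(dark_adu_limit, open_rate_adu_per_s, effective_dark_s):
    """Upper limit on light leakage as a fraction of the open-shutter
    signal, from a non-detection limit in a closed-shutter exposure."""
    return dark_adu_limit / (open_rate_adu_per_s * effective_dark_s)
```

E.g. < 3 adu over a 90s effective dark, against 2060 adu/s with the shutter open, gives < 1.6e-5, i.e. the quoted < 0.002%.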

    In early August, further tests were performed with an AAO CCD mounted on the PFU in the Coude W room.

    Low General Light Test
     An incandescent lamp was used to evenly illuminate the entire room, including the floor at which the PFU+CCD looks. A 100s exposure allowed 55,897 adu through the shutter onto the CCD. A 100s dark produced no detectable counts above bias. From this I conclude the unit is light tight to 'over-all' illumination to better than 0.005%.

    Light Incident on Shutter Test
     An incandescent lamp was shone directly 'under the skirt' of  PFU. A 2s exposure collected 37,182 adu. A 60s dark (with the lamp still on) collected 5 adu above bias, or 0.0005% of the photons incident on the closed shutter. This seems an acceptable level of leakage.
     
     
    The light leakage problem with PFU found in January 2000 has been fixed by blackening and better baffling.

    3.2 FILTER WHEEL.
     
    3.2.1 Ensure filter wheel reliably selects filters at all observable telescope angles.
    It does.
    3.2.2 Ensure filter initialisation is reading filter names from wheel correctly.
    It does.
    3.2.3 Ensure filter selected is getting its name correctly into file header.
    It does not. Currently the filter information is read by PFU, and even transferred into a table in CICADA. But nothing is going into the files.
    3.2.4 Ensure filter wheel and filter inside holder inside wheel is stably held in place.
    Were unable to test.


    3.2.5 Test SDSS griz filters for pinhole flaws, and leakage around edge of filter.

     


    3.2.6 Measure filter change time for changes of 1,2 and 3 positions in wheel.

     
    3.2.7 Measure light leakage/contamination due to wheel motion during exposures or reads.
     
    During a 60s dark frame, the PFU filter wheel was advanced by 1 slot 9 times. The resulting exposure showed no evidence of light leakage or contamination at the 1-2 adu level. This compares favourably with the extreme (several hundred adu) level of light leakage from the filter wheel optical encoders seen in January 2000 commissioning. It is probably safe to move the filter wheel while WFI is being read out, but until more experience is gained we will continue to recommend no wheel motions while the detectors are being read.
    3.3 SHUTTER
    3.3.1 See if it works.
    Problems were had with shutter reliability on at least one night. The problem was eventually tracked down to the air pressure available to run the PFU pneumatics. This had been dialled down to ~100, and the shutter operated unreliably at this level. It needs to be run at around 250-300. At this pressure the shutter operated (ie opened, closed and initialised) without problems.

    In fact, further experience has shown the shutter to be very reliable. When problems do occur, they are always due to low air pressure.

    3.3.2 See if it works reliably (lots of exposures. All telescope orientations).
    PFU did appear to operate reliably (ie open and close without sticking or failing) at a wide variety of telescope orientations.
    Timing information was acquired at both 4:00W and at the zenith. Unfortunately, instabilities in WFI seem to preclude analysing this data at a high level of precision. While no evidence was seen for a systematic change in the shutter dead time (see 3.3.3 below) the measured dead times were not always the same.

    There is a strong case for having PFU return timing information on how long it thinks the exposure was, so that we can explore calibrating the delivered time to higher precision.


    3.3.3 See if it gives us the time we asked for.

    Shutter timing and uniformity data taken in August 2000 revealed a problem with the CICADA timing of readouts with shutter triggering. This was resolved in December 2000. All the shutter timing data sequences below were calibrated to remove lamp variations.
    Timing data was acquired on 27 December, with the telescope at zenith.  Some data was also acquired with an AAO CCD (so the CICADA bug was not relevant) on 9 August before PFU went on the telescope, and poor data was acquired in August 2000 with the CICADA system.

    In all cases, the results are parametrised as

    Tactual = Trequested - t0
    The sign of t0 is in the sense that positive t0 implies the exposure fell short of the requested time.
     

    August 9, 2000 : Estimated shutter 'dead time' was 31+-1ms (ie actual exposures were 31ms shorter than requested).

    August 24, 2000 : This data was poor, but indicated the dead time was 50+-50ms (ie actual exposures were 50ms shorter than requested).

    December 27, 2000 : A more comprehensive data set was acquired. The lamp was calibrated to be constant to within +-0.1% over time. The data have been processed in two ways: (M1) by plotting the observed counts as a function of requested exposure time and making a linear least squares fit; (M2) by plotting the requested exposure time as a function of observed counts and measuring the extrapolated exposure time at which the counts become zero. You can view the data as Postscript files M1: CCD 1, 2, 3, 4, 5, 6, 7, 8 and M2: CCD 1, 2, 3, 4, 5, 6, 7, 8
     


    27DEC00 Exposure sequence Zenith 
    CCD  t0 : M1         t0 : M2
           (ms)            (ms)
    1    18+-4             18
    2    17+-4             18
    3    17+-4             18
    4    18+-5             24
    5    16+-7             13
    6    16+-7             13
    7    18+-6             18
    8    17+-5             15

    Mean t0 = 17.1 +- 2 ms.


    The December data would seem to robustly determine the shutter delay as 17 ms (each exposure is 17 ms too short). This means exposures longer than 2s will have absolute timing good to better than 1%. Observers seeking precise timing information from short exposures are advised to measure the shutter delay for themselves and correct their exposure times, until we gain enough experience to guarantee the delay is constant from run to run and with telescope position.
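    Method M1 is a straight-line fit of counts against requested exposure time, with t0 given by the intercept on the time axis. A minimal pure-Python sketch with illustrative (not real) data:

    ```python
    def fit_dead_time(t_req, counts):
        """Fit counts = rate*(t_req - t0) by linear least squares and
        return t0 (same units as t_req) - the M1 method."""
        n = len(t_req)
        mt = sum(t_req) / n
        mc = sum(counts) / n
        slope = sum((t - mt) * (c - mc) for t, c in zip(t_req, counts)) \
                / sum((t - mt) ** 2 for t in t_req)
        intercept = mc - slope * mt
        # time at which the fitted counts extrapolate to zero
        return -intercept / slope

    # Illustrative data: lamp rate 1000 adu/s, true dead time 17 ms.
    t = [0.1, 0.2, 0.5, 1.0, 2.0, 5.0]
    c = [1000.0 * (x - 0.017) for x in t]
    print(fit_dead_time(t, c))  # ~0.017
    ```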

    February 2, 2001 : Another comprehensive data set was acquired. The lamp was calibrated to be constant to within +-0.1% over time. The data has been processed in two ways; (M1) by plotting the observed counts as a function of requested exposure time, and making a linear least squares fit; (M2) by plotting the requested exposure time as a function of observed counts, and measuring the extrapolated exposure time when the counts become zero. You can view the data as Postscript files M1: CCD 1, 2, 3, 4, 5, 6, 7, 8  and M2: CCD 1, 2, 3, 4, 5, 6, 7, 8
     


    02FEB01 Exposure sequence Zenith 
    CCD  t0 : M1         t0 : M2
           (ms)            (ms)
    1    22+-2             23
    2    20+-3             21
    3    20+-2             22
    4    22+-5             28
    5    20+-4             23
    6    20+-3             22
    7    23+-3             25
    8    22+-3             25

    Mean t0 = 22.4 +- 2 ms.


    The February data seem to provide a robust t0 estimate, though one about 5 ms different from that obtained in December. So far no tests have been done to determine the shutter delay as a function of telescope position (all tests were made with the telescope at zenith).
     
     
     
    We  conclude that 
      1. Exposures of longer than 5s will always have uncorrected exposure times in their headers good to 0.4%
      2. Observers for whom absolute shutter timing is important should correct their exposure times to be 20ms shorter than the time requested (though they should also assume a residual uncertainty of at least 5ms in their exposure length). Resulting shutter timing for exposures of longer than 5s will be good to 0.1%.
      3. Observers for whom shutter timing is hyper-critical should obtain a set of shutter timing data during their run to check that the 20ms offset above still holds for their instrumental set-up.
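    The bookkeeping in point 2 is a simple subtraction; a minimal sketch using the 20 ms offset and 5 ms residual uncertainty quoted above:

    ```python
    def true_exposure(t_requested, t0=0.020, t0_err=0.005):
        """Corrected exposure time and its fractional uncertainty, using
        the ~20 ms shutter dead time (with ~5 ms residual uncertainty)
        quoted in the text."""
        t = t_requested - t0
        return t, t0_err / t

    t, frac = true_exposure(5.0)
    print(f"5s request -> {t:.3f}s actual, +-{100*frac:.2f}%")
    ```

    For a 5s request this gives 4.980s with a ~0.1% residual uncertainty, as in point 2.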
     
     

    3.3.4 Determine minimum exposure time which gives 1% uniformity across the field.

    It would appear that PFU meets its specification of delivering better than 1% uniformity for exposures of > 2s.

    In fact 1s exposures seemed to be uniformly exposed at the 0.7% level in August 2000, and to the 0.5% level in December 2000. At this sort of level, flat fielding difficulties limit our ability to probe shutter uniformity more closely.
     

    December 2000

    Shutter uniformity was examined by acquiring long (10-40s) exposures and using them to flatten short (0.05-1s) exposures. Any resulting non-uniformity should be due to the shutter not uniformly exposing the field of view. In particular, we searched for variations in shutter illumination in the E-W (ie up-down) direction - the direction of shutter travel.
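    The flattening test can be sketched as: divide the short exposure by the long one, collapse along the direction of shutter travel, and take the peak-to-peak range of the profile. A minimal pure-Python version on toy arrays (real data would of course come from the FITS images):

    ```python
    def peak_to_peak_nonuniformity(short_img, long_img):
        """Flatten a short exposure with a long one, collapse across
        the direction of shutter travel (one mean per row), and return
        the peak-to-peak variation as a fraction of the mean."""
        profile = []
        for s_row, l_row in zip(short_img, long_img):
            ratio = [s / l for s, l in zip(s_row, l_row)]
            profile.append(sum(ratio) / len(ratio))
        mean = sum(profile) / len(profile)
        return (max(profile) - min(profile)) / mean

    # Toy images: a 0.5% gradient along the shutter direction in the short one.
    long_img = [[1000.0] * 4 for _ in range(5)]
    short_img = [[100.0 * (1 + 0.005 * i / 4)] * 4 for i in range(5)]
    print(peak_to_peak_nonuniformity(short_img, long_img))  # ~0.005
    ```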
     
     
    26DEC0006 1s flattened with 40s: This image shows a 1s exposure obtained on 26DEC00 through a Gunn g filter. It has been bias subtracted and flattened with a 40s exposure taken directly before it.

    The count level in the image is ~420adu. You can view vertical cuts  through CCD3 and CCD6, which show the  residual peak-peak  non-uniformity is ~0.5%.

    The following exposure (with the shutter blades travelling back in the  reverse direction) was also analysed to find a similar result for both CCD3 and CCD6.

    We also acquired a similar set of data on 27DEC00 through a V filter, with a 0.05s short exposure and a 10s long exposure. In this case we saw a more marked non-uniformity, in both CCD3 and CCD6. However, its form is that of a central doughnut, so it is thought this is due to flat fielding problems in dealing with light reflected off the CCDs and off the V filter to form an out-of-focus image of the sky. This effect would be worse in V than in g, as the V filter is not AR coated.

    In this exposure the non-uniformity is about 1.4% in a 50ms exposure. This is still well within the specification of <1% for a 2s exposure.


     
     

    Early August tests with AAO Single MITLL CCD

    I examined shutter uniformity by flattening short (0.3s and 1.0s) exposures with a long (60s) exposure. The full CCD was used. The CCD's long axis was aligned with the direction of shutter blade travel. There is a detectable deviation from uniformity along the direction of shutter travel: 0.7% peak-to-peak in the 0.3s exposure, and 0.4% peak-to-peak in the 1s exposure. This is well within the PFU specification of 1% uniformity for exposure times longer than 2s.

    The plots below show the deviation from uniformity in the direction of shutter travel, derived by collapsing the flattened short image along its long axis. The top plot is for the 0.3s exposure, the lower for the 1s exposure. The 'glitches' in the 0.3s exposure are thought to be due to the bias jumps MITLL2A produces, rather than the shutter itself. The shutter would produce something more like the 'domed' pattern.

    Note that the single MITLL2A used in these tests has only half the physical length of the WFI mosaic.
     
     
    Cut along shutter motion - 0.3s Exposure (flattened)
    Cut along shutter motion - 1.0s Exposure (flattened)

     

    August Commissioning Run Shutter Uniformity - the "Early Readout" problem

    The PFU shutter opens/closes in the East-West direction. On the WFI CCDs this is equivalent to the 'long' direction of the individual CCDs. We prepared plots showing the results of extracting 1000 pixels from the centre of each CCD from an image made by flattening a 1s exposure (25aug0046) with a 20s exposure (25aug0045). Modulo flat fielding errors (the cause of which I'll come to), the resulting patterns in the 'flattened' 1s exposure should be due to exposure non-uniformity produced by the shutter. (I do not believe the normalisation errors between CCDs in the plots are significant - we should only look at the patterns within a CCD.)

    What we saw was a ~0.6% shape in the lower four CCDs (1-4) which is consistent across all four detectors. The upper four CCDs (5-8) appear very odd, with considerable structure present. If we look at the following 0.3s exposure (25aug0047) we see exactly the opposite effect: the top CCDs (5-8) show a smooth profile, and the bottom CCDs (1-4) show the jagged ones.

    In fact, when we looked at the images themselves, we saw that 25aug0046 was almost perfectly flattened by 25aug0045 in its lower third, and poorly flattened in its upper two thirds. We saw the reverse in 25aug0047 - the upper third was well flattened and the lower two thirds were not.

    What was causing this? We believe it was due to WFI beginning its read-out before PFU has finished closing its shutter. The version of CICADA (the software controlling WFI and its SDSU2 controllers) in use in August did not implement any delay between triggering a shutter close and starting CCD readout. That is, it assumed the shutter closed immediately after being triggered, and that it moves with infinite velocity.

    In practice, all shutters have some (usually small, 1-20 ms) delay between being triggered and starting closure, and a finite close time (as small as 25 ms for small shutters). In the case of PFU, which has a pair of slowly moving shutter blades (time to travel across PFU ~1s) which aim to move in exactly the same way to achieve precise and uniform exposures, this effect is quite severe.

    So when a PFU shutter blade is not fully closed before readout starts, one edge of the mosaic will be slightly under-exposed and will have its image slightly trailed (the extent of both being a function of the ratio of the exposure time to the shutter travel time deficit). Because the direction of shutter travel alternates between exposures, one side of the array is subject to this effect on one exposure, and the other side of the mosaic on the following exposure.
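    The scaling described above is just the ratio of the travel-time deficit to the exposure time; a trivial sketch (the 50 ms deficit is an assumed, illustrative figure, not a measured one):

    ```python
    def underexposure_fraction(deficit_s, exposure_s):
        """Worst-case fractional under-exposure at the trailing edge of
        the mosaic when readout starts deficit_s before the blade has
        finished its travel."""
        return deficit_s / exposure_s

    # An assumed 50 ms deficit hurts short exposures far more than long ones:
    print(underexposure_fraction(0.050, 1.0))   # worst case for a 1s exposure
    print(underexposure_fraction(0.050, 20.0))  # worst case for a 20s exposure
    ```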

    The effect of the image trailing will also be to 'smear' flat field structure along each array in the readout direction. This smeared structure will not be corrected by flat fielding, and so will appear in the flattened images as apparent residual flat fielding errors. This smearing will always be in the same direction regardless of the direction of shutter motion, because charge is always transferred to the lower and upper edges of the mosaic for readout. This is also what we see - blow-ups of the inner regions of the mosaic between CCDs 4 and 5 in both the 25aug0046 and 25aug0047 exposures show the trailing away from the readout registers (ie towards the centre of the mosaic).

    Finally we see similar small trails from bright stars in either the upper or lower CCDs of the (very) few images we took on sky on 21AUG.

    This "early readout" is almost certainly the cause of the 'lamp' instability seen in the shutter timing and linearity tests discussed above. Unfortunately there is no way to correct for it. In principle we could use the counts from only the 'good' CCDs in these tests, except that we have no way of knowing which CCDs are 'good', because we receive no timing or direction information from the shutter. The long term solution is to increase the delay between when the SDSU2 controllers trigger shutter closure and when they start the readout.

    3.3.5 Determine time offset between exposure time asked for, and exposure time given for exposures which meet the 1% uniformity criterion.
    See above 3.3.3


    3.3.6 Determine whether shutter timing flags enable us to measure the
    time offset for each exposure.

    No shutter timing flags currently available. They will be essential to high precision timing with PFU.

    4. WFI+PFU Performance

    4.1 Image Quality - Evaluate delivered image quality over the entire mosaic. This will require taking focus frames in each filter.
    Very little on-sky data was obtained in August; much more was obtained in December 2000. Examination at the telescope showed good image quality across the mosaic and in all filters in 1" seeing.

    We found we could obtain near 1" images right across the field. No obvious trends were observed indicating that either individual CCDs did not lie in the focal plane, or that the optics were delivering aberrated images.

    This suggests there will be no problem in using WFI with the triplet, but only very preliminary data have actually been processed at present.


    4.2 Focus

    4.2.1 Determine focus offsets for each filter (from a zero-point in a single 'defined' filter we expect to be present in the wheel on most runs - probably the SDSS i filter). This will require a night of reasonably good seeing, and should be repeated several times throughout commissioning.
     
    We measured the following focus values for filters in CCD6 - quick examination seemed to show this gave good focus in all the other CCDs, but the data have yet to be examined in detail. In the following table we adopt the V (WFI Schott) filter as our reference.
     
    Telescope Focus values in available WFI passbands

    Filter               Telescope Focus   Focus Offset     FOCOFF NAME
                         (mm)              (V=0.00) (mm)
    U (WFI Schott #48)   39.72             +0.77            U
    B (WFI Schott #49)   39.60             +0.65            B
    V (WFI Schott #50)   38.95             0.00             V
    R (WFI Schott #51)   38.95             +0.04            R
    g (WFI SDSS #90)     39.26             +0.31            GG
    r (WFI SDSS #91)     39.21             +0.26            GR
    i (WFI SDSS #92)     39.12             +0.17            GI
    z (WFI SDSS #93)     39.13             +0.18            GZ
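    Given the table, the focus for any filter is the measured V focus plus the tabulated offset; a minimal lookup sketch (offsets from the table above):

    ```python
    # Focus offsets relative to V (mm), from the CCD6 focus table.
    FOCUS_OFFSETS = {"U": 0.77, "B": 0.65, "V": 0.00, "R": 0.04,
                     "g": 0.31, "r": 0.26, "i": 0.17, "z": 0.18}

    def telescope_focus(filter_name, v_focus=38.95):
        """Telescope focus for a filter, given the focus measured in V."""
        return v_focus + FOCUS_OFFSETS[filter_name]

    print(f"{telescope_focus('U'):.2f}")  # 39.72
    ```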


    4.2.2 Ensure the filter selection / telescope focus interaction works. Determine a procedure (once focus offsets are known) for start-up each night so that the filter selection - focus interaction works.

    Could not do on this run. At present it is not clear whether CICADA is even setup to handle automatic adjustment of the telescope focus.


    4.3 Ghost / Light Concentration Images.

    4.3.1 Check for scattered light / light concentration images. If these are present at greater than the 1% level they should be instantly visible as large doughnuts in images with high sky levels. At levels below a percent or so, they will have to be detected by rastering standard stars.
    No obvious ghost images have become apparent, though there are hints at the 1% level in shutter uniformity observations made with the V filter.


    4.3.2 Check for ghost images - ie multiple reflections off optical surfaces caused by bright stars. These should be looked for by putting a bright star in the field, and rastering it around.

    Could not do on this run.


    4.4 Standards & Sensitivities : If a photometric night is available, obtain photometric zero-points and airmass corrections for UBVRgriz filters, IN EACH CCD. Also try to obtain a wide enough range of colours to begin to get a handle on colour terms.
     

    December 2000 : Photometric standards were observed in conditions believed to be photometric on several nights. Detailed flat-fielding and processing are still under way. The following sensitivities were measured from unflattened data. They compare the sensitivities of a number of passbands in CCD6, and also compare CCD6 with the other CCDs at B and I. These numbers have been used to update the Direct Imaging Calculator. As expected, at V the performance is very slightly worse than the TEK. At R and I it is similar or better. CCD4 (the only Phase I device in the mosaic) has very similar sensitivity to the MITLL2A. The remaining chips are better at r and i than the MITLL2A.

    CCD6 was chosen as the 'best' single device based on its cosmetics. CCD7 has better blue sensitivity, but similar red sensitivity to CCD6.
     
    Sensitivities of CCD6 in available WFI passbands

    Filter               Object Ph/s for     Sky Ph/s per pixel   Comments
                         22.5 mag star       (0.2295"x0.2295")
                         at AM=1.25
    U (WFI Schott #48)   1.3                 0.18
    B (WFI Schott #49)   17.5                1.7
    V (WFI Schott #50)   28.5                5.5
    R (WFI Schott #51)   -                   -
    g (WFI SDSS #90)     34.4                5.0                  for V=22.5 star
    r (WFI SDSS #91)     37.8                10.3                 for R=22.5 star
    i (WFI SDSS #92)     27.6                20.0                 for I=22.5 star
    z (WFI SDSS #93)     -                   -

    Sensitivities measured using Landolt standard stars. These data were acquired over
    an airmass range of 0.15. Given usual extinction coefficients for SSO, this will
    produce errors of at most 5% in U and 1.5% in I. Sensitivities were estimated for
    neutral (B-V=V-I=0.0) colour stars.

    Average values across the mosaic have been used in the Direct Imaging Calculator, so you will not derive precisely the same sensitivities as those above. Please, however, always use the Direct Imaging Calculator when preparing proposals.
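    The tabulated rates scale to other magnitudes in the usual way (a factor of 10^-0.4 per magnitude); a minimal sketch using the CCD6 rates above:

    ```python
    # Detected photons/s for a 22.5 mag star at airmass 1.25 (CCD6 table;
    # g, r, i entries are for V, R, I = 22.5 respectively).
    RATE_22_5 = {"U": 1.3, "B": 17.5, "V": 28.5, "g": 34.4, "r": 37.8, "i": 27.6}

    def star_rate(filter_name, mag):
        """Photons/s from a star of the given magnitude, scaled from the
        measured rate for a 22.5 mag star."""
        return RATE_22_5[filter_name] * 10 ** (-0.4 * (mag - 22.5))

    print(star_rate("B", 20.0))  # 17.5 x 10 = 175 ph/s
    ```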


     
    Photons/s for a 22.5 mag star at AM=1.25

    CCD   B (WFI Schott #49)   i (SDSS #92)   Comments
    1     20.2                 17.7           Worst at I
    2     20.1                 26.5
    3     20.4                 26.3
    4     15.8                 27.7           Worst at B (Phase I CCD)
    5     19.2                 25.2
    6     17.5                 27.6           Best cosmetics
    7     22.6                 28.5           Best at B and I
    8     20.5                 26.3

    Best and worst CCDs are noted in the comments. These data were acquired over an
    airmass range of 0.13. Given usual extinction coefficients for SSO, this will
    produce errors of at most 3.5% in B and 1.5% in I.

    5. Astrometry.

    Astrometry data can be obtained using USNO wide field astrometric fields, from the Flagstaff transit CCD instrument.
     
    Two images of astrometric field O from the USNO transit data were obtained. These may be inadequate for a full solution.
    5.1 Determine radial distortion correction for triplet corrector (and for doublet corrector at some future date). Compare with 'a priori' radial distortions. Does it depend on filter, or on focus?

    5.2 Determine CCD positions within focal plane.

    5.3 Determine stability of radial distortion and CCD positions within focal plane.

    Could not do on this run.
    5.4 Explore how to take data, and determine parameters needed to map mosaic into a single image, in MSCRED.
    Could not do on this run.

     

    6. Data Processing Pipeline

    Having determined the best way to construct and correct for bias frames, and the best way to construct and correct flat fields, test the operation of the data pipeline, and determine what 'knobs' have to be turned to what settings in it.
     
    The data reduction pipeline is started with the command
    pipeline -cicada &
    It is basically a GUI front-end to the IRAF ccdproc task, running under the IRAF mscred package. It is recommended you read the IRAF mscred documentation in order to understand what pipeline is doing to your data.

    In essence, however, pipeline will overscan subtract, trim, zero subtract, dark subtract and flat field your data, if you have a Zero frame, a Dark frame and a set of useful flat fields already created. At present you have to use IRAF directly to create these calibration files.

    At present the following reduction steps are recommended.

    1. Overscan subtract, and then trim, all your data, using the regions defined in the BIASSEC and TRIMSEC FITS keywords (this is what pipeline will do, and it is the default action of IRAF's mscred ccdproc unless you specify otherwise). If these keywords are not present in your data, you can add them in IRAF as follows.

    2.  

       
       
       

      hedit *.fits[1] BIASSEC '[2075:2098,5:4098]' add=yes verify=no
      hedit *.fits[2] BIASSEC '[2075:2098,5:4098]' add=yes verify=no
      hedit *.fits[3] BIASSEC '[2075:2098,5:4098]' add=yes verify=no
      hedit *.fits[4] BIASSEC '[2075:2098,5:4098]' add=yes verify=no
      hedit *.fits[5] BIASSEC '[1:24,39:4132]' add=yes verify=no
      hedit *.fits[6] BIASSEC '[1:24,39:4132]' add=yes verify=no
      hedit *.fits[7] BIASSEC '[1:24,39:4132]' add=yes verify=no
      hedit *.fits[8] BIASSEC '[1:24,39:4132]' add=yes verify=no
      hedit *.fits[1] TRIMSEC '[16:2059,5:4098]' add=yes verify=no
      hedit *.fits[2] TRIMSEC '[16:2059,5:4098]' add=yes verify=no
      hedit *.fits[3] TRIMSEC '[16:2059,5:4098]' add=yes verify=no
      hedit *.fits[4] TRIMSEC '[16:2059,5:4098]' add=yes verify=no
      hedit *.fits[5] TRIMSEC '[41:2084,39:4132]' add=yes verify=no
      hedit *.fits[6] TRIMSEC '[41:2084,39:4132]' add=yes verify=no
      hedit *.fits[7] TRIMSEC '[41:2084,39:4132]' add=yes verify=no
      hedit *.fits[8] TRIMSEC '[41:2084,39:4132]' add=yes verify=no

      The recommended parameters for the IRAF ccdproc overscan subtraction (and trim) are

      ccdproc images=@filein.lis output=@fileout.lis noproc=no ccdtype='' xtalkco=no oversca=yes trim=yes zerocor=no darkcor=no flatcor=no function=legendre order=1 sample='*' naverage=1 niterate=2 low_rej=3 high_rej=3
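      The overscan step can be sketched outside IRAF too. A minimal pure-Python version of the operation (this sketch uses a simple per-row mean of the overscan columns, where ccdproc fits a function; the column ranges are illustrative, not the BIASSEC/TRIMSEC values above):

      ```python
      def overscan_and_trim(img, bias_cols, trim_cols):
          """Subtract the mean of the overscan columns from each row and
          keep only the trim region. bias_cols/trim_cols are (start, stop)
          column ranges, Python-style half-open."""
          out = []
          for row in img:
              bias = sum(row[c] for c in range(*bias_cols)) / (bias_cols[1] - bias_cols[0])
              out.append([row[c] - bias for c in range(*trim_cols)])
          return out

      # Toy 1-row image: data columns 0-3 at 1010 adu, overscan columns 4-5 at 10 adu.
      img = [[1010.0, 1010.0, 1010.0, 1010.0, 10.0, 10.0]]
      print(overscan_and_trim(img, bias_cols=(4, 6), trim_cols=(0, 4)))
      # -> [[1000.0, 1000.0, 1000.0, 1000.0]]
      ```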
       
       
       

    3. Create a zero correction using at least 7 bias frames from your run, after overscan subtraction and binning.

    4.  

       
       
       

      zerocombine input=@zero.lis output=Zero combine=median reject=sigclip lsigma=6 hsigma=3 mclip=yes scale=none
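      The zerocombine call above is a sigma-clipped median combine; a minimal pure-Python sketch of the per-pixel operation (illustrative only - IRAF's sigclip rejection differs in detail):

      ```python
      import statistics

      def sigclip_median(values, lsigma=6.0, hsigma=3.0, niter=3):
          """Median with asymmetric sigma clipping of one pixel stack,
          roughly as in zerocombine reject=sigclip lsigma=6 hsigma=3."""
          vals = list(values)
          for _ in range(niter):
              med = statistics.median(vals)
              sd = statistics.pstdev(vals)
              if sd == 0:
                  break
              kept = [v for v in vals if med - lsigma * sd <= v <= med + hsigma * sd]
              if len(kept) == len(vals):
                  break
              vals = kept
          return statistics.median(vals)

      # A cosmic-ray hit in one of eleven bias frames is rejected:
      print(sigclip_median([99, 100, 101, 100, 100, 100, 100, 99, 101, 100, 5000]))
      ```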
       

    5. Apply the zero correction to all images

    6.  

       
       
       

      ccdproc images=@filein.lis output=@fileout.lis noproc=no ccdtype='' xtalkco=no oversca=yes trim=yes zerocor=yes darkcor=no flatcor=no zero=Zero function=legendre order=1 sample='*' naverage=1 niterate=2 low_rej=3 high_rej=3
       

    7. If you intend to apply a linearity correction, you must do it now. Suggested polynomial corrections are available. You can insert the derived linearity coefficients into the file headers of each image extension as follows

    8.  

       
       
       

      hedit *.fits[1] LINCOM "Nt = Nm*(LIN_A0  +  LIN_A1*Nm + LIN_A2*Nm**2 +LIN_A3*Nm**3)" add=yes update=yes
      hedit *.fits[1] LIN_A0 1.0      add=yes update=yes
      hedit *.fits[1] LIN_A1 0.0      add=yes update=yes
      hedit *.fits[1] LIN_A2 3.86e-12 add=yes update=yes
      hedit *.fits[1] LIN_A3 0.0      add=yes update=yes

      hedit *.fits[2] LINCOM "Nt = Nm*(LIN_A0  +  LIN_A1*Nm + LIN_A2*Nm**2 +LIN_A3*Nm**3)" add=yes update=yes
      hedit *.fits[2] LIN_A0 1.0      add=yes update=yes
      hedit *.fits[2] LIN_A1 0.0      add=yes update=yes
      hedit *.fits[2] LIN_A2 3.18e-12 add=yes update=yes
      hedit *.fits[2] LIN_A3 0.0      add=yes update=yes

      hedit *.fits[3] LINCOM "Nt = Nm*(LIN_A0  +  LIN_A1*Nm + LIN_A2*Nm**2 +LIN_A3*Nm**3)" add=yes update=yes
      hedit *.fits[3] LIN_A0 1.0      add=yes update=yes
      hedit *.fits[3] LIN_A1 0.0      add=yes update=yes
      hedit *.fits[3] LIN_A2 8.43e-12 add=yes update=yes
      hedit *.fits[3] LIN_A3 0.0      add=yes update=yes

      hedit *.fits[4] LINCOM "Nt = Nm*(LIN_A0  +  LIN_A1*Nm + LIN_A2*Nm**2 +LIN_A3*Nm**3)" add=yes update=yes
      hedit *.fits[4] LIN_A0 1.02753     add=yes update=yes
      hedit *.fits[4] LIN_A1 -1.6555e-6  add=yes update=yes
      hedit *.fits[4] LIN_A2 1.7791e-11  add=yes update=yes
      hedit *.fits[4] LIN_A3 0.0         add=yes update=yes

      hedit *.fits[5] LINCOM "Nt = Nm*(LIN_A0  +  LIN_A1*Nm + LIN_A2*Nm**2 +LIN_A3*Nm**3)" add=yes update=yes
      hedit *.fits[5] LIN_A0 1.0      add=yes update=yes
      hedit *.fits[5] LIN_A1 0.0      add=yes update=yes
      hedit *.fits[5] LIN_A2 4.26e-12 add=yes update=yes
      hedit *.fits[5] LIN_A3 0.0      add=yes update=yes

      hedit *.fits[6] LINCOM "Nt = Nm*(LIN_A0  +  LIN_A1*Nm + LIN_A2*Nm**2 +LIN_A3*Nm**3)" add=yes update=yes
      hedit *.fits[6] LIN_A0 1.0      add=yes update=yes
      hedit *.fits[6] LIN_A1 0.0      add=yes update=yes
      hedit *.fits[6] LIN_A2 5.93e-12 add=yes update=yes
      hedit *.fits[6] LIN_A3 0.0      add=yes update=yes

      hedit *.fits[7] LINCOM "Nt = Nm*(LIN_A0  +  LIN_A1*Nm + LIN_A2*Nm**2 +LIN_A3*Nm**3)" add=yes update=yes
      hedit *.fits[7] LIN_A0 1.0      add=yes update=yes
      hedit *.fits[7] LIN_A1 0.0      add=yes update=yes
      hedit *.fits[7] LIN_A2 -5.50957e-12 add=yes update=yes
      hedit *.fits[7] LIN_A3 2.39334e-16  add=yes update=yes

      hedit *.fits[8] LINCOM "Nt = Nm*(LIN_A0  +  LIN_A1*Nm + LIN_A2*Nm**2 +LIN_A3*Nm**3)" add=yes update=yes
      hedit *.fits[8] LIN_A0 1.0      add=yes update=yes
      hedit *.fits[8] LIN_A1 0.0      add=yes update=yes
      hedit *.fits[8] LIN_A2 5.37e-12 add=yes update=yes
      hedit *.fits[8] LIN_A3 0.0      add=yes update=yes

      Unfortunately, IRAF does not have a way to include a linearity correction in its ccdproc processing. Nor is there a really useful linearisation routine. You have to use imexpr, which is not bright enough to accept a list of input/output images. Furthermore, you can't make msccmd and imexpr interact in the way they should, so you have to split each MEF image into individual CCDs, correct each individually, then join the images back together.

      If an IRAF guru can provide a script/executable to speed this process up that would be incredibly useful. All advice gratefully received by cgt@aaoepp.aao.gov.au.

      In principle, it should be straightforward to put this in a useful IRAF script. However, the IRAF scripting language is painful enough that I couldn't be bothered, so instead here is a recipe to make a long list of commands which you can pipe into cl.

      #
      # Some csh to make a long file to linearise data in the directory
      # /data1/cgt/FEB01/20010202/A/linearity/zeroed
      # and put the results into the directory directly above.
      #
      echo mscred > doit
      echo cd /data1/cgt/FEB01/20010202/A/linearity/zeroed >> doit
      foreach j ( 02feb*.fits )
         echo mscsplit input=$j mefext=.$j:e delete=no verbose=yes >> doit
         foreach i ( 1 2 3 4 5 6 7 8 )
            echo "imexpr 'a*(a.lin_a0 + a.lin_a1*a + a.lin_a2*a**2 + a.lin_a3*a**3)' \
            output=l${j:r}_$i a=${j:r}_$i intype=double outtype=real refim=a" >> doit
         end
         echo imrename ${j:r}_0 l${j:r}_0 >> doit
         echo mscjoin input=l${j:r} output=../${j:r} delete=yes verbose=yes >> doit
         echo imdel ${j:r}_\*.fits go_ahead=yes verify=no default=yes >> doit
      end
      echo logout >> doit

      cl < /data1/cgt/FEB01/20010202/A/linearity/zeroed/doit

      Note that the imexpr processing is incredibly slow. However it does work. A dedicated routine which operated directly on MEF files would be much faster. It could also be optimised to only calculate terms for which the coefficient is non-zero.
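      A dedicated linearisation routine of the sort suggested is easy to sketch outside IRAF. The following minimal Python version applies the header polynomial to a list of pixel values (default coefficients are the CCD1 values above; a real routine would read each MEF extension and its LIN_A* keywords with a FITS library):

      ```python
      def linearise(pixels, a0=1.0, a1=0.0, a2=3.86e-12, a3=0.0):
          """Apply Nt = Nm*(a0 + a1*Nm + a2*Nm**2 + a3*Nm**3) to a list
          of measured counts Nm. Defaults are the CCD1 coefficients;
          Horner form, so zero coefficients cost almost nothing."""
          return [nm * (a0 + nm * (a1 + nm * (a2 + nm * a3))) for nm in pixels]

      # A 40000 adu pixel on CCD1 gains ~0.6%:
      print(linearise([40000.0]))
      ```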
       
       

    9. Create and apply a dark correction.

    10. Create and apply a flat field. In the first instance this flat field will probably be a twilight flat (or maybe a dome flat for red filters).

    11. Check how well this flat works. You will probably want to create a 'dark sky' flat from your data. Note that if you are observing in the r, i or z passbands, you'll need to defringe your data before making a dark sky flat from it.

    12. Apply a dark sky flat.

    13. Defringe your finally flattened data if necessary.

       
       
       
       
       

    7. IRAF MSCRED Processing Hints

    IRAF MSCRED operations can be a bit obscure. I can't claim to be an expert, but here are a few things it took me ages to work out from the IRAF documentation.
     
     


    Useful Links and other PFU/WFI Resources

    This page maintained by Chris Tinney (cgt@aaoepp.aao.gov.au)