Figaro: Techniques

Keith Shortridge, AAO

26th April 1995

1. Introduction

This document provides a guide to ways of using Figaro functions to perform certain useful operations.

UNIX users should note that all Figaro commands are shown here in upper case. This helps them stand out in the text, but note that under UNIX commands are case-sensitive and all Figaro commands are in fact lower case. The case used for Figaro parameter names is unimportant, but under UNIX file names are case-sensitive. VMS users do not need to worry about the case of commands or file names, since VMS ignores case when processing them.

2. Flat fielding

Flat field division should be relatively straightforward; you take your raw data and divide it pixel for pixel by your flat field. In practice, this may be acceptable for image data, but there are additional considerations to be taken into account when the data involved is spectral.

2.1. Image Data

In principle, you can simply divide your image by the appropriate flat field and use the result. For example:

IDIV IMAGE FLAT RESULT
In practice, you really don't usually want to divide your data by the (typically) very large numbers found in flat fields. Ideally, your flat field should first be reduced to numbers around unity. The easiest way to do that is to divide the flat field by its mean value, and the easiest way to find the mean value is with ISTAT. The command
ISTAT FLAT

will give you the mean value over the whole image. If your flat field has strange effects round the edges, you may prefer to limit the range of x and y values used by ISTAT. For example, if your image is 800 by 800,

ISTAT FLAT 101 700 101 700
will only look at the central 600 by 600 part of the image. ISTAT prints out the value of the mean, and you can then divide the flat field by that. For example, supposing the mean value were 350, the command

ICDIV FLAT 350 FFDIV
will generate a new flat field (FFDIV) that will have a mean value around unity, as desired. If running this sequence in batch mode, you may find it convenient to know that ISTAT sets the Figaro user variable STAT_MEAN to the mean value. So you could use

ICDIV FLAT STAT_MEAN FFDIV
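In Python terms, the normalisation this section describes amounts to the following sketch; the tiny `flat' array and all the names here are invented for illustration.

```python
# Find the mean of the flat field (what ISTAT reports) over a 1-based
# inclusive pixel range, then divide the flat field by that mean so
# that its values come out around unity (what ICDIV produces).

def mean_of_region(image, y0, y1, x0, x1):
    """Mean over the inclusive 1-based pixel range [y0..y1], [x0..x1]."""
    values = [image[y][x]
              for y in range(y0 - 1, y1)
              for x in range(x0 - 1, x1)]
    return sum(values) / len(values)

def divide_by_constant(image, c):
    """The equivalent of dividing the flat field by its mean."""
    return [[pixel / c for pixel in row] for row in image]

flat = [[340.0, 360.0],
        [355.0, 345.0]]
m = mean_of_region(flat, 1, 2, 1, 2)    # the mean ISTAT would report
ffdiv = divide_by_constant(flat, m)     # flat field with mean ~1
```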
2.2. Spectral data

Figaro expects you to have your spectral data oriented so that the x dimension is the wavelength dimension. If this is not the case, use ROTATE to switch the axes. In theory, Figaro does not care whether wavelength increases or decreases with x value, but in practice routines tend to be tested with data whose wavelength increases with x value, and odd bugs may turn up with `reversed' data. You are recommended to get your data into the more common form, if necessary using IREVX. Note that, as always in Figaro, `x' means the horizontal axis when data is displayed using IMAGE, as seen on the image display device.

The main problem with spectral flat fields is that the flat field data will usually vary in x, not because of the instrumental response, but simply because of the spectral response of the flat field lamp. The usual way of dealing with this is to fit this spectral response -- obtained by collapsing the flat field in y -- and then multiplying by the fitted value before dividing by the flat field. Another way to look at this is to consider the result of dividing the flat field by the fit to the spectral response. The result would be an image which was essentially flat in X, but which is `humped' in Y -- since most flat fields fall off at the edges in Y due to instrumental vignetting. Dividing by this -- in a 2D manner -- will give you images where the pixel to pixel variations in the detector have been corrected, along with any spatial vignetting of the instrument, but where the overall wavelength response of the instrument is not corrected.

(Of course, if you think your flat field lamp is flat -- which may be true at high dispersion -- you can just divide by the raw flat field and you're taking out the instrumental wavelength response as well. However, the better way to deal with that is with your standard stars.)

The only problem, usually, is deciding on what represents an acceptable fit to the collapsed flat field. One approach is to fit a polynomial to it. This at least has the advantage of being automatic, and if the result is satisfactory this is probably the best thing to do.

So the recipe is as follows:

1 Collapse the flat field in Y to give a single spectrum. This is the average spectral response of the flat field lamp combined with that of the detector.

2 Fit a low order polynomial to this, giving a smooth spectrum. It may be better to fit to the log of the data if large count numbers are involved, but this is not usually critical.

3 Take each of the cross-sections of constant Y value in the flat field and divide them by this smoothed spectrum. The result is your corrected flat field calibration image.

4 Divide each pixel of your image to be corrected by the corresponding pixel of the flat field calibration image.
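The four steps above can be sketched numerically with plain Python lists. A straight-line least-squares fit stands in for the low-order polynomial fit of step 2, and all the names and values are invented.

```python
def collapse_in_y(image):
    """Step 1: average the rows to get the mean spectral response."""
    ny = len(image)
    return [sum(col) / ny for col in zip(*image)]

def fit_line(ys):
    """Step 2: least-squares straight line through ys at x = 0, 1, ..."""
    n = len(ys)
    xs = list(range(n))
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    const = (sy - slope * sx) / n
    return [const + slope * x for x in xs]

def flat_field(image, flat):
    fit = fit_line(collapse_in_y(flat))
    # Step 3: divide each constant-y cross-section of the flat by the fit
    ffc = [[v / f for v, f in zip(row, fit)] for row in flat]
    # Step 4: divide the image by the corrected flat field
    return [[v / c for v, c in zip(row, crow)]
            for row, crow in zip(image, ffc)]

lamp = [100.0, 110.0, 120.0, 130.0]      # lamp response along x
flat = [[0.9 * v for v in lamp],         # vignetted row
        [1.1 * v for v in lamp]]         # brighter row
image = [[2.0 * v for v in row] for row in flat]
result = flat_field(image, flat)
```

In the result the two rows come out identical (the vignetting is gone), but the lamp's spectral shape is untouched, which is exactly the behaviour described in the text.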

The individual Figaro commands to do this might be the following --

EXTRACT FF 1 800 SPRES
SFIT SPRES 5 SPFIT LOGS
ISXDIV FF SPFIT FFC
IDIV MYDATA FFC MYDATAFF

which, given a flat field exposure in FF, and an image in MYDATA, performs a flat field calibration as described above and puts the result in MYDATAFF.

This is a sufficiently common way of proceeding that there is a single Figaro command that performs all of this, without the overheads of running four programs and generating the intermediate files. This is the FF command. The command

FF MYDATA FF 5 MYDATAFF
is functionally equivalent to the above sequence. However, the advantage of splitting up the process is that you can compare SPRES and SPFIT to see how good the polynomial fit to the collapsed data really is. Eg by

SPLOT SPRES
SPLOT SPFIT NOERASE

(You also have the advantage that you can control the limits used when collapsing the flat field; you do not have to use MIN and MAX as the limits, although that is what FF does.)

If the fit is not satisfactory, what can you do instead? Well, the problem is essentially that of generating a smoothed spectrum, and Figaro has a number of ways of doing that.

1 You can actually smooth the spectrum, using IXSMOOTH.

2 You can do it by hand, using CFIT to generate a spline fit between points indicated interactively using the cursor, having first displayed the spectrum with SPLOT.

3 A less interactive way of using spline fits would be to use MCFIT, probably with a zero spectrum for a mask, although you could always use MASK to generate a suitable mask should there be bad regions in the spectrum. (You can generate a zero mask by multiplying the original spectrum by zero, obviously.)

One of these should enable you to get a satisfactory result, although they all require more effort than does the simple FF.

3. B Stars

The atmospheric features that mess up the red end of the spectrum may be calibrated out by multiplying by a calibration spectrum obtained from an observation of an object that is featureless in this region. A `B' star is usually used, hence the term `B star calibration'.

The essential Figaro command for this calibration is BSMULT. This multiplies the spectrum to be corrected by a B star calibration spectrum -- which is essentially obtained by fitting the continuum of a B star and then dividing that fitted continuum by the B star observation. BSMULT allows for the difference in air masses of the two observations; it is this allowance that makes BSMULT necessary, rather than, say, the simpler IMULT.
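The air-mass dependence works roughly like this: band absorption expressed in magnitudes scales approximately linearly with air mass, so the calibration spectrum has to be raised to the power of the air-mass ratio before it is applied. The sketch below illustrates the idea only -- BSMULT's actual algorithm is internal to it, and all the numbers here are invented.

```python
# Scale the calibration spectrum by the ratio of air masses, then
# multiply -- a sketch of why a plain IMULT would not be enough.

def b_star_correct(spectrum, calib, secz_object, secz_bstar):
    scale = secz_object / secz_bstar
    return [s * c ** scale for s, c in zip(spectrum, calib)]

calib = [1.0, 1.25, 1.0]        # > 1 inside an atmospheric band
observed = [10.0, 8.0, 10.0]    # object at twice the B star's air mass
corrected = b_star_correct(observed, calib, 2.0, 1.0)
# -> [10.0, 12.5, 10.0]
```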

BSMULT is a fairly straightforward operation. The problems come in generating the calibration spectrum. At present, the simplest way is to display the B star spectrum using SPLOT and then generate the continuum by hand using CFIT. An alternative is to generate a mask for the lines in the spectrum using MASK and then fit the masked continuum using MCFIT. The main problem with the latter approach is that the chances are that very little of the spectrum is actually uncontaminated either by stellar lines or by the atmospheric bands.

(The best solution is to use an automatic program that fits splines between points on the spectrum that are known to be uncontaminated, but such a program is not available yet -- nor is a really good list of such points that will apply for all wavelength ranges and dispersions. ABLINE has a continuum fitting section that might be applied to the problem.)

The calibration spectrum should really be exactly 1.0 at all points not affected by the atmospheric bands, and it is probably worth displaying the calibration spectrum using SPLOT and then using CSET to set such regions to 1.0 interactively. (This is something else that could be made automatic eventually. Fortunately, it isn't necessary to do this very often.)

Note that BSMULT requires that both the calibration and the object being corrected have valid air masses. Air masses are stored in Figaro .dst data files as `file.OBS.SECZ' and in .sdf data files as `file.more.figaro.secz', and if necessary may be set by hand using the LET command, eg

LET FILE.OBS.SECZ = 1.4 (.dst format)

or
LET FILE.MORE.FIGARO.SECZ = 1.4 (.sdf format)

Also note that the correction applied by BSMULT is multiplicative -- this means that it is not suitable for data that is in logarithmic flux units, such as magnitudes.

4. Filters

If the response of a filter has been tabulated, then the table of values may be used to generate a `spiketrum', which may then be turned into a calibration spectrum. See the section on `Spiketra' for details about these things, which are essentially ways of turning sets of tabulated values into spectra by interpolation.

Say the spectrum to be corrected for a filter response is called SPECT. A table for the filter response might look like --


* Filter transmission table

3000   .05
4000   .10
4200   .55
4400   .90
4600   .95
5000   .95
5500   .95
6000   .95


This is not a particularly realistic table. A proper table should have enough points to ensure that there are a reasonable number of values tabulated over the region of the spectrum to be corrected. With this table in FILTER.TAB, a filter calibration spectrum can be produced and applied as follows --


GSPIKE SPECT FILTER FILTSPIKE
INTERP FILTSPIKE FILTRESP
IDIV SPECT FILTRESP SPECTF

Note that IDIV is used to apply the corrections, which is simply a case of dividing the spectrum to be corrected by the interpolated filter response. The same calibration spectrum may be used for any other spectra that cover the same wavelength range.
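For this table, the correction amounts to interpolating the tabulated transmission onto each element's wavelength and then dividing. The sketch below uses straight-line interpolation (INTERP actually produces a smoother interpolation); the three-element spectrum and all names are invented.

```python
# Table values are those of the example transmission table above.
table = [(3000, .05), (4000, .10), (4200, .55), (4400, .90),
         (4600, .95), (5000, .95), (5500, .95), (6000, .95)]

def transmission(wavelength):
    """Linear interpolation in the table."""
    for (w0, t0), (w1, t1) in zip(table, table[1:]):
        if w0 <= wavelength <= w1:
            return t0 + (t1 - t0) * (wavelength - w0) / (w1 - w0)
    raise ValueError("wavelength outside the table")

waves = [4300.0, 4500.0, 5200.0]
spect = [100.0, 100.0, 100.0]
corrected = [s / transmission(w) for s, w in zip(spect, waves)]
```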

5. Spiketra

A `spiketrum' is a half-way house between a table of values and a spectrum. It has an element for each wavelength value in its range, but only a few of these elements -- those that correspond to the table entries -- actually have non-zero values. Obviously a spiketrum may be generated very simply given just a table of wavelengths and values at those wavelengths, and a range of wavelength values to be covered. Usually an existing spectrum is used as a template to indicate the wavelength values, and the resulting spiketrum has elements that match those of the template spectrum exactly in wavelength. If a spiketrum is plotted, the result is a set of spikes of varying height -- hence the name.

A spiketrum may be turned into a spectrum by interpolating between the spike values.
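In outline, constructing a spiketrum amounts to the following sketch -- each tabulated value is dropped into the template element nearest to it in wavelength, with zeros everywhere else. (The real work, plus the bookkeeping, is done by GSPIKE; all names and numbers here are invented.)

```python
def make_spiketrum(template_waves, table):
    """Place each (wavelength, value) pair in the nearest template bin."""
    spike = [0.0] * len(template_waves)
    for w, value in table:
        nearest = min(range(len(template_waves)),
                      key=lambda i: abs(template_waves[i] - w))
        spike[nearest] = value
    return spike

waves = [4000.0, 4002.0, 4004.0, 4006.0, 4008.0]
spike = make_spiketrum(waves, [(4000.5, 3.5), (4006.6, 4.2)])
# -> [3.5, 0.0, 0.0, 4.2, 0.0]
```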

The GSPIKE command will generate a spiketrum from a table of wavelengths and data values, and the INTERP command will interpolate between the points to generate a spectrum. GSPIKE also records the nearest values that are just outside the wavelength range covered by the template spectrum, so that INTERP may make use of these as well as the actual spiketrum values. GSPIKE also allows some control to be exercised over the data structure of the spiketrum -- `SET' records included in the table file can cause the UNITS or LABEL objects in the structure to be set to appropriate values. As an example, see any of the supplied flux standard table files (files intended for GSPIKE usually have a .TAB extension); these generally set the units of the spiketrum to match those used in the table, so that GSPIKE can produce spiketra whose units are AB magnitudes, micro-Janskys, or whatever, simply depending on what is in the table file used.

For more details about GSPIKE tables, see the section on text file formats.

The main reason for the use of spiketra is that they enable what is essentially tabulated data -- instrumental response values at certain wavelengths, as calculated by CSPIKE, for example -- to be manipulated using the standard Figaro routines designed to manipulate spectra.

Since spiketra are really just spectra, they can be plotted using SPLOT. They may be modified, if necessary, using the fudge commands such as TIPPEX or LET. If SPIKE is a spiketrum and SMOOTH is the interpolated spectrum generated from it by INTERP, the following sequence will generate a hardcopy plot of the two superimposed --





Alternative commands to INTERP for interpolating between the points of a spiketrum are SPIFIT (which fits a global polynomial to the points) and LINTERP (which uses linear interpolation). The SPIED command is designed to help modify spiketrum points in order to influence the interpolation result -- ie to fudge the resulting spectrum.

6. Flux calibration

6.1. Units

First, a brief word about units. Although it may not always be obvious, the underlying philosophy of Figaro is to provide tools which you may use as you wish, and not to force you to reduce your data in a way imposed by the author of the software. It would be in keeping with this for there to be no restrictions on the units that can be used for flux calibration. However, there are practical limitations.

Magnitude units, such as the AB79 system (Oke & Gunn, 1983), give results that have a comfortable feel for optical astronomers, and also fall into a numerically convenient range, generally being numbers between 0 and 25. Unfortunately, being logarithmic (not to mention going backwards), they are, while not actually difficult to handle, sufficiently different from linear units to make it impossible to write software that can deal with them other than as a special case.

Rather than do that, Figaro compromises. The flux calibration routines insist on the calibration being performed using linear units. When the flux calibrated spectrum has been produced in such units, it may then be converted to the AB79 scale. This means that rather than have a lot of software that operates on both linear and logarithmic data, we have one routine (ABCONV) that will perform the conversion.

With that restriction, Figaro will allow you to use whatever linear units you prefer. As will become clearer soon, the units used are determined entirely by entries in the original tables used by the routine GSPIKE, and if you insist on "Joules per minute per square foot per Angstrom" you are quite at liberty to prepare your own table giving the flux density for your standard at the appropriate wavelengths in these units. You will even find that SPLOT will label your axes properly when you plot the data!

As far as linear units go, there are objections to using "ergs per sec per square cm per Hz" and even more so to "ergs per sec per square cm per Angstrom" on the grounds that the numbers involved are ridiculously small (and in the latter case can easily go outside the floating point range of the machine being used, particularly on a VAX, which is a serious problem!). So, mainly to be able to avoid having to type commands such as "SPLOT HIGH=2.5E-27 LOW=1.5E-27", the preferred units for Figaro files are Janskys, mJy, or micro-Janskys.

For most of the flux standards for which Figaro supplies tables, two tables are provided: one in AB magnitudes, usually a direct copy of the published data, and one in (milli- or micro-) Janskys, usually the result of a semi-automatic conversion from the former. If you particularly like ergs/s/cm**2/A, then you can either provide your own flux tables in these units and work from them directly, or you can use the FLCONV command to convert from a spectrum calibrated in Janskys into these units.
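For reference, the arithmetic behind these unit choices can be sketched as follows. The constants are standard physics, not anything read from a Figaro file, and the function name is invented; this is the conversion that FLCONV performs.

```python
# 1 Jansky = 1e-23 erg/s/cm**2/Hz, and f_lambda = f_nu * c / lambda**2.

C_ANGSTROMS_PER_SEC = 2.99792458e18     # speed of light in Angstrom/s

def jansky_to_flambda(f_jansky, wavelength_angstrom):
    """Jansky -> erg/s/cm**2/Angstrom at the given wavelength."""
    f_nu = f_jansky * 1.0e-23           # erg/s/cm**2/Hz
    return f_nu * C_ANGSTROMS_PER_SEC / wavelength_angstrom ** 2

# A 1 mJy source at 5000 A is ~1.2e-15 erg/s/cm**2/A: exactly the sort
# of "ridiculously small" number complained about above.
f = jansky_to_flambda(1.0e-3, 5000.0)
```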

6.2. Standard files

The main Figaro directory (which is identified by the environment variable FIGARO_PROG_S under UNIX or by the logical name FIGARO_PROG_S under VMS) contains a number of files giving the published flux densities of standard stars, all with a .TAB extension. For example, the file G158M100.TAB begins as follows --

* G 1 5 8 -- 1 0 0

* Table file for G158-100, faint object flux standard,
* based on Filippenko and Greenstein (1984) (Preprint,
* submitted to P.A.S.P.) Note that these are fitted
* continuum fluxes, not directly measured fluxes, and
* should be used accordingly. This file is designed for
* use with the Figaro routine GSPIKE. The data here is
* given to 3 decimal places and was supplied directly by
* Alex Filippenko.

SET UNITS = "micro-Janskys"
SET LABEL = "Flux"

3300   891.251
3400   990.831
3500  1135.534
3600  1282.331
3700  1432.187
3800  1599.556
3900  1770.110

The lines beginning with asterisks are treated as comments, and the lines that begin with `SET' are used to set data objects in the file created from this table. An alternative version of this file is G158M100A.TAB, which contains the lines

SET UNITS = "AB Magnitudes"
SET LABEL = "Flux"
SET MAGFLAG = 1

3300  16.525
3400  16.410
3500  16.262
3600  16.130
3700  16.010
3800  15.890

The functions of the UNITS and LABEL lines should be fairly obvious. Setting MAGFLAG to 1 in a file indicates to SPLOT that the data is in magnitude units and so should be plotted with the flux scale reversed. (Note that most .TAB files actually used by Figaro in fact use `.Z.UNITS' rather than just `UNITS'; these were written for the original version of GSPIKE where you were allowed to assume that data units always were held in a file in an item called file.Z.UNITS. Now that Figaro supports .sdf format files as well as .dst format files, this is no longer the case, and the abstract term UNITS is preferred -- however, the original format files still work, since the new GSPIKE uses a conversion table to handle these explicitly named items.) Tables based on data from, for example, Oke and Gunn's 1983 paper will also include the line

SET BANDWIDTH = 40
to indicate the 40 Angstrom bandwidth used by their data. The Filippenko and Greenstein data represents fitted continuum fluxes, so does not have a bandwidth -- a point we shall have to return to very shortly. From the .TAB files already supplied, it should be possible for you to deduce how to create your own, should that be necessary.

Note: There is a Figaro convention regarding the naming of the table files. A directory listing command may be used to find which files are available as tables. The UNIX command

ls $FIGARO_PROG_S/*.tab

or the roughly equivalent VMS command

DIRECTORY FIGARO_PROG_S:*.TAB
will list all the table files supplied in the main directory. Note that not all of these are intended for flux calibration; some may be extinction tables, etc. If in doubt, these are all text files, and should have comments at the start describing their function. To look at these, the UNIX command

more $FIGARO_PROG_S/g158m100.tab
or the roughly equivalent VMS command:

TYPE FIGARO_PROG_S:G158M100.TAB
will list the file for you. You should find that most flux files exist in two incarnations, as implied above, one in a Jansky based unit, one in AB magnitudes. The name of the AB magnitude file ends with `A'. So, for example, the files L74546.TAB and L74546A.TAB both represent the standard star L745-46A, but the former is in Jansky units -- and is therefore probably the one you should use -- while the latter is in AB magnitudes. The fact that the name of the object itself ends in `A' is an unfortunate complication that may be misleading.

6.3. The Final Step.

To anticipate for a moment, the final step in the flux calibration process is carried out by the command SPFLUX. SPFLUX multiplies an observed spectrum by a calibration spectrum, to create a flux-calibrated spectrum. Each element of the calibration spectrum contains a value which is effectively the instrumental response of the detector at a given wavelength, the response being in units of "Flux density units per count per second per Angstrom". The "Flux density units" may be any linear units, eg mJy. SPFLUX assumes that the spectrum to be calibrated is still in counts, and for each element calculates the wavelength range covered by that element, and then combines that with the counts in that element, the value of the calibration spectrum at that element, and the exposure time for the spectrum, to generate a flux density for the central wavelength of the element. The result is, of course, a spectrum in "Flux density units" -- whatever they happen to be.
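Sketched in Python, the per-element arithmetic described above is as follows. How SPFLUX actually estimates the wavelength range covered by each element is internal to it, so the half-way-to-the-neighbours rule used here is an assumption, as are all the names and numbers.

```python
def flux_calibrate(counts, waves, calib, exposure_time):
    """flux density = calibration * counts / (bandwidth * exposure)."""
    n = len(counts)
    out = []
    for i in range(n):
        # wavelength range covered by element i (half-way to neighbours,
        # extrapolated at the ends)
        lo = waves[i - 1] if i > 0 else waves[0] - (waves[1] - waves[0])
        hi = (waves[i + 1] if i < n - 1
              else waves[n - 1] + (waves[n - 1] - waves[n - 2]))
        bandwidth = (hi - lo) / 2.0
        out.append(calib[i] * counts[i] / (bandwidth * exposure_time))
    return out

waves = [4000.0, 4002.0, 4004.0]   # uniform 2 A bins
flux = flux_calibrate([100.0, 100.0, 100.0], waves, [1.0, 1.0, 1.0], 10.0)
# 100 counts / (2 A * 10 s) -> 5.0 flux-density units in each element
```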

SPFLUX is straightforward enough; the problem, as with similar functions, is to generate the flux calibration spectrum.

6.4. Published Standards

Published standards fall into two distinct classes; they are similar, but differ sufficiently that Figaro provides different sets of functions for dealing with them.

Older, brighter, standards such as those in Oke and Gunn (1983) are published as tables giving wavelengths and the corresponding flux densities calculated by measuring the observed flux over a range of wavelength centered on the tabulated wavelength. (This is the significance of the .BANDWIDTH value shown previously.) These tabulated values therefore correspond exactly to what the observed spectrum should look like, even in the presence of absorption features.

More recently, Filippenko and Greenstein (1984) have published tables for fainter stars where they fit a continuum to the observed data and tabulate the value of this continuum at various wavelength values. These values will not therefore represent the actual observed data in regions where there is absorption, and the concept of a `bandwidth' does not apply.

6.5. First Step -- Turning the Table into a Spiketrum

(You may find it useful to review the section on Spiketra before reading on.)

Figaro usually deals with tables, which are tricky to manipulate directly, by turning them into spiketra, which can be manipulated like spectra, although they only have non-zero values at points that correspond to those tabulated. No matter which type of published standard is involved, the first step is to generate a spiketrum from the table.

This requires a template spectrum to give the wavelength scale. The one to use is the observed standard spectrum, preferably already scrunched to a linear wavelength scale. (You can use unscrunched data, but it is not advised.) For example, suppose STANDOBS is such a spectrum of the standard star HD84937. There is a table of flux densities for this star (one of the Oke-Gunn standards) in the main Figaro directory, in the file HD84937.TAB. The command

GSPIKE STANDOBS HD84937 HDSPIKE
will generate a spiketrum called HDSPIKE that can be used by the subsequent steps. If you want a look at it,

SPLOT HDSPIKE
will show it as a series of vertical spikes, giving the flux density at each of the tabulated points in the wavelength range of the observation.

GSPIKE will search directories for the table file in the usual Figaro order: first the default directory, then the user's Figaro directory (UNIX environment variable or VMS logical name FIGARO_PROG_U), then the local Figaro directory (FIGARO_PROG_L), and finally the main Figaro directory (FIGARO_PROG_S). So a table file in any of these will be found, should you need to create your own.

6.6. Second Step -- For Oke & Gunn Data

The command CSPIKE takes this generated spiketrum (HDSPIKE) and combines it with the observation of the standard (STANDOBS) to generate a new spiketrum, whose values are now the instrumental response sampled at the points of the original spiketrum.

CSPIKE HDSPIKE STANDOBS CALSPIKE
generates this new spiketrum and calls it CALSPIKE. The values in CALSPIKE are calculated for each point by summing the counts in the observed spectrum over the appropriate wavelength range, dividing them by the wavelength range and the exposure time, and dividing the result into the flux density value given in HDSPIKE. The units of CALSPIKE are therefore "units per (count per angstrom per second)".
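The calculation for a single tabulated point can be sketched as follows, with invented numbers; the function name is made up for illustration.

```python
# response = tabulated flux density / (observed counts per Angstrom
# per second, summed over the tabulated band) -- the CSPIKE arithmetic
# described above, for one point.

def response_value(table_flux, counts_in_band, bandwidth, exposure_time):
    counts_per_a_per_s = counts_in_band / (bandwidth * exposure_time)
    return table_flux / counts_per_a_per_s

# 80000 counts over a 40 A band in a 100 s exposure, with a tabulated
# flux density of 500 mJy -> 25.0 mJy per (count per Angstrom per second)
value = response_value(500.0, 80000.0, 40.0, 100.0)
```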

CALSPIKE can be turned into the calibration spectrum required by SPFLUX by interpolation. The most direct way to do this is just to use the command INTERP.

INTERP CALSPIKE CALIB
will produce a new spectrum, CALIB, which will be the required calibration spectrum. However, you may find that you are not happy with CALIB. It may not be smooth enough. It may be that if there are regions of absorption in the spectrum, poor alignment of the wavelength ranges will result in some spurious values. It may seem to be cheating, but the most direct way to get round this is to edit the spiketrum CALSPIKE using the SPIED command.

SPIED CALSPIKE CALSPIKE2
will display the spiketrum and allow you to delete points, insert new points, and see the results of spline interpolation or global polynomial fitting to the modified points. When you are happy, you can run INTERP on the modified spiketrum to produce a less honest, but more satisfactory, result.

Global polynomial fitting may seem a trifle crude as a way of interpolating between spiketrum points, but it may be that this gives a better result in some circumstances.

SPIFIT CALSPIKE 5 CALIB
would be an alternative to the use of INTERP to generate the calibration spectrum.

6.7. Second Step -- For Filippenko & Greenstein Data

The most direct way to deal with data where the published values represent fitted continuum values is to fit a continuum to your observed data, interpolate the tabulated data, and divide the two resulting spectra to get a calibration spectrum.

Fitting a continuum is a task that is not easily automated. The simplest way in Figaro is to display the spectrum and then use CFIT. For example:

SPLOT STANDOBS
CFIT STANDFIT

will enable you to use the graphics cursor to indicate continuum points on the displayed spectrum and thus generate the continuum spectrum by spline interpolation. A messy job, but not one you have to do often. Remember that the messy parts of the flux calibration are connected with generating the calibration spectrum, and you only have to do that once; applying it is simple.

The published points represent a smooth curve, so just interpolating directly between them should be quite satisfactory. That is, there should be no need to edit the spiketrum generated by GSPIKE prior to using INTERP, although SPIED is always available if necessary. This time, since HD84937 is an Oke-Gunn standard, let's use G158-100 instead. So the first two steps will be

GSPIKE STANDOBS G158M100 GMSPIKE
INTERP GMSPIKE GMFIT
generates GMFIT as the interpolated spectrum from the published points.

Dividing GMFIT by STANDFIT will now generate the required calibration spectrum. Note that IDIV will not do, since the division has to allow for the wavelength range of each element, and for the exposure time. The command CALDIV must be used instead.

CALDIV GMFIT STANDFIT CALIB
will generate CALIB as the required calibration spectrum.

6.8. An Alternative Second Step -- Filippenko-Greenstein Data

An alternative way of dealing with this type of data would actually be to treat it as if it were Oke-Gunn data and use CSPIKE and INTERP as described earlier. This is probably satisfactory so long as the data does not cover a range with significant absorption features. (If it does, you can always remove the bad points with SPIED before interpolating.)

If you do this, you have to supply a bandwidth, since the table file will not have specified one. You can put one into the spiketrum generated earlier by GSPIKE, using the LET command. For this you have to know something about the internal details of the format used -- you need to know where the `BANDWIDTH' item is held. For a .dst format file, you use


and for a .sdf format file you use


This will set the bandwidth to 40 Angstroms. The effect of this is going to be similar to smoothing the observed spectrum with a filter 40 Angstroms wide in order to get the continuum values at the points specified by the table. After that, you just go through the CSPIKE, (SPIED), INTERP sequence shown earlier, finally ending up with a calibration spectrum called CALIB.

6.9. The Final Step Revisited

Given CALIB as the calibration spectrum, no matter how it was generated, it is applied as follows:

SPFLUX OBS CALIB CALOBS
which calibrates a spectrum called OBS, creating a resulting spectrum called CALOBS.

At the risk of being obvious, you should be aware of what you have created here. Each element of CALOBS is a sample at a particular wavelength of the continuous flux density function; it is not a measure of the total flux within a wavelength bin of finite width, which is what the original spectra were. Apart from anything else, adding two such spectra has the magic (and totally unreasonable) effect of doubling the flux of the object! Generally, this is not something to worry too much about, but I personally find it disconcerting to have a spectrum like this generated with a non-linear wavelength scale. (I have a gut feeling that if the bin covers half the wavelength range it should have half the value, and that is not true for these spectra.) That is the reason for the warning about not using unscrunched data.

6.10. AB Magnitudes, and a Test

A calibrated spectrum generated in Jansky units (and others, eventually) can be converted to AB magnitudes by ABCONV. Our CALOBS spectrum may be converted by

ABCONV CALOBS CALOBSAB
(ABCONV works out the units of the input spectrum by looking at a UNITS data object in the input file. In some cases, it may not recognise the units -- this can happen with spectra that have come from some other system via one of the translation programs. If that happens, and you know that the units are, say, Janskys, you can set the units by hand using

LET CALOBS.Z.UNITS = "Janskys"
for .dst files, or

LET CALOBS.UNITS = "Janskys"
for .sdf format files.

ABCONV will recognise "mJy", or anything that contains "Jansky" and the words "milli" or "micro". The same remarks apply to FLCONV.)
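The conversion that ABCONV applies is the AB79 definition (Oke & Gunn, 1983): AB = -2.5 log10(f) - 48.60 with f in erg/s/cm**2/Hz, which for f in Janskys is AB = -2.5 log10(f) + 8.90. A sketch, with an invented function name:

```python
import math

def ab_magnitude(f_jansky):
    """AB79 magnitude from a flux density in Janskys."""
    return -2.5 * math.log10(f_jansky * 1.0e-23) - 48.60

ab_zero = ab_magnitude(3631.0)   # ~3631 Jy corresponds to AB = 0
```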

An interesting test of the system is to calibrate an object using itself, convert the result into AB magnitudes, and then compare the result with a spiketrum generated from the published AB magnitude tables. For example, remembering that our original spectrum of HD84937 was STANDOBS, and given that there is a table called HD84937A.TAB in the main Figaro directory that has the values in AB magnitudes, we can try

SPFLUX STANDOBS CALIB STANDCAL
ABCONV STANDCAL STANDAB
GSPIKE STANDOBS HD84937A ABSPIKE
SPLOT STANDAB
SPLOT ABSPIKE NOERASE
The result should be the spectrum of HD84937 in AB magnitudes, with a series of spikes just touching the spectrum, indicating that (at least at the tabulated wavelengths!) the spectrum has been calibrated to the correct value.

A much tougher test would be to use one standard to calibrate another, and then compare the result with the tabulated values. This does of course require that you really do have spectrophotometric data, with no filter changes, with all of the object in the slit in both cases, and with no clouds in the way.

6.11. Summary

A) For Oke-Gunn type data --

GSPIKE STANDOBS HD84937 HDSPIKE
CSPIKE HDSPIKE STANDOBS CALSPIKE
INTERP CALSPIKE CALIB
SPFLUX OBS CALIB CALOBS

B) For Filippenko-Greenstein type data --

GSPIKE STANDOBS G158M100 GMSPIKE
INTERP GMSPIKE GMFIT
SPLOT STANDOBS
CFIT STANDFIT
CALDIV GMFIT STANDFIT CALIB
SPFLUX OBS CALIB CALOBS

6.12. References

Filippenko, A.V., Greenstein, J.L.; 1984, Pub.A.S.P. 96, 530

Oke, J.B., Gunn, J.E.; 1983, Ap.J. 266, 713

7. FFT Functions

A number of Figaro functions are available to manipulate complex data, generally with a view to its being used for some process involving Fourier transforms. While there are packaged Figaro routines, such as SCROSS, which make use of Fourier transforms internally, the functions covered in this section perform the more elementary operations, and can be put together to form a sequence of operations that duplicates the processing performed by, say, SCROSS, while enabling finer control to be exercised over the details of the procedure. The general design of this set of routines is based on those provided as part of the SDRSYS system (Straede, 1985). In these notes the term `Fourier transform' is used rather freely; it should be realised that in all cases it is the discrete Fourier transform that is meant.

7.1. Complex Data Structures

Figaro defines a `complex data structure' in a fairly precise way. A complex data structure is a structure containing two arrays, one holding the real part of the complex data, and one holding the imaginary part. These arrays have to be the same size and shape, although they may have any number of dimensions. The dimensions of the arrays must be such that the Fast Fourier Transform algorithm (Cooley and Tukey, 1965) may be applied to them. Different implementations of the FFT have different restrictions -- many require that the number of elements be a power of 2. The NAG library used by the Figaro routines requires that, in each dimension, the number of elements must have fewer than 20 prime factors, none greater than 19. The basic Figaro routines for constructing complex data structures, R2CMPLX and I2CMPLX, will zero extend data to enforce compliance with this requirement.
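The dimension rule described above is easy to sketch in code: a length is acceptable if it has fewer than 20 prime factors, none greater than 19, and padding goes up to the next acceptable length. The function names are invented; R2CMPLX's actual search strategy is internal to it.

```python
def prime_factors(n):
    """Prime factorisation by trial division (fine for array sizes)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_acceptable(n):
    """Fewer than 20 prime factors, none greater than 19."""
    f = prime_factors(n)
    return len(f) < 20 and all(p <= 19 for p in f)

def next_acceptable(n):
    """Smallest acceptable length >= n (the zero-padding target)."""
    while not is_acceptable(n):
        n += 1
    return n
```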

In a Figaro complex data structure the main data array is the real part of the complex data. This means that most of the Figaro routines may be applied to a complex data structure, and if this is done it will be the real part of the data that they operate on. For example, an easy way to look at complex data is with SPLOT, which will happily plot the real part of the data, quite ignorant of the fact that the data structure in question is complex. To plot the imaginary part, or the modulus of the data, these must first be extracted using CMPLX2I or CMPLX2M respectively. It is important to note that if an ordinary Figaro function is used to change this data, for example by


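A plausible example, assuming ICMULT's usual input, factor, output parameter order:

```
ICMULT CMPLXSPECT 2 CMPLXSPECT
```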
which multiplies the real array by 2, the imaginary part will be unchanged. Generally, to avoid unnecessary conversions before using the NAG FFT routines, the real and imaginary arrays are held in double precision. However, this is not a strict requirement.

7.2. Creating a Complex Structure

In most cases, one starts with an `ordinary' (ie non-complex) file and wants to produce a complex structure in which this is the real part of the complex data and the imaginary part is zero. This is what the command R2CMPLX does. For example,


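A plausible invocation (the input, output parameter order is assumed here):

```
R2CMPLX MYSPECT CMPLXSPECT
```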
will generate a complex structure with the real part taken from MYSPECT, and the imaginary part will be set to zero. If the dimensions of MYSPECT are not suitable for application of the FFT algorithm, the complex data will be zero padded at high element numbers to the nearest higher acceptable dimensions. If this happens, you are warned about it, since this may have undesirable consequences. In general, you should not be attempting to transform data that does not go smoothly to zero at each end (see COSBELL), but if you are then the results obtained by transforming zero-extended data will show components due to the sudden drop in value from the end of the data to the zero values that have been appended as padding.

If R2CMPLX has to zero pad the main data array, it will also attempt to extend the data arrays in the axis sub-structures. If these are linear (if the data has been scrunched, for example), it will be able to do this. It will delete any axis structure that contains non-linear data.

You may prefer to exercise more precise control over the dimensions of your data and the way they are extended. For example, if your data is a spectrum covering 1524 bins from 3000.0 to 4000.0 Angstroms, you will find that 1524 is not an acceptable dimension. The closest acceptable values are 1521 and 1530 (and the easiest way to determine those values is to run R2CMPLX on your data and let it tell you). You could then decide that rather than let your data be zero extended by R2CMPLX, you would rather rebin it yourself to 1530 pixels. Probably you would use SCRUNCH, eg

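A plausible SCRUNCH invocation for this case might be the following, where the parameter order (spectrum, start and end wavelengths, number of bins, output) is assumed, and MYSPECT1530 is just an arbitrary name for the rebinned output:

```
SCRUNCH MYSPECT 3000.0 4000.0 1530 MYSPECT1530
```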



In some cases, you may want to set the imaginary part of a complex structure as well as the real part. In this case, you can use I2CMPLX, which takes the data from a non-complex structure and uses it to set the imaginary part of an existing complex structure. This means that the complex structure has to be created initially by R2CMPLX, and then the imaginary part can be set by I2CMPLX (which will also modify the modulus array appropriately). So, if you have two spectra called RSPECT and ISPECT which you want to be the real and imaginary parts of a complex spectrum, the sequence is


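A plausible form of that sequence, assuming I2CMPLX takes the non-complex data first and the existing complex structure second:

```
R2CMPLX RSPECT CMPLXSPECT
I2CMPLX ISPECT CMPLXSPECT
```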

and the order of operations is important; doing the I2CMPLX step first will produce quite a different result: the I2CMPLX will fail unless CMPLXSPECT happens to exist already, and even if it succeeds the subsequent R2CMPLX will simply produce a new version with a zero imaginary array.

I2CMPLX will zero extend its input data in the same way as R2CMPLX does, should it be necessary, and the same comments about the desirability -- or lack of it -- apply.

There is no procedure supplied to create a complex structure with the imaginary data taken from a specified file, say ISPECT, but with the real part set to zero. It was thought that this would be an unusual requirement. If needed, the following sequence may be used:


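One plausible such sequence relies on the fact, noted earlier, that ordinary Figaro functions operate only on the real part of a complex structure (parameter orders are assumed):

```
R2CMPLX ISPECT CMPLXSPECT
I2CMPLX ISPECT CMPLXSPECT
ICMULT CMPLXSPECT 0 CMPLXSPECT
```

The final ICMULT zeroes the real array while leaving the imaginary part, set from ISPECT, untouched.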


7.3. Going Smoothly to Zero at the Ends -- COSBELL

The simplest way to ensure that your original data goes smoothly to zero at the ends is to multiply it by a filter that is unity for most of the spectral range but goes down to zero at each end in a smooth manner. The most common form for such a filter is the `cosine bell', and this is what is generated by the Figaro COSBELL function. (For a detailed discussion, see Brault & White, 1971, and the references they quote).

The only parameter needed by COSBELL is the percentage of the data that is to be covered by the bell shapes at each end of the data. 10% is a common value to use. COSBELL uses an input data structure as a template and generates a structure that is the same as the template except for the data itself. Usually, you use the data to which you intend to apply the filter as the template. So, for example, to apply a 10% cosine bell to the data in MYSPECT,


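a plausible sequence (COSBELL and IMULT parameter orders assumed) would be

```
COSBELL MYSPECT 10 BELL
IMULT MYSPECT BELL MYSPECT
```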

At present, COSBELL cannot handle data with more than two dimensions.

7.4. Taking the Fourier Transform

Actually taking the Fourier transform of a complex data structure is quite straightforward. The forward transform is performed by the Figaro function FFT and the reverse transform is performed by BFFT. The only parameters for FFT and BFFT are the names of the input and output structures. There are no restrictions on the number of dimensions in the data.

For example, to calculate the Fourier transform of the spectrum held in the non-complex structure MYSPECT, it should be enough to do:


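a plausible pair of commands such as

```
R2CMPLX MYSPECT CMPLXSPECT
FFT CMPLXSPECT CMPLXSPECT
```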

which results in CMPLXSPECT containing the FFT of MYSPECT. If the power spectrum of MYSPECT is required it can be obtained by CMPLX2M. For example, it may be plotted using:


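a plausible sequence such as the following, where the CMPLX2M parameter order is assumed and MODSPECT is just an illustrative output name:

```
CMPLX2M CMPLXSPECT MODSPECT
SPLOT MODSPECT
```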

Actually, the modulus is the square root of the power spectrum; the statement that the power spectrum is available represents a slightly cavalier attitude towards the proper usage of terms. However, if the power spectrum is only needed in order to get a feel for the frequency distribution of the original data, then the modulus will generally do just as well.

Often, a power spectrum needs to be plotted logarithmically to produce a sensible plot, so it is quite acceptable to try


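something like the following, assuming ILOG takes input and output names and that MODSPECT holds the modulus extracted by CMPLX2M:

```
ILOG MODSPECT LOGMOD
SPLOT LOGMOD
```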

The NAG FFT routines generate data with the low frequency terms in the lowest numbered elements in each dimension of the data. FFT (the Figaro program) re-orders the transformed data so that the zero frequency term is in the center of each dimension, and the resulting data now goes from -N to +N (where N is the Nyquist frequency). The plots of the modulus made as described above will show a plot with an axis labelled from -1 to +1 (the unit being the Nyquist frequency), that is symmetrical about the center and which peaks (for most input data) in the center. New axis structures are created by FFT to reflect the new axis values. The reason for this re-ordering of the data is that it seems to be easier to visualise complex filters, particularly in two dimensions, when the low frequency terms are collected in the middle of the data rather than scattered into the corners.

The reverse FFT is performed by BFFT. Note that the precise definitions of `forward transform' and `reverse transform' differ between FFT implementations. The NAG routines, for example, do not introduce any scaling factors between data that is first forward and then reverse transformed and the original data. This means that the sequence



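(a plausible rendering of the sequence, with parameter orders assumed)

```
R2CMPLX MYSPECT CMPLXSPECT
FFT CMPLXSPECT CMPLXSPECT
BFFT CMPLXSPECT CMPLXSPECT
CMPLX2R CMPLXSPECT NEWSPECT
```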


should generate a NEWSPECT that is exactly the same as the original MYSPECT. If MYSPECT were zero padded by R2CMPLX to increase its length for the FFT, then NEWSPECT will have the padded length, but (in theory, at least) the end elements should be zero. That is, NEWSPECT should look exactly like the padded spectrum before it was transformed by FFT. The NAG routines used are those in the C06 chapter, and the NAG manual should be consulted for more details.

7.5. Extracting the Real and Imaginary Parts

The previous section sneaked in a reference to the function CMPLX2R, in the hope that what it did was obvious. As the name is intended to imply, CMPLX2R is the reverse of R2CMPLX, and generates a non-complex data structure whose data array is taken from the real part of the data in a specified complex data structure.

CMPLX2R does not make any changes to the dimensions of the data array. It does not attempt to clip it back to some value it had before being padded by R2CMPLX. So, for example, if MYSPECT has a single dimension of 1524, then the sequence in the previous section that transformed MYSPECT and then transformed it back into NEWSPECT will produce a NEWSPECT with a length of 1530. Getting back to the original in this case requires that the data be subsetted:

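a plausible command, assuming ISUBSET takes start and end element values for a one-dimensional spectrum (the exact prompts may differ):

```
ISUBSET NEWSPECT 1 1524 NEWSPECT
```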

Analogous to CMPLX2R are CMPLX2I and CMPLX2M, which extract the imaginary part of the data and the modulus of the data. Usually, there will be little point in explicitly extracting the real part, since it is already available to most Figaro functions as described earlier. However, the file generated by CMPLX2R will be smaller than the original file, since it will no longer contain the imaginary array, and this may be a point in its favour.

7.6. Operations on Complex Data

There isn't a great deal of point in just transforming data back and forth, unless it's to get a feel for the errors introduced by such a process. Usually one transforms into the Fourier domain in order to do something useful there. The obvious Fourier domain operations are multiplication (equivalent to a convolution operation in normal space), and multiplication by the complex conjugate (equivalent to a cross correlation in normal space).

Figaro provides a number of functions that operate on complex data to provide a complex result. The operations performed by these should be fairly obvious from their names: CMPLXMULT, CMPLXDIV, CMPLXADD, CMPLXSUB, and CMPLXCONJ. The most useful will probably be CMPLXCONJ, which produces the complex conjugate of a complex data structure, and CMPLXMULT, which performs the complex multiplication of two complex data structures.

For example, the cross-correlation function of two spectra may be determined crudely by transforming each, taking the complex conjugate of one, multiplying the two and transforming back. This is crude, because it omits any filtering and preparation of the data, but it will serve as a demonstration of the complex operations involved. If the two spectra in question are SPECT1 and SPECT2, then





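a plausible version of the sequence, with all parameter orders assumed and CSPECT1, CSPECT2 and CSPECT3 just illustrative names for the intermediate complex structures:

```
R2CMPLX SPECT1 CSPECT1
FFT CSPECT1 CSPECT1
R2CMPLX SPECT2 CSPECT2
FFT CSPECT2 CSPECT2
CMPLXCONJ CSPECT2 CSPECT2
CMPLXMULT CSPECT1 CSPECT2 CSPECT3
BFFT CSPECT3 CSPECT3
CMPLX2R CSPECT3 CORRLN
```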




will produce a cross correlation function in CORRLN.

7.7. Complex Filters and Data Smoothing

Because the data in the Fourier domain is in frequency order, it is often simpler to create a filter in the Fourier domain than to produce the corresponding convolution function in normal space and then transform it. In fact, determining the optimum filter for data is often best done by examining the power spectrum of the data to be filtered and then designing the filter around it. Brault and White (1971) spend some time on this topic.

At present, Figaro provides only one function that generates complex filters, and this is CMPLXFILT. This produces filters of the type described by Hunstead (1980) for use in cross-correlation measurements. These are mid-pass filters formed by taking a gaussian that peaks at zero frequency and drops to a half height at a specified high cut value, and subtracting a narrower gaussian that rises from zero reaching its half height at a specified low cut value. That is, the functional form of the filter is given by

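a plausible reconstruction, consistent with the surrounding description (two gaussians with sigma values v and u, the narrower subtracted from the broader), being

```latex
F(x) = \exp\left(-\frac{x^2}{2v^2}\right) - \exp\left(-\frac{x^2}{2u^2}\right)
```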

where u and v determine the low frequency and high frequency cutoff values. From this definition it is clear that u and v are just the sigma values for the two gaussians, and that what has been rather loosely termed a `half height' is in fact the point at which the gaussian has a value of exp(-1/2) = ~0.6. They are specified for CMPLXFILT in terms of the Nyquist frequency, so a filter might have, say, u=0.1 and v=0.3. The best way to get a feel for these filters is to generate them and plot them superimposed on the power spectrum of the data to be filtered.

If the low cutoff value is specified as zero, CMPLXFILT generates a low pass filter whose functional form is just

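which, reconstructing from that description, should be the broad gaussian term alone:

```latex
F(x) = \exp\left(-\frac{x^2}{2v^2}\right)
```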

ie a gaussian that drops from unity at zero frequency, having a half width of v. Note that the cyclic nature of the Fourier domain data means that the filter generated is actually symmetrical about the mid point of the data -- this is something to be remembered if you want to try to produce other, more specific, filters by generating them yourself. The imaginary parts of the filters generated by CMPLXFILT are always zero. This means that CMPLXMULT will have the effect of just multiplying both the real and imaginary parts of the data to be filtered by the real part of the filter (obviously, since [a+ib]*[c+id] = ac+ibc if d=0).

CMPLXFILT requires a template complex data structure, which it will use as the basis for the filter it produces. Normally, this will be the data to be filtered, although any data of the same dimensions will do. If the template data is n-dimensional, so will be the resulting filter. The low and high cutoff frequencies will be the same in all dimensions. Since they are specified in terms of the Nyquist frequency, this is usually fairly satisfactory.

So, to take an easy example, an elaborate way of smoothing a spectrum, MYSPECT say, by gaussian convolution, would be







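a plausible rendering of the full sequence being the following, where the parameter orders are assumed, WORK is just an illustrative name for the zero-mean working spectrum, and STAT_MEAN is the Figaro user variable set by ISTAT:

```
ISTAT MYSPECT
ICSUB MYSPECT STAT_MEAN WORK
COSBELL WORK 10 BELL
IMULT WORK BELL WORK
R2CMPLX WORK CMPLXSPECT
FFT CMPLXSPECT CMPLXSPECT
CMPLXFILT CMPLXSPECT 0 0.3 FILTER
CMPLXMULT CMPLXSPECT FILTER CMPLXSPECT
BFFT CMPLXSPECT CMPLXSPECT
CMPLX2R CMPLXSPECT MYSPECT
IDIV MYSPECT BELL MYSPECT
ICADD MYSPECT STAT_MEAN MYSPECT
```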






where the 0,0.3 parameters for CMPLXFILT indicate that a low pass filter is to be created. The 0.3 indicates a cut at around a third of the Nyquist frequency for the data, and the actual value is one best determined by comparison with the power spectrum of MYSPECT, obtained by performing a SPLOT of CMPLXSPECT just after the FFT is obtained. Being able to do this is really the main justification for indulging in such a complex procedure when IXSMOOTH could do the same job far faster.

Note that the sequence given above is a pretty complete one. The use of ISTAT at the start is to determine the mean level of the data in the spectrum and the result of the ICSUB is to reduce the spectrum to a zero mean level. This reduces the constant component of the power spectrum and should produce a better result. (Note that ISTAT sets the Figaro variable STAT_MEAN to the mean value it determines.) The data is apodised by the application of a 10% cosine bell prior to the transform. Both the effects of the cosine bell and the subtraction of the mean level are reversed at the end of the sequence. Note that this assumes that the data did not have to be zero padded by R2CMPLX; if it were padded, the resulting MYSPECT would be longer than BELL, and the IDIV step would fail.

7.8. Cross Correlation, and a Sequence Something Like SCROSS

As an additional example, consider the following sequence, which attempts to duplicate the functions performed by SCROSS. It is not an exact duplication, since the internals of SCROSS are rather different to those of these FFT routines (SCROSS does not use NAG, for example), and the filters used by SCROSS also differ from those generated by CMPLXFILT.










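a plausible rendering of the sequence being the following, where SFIT is assumed to produce the polynomial fit (subtracted with ISUB), all parameter orders are assumed, and the intermediate names (S1, C1 and so on) are purely illustrative:

```
SFIT SPECT1 4 FIT1
ISUB SPECT1 FIT1 S1
SFIT SPECT2 4 FIT2
ISUB SPECT2 FIT2 S2
COSBELL S1 10 BELL
IMULT S1 BELL S1
IMULT S2 BELL S2
R2CMPLX S1 C1
FFT C1 C1
R2CMPLX S2 C2
FFT C2 C2
CMPLXCONJ C2 C2
CMPLXMULT C1 C2 C3
CMPLXFILT C1 0.1 0.3 FILTER
CMPLXMULT C3 FILTER C3
BFFT C3 C3
CMPLX2R C3 RESULT
PEAK RESULT
```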









RESULT now contains the cross-correlation of the two spectra, SPECT1 and SPECT2. PEAK is a utility produced especially for this sort of sequence, listing the position of the highest peak in the spectrum it is given in terms of a shift from the center of the first element. This is in fact the relative shift of the two spectra SPECT1 and SPECT2.

Note that the cross-correlation peak will in fact not be central in RESULT, but will be at one end or the other. It is often easier to handle data like this if the data is rotated so that the peak is roughly central, and this can be done by the function ROTX.

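a plausible command (parameter order assumed) being

```
ROTX RESULT 765 RESULT
```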

where 765 is half the number of pixels in the data (here assumed to be 1530). This has the effect of turning RESULT into the cross-correlation spectrum that might have been produced by SCROSS.

Some of the other strange numbers in the sequence may need explanation. The `4's in the SFIT indicate that the sequence fits a 4th order polynomial to the spectra, and then subtracts away the polynomial fit. This is in fact what SCROSS does, unless the fitting is disabled. The 0.1,0.3 parameters in CMPLXFILT are quite arbitrary, and the optimum values to use are in fact best determined either by looking at the power spectrum of the data to be filtered, or by consideration of the types of features one wants to emphasise in the determination of the correlation (Hunstead, 1980).

7.9. Summary of Figaro FFT Routines

BFFT Takes the reverse FFT of complex data.

CMPLX2I Extracts the imaginary part of a complex data structure

CMPLX2M Extracts the modulus of a complex data structure.

CMPLX2R Extracts the real part of a complex data structure.


CMPLXADD, CMPLXSUB, CMPLXMULT, CMPLXDIV Perform arithmetic operations on two complex data structures, giving a third. Cf. IADD etc.

CMPLXCONJ Produces the complex conjugate of a complex data structure

CMPLXFILT Creates a mid-pass filter for complex data

COSBELL Creates data that goes to zero at the edges in a cosine bell

FFT Takes the forward FFT of complex data.

I2CMPLX Copies a data array into the imaginary part of a complex data structure.

PEAK Determines position of highest peak in a spectrum.

R2CMPLX Creates a complex data structure from a real data array.

ROTX Rotates a data array along the X axis by an integer number of pixels.

7.10. References

Brault, J.W., and White, O.R. 1971, Astron. Astrophys., 13, 169

Cooley, J.W., and Tukey, J.W. 1965, Math. Comput. 19, 297

Hunstead, R.W. 1980, Proc. A.S.A. 4, 77

Straede, J.O. 1985, Scanner Data Reduction System User's Manual, Anglo-Australian Observatory.

8. Figaro and FITS

The Figaro data format, in terms of what it can contain, is a superset of the FITS format. What this means in practice is that all the data on a FITS tape can be read into a Figaro file, but not all the data held in a Figaro file can be written out to a FITS tape. Internally, a Figaro data file is a hierarchical, self-describing data structure, and it bears almost no resemblance to the internal structure of a FITS file. The FITS standard is constantly being revised and extended, and with the binary tables extension it is now much more compatible with Figaro files, to the extent that one might imagine juggling a Figaro file's hierarchical structure into FITS tables. However, the Figaro FITS routines do not yet make use of binary tables, restricting themselves to conventional FITS keywords.

8.1. The FITS data structure in a file

The Figaro data format allows a file to have a set of FITS keywords associated with it. These are generally intended purely to record the FITS keywords associated with a FITS image read from tape or disk -- to provide somewhere to put them rather than just ignoring them -- and to provide a source of FITS keywords should you want to write the Figaro format file to tape. However, some Figaro programs use FITS keywords for specific purposes. Programs connected with specific processing for a particular instrument, such as the AAO's IRIS and FIGS detectors, tend to do this. Since they know which instrument must have created the data, they know what keywords to expect to find. However, in general a program should not need to know the detailed origins of the data it is processing, and in most cases FITS keywords in the file are just there as a matter of record.

8.2. Reading FITS Tapes

The FITS command will read an image from a FITS tape and create a file containing the image and the header data. It can handle almost all FITS tapes as described in the original FITS reference (Wells et al, 1981). It will not handle UV-FITS tapes (Greisen & Harten 1981), nor can it deal with the recent `tables' extensions. However, it can handle tapes written using the newly introduced `BLOCKED' keyword (Grosbol et al, 1988).

The FITS routine recognises the mandatory keywords (SIMPLE, BITPIX, NAXIS, NAXISn) and BSCALE and BZERO. If the `FLOAT' keyword is specified it applies BSCALE and BZERO to produce floating point data. If `FLOAT' is not specified, and if the tape has BSCALE=1 and BZERO=0, then it leaves the data as integers just as they appear on the tape, since this will avoid the overheads of converting the data and will not introduce any errors in the data (which is obviously integer anyway). If BSCALE and BZERO are not 1 and 0 respectively, then floating point data will be generated no matter whether FLOAT is specified or not. (The only exception is the special case where BSCALE = 1.0, BZERO = 32768.0 and BITPIX=16. This combination happens to be easy to turn into unsigned 16 bit integer data with a simple bit flipping operation, so this is what is generated if FLOAT is set false. This combination is generated by the Figaro FITS writing programs when they write out unsigned 16 bit data.)

In general, the FITS program cannot be expected to know the meanings of most of the other FITS keywords, since these are determined solely by the writer of the tape. Rather than just lose the information contained in these keyword values, FITS creates the special FITS structure described earlier, in which it places all the FITS keywords. For example, the OBJECT keyword is used to generate a character item in this structure called OBJECT. The following keywords are placed in the FITS structure, but are also assumed to have the meaning described in the FITS reference -- OBJECT, CRVALn, CRPIXn, CDELTn and CTYPEn. So FITS will use CRVAL1, CDELT1 and CRPIX1 to set up an X-axis structure with a coordinate array filled appropriately. Comments included in the FITS header will be held as comments, but the order of the comments is not guaranteed to be retained.

8.3. Writing FITS Tapes

There are similar problems with writing FITS tapes from Figaro files: how much of the Figaro file do you try to get onto the tape? The command WIFITS is something of a compromise. It writes a single `image', which is always the main data array -- note that this does not have to be two dimensional -- together with whatever data objects in the specific FITS structure described earlier (if one exists at all) it can handle. Note that it can only handle scalar values -- FITS keywords have to be single valued -- so it cannot handle arrays. The result is that FITS and WIFITS are fairly compatible, in the sense that if WIFITS writes out a file created by FITS, the resulting tape file will contain the same information as the original. (Although WIFITS will probably generate different values for BSCALE and BZERO, which it selects automatically.) Remember that WIFITS needs the output tape to be specified via the TAPEO command before it can be used.

WIFITS can handle calibration data so long as the calibration arrays -- the axis data arrays -- contain linear data. If a spectrum has been scrunched, for example, the X axis data array will be a linear array. WIFITS does this by calculating the appropriate values for the FITS keywords CRPIXn, CRVALn, and CDELTn and writing these to the header. (Note that if the FITS structure also contains these keywords there could be a conflict here.)

In Figaro files, comments may be associated with FITS keywords, and WIFITS can access these and include them in the FITS header that it creates.

8.4. Accessing the FITS keywords in a file

You will probably want to be able to read the values of the FITS keywords in a Figaro file. You may want to be able to change them, either so that WIFITS can include correct values in the headers it writes, or so that a program that is going to make explicit use of these values can do so.

The EXAM command can show you the detailed contents of a data file, and this used to be the usual way of looking at FITS keywords. The LET command is the easiest way of changing individual items in a file, and this used to be the usual way of modifying FITS keyword values. However, this is now complicated by the use of more than one file structure for Figaro files.

In the traditional Figaro format files (.dst format) the FITS specific structure is called `file.FITS' and has separate structure items for each keyword. In an NDF format file (a .sdf file) the FITS keywords are held in a single character array that looks very like the original FITS header. This is much more efficient to create, but is slightly harder to access.

The preferred way to examine FITS keywords in a Figaro file is to list them using FITSKEYS. This lists all the keywords in the file, and it hides any differences between .sdf and .dst file formats. You can use EXAM if you like, but you may need to increase the value of the LINES parameter if you are looking at a .sdf file with a large number of keywords. (LINES is used to control how much of a large character array -- such as that used by the FITS keywords in .sdf files -- is output by EXAM.)
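Since FITSKEYS only needs to be told which file to examine, its invocation should presumably be as simple as the following (for a file called MYFILE):

```
FITSKEYS MYFILE
```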

The preferred way to set the values of FITS keywords is to use the FITSET command. For example, to create a new keyword called FRED -- or to change the value of an existing one -- in a file called MYFILE you could use

FITSET MYFILE FRED 10 "Fred has the value 10"



The value specified for the keyword may be a numeric value or a character string, and the second character parameter is the comment associated with the keyword.

In .dst files you can still set the values of individual FITS keywords using LET, but LET is not able to manipulate the contents of an array such as that used by .sdf files to hold FITS keywords.

8.5. WIFITS and array values

An old technique for getting additional information from Figaro files into FITS tapes written by WIFITS was simply to use LET to copy the contents of the .OBS structure (which in .DST files contains some information associated with the original observation) into the .FITS structure. The command used -- note that this only applies to .dst files -- was something like

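the exact form not being reproduced here; for a single item it might plausibly have looked like the following, where the item name SECZ is purely illustrative:

```
LET FILE.FITS.SECZ = FILE.OBS.SECZ
```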

Since WIFITS can only write out scalar numeric values and character strings from the .FITS structure, even this cannot, for example, deal with .OBS.RA and .OBS.DEC, which some older Figaro programs store as arrays of three elements each. All that will get onto the tape will be the OBS.DECS flag, which is a single character (`+' or `-'), so you will at least be able to locate the object to within half the sky! You could calculate the values yourself and set them by hand, eg

FITSET FILE RA = 19.245 \\

FITSET FILE DEC = -30.15 \\


However, this sort of problem is really best solved by the use of a specialised Fortran program. At AAO, where files with these RA and DEC arrays are common, such a program has been written for local use. This could be made available as a template for the solution of similar problems -- it works for both .sdf and .dst file formats, since the Figaro subroutine libraries contain routines that specialise in getting the values of named FITS keywords independent of the file format used.

8.6. Disk `FITS' Formats

FITS was originally a tape format. There used to be no such thing as a FITS disk file. Nonetheless, there are obvious attractions to being able to write a disk file using such a well defined format as FITS, and there are a number of programs in existence in various places that write what tend to be described as `FITS disk files', and the use of such files has now become quite common.

Under UNIX, there is little difficulty with the concept of a `disk FITS' file, since UNIX sees any file as just a sequence of bytes, and these can be in a disk file just as readily as on a tape. The only real scope for `cheating' is if a program were to decide to use the natural byte order of the machine it runs on rather than the byte order defined by FITS. This might be justified on a machine whose natural byte order is not the FITS byte order, but the efficiency considerations are really unimportant on modern, fast machines.

Under VMS, there is scope to provide a little more variety in the disk file format used, since -- unlike UNIX -- VMS has a `record-oriented' file system, which gathers data bytes in a disk file into separate records and expects programs to access them on a record by record basis. Since the FITS standard says nothing about this, a program under VMS may choose to write a `disk FITS' file using a variety of record formats. There is also the possibility of using the natural byte order for a DEC machine -- which is not the same as that specified by the FITS standard.

Originally, Figaro could handle only a couple of the different VMS possibilities, and there is even an old pair of Figaro commands, WJT and RJT, that handle one of these specific possibilities. Recently, FITS has been modified and should now be able to handle any of the various possible `disk FITS' formats, even under VMS. The problem of byte order is handled by a new FITS keyword that controls whether or not the byte order of the data is swopped. If you seem to be reading a file without error, but the data look essentially random, try experimenting with this keyword.

8.7. References

Greisen, E.W., Harten, R.H.; 1981, Astron. Astrophys. Suppl. Ser. 44, 371

Wells, D.C., Greisen, E.W., Harten, R.H.; 1981, Astron. Astrophys. Suppl. Ser. 44, 363

Grosbol, P., Harten, R.H., Greisen, E.W., Wells, D.C.; 1988, Astron. Astrophys. Suppl. Ser. 73, 359

9. Comparing images

Unfortunately, a `blink' option is not yet provided by UNIX Figaro; this is difficult to achieve with the Figdisp display server, since it does not usually provide a full 8-bit display, but ways to achieve it are being considered. Until then, IMAGE2, BLINK and CPAIR are not available under UNIX Figaro.

The ARGS and IKON displays supported by VAX Figaro are 8-bit displays, meaning that an 8 bit number can be written into each of the display pixels. Alternatively, it is possible to think of such a display as having 8 separate image planes, each one bit deep. The Figaro IMAGE command, for example, uses all eight bits to display images, and so can produce a picture with 256 different levels.

It is also possible to treat the displays as having two image planes each 4 bits deep. The Figaro command IMAGE2 writes two images into the display, one into the top 4 planes, one into the bottom 4 planes. Each of these images then has only 16 different levels, but it is possible to have both images in the display simultaneously.

Given that, it is then possible to change the planes displayed by the display so as to switch quickly between the two images. This is what the BLINK command does, changing images each time a terminal key is hit. If the keys on your terminal repeat automatically when held down (most do), you can flicker between the two images by just keeping your thumb on the space bar.

Also available, but not generally so useful, is the command CPAIR, which combines the two images in the display in a way that uses colour to highlight the differences. For example, CPAIR can be used to colour one image blue and the other red, in which case the monitor will show the combined images, and anything that shows up as pure red is something that only exists in the second image.

10. Straightening spectra

The Figaro routines described in this section originated in the need to straighten the distorted spectra produced by image tube detectors. However, even with CCD detectors -- which do not suffer the geometrical distortion of the image tubes -- some instruments, particularly echelle spectrographs, still produce curved spectra, and the techniques described here can be used to correct these as well. Indeed, nowadays the main application of these routines is to echelle data.

Instruments such as the 2D-Frutti and IPCS that use image intensifiers suffer from various distortions, in particular S-distortion. The effect -- and the reason for the name -- can be seen clearly by displaying any IPCS image of a point object, such as a star, on the display. Instead of being a perfectly horizontal line, the spectrum snakes across the image in the shape of a horizontal letter S.

Note that such image tube distortion is actually a two-dimensional distortion -- a picture of a cartwheel taken through an image tube will show all the spokes bent into S shapes in a radially symmetric manner. However, the difference in pixel scale in the two dimensions for spectral detectors means that -- to a first approximation -- the distortion can be treated as though it were simply a vertical displacement in the data whose magnitude varies along the spectrum (and to a lesser extent, with position along the slit). In any case, the two-dimensional distortion can be corrected by two orthogonal one-dimensional corrections: the S-distortion correction described here is one, and the other can be performed as a side effect of a two-dimensional rebinning to a linear wavelength scale. It may be that such full 2-D `scrunching' is regarded as overkill -- see the section on wavelengths for more details.

The process described here is a one-dimensional correction, in which data is rebinned in the direction perpendicular to the dispersion, in such a way as to straighten the spectra.

10.1. Usual Sequence

The tricky part is to determine the exact shape of the distortion, and in Figaro this is done using the command SDIST. SDIST requires an image with one or more spectra in it. For echelle data, a flat field is usually the best thing to use since this is already in the necessary form. For image tube spectra the usual thing to do is to take an observation of a flat field through a multi-hole mask, in which case the resulting image will show one spectrum for each hole in the mask, each distorted into an S. The spectra will be roughly parallel, but the two-dimensional nature of the distortion will mean that they are not exactly so; this is the reason for using multiple spectra: it allows the change in distortion with slit position to be determined. The CDIST command can then be used to apply the results of the distortion analysis performed by SDIST in order to correct an image.

The usual sequence of operations is as follows: assume HOLES is a multi-hole calibration image (or an echelle flat field) and OBJECT is an image to be corrected.

1 Use IMAGE to display the calibration image (HOLES) on the image display.

2 Use ICUR to indicate a starting point for each one of the spectra to be used for the analysis. The starting point should simply be a point, usually near the center of the spectrum, where the spectrum is strongest. SDIST will start from these positions as it traces out the spectra. A point is indicated by putting the cursor on it and hitting the space bar.

3 Run SDIST. This will trace out the indicated spectra and indicate on the display how much of each spectrum it was able to follow. It then fits a polynomial (10th order usually works best for image tube data, lower orders are better for echelle data) to each and writes the resulting coefficients into a disk file.

4 Run CDIST on the image to be corrected. CDIST will read in the results from the previous step and rebin the image data as indicated by the SDIST analysis. To determine the distortion values at points between the analysed spectra, for each column in the image CDIST fits a low order polynomial to the positions of the fitted spectra in that column. Obviously, if only one spectrum was used for the analysis, CDIST has to assume that the distortion is constant along the slit. A good test is to correct the calibration image itself; if it does not end up with perfectly horizontal spectra then something has gone wrong.

In summary:

IMAGE (to display HOLES)

ICUR (using space bar to indicate spectra to be used)

SDIST

CDIST (to correct OBJECT)

There are a couple of decisions that have to be made in this sequence. Firstly, there is the question of just how many of the hole spectra should be used. Generally the answer is that all should be used, unless they are going to cause problems -- a spectrum that gets too close to the edge of the image will probably not be traced properly, for example. Then there is the use of the COLUMNS parameter in SDIST to help control the tracing of the data. In most cases, the main point about the COLUMNS parameter is that making it larger reduces the number of points to which the polynomials are fitted, and so speeds up the process considerably. However, if the spectra are particularly weak, increasing COLUMNS will also improve the signal to noise of the cross-section profile of the spectra. (SDIST sums the data over the specified number of columns, then tries to locate the center of the resulting profile, assuming that the best starting guess for the center is the value obtained from the previous fit.)

Note that SDIST assumes that the spectra are more or less horizontal. If your spectra run vertically, you need to use ROTATE to rotate your data into the correct orientation. If you do this, you need to be consistent and rotate all the data you process. You may be well advised to use IREVX after the ROTATE as well, to make sure that wavelength increases from left to right.

SDIST can use a number of different algorithms for tracing the spectra, selected by the TRACE parameter. The original algorithm used by SDIST is the G(aussian) option, and this is most suitable if the profile of the summed data is roughly Gaussian (as it usually is for stars observed using a slit). If the profile is roughly `top hat' (if a dekker has been used, for example), then either E(dge) or C(enter of gravity) will probably be better. Edge fitting tends to produce rather jumpy fits, particularly if the edges are very sharp and therefore cannot be located to better than a pixel. The center of gravity fit (which takes the centre of gravity of the data within the edges) is usually smoother, but can be inaccurate if the top of the data is not flat. A crude but very robust algorithm is the B(alance) option, which just takes the center of gravity of the data around the last central position. Don't confuse the balance and centre of gravity options -- the centre of gravity option needs clearly defined edges to work with, which the balance option doesn't. A YSTRACT through the data, followed by a SPLOT, can give a good feel for the data profiles, and the diagnostic display produced by selecting the SOFT option can be very helpful here. The information produced by specifying the DIAGNOSTIC keyword is really for debugging new trace algorithms and is unlikely to be of general use.

In the end, there is no substitute for watching the results on the display. When SDIST plots a fitted curve, it plots it just over the range where it was able to trace the spectra. Remember that the polynomial will be unconstrained outside that range, and this may well show up in the final results from CDIST. So see whether the spectra seem to have been fitted acceptably, and if not, either ignore the bad ones or try to fine tune the fit using the SDIST parameters. The display that SDIST can produce on a line graphics device is also a useful diagnostic, and does not just provide the same information as the image display. It can be worth looking at both, although this is usually easiest if SDIST is run twice, once with an image display and once using line graphics. However, both may be used together if the line graphics device selected by SOFT is not the same as the image device being used.

The routine ODIST can be used to see if the results found by SDIST for one image are applicable to the data in another. If IMAGE is used to display the new image, ODIST will redisplay the SDIST results on that new image. It can also be used to look closely at the SDIST results, since IMAGE can be used to display a subset of the image rather than the whole image that is normally used prior to running SDIST.

OFFDIST may be useful if there is a slight linear offset in Y between the calibration data and the data to be corrected. This is not usually important if the data is to be corrected by CDIST, but other routines such as MASKEXT can make use of SDIST results in a way that makes such offsets (often the result of guiding changes) important. OFFDIST adds an offset in Y to the results produced by SDIST.

The final step, the correction of the object data, using CDIST, is an automatic process. So if a number of images are to be corrected there is a lot to be said for doing this as a batch job.

10.2. Self-Correcting Objects

Image tubes are not always tremendously stable; it may not be reasonable to assume that the details of the distortion do not change with time. This means that if there is only one multi-hole calibration image, probably taken at the start of the night, a distortion analysis based on this single early image may become progressively less correct as images from later and later in the night are processed. And, of course, there is always the possibility that no such image was taken at all. This is not usually a problem with echelle data.

In these cases, it is always possible to produce a distortion analysis from a spectrum of a single point object. An image of such an object can be used to calibrate itself, from a distortion point of view. The disadvantage of this over the use of a multi-hole exposure is that the correction based on a single object will be exactly correct only at the position of that object in the slit -- the sky nearby will not be quite so well corrected.

So the choice of using objects to calibrate themselves, against the use of a multi-hole calibration image, is a trade-off of one source of error against another. The choice has to be yours.

10.3. Use of the CDIST parameter ROTATE

The operation performed by CDIST involves working on the image column by column. On a virtual memory machine -- which most machines are nowadays -- this commits the cardinal sin of accessing an array in the wrong order -- against the grain of the virtual memory system. When large images are processed, the result will be excessive page faulting by the process, together with a dramatic increase in the time taken (both CPU time and elapsed time go up), and an even more dramatic increase in the anger levels of the other users of the machine, since the excessive paging will begin to saturate the disk I/O system. This condition is described as `thrashing'.

To reduce this faulting, it is possible to rotate the image before it is processed and rotate it back afterwards. CDIST has a hidden parameter `ROTATE' which makes it do this automatically. The final results obtained are the same whether or not ROTATE is specified, but the efficiency of the operation can be quite different. Specifying ROTATE adds the overheads of the two rotations, but makes the correction work properly with the virtual memory system. The rotation algorithm used understands about virtual memory, and on some machines can attempt to optimise itself to the amount of real memory you actually have available in order to operate efficiently, so for large images the improvement can be quite considerable. For small images, where the correction algorithm probably does not induce thrashing anyway, use of pre and post rotation will be inefficient since it adds the overheads of the two rotations. However, these overheads are small for small images, and it is probably acceptable to always specify ROTATE. This is why ROTATE is a hidden parameter, true by default.

Just what constitutes a `small' or a `large' image -- ie the size of image at which one starts to gain by specifying ROTATE -- is hard to say. It depends on the amount of real memory on the machine, on the load on the system at the time, and on VMS systems also on the amount of working set allocated explicitly to the user. In principle, if you have W Mb of real memory available (ask your system manager what W is likely to be) and your image is X by Y pixels, then CDIST will thrash if ROTATE is not specified and X*Y*2 > W*262144. (The 2 is because there is an input and an output image for the correction, the 262144 is the number of real (4 byte) array elements held in a Megabyte.) The best guide, however, is trial and error. As it usually is.
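
The rule of thumb above can be put directly into code. A minimal sketch (Python is used purely for illustration, and the function name is invented):

```python
def cdist_will_thrash(x_pixels, y_pixels, real_memory_mb):
    """Rough test for whether CDIST will thrash if ROTATE is not
    specified.  The factor of 2 allows for the input and output
    images, and 262144 is the number of real (4 byte) array
    elements held in a Megabyte."""
    return x_pixels * y_pixels * 2 > real_memory_mb * 262144

# An 800 by 800 image with 16 Mb of real memory available is safe,
# but a 2048 by 2048 image on the same machine is not.
print(cdist_will_thrash(800, 800, 16))    # False
print(cdist_will_thrash(2048, 2048, 16))  # True
```

As the text says, this is only a guide; trial and error is the final arbiter.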

10.4. Making Do Without An Image Display

It is -- just, only just -- possible to make do without a display, should one not be available. The only step that really relies on the display is the use of IMAGE and ICUR to indicate the starting points for SDIST, and with a little knowledge of how this works it can be bypassed, although with difficulty.

When the space bar is used with ICUR to indicate points, the number of points so selected is set in the user variable NPIXELS and the X and Y values of the selected pixels are set in the user variable arrays XPIXELS and YPIXELS.

It is possible to use, for example, YSTRACT, EXTRACT and SPLOT to determine a suitable starting pixel for a single spectrum. Then LET may be used to set the appropriate variables. For example, if the chosen starting pixel is at (300,128):

LET NPIXELS = 1

LET XPIXELS[1] = 300

LET YPIXELS[1] = 128

If more than one pixel is to be indicated in this way, the array values should be set in reverse order; XPIXELS[2] should be set before XPIXELS[1]. This is to ensure that a sufficiently large array is allocated. See the documentation on LET for more details.

11. Wavelength calibrations

Simple statements such as `the wavelength of channel n is so many angstroms' are not as unambiguous as they appear. This statement is really a slightly simplified version of `the wavelength range covered by channel n is from so many angstroms to so many angstroms' but some precision has been lost in the simplification. It presumably means `the wavelength of the center of the range covered by channel n is so many angstroms' -- or does it mean that `the wavelength of the start of the range covered by channel n is so many angstroms'?

These notes are intended to explain the conventions used by Figaro. Please note that they are all somewhat arbitrary and in some cases are historical rather than ideal. However, they are consistent and they are precisely defined. Hopefully, these notes will also serve as a brief guide to the use of Figaro for wavelength calibrations. If they seem a trifle pedantic, this is because experience has shown that the whole subject is a real can of worms.

11.1. Discrete and Continuous Channel Numbers

Most of the problems arise because:

* It is impossible to get away from the fact that data is held in a number of discrete `bins' or `channels', each of which has to be given an integer channel number (to be called `n' in these notes), and each of which originally covered a range on the detector.

* A center of gravity analysis of an arc line, say, will not in general produce an integer channel number as a result. So there also has to be a `continuous channel number' (called `x' here).

* The channels also map onto a wavelength scale, and wavelength scales are most naturally thought of as continuous. It is therefore quite natural to think of a wavelength/channel number relationship of the form

wavelength = f(x)

where f(x) might be a polynomial in x. Note that Figaro does not actually store wavelength information using polynomial coefficients, unlike many systems.

The sad fact is that there is no obviously correct method of mapping the continuous channel numbers (`x') onto the integer channel numbers (`n'). Mainly it's a question of selecting the zero point. The Figaro convention is that

* The center of channel n=200 is at x=200.00, and the channel therefore spans in x from x=199.5 to x=200.5

* The `wavelength of a channel' means the wavelength of the center of that channel, ie wavelength(n)=f(x) where x=n.

This all sounds mind-blowingly trivial, but it is possible to produce other conventions. (In particular, if the left edge of channel 1 were taken as the x=0.0 point, the center of channel 200 would be at x=199.5)
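
Expressed as code, the convention looks like this (a Python sketch of the mapping, not anything in Figaro itself):

```python
def channel_of(x):
    """Integer channel number n whose bin contains continuous
    position x: channel n is centered at x = n."""
    return int(round(x))

def channel_span(n):
    """Range in x covered by integer channel n."""
    return (n - 0.5, n + 0.5)

print(channel_of(200.0))   # 200
print(channel_of(199.6))   # 200 -- still within channel 200's bin
print(channel_span(200))   # (199.5, 200.5)
```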

11.2. Wavelength Arrays

Figaro tries to hold wavelength information in as general a way as possible. If a file called, say, SPECT contains a wavelength calibrated spectrum, then there will be two data arrays in the structure held in the file. The main data array will be an array containing the actual spectrum, and the X-axis array will be an array of related size containing the wavelength information.

Each element of the X-axis array contains the wavelength value for the corresponding element of the spectrum -- that is, for the center of the corresponding data bin. For uncalibrated data, the X-axis array will either not exist, or will simply contain the pixel numbers.

The EXAM command may be used to examine elements of either the data or wavelength arrays, and the ILIST command can be used to tabulate them.

11.3. The ARC Command

A single arc spectrum can be generated by summing successive rows of an arc image, using the EXTRACT command. The wavelength to channel number relationship can then be determined using the ARC command. This invokes an interactive line identification program which gets the user to select lines in the arc and to specify their wavelengths. ARC makes full use of interactive graphics and a running fit to make the manual identification process as easy as possible, and it also has an automatic line identification feature that can be used to add lines to a fit. ARC is described in more detail in the section on techniques headed `The ARC command'.

Once a satisfactory fit has been obtained, ARC can be used to set the wavelength scale in the arc spectrum being identified. This is the import of the question "make use of this fit?" that the user is asked at the end of ARC. Once the wavelength scale has been set, subsequent plots of that arc spectrum made by SPLOT will be made with a scale in wavelength units.

Note that Figaro programs are all designed to work with data whose X scale either increases or decreases. However, as a practical point, it is regarded as more usual to arrange data so that wavelength increases with pixel number, and there may be bugs still lurking in some routines that will only turn up when used with reversed data. You are therefore strongly recommended to use increasing X scales, if necessary using the IREVX command to reverse spectra and images.

If ARC is used to fit a spectrum very similar to the one previously fitted, it is enough to make use of the previous fit (by answering YES to the PREVIOUS prompt) and then re-identify a line used in the previous fit. ARC will ask if the previous identification is to be ignored, or the new identification, and as a final option will assume that there is a shift in the data relative to the identifications and will re-analyse the line centers accordingly. This makes the analysis of a sequence of arcs quite simple.

11.4. Applying an Arc Fit to Other Spectra

Of course, the point of getting such a fit is to apply it to other spectra. The command XCOPY copies the wavelength scale from one spectrum to another. Normally the spectrum whose wavelength scale is copied is a fitted arc and the spectrum to which it is copied is that of an observed object.

Typically, an observation of an object is bracketed by arc observations. These different arcs will inevitably give slightly different fits, either just because of random variations, or because of some instrumental drift. Particularly if there is drift involved, it may be better to use XCOPI rather than XCOPY. XCOPI takes as input two spectra with wavelength scales and generates a new scale that is an interpolation between them. It then applies that interpolated scale to the object spectrum.
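
The interpolation XCOPI performs can be pictured as an element-by-element weighting of the two arc scales. This sketch assumes a simple linear weighting by the fraction of the way the object exposure lies between the two arcs; the details of the real XCOPI may differ:

```python
def interpolated_scale(arc1, arc2, fraction=0.5):
    """Wavelength scale interpolated element by element between two
    arc fits.  fraction=0.0 gives the first arc's scale, 1.0 the
    second's, and 0.5 a scale midway between them."""
    return [w1 + fraction * (w2 - w1) for w1, w2 in zip(arc1, arc2)]

# Two arc scales that have drifted by one Angstrom between exposures:
print(interpolated_scale([4000.0, 4002.0], [4001.0, 4003.0]))
# [4000.5, 4002.5]
```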

11.5. Linear Wavelength Scales -- Scrunching

It is possible to rebin data whose wavelength/channel number relation is known in such a way that the relation is linear. Such an operation is known as `scrunching', and is performed by the SCRUNCH command. Scrunched data is generally easier to handle than is unscrunched. For example, if spectra with slightly differing wavelength scales must be added together they should be scrunched to a common wavelength scale first.

Similarly, there are cases where one needs data rebinned on a logarithmic wavelength scale -- cross-correlations to determine redshift need this sort of scale, for example -- and this can also be performed using SCRUNCH.

SCRUNCH has parameters that specify the start and end wavelengths for the resulting scrunched spectrum, and the number of elements (bins) it should have. Note that the wavelengths are those of the CENTERS of the end bins; it is easy to miscalculate the number of bins by 1 when aiming to get a linear dispersion of exactly so many angstroms per bin. The target wavelength range may also be specified in terms of the start wavelength and the increment, by means of the rather crude convention that if the end wavelength specified is less than the starting wavelength, then that `end wavelength' value will be taken as the wavelength increment. (This behaviour may be controlled explicitly by two keywords FINAL and INCREMENT, which are used to specify the interpretation to be placed on the `end wavelength' value.)
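
The off-by-one trap is worth spelling out: because the start and end wavelengths refer to bin centers, the bin count for a given increment is one more than a naive division suggests. A quick illustration:

```python
def number_of_bins(start, end, increment):
    """Number of bins needed so that bin CENTERS run from start to
    end in steps of increment.  The '+ 1' is the easy thing to
    forget: (end - start) / increment counts the gaps, not the bins."""
    return round((end - start) / increment) + 1

# 4000 to 5000 Angstroms at exactly 2 A/bin needs 501 bins, not 500.
print(number_of_bins(4000.0, 5000.0, 2.0))  # 501
```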

SCRUNCH has one keyword that can confuse users. If the spectrum being scrunched is in flux units, then the total flux in the spectrum should be conserved by the scrunching process. Note that the distinction is between flux units and flux density units, so `raw counts' are a flux unit. If a spectrum is in flux density units, then scrunching should not change the absolute value of the data. If the scrunching is simply such that it doubles the number of bins in the spectrum, then an element that has a value of, say, 10mJy should become two elements each with a value of 10mJy, since Janskys are a flux density unit. Conversely, if the spectrum were still in raw counts, then an element with 10 counts should become two elements each with 5 counts. The default for the FLUX keyword in SCRUNCH -- whose prompt is the perhaps confusing "conserve flux (as opposed to mean counts)?" -- is YES, and this is correct for raw counts.
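
The distinction can be seen in a toy rebinning that just splits every bin in two (an illustration of the principle only, not the SCRUNCH algorithm):

```python
def double_bins(data, flux=True):
    """Split each bin into two.  With flux=True (flux units, e.g. raw
    counts) the counts are shared so the total is conserved; with
    flux=False (flux density units, e.g. mJy) each new bin keeps the
    old value."""
    if flux:
        return [half for v in data for half in (v / 2.0, v / 2.0)]
    return [v for v in data for _ in range(2)]

print(double_bins([10.0], flux=True))   # [5.0, 5.0]   -- raw counts
print(double_bins([10.0], flux=False))  # [10.0, 10.0] -- mJy
```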

11.6. 2-Dimensional Scrunching

The previous sections have all been concerned with single spectra, and this is usually satisfactory where images can easily be reduced to spectra by use of EXTRACT -- ie where there is no significant change in wavelength/channel relationship across the subset of the image that is being collapsed into a single spectrum. However, there is an alternative approach, which is to scrunch each cross section of an image separately. This may be needed for data on extended objects, or it may simply be regarded as a better (more strictly correct) means of proceeding.

To do this, one needs a good fit to every cross-section of an arc image. This is performed automatically by IARC, starting off from a manual fit produced by ARC. The usual technique is to extract a spectrum from the center of the image, probably adding together a few cross-sections to get good signal to noise, and then fit that using ARC. It is important to have as many lines as possible in the fit; it is even worth adding lines in any sparse areas to help lock down the fit, even if their identifications are uncertain -- just treat the fitted values as exact. IARC does have an automatic search for such `lock' lines, but it is usually better to pick good ones manually.

By default, IARC performs its analysis using the same line width parameter, SIGMA, as was used in the original fit performed by ARC. However, the algorithm used by IARC is quite sensitive to the value of this parameter, and better fits may sometimes be obtained by varying it. For this reason, IARC has a hidden parameter RSIGMA which may be used to over-ride the ARC value. If you have problems with IARC, there is a hidden keyword, DETAIL, which causes the details of the fit for each cross-section to be output. This can be used to see which lines the program is having trouble with.

IARC will drop a line from its search list if it fails to find it in a given number of successive cross-sections -- this is to prevent it accidentally picking up the wrong line due to distortion moving another line towards the position where it lost the original line. The number of cross-sections is given by the hidden parameter GAP -- normally, GAP is set to 1, meaning that the second time in succession a line is missed, it will be dropped. Setting GAP sufficiently high will effectively disable this feature -- this may be the best way to deal with data where there is little distortion, but where a number of cross-sections may have poor data; fibre data can be of this type. If GAP is set to zero, a line will be dropped immediately if it cannot be found in a spectrum.

In order to pick up as many lines as possible in each cross-section, IARC uses a technique known as `spreading' -- it first looks for the lines using a larger sigma value than that specified, and then refines the fits using the specified sigma value. In some cases this is undesirable, and may be bypassed by the use of the hidden keyword `NOSPREAD'.
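
The GAP rule can be stated compactly. In this sketch of the behaviour just described (not IARC's actual code), a line is dropped once the number of successive cross-sections in which it has been missed exceeds GAP:

```python
def line_is_dropped(successive_misses, gap=1):
    """IARC-style line dropping: with the default GAP=1 the second
    successive miss drops the line; GAP=0 drops it on the first
    miss; a large GAP effectively disables the feature."""
    return successive_misses > gap

print(line_is_dropped(1))         # False -- one miss is tolerated
print(line_is_dropped(2))         # True  -- second miss in a row
print(line_is_dropped(1, gap=0))  # True  -- dropped immediately
```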

Sets of spectra which have been taken in some long slit mode with a distorting detector, such as an image tube, will differ from spectrum to spectrum but will do so smoothly. Sets of spectra taken with a fibre system may vary discontinuously, with sudden linear shifts between spectra. The XCORR keyword in IARC is designed to be used with such discontinuous spectra. It causes IARC to attempt to determine a linear shift between successive spectra by cross-correlation and to apply that shift to its expected arc line positions before searching for them in the new spectrum. XCORR should be used for fibre and similar data, but is probably inappropriate for other data.
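
The shift determination that XCORR enables can be sketched as a simple discrete cross-correlation: try a range of trial lags and keep the one giving the largest correlation. This toy version (IARC's real implementation will be more refined) shows the idea:

```python
def linear_shift(reference, spectrum, max_shift=5):
    """Estimate the integer pixel shift of spectrum relative to
    reference by maximising the cross-correlation over a range of
    trial lags."""
    best_lag, best_corr = 0, None
    n = len(reference)
    for lag in range(-max_shift, max_shift + 1):
        corr = sum(reference[i] * spectrum[i + lag]
                   for i in range(n) if 0 <= i + lag < n)
        if best_corr is None or corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

arc = [0.0, 0.0, 1.0, 5.0, 1.0, 0.0, 0.0, 0.0]
moved = [0.0, 0.0, 0.0, 1.0, 5.0, 1.0, 0.0, 0.0]  # same arc, one pixel right
print(linear_shift(arc, moved))  # 1
```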

Given a fit from IARC, ISCRUNCH can be used to scrunch an image of an object. It is actually a good idea to scrunch the original arc and see what the results look like -- lines waving about in some regions show that the fit is badly constrained in those regions. IARC only performs a sequence of individual 1D arc fits, rather than a 2D fit to the whole image at once, so there is little constraint that the fits be continuous from one cross section to the next, other than that imposed by the lines themselves.

Note that the IARC results can only be used by ISCRUNCH; they are not written into the data structure in a way that can be used by other routines such as SPLOT.

For object images bracketed by arc images, ISCRUNI can be used instead of ISCRUNCH. This uses wavelength values obtained by interpolation between two different IARC fits, in a similar way to XCOPI as described earlier.

11.7. Sequence Summary

For individual spectra:

EXTRACT get spectra from image. Both arcs and objects.

ARC set wavelength scale on arc spectra.

XCOPY/I copy wavelength scales to objects from arc(s).

SCRUNCH reduce spectra to linear/log wavelength scales.

For images:

EXTRACT get spectrum from center of arc image.

ARC set wavelength scale on arc spectra.

IARC automatic fit to all cross-sections of arc image.

ISCRUNCH reduce images to linear/log wavelength scale.

EXTRACT get spectra from scrunched images.

12. Extinction

The EXTIN command is used to correct a spectrum for atmospheric extinction. It requires a `coefficient' spectrum -- a spectrum giving the atmospheric extinction coefficient for each element of the spectrum. This coefficient spectrum is normally generated by interpolating a spiketrum formed from a suitable table of extinction coefficients.

One such table is EXTIN (actually the file EXTIN.TAB in the FIGARO_PROG_S directory). This contains the coefficients given for Palomar by Hayes and Latham (1975). So for Palomar data, the following sequence will perform the extinction correction:

GSPIKE OBJECT EXTIN EXSPIKE

LINTERP EXSPIKE EXCOEFF

EXTIN OBJECT EXCOEFF COBJECT

generating a corrected spectrum, COBJECT, from the original uncorrected spectrum, OBJECT. The first two steps (generating the spiketrum from the table, and interpolating it to give the coefficient spectrum) do not need to be repeated for subsequent spectra, so long as the wavelength range covered remains unchanged.

(An alternative extinction table is PALOMAR, which contains the coefficients used by Todd Borroson's Forth system. This differs slightly from EXTIN, particularly around 9000 Angstroms, where it attempts to correct for some atmospheric features ignored by Hayes and Latham. The differences can be seen by generating and then plotting the spiketra produced by the two tables.)

The AAOEXT table gives the standard AAO extinction coefficients -- those used by the SDRSYS data reduction system.
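
For reference, the correction itself is simple arithmetic once the coefficient is known. This sketch assumes the usual convention that the coefficient is in magnitudes per unit airmass (as in the Hayes and Latham table); check the EXTIN documentation for its exact conventions:

```python
def extinction_correct(flux, coefficient, airmass):
    """Multiply an observed flux up to its above-atmosphere value,
    given an extinction coefficient in magnitudes per unit airmass
    (the usual convention, assumed here) and the airmass of the
    observation."""
    return flux * 10.0 ** (0.4 * coefficient * airmass)

# 0.2 mag/airmass at airmass 1.0 brightens the data by about 20%.
print(round(extinction_correct(100.0, 0.2, 1.0), 1))  # 120.2
```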

12.1. References

Hayes, D.S. and Latham, D.W.; 1975, Ap.J. 197, 593

13. LET

LET is the Figaro assignment statement. It can be used to set the value of a data object (an entity held in a Figaro data file). Making full use of LET requires that the user have some knowledge of the internal structure of Figaro files -- EXAM can be used to examine the structure of any given file. LET can also be used to set the value of a Figaro user variable. In addition, although the syntax is a little awkward, it can be used to set the value of a UNIX shell variable or a DCL symbol under VMS. LET supports expression evaluation, and so can be used to perform floating point calculations within command scripts.

LET has the general format:

LET Data_object_name = value

Under UNIX, most of the symbols used in LET commands are interpreted by the command shell before being passed on to LET, which defeats the object of the exercise. So under UNIX, in most cases the assignment statement that is effectively the single argument passed to LET (this is actually how it is implemented -- LET has a single character string argument that it parses for itself, and which takes the form of an assignment statement) has to be enclosed in quotes, and so ends up having to be given as:

LET 'Data_object_name = value'


13.1. Examples

Some short examples may help. Note that unless the example is specific to one operating system, these are all shown in a form suitable for VMS. UNIX users will need to put `quotes' around most of the assignment statements. Simple assignment statements with no spaces -- particularly around the equals sign -- do not need to be quoted, even under UNIX. Also remember that UNIX needs lower case for file names and for command names (so under UNIX, use `let' instead of `LET').

1 The name of an object in many tape formats is read by Figaro into the .OBS structure of a .dst file as the data object `filename.OBS.OBJECT'. If this is wrong, it can be corrected by

LET filename.OBS.OBJECT = "The correct object name"

2 The WIFITS command transfers only the main data object in a Figaro file (the image or spectrum, called `filename.Z.DATA' in a .dst file) and the information it finds in a specific FITS structure (called `filename.FITS' in a .dst file). Information may be included in the .FITS structure using LET to set items in this structure:

LET filename.FITS.OBJECT = "3C273"

LET filename.FITS.TIME = filename.OBS.TIME

(the second example copies the TIME data object from the .OBS structure into the .FITS structure, where WIFITS will find it). This only works for .dst files, where LET is able to access the individual FITS items; in general it is better to use the FITSET command -- see the section on FITS for more details.

3 An individual data pixel in an image in a .sdf file may be changed by

LET filename.DATA_ARRAY[120,30] = 5.6

4 A Figaro user variable may be set to the result of a calculation based on other Figaro user variables:

LET PI=3.14159

LET SIN_ANGLE={SIN(45*PI/180)}

Here the Figaro variable SIN_ANGLE is set to the sine of 45 degrees. The {braces} contain an arithmetic expression that Figaro can evaluate, and the SIN function supported by the expression evaluator takes an argument in radians.

5 Under VMS, the DCL symbol `TIME' may be set by

LET &TIME = FILE.OBS.TIME

6 Under UNIX, the c-shell variable `time' may be set by

set time = `let &time=file.obs.time`

13.2. Rules

Note that this is a fairly elementary implementation of `LET'. Although expressions are allowed, there are limitations on the use that can be made of data items from data files. Array arithmetic is not supported.

The general syntax of a LET command is

LET object_name separator value

`Separator' is usually an equals sign, but any combination of spaces and = signs will be accepted.

`Object_name' must be one of:

1 The name of a Figaro user variable.

2 The full name of a Figaro data object, eg `FILE.OBS.TIME'

3 The name of a single element of a Figaro data object that is a numeric array, eg `FILE.DATA_ARRAY[100,100]'

4 Either of the possibilities listed in 2) and 3) with a leading section of the name (up to but not including a separating period) missing. In this case, the user variable `$DEFAULT' is used to supply the missing part. So, if `$DEFAULT' is set to `FILE':

LET .DATA[10]=5

is equivalent to

LET FILE.DATA[10]=5

This is not a much used feature of LET.

5 The name of a command language variable, preceded by an ampersand character, `&', as in

LET &P = "New value for the shell variable P"

`Value' must be one of:

1 A numeric constant, eg 5.6, -35.2E-2 etc

2 A character string constant, eg "This is a string". It has to be enclosed in double quotes.

3 The name of a Figaro user variable.

4 The full name of a Figaro data object, eg `FILE.OBS.TIME'

5 The name of a single element of a Figaro data object that is a numeric array, eg `FILE.DATA_ARRAY[100,100]'

6 Either of the possibilities listed in 4) and 5) with a leading section of the name (up to but not including a separating period) missing. In this case, the user variable `$DEFAULT' is used to supply the missing part.

7 An arithmetic expression enclosed in {braces} involving numeric constants and/or the names of Figaro user variables.

8 The string `<FLAG>' which is used to represent the `flagged' or `magic' value that indicates bad quality data.

This means that, syntactically, any of the following are acceptable:

LET J="This is a string"

LET FILE.X = "This is another string"

LET X={(-B+SQRT(B*B-4*A*C))/(2*A)}

Whether any of them work in practice is another matter, depending on the following assignment rules:

1 A data object cannot be created by LET if its upper structure does not already exist -- eg if the structure `FILE2.OBS' does not exist, LET cannot assign a value to `FILE2.OBS.OBJECT'.

2 If a data object does not exist, LET will try to create it and then assign it the specified value. If the specified value is a character string, a character data object will be created. If the value is numeric, a floating point object will be created.

3 If the data object does exist, LET can only assign a value of the appropriate type to it. It cannot set a character object to a numeric value, or vice versa. It will extend a character string, if necessary, by deleting the original and creating a (larger) new one of the same name.

4 If the data object is an array element, and does not exist, LET will create an array of the size given by that element. LET will not try to extend an array, so if `FILE.X.DATA' does not exist, the sequence

LET FILE.X.DATA[3]=1.0

will work, but following this with

LET FILE.X.DATA[4]=2.0

will fail, since LET will only have created a three element array.

5 If the value is the name of a numeric array, LET will work on that array as a single entity. So if `FILE2.X.DATA' and `FILE.X.DATA' are both 1500 element arrays,

LET FILE.X.DATA=FILE2.X.DATA

will change all 1500 elements of the target array. If `FILE.X.DATA' did not exist, a 1500 element array will be created. If `FILE.X.DATA' existed before as a 1000 element array, the assignment will fail since the arrays are not compatible.

6 If the value specified is the name of a data structure, the whole structure will be copied over. In this case, the data object being assigned cannot already exist.
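
As an illustration of these rules (the file and object names here are hypothetical), if the structure `FILE2.OBS' exists but contains no OBJECT item, then

LET FILE2.OBS.OBJECT = "NGC 1365"

creates a new character item and assigns it the string, while a subsequent

LET FILE2.OBS.OBJECT = 42

will fail, because an existing character object cannot be assigned a numeric value.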

13.3. Command language symbols

Note the difference between User Variables and command language symbols or variables. A Figaro user variable has a special meaning to the Figaro command processor, as in the command sequence

LET TIME=FILE.OBS.TIME

ICDIV FILE,TIME,FILE

which divides all the data in the main data array of `FILE' by the value held in FILE.OBS.TIME. In this case the variable `TIME' is something local to Figaro itself. It is not a symbol that will have any special meaning to the command processor being used, whether a UNIX command shell or VMS's DCL. A Figaro user variable can be a string, or a floating point or integer number, or an array.

A UNIX command shell variable, or a DCL symbol, is something processed by the command processor itself, and is really only a string, although strings that look like integers can be used as such. The command processors for both VMS and UNIX will substitute the values of anything they recognise as one of their variables before passing the command on to Figaro.

The following sequences show a command shell variable (or a DCL symbol, in the VMS case) being set to `800', its value being logged, and then being used in a Figaro command.

For the UNIX c-shell, `csh', or some equivalent like `tcsh'

set time = 800

echo $time

icdiv file,$time,file

For VMS, using the DCL command processor:

TIME = 800

SHOW SYMBOL TIME

ICDIV FILE,'TIME',FILE

In both cases, the command language variable `time' is set to 800, the value is output as a check, and then the value of this variable is substituted in the command string passed to Figaro. Variable substitution in the UNIX shells is invoked by preceding the variable name by a `$'; in DCL it is invoked by `quoting' the variable name. In both cases, what the Figaro command processor sees is

ICDIV FILE,800,FILE

(in lower case in the UNIX example). Figaro has no notion that a command variable was involved at all in this command.

This shows how command processor variables can be used. The important thing for the purposes of this description of the LET command is the question of how Figaro can be used to set a command variable. For example, the SCROSS command cross-correlates two spectra and prints the relative shift between them. You might be running this as part of a command procedure and might want to use the calculated shift in some non-Figaro part of the procedure. To do this, you really want the shift set into a UNIX command shell variable (or DCL symbol). In this instance, SCROSS sets the calculated shift into a Figaro user variable called SHIFT, so the problem reduces simply to a question of how to set a command processor variable to the value of a Figaro user variable. LET can be used to do this; the syntax for VMS and UNIX differs.

VMS is the simpler case, since it is possible for a program to use VMS library routines to set DCL symbols directly. Under VMS, LET interprets the syntax where the target of the assignment starts with a `&' symbol as indicating that the target is the name of a DCL symbol to set, and does so. So in this case, all that is required is:

LET &SYM=SHIFT

and LET will itself set the DCL symbol SYM to the value of the Figaro user variable SHIFT.
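
A symbol set in this way can then be used like any other DCL symbol. For example, in a command procedure (assuming SYM has been set as just described):

WRITE SYS$OUTPUT "Shift was ", SYM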

Under UNIX, there is no easy way for a program to manipulate command shell variables itself. The solution adopted for UNIX is to make LET interpret a syntax where the target is specified starting with a `&' symbol as an indication that it is to output the value of the assignment to standard output. So, in UNIX, the result of the command

let '&sym=shift'

is simply that LET prints out the value of the user variable SHIFT. This may not seem immediately useful, but it can be combined with the `backquote` syntax used by csh and other UNIX shells. If csh is given a command which includes text enclosed in `backquotes` it treats that text as a command to execute and replaces the quoted text with the output of the command. This is what makes

set sym = `let '&sym=shift'`

work. The name of the symbol used in the LET command is actually ignored by LET, but it makes sense to include it and to use the name of the symbol you actually intend to set. In this example, LET prints out the value of SHIFT, and csh uses the output text as the value to use for its variable `sym'. The quotes around the assignment part of the LET command are to prevent the `&' being misinterpreted by csh. This is crude but relatively effective.

13.4. Limitations

Amongst other things, the following are obvious limitations in the current implementation of LET.

* Expressions involving data items in files are not allowed:

LET X={FILE.OBS.TIME*2}

will not work.

* Variables are not allowed in array indices:

LET FILE.X.DATA[N]=0

will not work.

14. Build files

`Build' files are a feature of the `SPLOT' command, and are generated instead of a real plot when the `BUILD' keyword is specified. A `build' file is an intermediate file -- a file that describes a plot exactly, and from which that plot can be generated, but it is not a file that can be copied directly to a graphics device. Files produced using the `HARD' keyword can be printed directly; `build' files cannot -- they have to be processed by the `BPLOT' command.

However, `build' files have a number of advantages. They are device independent; the BPLOT command takes a DEVICE parameter, and it is this that determines which graphics device is used for the plot. So the same build file can be used to create a plot in the current soft graphics device and then re-used to create the same plot on the hard graphics device; this is faster than using SPLOT twice.
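
For example (`spectrum' here is a hypothetical file name):

SPLOT spectrum whole autoscale build \\

BPLOT SOFT

BPLOT HARD

generates the plot description once, previews it on the soft graphics device, and then sends the identical plot to the hard copy device.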

But the main advantage is that one `build' file can be appended to a previous one. This is done automatically if the NOERASE keyword is specified. The point of this may not be immediately obvious, but consider the following sequence:

SPLOT spectrum1 whole autoscale \\

SPLOT spectrum2 noerase noaxes \\

This will have the effect of plotting spectrum1 on the soft device and then superimposing a plot of spectrum2, which allows an easy comparison of two spectra. Unfortunately, the sequence

SPLOT spectrum1 whole autoscale hard \\

SPLOT spectrum2 noerase noaxes hard \\

simply generates two plot files, one a plot of spectrum1, the other a separate plot of spectrum2 with no axes shown. This is because the files generated for the various hardcopy devices are such that there is no way of combining or overlaying them. Postscript files, being text, can be manipulated, or included in a third document, but this is too awkward for frequent use. Which is where BUILD comes in.

SPLOT spectrum1 whole autoscale build \\

SPLOT spectrum2 noerase noaxes build \\

BPLOT hard

results in a single hardcopy plot showing the two spectra superimposed. The NOERASE in the second command causes the output to be appended to the output from the first command, and the BPLOT command produces the actual plot. The plot could be checked before being sent to the hardcopy device by a `BPLOT SOFT' or equivalent command.

15. The `ARC' command

This section describes the use of the ARC program in Figaro. ARC is an interactive arc fitter, which displays an arc spectrum on the current soft graphics device and gets the user to indicate arc lines and enter their wavelengths. Given sufficient lines, a polynomial fit of wavelength against channel number can be performed, and the results of this fit used to fill the wavelength axis array associated with the file. The section on techniques headed `Wavelength calibration' describes the way ARC fits can be used.

ARC is primarily designed as an interactive arc fitter, but it does have an automatic line finding capability. This description will emphasise the interactive aspects of ARC, however, since this automatic capability is intended to help add lines to an already good fit, and so the first and most important thing is to get a good fit manually.

ARC has a number of features that are intended to make it particularly simple to use. These are not always obvious to the user who simply types the command `ARC' and waits to see what happens next; hence this description.

15.1. Use with Figdisp

If you are using ARC with the Figdisp display server -- this is probably the case if you are using the UNIX version of Figaro -- then there are some awkward aspects to the interaction that happens between you and ARC. This is all connected with the concept of `keyboard focus'. ARC expects you to select arc lines in the graphics window, and choose an option by hitting a key (although just clicking the mouse button will do to indicate a line). You then have to continue the dialogue with ARC in the terminal window. This means that you have to be constantly changing the window that has the `keyboard focus', ie the window that the keyboard is associated with. This is discussed in more detail in the document on `Using Image and Figdisp', which recommends either that you get used to clicking in the frame of a window to make it active (ie have the keyboard focus), or that you use a window manager that supports a `follow cursor' focus policy, where the active window is the one the cursor is in, without the need to click anywhere. Doing this can make ARC -- and other similar Figaro programs -- much easier to use with Figdisp.

15.2. Arc line lists

To get the best out of ARC you need a comprehensive line list for the arc you are trying to identify. Arc line lists are text files containing lists of arc line wavelengths. They have the extension `.arc' and are usually held in the main Figaro directory (identified by the UNIX environment variable or the VMS logical name FIGARO_PROG_S). There may also be local files in the local Figaro directory (FIGARO_PROG_L), or you, the user, may have your own in your user Figaro directory (FIGARO_PROG_U), or in the default directory. As an example, the file `argon.arc' in the main Figaro directory, begins --

* Argon lines

and carries on in the same vein for quite some time. The format is very simple -- each line of the file has a wavelength, in free-format, and blank lines and lines beginning `*' are ignored. Note that no line strength information is used. The importance of this line list will become clearer as we go on.

When the `ARC' command is given, one of the first prompts asks `Type of arc?'. ARC wants to know which arc line lists to read in. You can specify up to three arc types, although it is unusual to use more than one. If your reply is `ARC1,ARC2,ARC3', then ARC will look in the various Figaro directories for the files ARC1.ARC, ARC2.ARC and ARC3.ARC. It will then read in the arc lines from all three. The idea here is that if you have, say, a copper-argon arc, you might give the type as `COPPER,ARGON', and ARC can make use of two separate files, one for each element. In practice, it would be better to have a single file called CUAR.ARC, set up specially for the arc you are using, and to reply `CUAR'. If you have no suitable arc file, reply `NONE'.
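
For example, for a copper-argon arc with no combined line list available, the response

COPPER,ARGON

to the `Type of arc?' prompt will cause ARC to read both COPPER.ARC and ARGON.ARC from the Figaro directories.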

15.3. Other initial prompts

ARC will also prompt for the arc line half width in pixels (the parameter called SIGMA), which it uses as a guide when finding the centers of the arc lines, and for the initial order of polynomial to use for the fit. ARC performs running fits, and unless you are using a previous line list (see below), the first fit will have to be made to a low order, simply because there will not be enough lines for a higher order. Once enough lines have been identified, ARC will start to use the order you specified initially. You will be able to change the values for both the sigma value and the order interactively during the fitting process.

ARC will also ask if lines from the previous fit are to be used. During a fit, ARC keeps writing out to disk the positions and wavelengths of the lines identified so far. If you reply YES to this prompt, ARC will read in the file giving the previous list of identified lines. This allows you to start again where you left off the previous time -- either because ARC or the computer crashed (perish the thought!) during the previous fit, or simply because, on reflection, you are unhappy with the fit you obtained and want to experiment with other orders, a different selection, etc. The default for this previous list file is always `ARLINES.LIS', since that is the name of the file that ARC writes. However, you may specify a different file. You may, for example, have one file that you want to use as the basis for fits to a number of arcs, which you have renamed from ARLINES.LIS (the name ARC will have given it originally when it was produced) to, say, BASEFIT.LIS.

You can also use a previous fit if the arc you are identifying is very similar to the previous arc, but is shifted slightly. This is often the case with arcs taken at intervals throughout a night. ARC will notice if the file that was used to create the previous fit is different to the file you are fitting. If this is the case, you are prompted for the XCORR keyword, which allows you to request that ARC locate the previous spectrum used and attempt to determine a linear shift by cross-correlating the two arcs and applying the determined shift to the arc line list. (If you are going to use this option, specifying a slightly larger SIGMA than that used for the previous arc will give the line finding algorithm a little more flexibility when it looks for the lines at their shifted positions.) This is a simple operation, but the shift is determined over the whole of the arc. There is an alternative, messier, way to indicate a shift to ARC on the basis of a single selected line, but this is described later.

15.4. The line selection process

ARC will read in the specified spectrum and display a portion of it on the graphics device. Initially, the portion will be 200 channels long; you can change this should you want to. You will be invited to use the cursor to indicate an arc line.

Normally, you move the cursor until it is close to a line whose wavelength you know (you will often find it useful to have a hard plot of the whole arc in front of you as you perform the fit) and select it by hitting the space bar. (Strictly, you can use any key that does not have some specific function, but there are rather a lot that do and the space bar is a safe one to use.) ARC will then try to find a line close to the point you have indicated, and if it finds one will show you its center on the display with an arrow. The algorithm used to find a line center is one described as `convolution with the derivative of a gaussian', and it incorporates some requirements as to just what constitutes a line, which can lead on rare occasions to its being unable to find a line centered near the position indicated. You can think of the algorithm as a fit to a gaussian of fixed sigma.

You will be told the channel number of the line, and asked to enter a wavelength. Strictly, you enter a wavelength followed by a type, eg

3850 ARGON

The type should be one of the names specified in response to the `What type of Arc' prompt. ARC then looks in the table for that type and selects the line whose wavelength is nearest to the one you specified. If you do not specify a type (which is by far the most usual case), ARC uses the first of the types. Since in most cases only one type was specified, this means that ARC will normally just look in the one table that it read in. Which is what you would expect. So the response is usually just a number, eg

3350
ARC will then tell you what line it assumes you mean:

Wavelength is 3350.924 OK? [YES]

If you reply Y, YES, or just hit the return key, ARC will use that wavelength. If you reply NO, or N, it will ask you for the wavelength again. If you just hit return in response to the wavelength prompt, the line will be deleted and you will be back with the cursor looking for another line.

15.5. Use of the running fit and other subtleties

There are some important alternative ways of specifying the wavelength of the line.

If you know the wavelength exactly, but it is not in the line list, you can specify the type as `EXACT'. So if your response to the wavelength prompt is --

3456.789 e

`e' being short for `EXACT' -- ARC will use that value as the wavelength. If you have specified `NONE' as the arc type, ARC will assume that all wavelengths given are exact values.

Once two lines have been identified, ARC is able to estimate the wavelength of a new line by linear interpolation. It will then tell you this interpolated wavelength as well as the channel number. If more than two lines have been identified, the interpolation is based on the two nearest to the new line. You can now use `I' in your response instead of specifying a wavelength -- the result is the same as if you had typed in the interpolated wavelength. ARC will then look for the listed wavelength nearest to the interpolated value.

Similarly, once more than two lines have been identified, ARC is able to start performing a running fit to the identified lines. The fit is recalculated each time a new line is identified and the RMS of the fit is displayed. So a bad identification will usually show up immediately as a large rise in the RMS figure. ARC can now display the fitted wavelength as well as the interpolated wavelength when a new line is selected, and you can use `F' for the fitted wavelength in the same way as you can use `I'.

What this means in practice, is that with a good line list, you can identify enough lines to tie down the fit fairly well (it helps to do some lines at one end, then at the other, then work in), and then just respond `F' to each wavelength prompt. ARC will then use the line in the line list closest to the fitted wavelength. This simplifies the process considerably, but it is still under your control, and you can intervene if a new fit shows a considerable increase in RMS.

15.6. Moving about the spectrum and other commands

When you are asked to indicate a line using the cursor, you can hit -- instead of the space bar, which just selects a line -- any one of a number of keys that have specific functions.

For example, `N' moves you on to the Next portion of the arc. `B' moves you Back to the previous section. `M' Moves you to a specific X value. (Note that the value is not always in channels -- if the spectrum already has wavelength data associated with it, the X values will be in angstroms. This is sometimes a useful feature, sometimes an irritating one.)

This is how you move around from one section to another. `L' lets you change the Length of the portion displayed at any one time.

`D' deletes the identified line nearest the cursor. If you see the RMS for the fit shoot up, hitting `D' will delete the last line identified -- so long as you haven't moved the cursor since you selected it.

An important command is `Q', which Quits the identification process and moves on to the fitting and line editing stage. Note that this does not commit you to anything -- you can always come back to the interactive selection stage.

If the normal line centering algorithm cannot find a line, you can use `C' to force a line identification where the line center is determined by a center-of-gravity analysis. This is a time-consuming operation, and is not really recommended for normal use.

Strongly not recommended is the use of `E', which allows you to indicate the line center yourself on an Expanded plot. This command does nothing clever at all -- it just takes the position you indicate, which will usually be a very crude estimate of the line center.

Like most Figaro programs that involve key-selected options, ARC will respond to `H' or `?' by printing a list of the options it accepts.

15.7. Fitting and Editing -- The Menu Stage.

The `Q' command takes you to a more conventional stage where the fit is repeated and the results displayed. As soon as you hit `Q' and confirm that you indeed want to move to this next stage, a fit is performed to the lines identified so far and the results are displayed. For each line, the calculated and actual wavelengths are given, together with the discrepancy in angstroms, and the RMS for the fit that would be obtained if that line were deleted from the list of lines. Finally, the RMS for the current fit is displayed. Remember that the RMS is in terms of angstroms.

If the fit is bad, a look at the table of residuals output may help to indicate which identifications are at fault. The `RMS without this line' is a particularly useful figure when there are relatively few lines in the fit; it saves you the effort of making a tentative deletion and a refit to see if the result is really an improvement. With a large number of lines, the RMS is less dependent on any one line, and this figure becomes less useful.

You are given a number of options at this point, all of which are listed in a single prompt. This is described as the `menu' prompt. The options are as follows --

Fit -- Repeat the fit. This will show you the effects of any changes you have made, either by editing, or by use of the automatic search facility.

Disp -- Display the deviation of the fit from a linear fit. This shows the trend of the fit, and the errors in each line, and is a particularly valuable diagnostic. If your graphics device supports colour, those lines fitted automatically are shown in a different colour to those selected manually, and this can help to show if the automatic lines are really helping.

Order -- Change the order of the fit. A new fit is performed immediately.

Edit -- Delete or change the wavelength of one or more of the selected lines, without returning to the cursor selection. You have to specify the line(s) by line number, and are then prompted for the wavelength to use. A null wavelength will delete the line (and to get it back you will have to return to cursor selection).

Reselect -- Return to selection using the cursor.

Print -- Prints a copy of the fit (what ARLINES.LIS would look like if you were to exit now). Note that this is not quite the same as the display you get by doing a fit -- it does not include the `RMS if omitted' figures, for example. Eventually, it will be, but it was easier the way it is.

Auto -- Starting from your current fit and arc line list, ARC looks for additional lines in the arc at wavelengths given in the line list and adds any it finds to the identified line tables.

Xauto -- Deletes all the lines found by `Auto'.

Modify -- Explains the function of the Autofit parameters and allows you to change them.

Quit -- Start to exit ARC. See the section called `On the way out of ARC' for a description of what happens next, which is not very much.

Help -- (or ?) Display a summary of this information.

The first letter of each command is sufficient.

15.8. The Automatic Line-Finding Facility

When you select the `Auto' option in the menu, ARC tries to use the fit you have obtained already as a starting point and tries to find lines in the arc that match lines in the tables. The algorithm used is very simple, and is based on the principle that the automatic fit should not add lines that will make the fit significantly worse than do the lines you have already got -- and are presumably happy with. There are only two parameters (at present) involved and, really, only one is important.

ARC takes each pixel in the spectrum in turn. If that pixel is more than CHFACT times the current sigma value from any line already found, it uses that pixel as the starting point for a line search. This is exactly as if you had selected that pixel with the cursor during the interactive part of the process. If anything resembling a line can be found, it calculates its wavelength and looks in the line tables for a line close to that wavelength.

A line is accepted if the discrepancy between its calculated and tabulated wavelength is less than SIGFACT times the current RMS value. It is this that means that the criterion for accepting new lines is based on how their wavelength discrepancies compare with those for the lines that have already been accepted. SIGFACT is the more important parameter of the two. The default value of 3.0 means that the automatic search can make the overall RMS of the fit somewhat worse, but it will give the program a fair go at finding some lines. Setting SIGFACT to 1 or less, which you may do with the `Modify' menu option, ensures that the automatic search will not make the fit worse, but it will probably not find many lines either.

Just how best to use the automatic line finder is a matter of experience and, probably, opinion. It does, however, seem fair to say that the better the original fit, the more likely the automatic fit is to make correct rather than misleading identifications. The `Xauto' menu option, which causes all automatic line identifications to be deleted, at least means that you can experiment a little without doing irreparable damage to your fit.

One approach is to let the automatic fit loose, and then tidy up after it, deleting those lines that it found but that you don't like the look of. You can do this from the line list produced by the `Fit' menu option, using the `Edit' option to delete the lines affecting the fit the most. Lines found automatically are flagged in the list by a plus sign (and are shown in ARLINES.LIS with an (A) symbol).

An alternative, and quite a simple operation, is to return to the interactive selection (the `Reselect' menu option) and examine the lines found. The lines found by the Auto option are displayed with their wavelengths in parentheses, so it is fairly straightforward to run through the spectrum with the cursor, hitting `D' at every bracketed line that looks like nothing more than a noise bump.

15.9. On the way out of ARC

If you do not return to the interactive stage, you will be asked if you want to make use of this fit. If you reply `YES', ARC will generate a set of wavelength values for the spectrum and either create a new output spectrum with these values in the X data array, or simply put these values in the X array of the current arc spectrum. See the section on Wavelength Calibrations for details of how you use these values. If you are going on to IARC, you do not need to do this. If you are going on to SCRUNCH, you do need a spectrum with an array of X wavelengths.

Finally, you have the option of producing a hard plot showing the identifications you have made. Generally, the resolution of a Printronix printer is such that this plot is not much use -- it comes into its own with a higher resolution device such as a laser printer. You can also produce a hard copy of the dispersion curve -- the plot produced by the `Disp' menu option. Note that if you produce both plots, they will be produced as separate files, both of which will have to be output to the hard copy device.

15.10. Working with shifted spectra

If you identify one arc, and then move on to another arc which is essentially the same except for a linear shift, you can make use of the cross-correlation option (the XCORR keyword described earlier), or you can try the following sequence, which gives you a little more control over the way the shift is determined:

* Reply YES to the `Use results of previous fit?' prompt.

* Reply NO to the `Determine shift by cross-correlation?' prompt.

* The displays will now show the new arc, but with identifications that are offset by the amount of the shift. The wavelengths will not match the lines properly.

* Select a line that was identified in the previous fit. When prompted for the wavelength, respond with the correct wavelength for the line.

* Since this wavelength is that of a line already in the list of identified lines, ARC will not accept it. It will ask if you want to delete the previous identification. You do not, since that identification was correct. It will ask if you want to delete your new identification. You do not, since it is correct. Answer NO to both questions.

* This leaves ARC with only one alternative. If both are correct, the arc must be shifted. So it asks if it is to re-identify the lines, assuming the apparent shift. If you reply YES, it effectively takes all the lines in its list, shifts them over and repeats the centering analysis using the shifted position as the starting point. End of arc identification, in theory.

* In practice, especially if the shift is not perfectly linear, some lines may be missed. These are logged, so you can see if this is happening. One trick is to increase the sigma (use the `S' command) before forcing the re-identification. This gives the algorithm a little more scope. You can then reduce the sigma and force another re-fit, by again reselecting a known line. (This time the re-analysis will be assuming a shift of zero.) This trick is effectively that used by IARC when working on a set of spectra.