Note added 2006:
I wrote the following in about 1996, when my CCD was a Cookbook 245 with just 378 × 242 pixels, so some of the comments may seem a little old-fashioned now. I - and the rest of the world - have moved on; however, the general principles still hold true for any size of CCD. Because "windows" programs have finally caught up with my own processing software (which was command-line based, either under DOS or later Linux), I now use these instead of my own software.
I hate overprocessed images. While they can look spectacular on first viewing, I feel that the exaggerated colours, dark rings around stars and other artifacts usually visible on critical examination detract from the overall effect.
None of my images has been processed by fancy software; no fancy convolution filters have been applied; no colour corrections have been applied to the colour images. Instead, I have tried to process the images sensibly and scale them to show what is really there without adding anything extra.
While some software filters can be used to great effect, in general I don't believe that they are necessary. Artifacts from many filters and processing techniques make the resulting images un-real. I don't like them, and I don't (generally) use them. You can be the judge as to how well my images compare. Here I describe how I take the images and how I process them.
I'm not about to conduct a course in CCD imaging; there has been enough written about the subject without my adding to the noise. What I will say here is where my techniques differ from, or enhance, existing practice; or where I think more emphasis is needed.
Excellent articles on CCD imaging basics and basic image processing can be found in the (now defunct) magazine CCD Astronomy, especially articles in the Summer and Fall '94 issues; and Spring, Summer '95 and Winter '96.
(As an aside, I devised this focus technique from first principles after discussions with several observers on the difficulty of focusing a camera on a telescope. I then realised that this was the essence of the Hartmann test and have named it accordingly. The test has been independently discovered by many others, but I've never seen it properly credited. I have, to my horror, seen a test mask being sold with a patent application number attached.)
But you can be even more scientific. I have an encoder attached to my focuser which allows me to measure the focus position accurately. My technique with the CCD is to take two frames, one on either side of focus, recording the encoder value for each image. I then centroid the spots to work out their separations. I feed these separations and encoder values into another program, which tells me where to set the encoder so that I am in focus. The procedure takes a little longer than just eye-balling the spots but gives one confidence that focus has been achieved. Here is a sample run.
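The arithmetic in that program is simple enough to sketch: near focus the Hartmann spot separation falls linearly to zero, so the in-focus encoder value follows from interpolating between the two measurements. This is an illustrative Python sketch of the calculation, not the actual program (the name and argument order are invented):

```python
def focus_position(enc_a, sep_a, enc_b, sep_b):
    """Estimate the in-focus encoder value from two Hartmann-mask
    frames taken on opposite sides of focus.

    sep_a and sep_b are the (unsigned) spot separations in pixels
    at encoder readings enc_a and enc_b.  The separation falls
    linearly to zero at focus, so focus divides the interval in
    the ratio sep_a : sep_b.
    """
    return enc_a + sep_a * (enc_b - enc_a) / (sep_a + sep_b)
```

With readings of 12 pixels at encoder 1000 and 6 pixels at encoder 1600, this puts focus at 1400, two-thirds of the way along the interval - closer to the frame with the smaller separation, as you would expect.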
Note added 2006:Since I wrote the above, expensive focus devices with fancy motors and encoders have come onto the market. These, coupled with autofocus software have improved the focus situation enormously.
The Hartmann mask technique I described above works extremely well, although the external mask can be a bit of a problem. This could be rectified by shrinking the mask and mounting it either in a separate slide or in the filter wheel. The encoder is as accurate as is necessary.
But for the best results there is no substitute for long exposures. I have standardized on 4 minute exposures for several reasons, foremost being ease of guiding. Summing many such exposures allows very faint stars to be recorded. This exposure of a section of the Vela SNR, NGC 2736, is the sum of 15, 4 minute exposures and reaches stars as faint as b=22, almost to the photographic limit of professional sky survey plates taken with large Schmidt cameras. If you turn the brightness up on your monitor, you'll see some very faint nebulosity in the upper left corner. It is also just visible on the original sky survey films taken with the 1·2m UK Schmidt telescope.
After lots of experimentation, I have settled on 4 minute exposures as my standard. This is long enough to get deep images, but short enough that guiding is easy. By taking lots of these 4 minute exposures one can build up very deep images from frames that are individually easy to guide.
There are 3 calibrations used in CCD imaging: bias subtraction, dark subtraction and flat fielding.
The same technique can be used to remove cosmic rays from long exposures, provided that you have several frames of the same field. It is also a way of flat fielding images without a flat field. This can be done in two ways.
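Median combining is the standard way to do the cosmic-ray removal just mentioned: a hit lands on one frame only, so the pixel-by-pixel median of three or more registered frames rejects it. A minimal NumPy sketch of the idea (illustrative, not the software I actually used):

```python
import numpy as np

def median_combine(frames):
    """Pixel-by-pixel median of a list of registered frames.
    A cosmic-ray hit affects one frame only, so with three or
    more frames the median rejects it, while stars and sky
    (present in every frame) survive."""
    return np.median(np.stack(frames), axis=0)
```

The price is a slightly noisier result than a straight average of the same frames, which is why the median is usually reserved for rejection rather than final stacking.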
Some software packages can do the necessary shift-and-add automatically - or at least with minimal intervention - but I do it only semi-automatically. I manually view each image just to check that it has no defects, then find the centroid of a star and write its coordinates to a file. Each new image I scan creates a new line in the file; all I have to do is position the cursor over the selected star and press a key. This file is fed into another program which generates a script to perform the shifting. For a dozen frames this is quite quick, and it does give me the chance to check each image for problems, but when I did the 240 frames for my comet movie it took over an hour and I started having thoughts of automating the procedure.
There is one additional step that I now perform before the shift-and-add sequence which makes the final image considerably better. Let me explain the reasoning behind it. Astrometrists know that the position of a star within a group of pixels can be determined to roughly one-tenth of a pixel accuracy. This extra information can be used when performing the shift-and-add. But I can't do sub-pixel shifts, so I have to do it another way. The solution is to duplicate the pixels by some factor, thus expanding the image. For the ultimate accuracy one could expand each image by a factor of 10, but that would make even the tiny TC245 chip create huge files. I've settled on just a doubling in size of each image as a workable compromise.
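The doubling can be done by simple pixel duplication, after which the half-pixel part of each measured offset becomes a whole-pixel shift on the grown grid. Here is an illustrative NumPy sketch of the two steps, not my original DOS software (note that np.roll wraps pixels around the edges - a simplification that real code would crop away):

```python
import numpy as np

def grow(image, factor=2):
    """Duplicate each pixel factor x factor times.  On the doubled
    grid a whole-pixel shift corresponds to a half-pixel shift on
    the original image, so centroid offsets measured to a fraction
    of a pixel can be honoured with integer shifts."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

def shift_and_add(frames, offsets):
    """Sum frames after shifting each by its integer (dy, dx)
    offset, taken from the reference star's centroid in each frame.
    np.roll wraps at the edges; real code would pad or crop."""
    total = np.zeros(frames[0].shape, dtype=float)
    for frame, (dy, dx) in zip(frames, offsets):
        total += np.roll(frame, (dy, dx), axis=(0, 1))
    return total
```

Each frame would be grown first, then shifted by its rounded, doubled offset and added to the running sum.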
Is it worth doing? Here is a before and after comparison. On the left is how the image looks when the 15 frames are "grown" before being summed; on the right is the same area but simply added as-is. (Equal care has been taken in creating both frames.) The left-hand image shows the stars a little less square, there is just a bit more resolution in the nebulosity and between double stars, and the faintest stars are just a little easier to see. I think it's well worth the effort.
But it may not work for you. My telescope and CCD combination has pixels that are about 4 arcseconds on the sky - that's too large for my liking, but it is all I had available at the time. I devised this technique to attempt to overcome the ugly, square stars that such undersampling creates. If your imaging combination produces well-sampled star images, then there will not be as much to gain from this technique (although it is still well worth trying!).
Some hints on getting the most from this technique.
There are other ways to manipulate the image to good effect and I'm constantly experimenting. I now have a technique which allows very non-linear scaling in an attempt to extract a more "photographic" look from the image.
The FITS files are in turn translated to TIFF format for general displaying once I've settled on the display levels and scaling. For the image page, the TIFFs have been converted to JPEGs by standard commercial software.
My filters are made from Schott coloured glass. Interference filters would provide better throughput, but they don't usually achieve the same passbands. Schott filters DEFINE the photometric bands, after all. Below are the transmission characteristics of my B, V & R filters, whose recipes are given in the table below. Note the overlap of the filters and the complete wavelength coverage - the 3 colour receptors (cones) in the human eye respond in a similar way.
|U||1mm UG1||3mm BG40|
|B||2mm GG385||1mm BG12||1mm BG39|
|V||2mm GG495||2mm BG39|
|R||2mm OG570||2mm KG3|
The filters are a slight compromise in order to keep them simple and all the same thickness (a significant advantage when focusing), but they are still an excellent match to the standard photometric passbands. The I filter really only needs to be 3mm of RG9, but instead of adding a spacer of clear glass I just use a single 4mm thick piece - the difference is tiny. The U filter is not very efficient, but then again neither are most amateur CCDs. Some improvement can be made by using 2mm of the new Schott glass called S8612 instead of the 3mm BG40, but I've not got hold of any yet and so haven't tried it. U-band photometry by amateurs is probably not worth it except for very bright stars. (S8612 is only available from US Schott.) Other combinations of Schott glass are possible. An excellent article on photometric filters by Dr. Mike Bessell appeared in CCD Astronomy for Fall 1995.
Some more information on my filters including how I design and assemble them is presented here. This includes notes pertinent to users of the TC-245 chip (Cookbook 245).
My exposures are in the ratio of 1:2:4 for red:green:blue. The M20 image had exposures of 3 × 2 minutes, 3 × 4 minutes, and 6 × 4 minutes for red, green and blue respectively. The times were limited by pixel bleeding else I would have exposed for longer. For fainter objects such as galaxies where my normal exposure time is 4 minutes I would anticipate exposures of 8 to 16 minutes.
One really needs to do some work to determine the correct ratio of exposures for each filter. However, there is plenty of tolerance as my exposure ratio was only an educated guess and it worked out well. The fact that no colour correction was necessary implies that I can't be far out in my guess.
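The 1:2:4 ratio makes planning a colour session simple arithmetic. A trivial illustration (the function is mine, for this sketch only):

```python
def colour_exposures(red_minutes):
    """Scale a chosen red exposure by the 1:2:4 red:green:blue
    ratio discussed in the text."""
    return {"R": red_minutes,
            "G": 2 * red_minutes,
            "B": 4 * red_minutes}
```

The M20 totals above (6, 12 and 24 minutes for red, green and blue) follow exactly this ratio.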
This composite image should be enough to convince anybody how powerful this technique can be. Here is the original tri-colour image (which I think is quite good in itself) made from 3 × 4 minute exposures for the red component, 6 × 4 minutes for the green, and 11 × 4 minutes (clouds stopped me doing the 12th, and indeed interfered with some of the others) for the blue. The unfiltered frame was made a year before from 6 × 4 minute exposures. The resulting frame has a much "smoother" look while retaining the same colours.
With this technique it appears that you can get away with a lower resolution and lower s/n colour image as a base, and then add it to a good monochrome image. I've yet to fully explore all aspects of this technique - at the time of writing this I've only done the one frame, and that was on a "good" tri-colour image - so there may be hidden problems and trade-offs that I've not discovered. However, it looks like a very powerful technique to add to one's repertoire.
It's pretty obvious how you go about making a mosaic - just find a nice, large object and take lots of images to fully cover the whole area. But you must take all the images in exactly the same manner. Identical exposure times, obviously, or else they will all look different when put together. And you must flat field them well or else the top edge of one frame won't match the bottom edge of the next frame. So careful processing is the first step to making a good mosaic.
The next step is putting the frames together. That may or may not be easy depending on what software you have available. PCVista doesn't do this, nor does anything else I have, so I wrote my own. It is the simplest of software: it can only combine two frames (though repeated iterations obviously allow any number to be added) and relies on me doing most of the work. I must find a star that is visible in both frames and provide a pair of X,Y coordinates for that star in each frame. My existing image display program does this already, so that isn't a problem. The program then reads in each frame and shifts them to create a new, larger image.
This is where the first real problem appears - exactly how to combine the two images where they overlap. You could simply overlay the second image on the first, replacing the first frame's data with the second's. This is the simple way and in many cases is actually the best, but if you haven't done your processing perfectly, the join may well be pretty obvious. The second way is to average the data where they overlap. This can make a slightly better join but has a side-effect that is also noticeable: the area of overlap has a smoother appearance due to the averaging of the data. This is of course the reason why we average many frames, but in this case it is a slightly unwanted effect. A better way is to blend the two areas using more sophisticated statistics so that the join becomes invisible. This is much harder to do, but well worth it. My software (written in just over 2 hours) only offers the first two options.
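The first two options are easy to sketch. Given the offset of the second frame derived from the shared star, place both frames on a larger canvas and resolve the overlap either by replacement or by averaging. This is an illustrative NumPy sketch, not my 2-hour program, and for simplicity it assumes the second frame sits down and to the right (non-negative offsets):

```python
import numpy as np

def mosaic_pair(a, b, dy, dx, mode="overlay"):
    """Combine two frames into one larger image.  (dy, dx) is the
    offset of frame b relative to frame a, derived from the pixel
    coordinates of a star visible in both; both offsets are assumed
    non-negative.  In the overlap, "overlay" lets frame b replace
    frame a, while "average" takes the mean of the two."""
    ny = max(a.shape[0], dy + b.shape[0])
    nx = max(a.shape[1], dx + b.shape[1])
    canvas = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    canvas[:a.shape[0], :a.shape[1]] = a
    count[:a.shape[0], :a.shape[1]] = 1
    bs = (slice(dy, dy + b.shape[0]), slice(dx, dx + b.shape[1]))
    if mode == "overlay":
        canvas[bs] = b
        count[bs] = 1
    else:                     # "average"
        canvas[bs] += b
        count[bs] += 1
    return canvas / np.maximum(count, 1)  # avoid dividing empty corners by 0
```

A feathered blend - ramping the weights smoothly across the overlap instead of using a constant 50/50 average - is the natural next step towards the invisible join described above.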
My first mosaic was the sum of 3, 4 minute exposures with the telescope shifted (mostly) in declination between each shot. They were processed normally and then had constants added to or subtracted from their backgrounds so that they all matched. Unfortunately they were not flat fielded: I had the telescope and CCD camera in pieces that day doing some maintenance, and they were not back together and aligned until too late to take twilight flats. (I might perhaps re-process these when I've taken a suitable flat field - or perhaps not, we'll see.) They were mosaiced (the verb - "to put together in a mosaic") using the "average" option in my software. The join between the first two frames (on the left - you can see where it should be by the slight shift in RA between the frames) is essentially invisible (I think), but the third frame isn't and shows the effects I discussed above. For a start, I didn't shift the telescope far enough in declination (plus it shifted in RA unexpectedly) and so there is a large overlap region, which shows up the s/n change due to the averaging of the two frames. In addition, the tracking wasn't perfect in the final frame and there is a slight trailing of the stars. Had I truncated the third frame, the difference between it and the second frame would actually be more noticeable: the averaging averages the errors and makes the trailing slightly less obvious. And the fact that the frames weren't flat fielded shows in that one edge of the join is invisible but the other one isn't.
There is one other problem when mosaicing. If you cover large areas and shift the telescope (or lens) in RA as well as declination, you have to rotate the frames in order for them to match. Over small distances (and closer to the equator) the effect is less obvious, but trying to cover too large an area can be a problem. I did do a mosaic a long time ago - but I processed it with IRAF and so it doesn't really count - that shows this problem. This image of the LMC was taken with a 171mm lens and my CCD. I shifted the telescope it was mounted on in RA between the 3 regions (each region is the sum of 9, 2 minute exposures through an R-band filter) and so I had to rotate the end two frames with respect to the centre one to make them match. Rotating by a small, arbitrary amount is not common in many software packages, which is why I reverted to IRAF to perform this operation. You can mosaic more than just star images - the moon is an obvious candidate. I expect to be doing more mosaics now that I have the software.
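For anyone writing their own, rotation by an arbitrary angle is just inverse mapping with bilinear interpolation: for each output pixel, ask where it came from in the source frame and interpolate there. A self-contained NumPy sketch (illustrative only, not the IRAF routine I used):

```python
import numpy as np

def rotate_frame(image, degrees):
    """Rotate an image about its centre by an arbitrary angle,
    using inverse mapping with bilinear interpolation.  Output
    pixels that map outside the source frame come back as 0."""
    theta = np.radians(degrees)
    ny, nx = image.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    yy, xx = np.mgrid[0:ny, 0:nx]
    # inverse rotation: source coordinates for each output pixel
    ys = cy + (yy - cy) * np.cos(theta) - (xx - cx) * np.sin(theta)
    xs = cx + (yy - cy) * np.sin(theta) + (xx - cx) * np.cos(theta)
    y0 = np.clip(np.floor(ys).astype(int), 0, ny - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, nx - 2)
    fy, fx = ys - y0, xs - x0
    out = (image[y0, x0] * (1 - fy) * (1 - fx) +
           image[y0 + 1, x0] * fy * (1 - fx) +
           image[y0, x0 + 1] * (1 - fy) * fx +
           image[y0 + 1, x0 + 1] * fy * fx)
    inside = (ys >= 0) & (ys <= ny - 1) & (xs >= 0) & (xs <= nx - 1)
    return np.where(inside, out, 0.0)
```

The interpolation slightly smooths the frame, which is one more reason to keep the rotated seams away from anything interesting.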
Here is one such mosaic that I've done - M42 with my 20-cm and H-alpha filter. 12 frames were aligned to make this mosaic. The 4 arcsecond pixels don't matter when the field is so large, and so the resulting image looks almost photographic - except that the exposures were taken on 3 separate nights around full moon! Note that I carefully avoided the very bright stars that border this object; the diffraction spikes of one such star are visible at the top of the frame, but there is no tell-tale bleeding from it to give away that it isn't a photograph.
There is more to do besides just taking the images...
Usually seen as a chore not worth bothering about, keeping a log of what you've done is very important. Despite what you may think now, in a couple of years you won't remember what you did when a particular image was taken - and you may want to know. You need to keep a log book and fill it in at the time of each image, so that in the future you can be certain of the exact circumstances of that image.
I keep a fixed-format printed log and fill in the columns with a pencil while observing. The log has a header for the date, telescope and CCD format. There are columns for object name, UT start, exposure duration, filter, CCD temperature and file name. For my Cookbook camera I also want to know the reset value, LDC, automatic dark subtraction and gain settings. I frequently run multiple exposures on the same object and so a column records that number, too. Finally there is space for comments. Having specific columns reminds you what you need to fill in. I find that a necessary feature.
The Cookbook software also keeps a log. Whenever a file is saved, the time and date, exposure, filename and object name (provided you've entered it!) are written to a file. Other CCD systems will write the necessary information to the image header. This is OK, but I still find it a good idea to keep a separate log. If your system is run from a super-powerful computer running a multi-tasking OS then you would be able to keep the log electronically instead of the primitive pieces of paper I use. Just be sure to do it!
You need to be aware that the system clock will be used for the start time, and this may not be correct. I have a dedicated computer for imaging and keep its clock set to UT, but it drifts; even with a piece of software that tries to compensate and correct the system clock, the times are not reliably precise to the second, which is what accurate astrometry requires. I therefore modify the log file (or header) so that the times are as accurate as I can make them. For pretty picture taking this may not seem important, but when you discover a comet or asteroid on the frame you might change your mind...
I also keep a log of the reduction procedures I follow - things like background of each image, shifts between each image to align, even which software I use for the reduction.
If the image was worth taking, then it should be worth keeping. Making a backup of your work is the final item in the processing chain.
I keep the raw data apart from the processed data in a separate directory. It may be that you discover a better way to process your images and you need to start afresh with the raw data. Never delete the raw data!
When processing the data I name the top level directory with the date the data were taken (e.g. 19990918), with two sub-directories for the data, RAW and PROCESSD; their use should be obvious. I archive this directory structure.
These days there is a simple choice of the medium for archiving - CDROM. Alas, this was not always the case. I tried keeping the images on floppy disc but that was too unwieldy - in the end I just didn't bother. As expected, I now regret not having done so as I can do a much better job of processing now than I did then. These days I have almost enough room on my hard disc to keep most images, but I write them out to 100Mb ZIP discs just in case. When I have enough data to write a CDROM, then they are all assembled into one place and the disc written. CDROMs have the great advantages of permanence (or at least a very long life), random access, compactness and reliability. (An easier route would be to keep the backup on a re-writeable CDROM until it was full, then transfer it to a write-once CDROM.)
I archive all my images in FITS format. While FITS does not offer a standard compressed format, for the Cookbook images that doesn't really matter; it may, however, be a consideration for larger CCDs. What does matter is that software will exist in the future to read the files. If you keep them in some proprietary format, then you may find that in 20 years you can't read them! Unlikely? Perhaps, but FITS has a better chance of surviving than other formats. You have been warned.
By the time you accumulate a few dozen images, you'll want to be able to find them again. You'll want to know which objects have been observed, when each image was taken, how good it is and where it is archived. A database is the perfect answer to this problem.
I have a routine which strips pertinent information from each FITS header in a directory and makes it available for ingestion into a database. This forms the base information to which other details are added. The database can then be searched to find exactly what you are seeking.
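The idea is easy to sketch in pure Python: FITS primary headers are 80-character cards packed into 2880-byte blocks, so pulling a few keywords into a CSV for database ingestion takes only a few lines. (An illustrative sketch, not my actual routine; the keyword list is an example, and the value parsing is naive - a "/" inside a quoted string would be mistaken for a comment.)

```python
import csv
import glob

def read_fits_header(path):
    """Read the primary FITS header: 80-character cards packed
    into 2880-byte blocks, terminated by an END card."""
    cards = {}
    with open(path, "rb") as f:
        while True:
            block = f.read(2880)
            if not block:          # malformed file with no END card
                return cards
            for i in range(0, len(block), 80):
                card = block[i:i + 80].decode("ascii", "replace")
                if card.startswith("END"):
                    return cards
                if card[8:10] == "= ":
                    key = card[:8].strip()
                    # value sits before any "/" comment; strip quotes
                    value = card[10:].split("/")[0].strip().strip("'").strip()
                    cards[key] = value

def index_directory(pattern, out_csv,
                    keywords=("OBJECT", "DATE-OBS", "EXPTIME", "FILTER")):
    """Write one CSV row per FITS file matching pattern - the base
    information for ingestion into a database."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(("filename",) + tuple(keywords))
        for name in sorted(glob.glob(pattern)):
            header = read_fits_header(name)
            writer.writerow([name] + [header.get(k, "") for k in keywords])
```

The resulting CSV loads straight into any database or spreadsheet, where the extra details (quality grades, archive location) can be added.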
I use several software packages during the acquisition and subsequent processing of images:
Page last updated 2006/02/12