The Art of CCD Imaging.

Note added 2006:

I wrote the following in about 1996 when my CCD was a Cookbook 245 with just 378 × 242 pixels, so some of the comments may seem a little old fashioned now. I - and the rest of the world - have moved on; however, the general principles still hold true for any size CCD. Because "windows" programs have finally caught up with my own processing software (which was command line based - either under DOS or later Linux) I now use these instead of my own software.

I hate overprocessed images. While they can look spectacular on first viewing, I feel that the exaggerated colours, dark rings around stars and other artifacts usually visible on critical examination detract from the overall effect.

None of my images has been processed by fancy software; no fancy convolution filters have been applied; no colour corrections have been applied to the colour images. Instead, I have tried to process the images sensibly and scale them to show what is really there without adding anything extra.

While some software filters can be used to great effect, in general I don't believe that they are necessary. Artifacts from many filters and processing techniques make the resulting images un-real. I don't like them, and I don't (generally) use them. You can be the judge as to how well my images compare. Here I describe how I take the images and how I process them.

I'm not about to conduct a course in CCD imaging; there has been enough written about the subject without my adding to the noise. What I will say here is where my techniques differ from, or enhance, existing practice; or where I think more emphasis is needed.

Excellent articles on CCD imaging basics and basic image processing can be found in the (now defunct) magazine CCD Astronomy, especially articles in the Summer and Fall '94 issues; and Spring, Summer '95 and Winter '96.

Taking the Image


Focus is critical for any use of the telescope. I've seen people spend an hour focusing their telescope and CCD, and heard tales of even longer spells. I can't understand why when there is such an easy way. The test I use is based on the Hartmann Mask optical test, but it uses a mask with only 2 holes in it instead of many. The mask is just a piece of cardboard with two holes placed opposite each other at the periphery of the telescope diameter. If the telescope is out of focus with the mask in place, there will be two star spots visible on the CCD (or whatever imaging device is being used). When the telescope is in focus, the two spots will have merged into one. That's all there is to it. It takes literally seconds to focus my telescope.

(As an aside, I devised this focus technique from first principles after discussions with several observers on the difficulty of focusing a camera on a telescope. I then realised that it was the essence of the Hartmann test and have named it accordingly. The test has been independently discovered by many others but I've never seen it properly credited. I have, to my horror, seen a test mask being sold with a patent application number attached.)

But you can be even more scientific. I have an encoder attached to my focuser which allows me to accurately measure the focus position. My technique with the CCD is to take two frames, one on either side of focus, recording the encoder values for each image. I then centroid the images to work out their separations. Feeding these numbers plus encoder values into another program, it tells me where to set the encoder so that I am in focus. The procedure takes a little longer than just eye-balling the spots but gives one confidence that focus has been achieved. Here is a sample run.
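The arithmetic behind this is simple enough to sketch. Assuming the two-spot separation shrinks linearly to zero at focus, one measurement on either side of focus pins down the zero crossing (a Python illustration only, not the original command line program - the function name and units are assumptions):

```python
def focus_encoder(e1, sep1, e2, sep2):
    """Predict the in-focus encoder setting from two out-of-focus frames.

    e1, e2     -- focuser encoder readings, one on either side of focus
    sep1, sep2 -- measured two-spot separations (pixels) at e1 and e2

    The separation falls linearly to zero at focus, so the zero
    crossing is a weighted average of the two encoder values.
    """
    return e1 + sep1 * (e2 - e1) / (sep1 + sep2)

# symmetric case: equal separations put focus midway between the readings
print(focus_encoder(100, 10.0, 200, 10.0))  # -> 150.0
```

With the centroids measured to a fraction of a pixel, the predicted encoder setting is correspondingly precise.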

Note added 2006: Since I wrote the above, expensive focus devices with fancy motors and encoders have come onto the market. These, coupled with autofocus software, have improved the focus situation enormously.

The Hartmann mask technique I described above works extremely well, although the external mask can be a bit of a problem. This could be rectified by shrinking the mask and mounting it in either a separate slide or in the filter wheel; and the encoder is as accurate as is necessary.

Exposure time

Many people get a CCD and expect to be able to knock off an image in only a few seconds. Indeed you can, but it will be noisy and not as good as an image exposed for considerably longer. I follow two plans:
  1. expose for as long as possible - limited by bleeding from saturated stars
  2. take many exposures and add them to reduce noise
Summing lots of short exposures is the only way to get satisfactory results if there are bright stars in the field. This is often the case for galactic objects such as M42, Eta Carinae, M20, etc. By summing 20 or 40 or 100 short exposures you can produce excellent results. The resulting image will never be as good as one individual image exposed for longer, but in these situations there is little that can be done. For example, this shot of a section of the Eta Carinae nebula was made from 34 × 15 second exposures. It might have been better to do 100 × 10 second exposures, as even in 15 seconds the brightest stars are producing streaks.
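The noise gain from stacking is easy to demonstrate. In this NumPy sketch (simulated data, not real frames), averaging 34 short exposures cuts the pixel-to-pixel photon noise by roughly the square root of the number of frames:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0        # mean photons per pixel in one short exposure
n_frames = 34         # e.g. 34 x 15 second exposures

# simulate the photon (Poisson) noise in each short exposure
frames = rng.poisson(signal, size=(n_frames, 64, 64)).astype(float)

single = frames[0]
stacked = frames.mean(axis=0)

# the noise (std dev) drops by ~sqrt(34), about 5.8x; the signal is unchanged
print(round(single.std(), 1), round(stacked.std(), 1))
```

The same square-root law is why 100 × 10 second exposures would beat 34 × 15 seconds on noise, at the cost of more read-outs.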

But for the best results there is no substitute for long exposures. I have standardized on 4 minute exposures for several reasons, foremost being ease of guiding. Summing many such long exposures allows very faint stars to be recorded. This exposure of a section of the Vela SNR, NGC 2736, is the sum of 15 × 4 minute exposures and reaches stars as faint as b=22, almost to the photographic limit of professional sky survey plates taken with large Schmidt cameras. If you turn the brightness up on your monitor, you'll see some very faint nebulosity in the upper left corner. It is also just visible on the original sky survey films taken with the 1·2m UK Schmidt telescope.


I find that my mounting can only reliably track accurately enough for un-guided exposures of about 2 minutes duration - any longer and trailing ruins some of the exposures. I am therefore forced to guide if I want to make the long exposures necessary for the deepest images.

After lots of experimentation, I have settled on 4 minute exposures as my standard. This is long enough to get deep images, but short enough that guiding is easy. By doing lots of these 4 minute exposures one can build up very deep images without the guiding becoming too difficult.


Getting the best out of a CCD system requires some extra work beyond just taking an image of the object. While you don't have to do this extra work for a quick snapshot, to get the best from your system it must be done, and done well. The main thing to understand in order to get the most from these calibrations is that noise is often significant. The best way to overcome noise is to take many exposures and average them.

There are 3 calibrations that are used in CCD imaging.

  1. Bias frames
  2. Dark exposures
  3. Flat Fields
The previous references discuss the best way to deal with these so I'm not going to expand on them, except for these few points.
  1. I seldom bother separating the bias from the dark frame. I know that this is not the best way to do it, but the extra effort involved is not worth it for the one or two ADU improvement. The night sky contributes far more noise to the image.
  2. Flat fields are very important for the best results. But if they're not done properly then they will cause more harm to the image than if they're not done at all. You need lots of photons to ensure that noise is minimal in the images. I take lots of twilight sky flats at the start of an observing session - at least 5 and usually 10 to 20 - and median them together (to remove the stars) to create a master flat field. There is no ideal flat field source, but the twilight sky is the most readily available and it is generally good enough. I do not believe that "dome flats" are adequate, nor any other artificial contrivance. (However, it is possible to survive with this method of calibration provided there is no vignetting in the optical system and that the telescope is adequately baffled so that no spurious artifacts appear.)
  3. Flat fields are done for each filter.
  4. As my CCD stays on my imaging telescope for long periods, I only need to make a master flat field whenever something is changed. This may be many months.

Processing the Image

There are several stages in processing CCD frames.
  1. Bias and dark frame subtraction
  2. flat field division
  3. shift and add
  4. display manipulation
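The first two stages amount to one line of arithmetic per pixel. A minimal NumPy sketch (illustrative only; real software also handles dark scaling to matching exposure times, overscan levels and so on):

```python
import numpy as np

def calibrate(raw, master_dark, master_flat):
    """Basic CCD reduction: subtract the dark frame (which here includes
    the bias, matching the shortcut of not separating the two), then
    divide by the flat field.  The flat is normalized to a mean of 1 so
    the overall signal level is preserved."""
    flat = master_flat / master_flat.mean()
    return (raw - master_dark) / flat
```

With a good master dark and master flat, this one function takes a raw frame to a calibrated one ready for shift-and-add.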

Bias and Dark frames

The first two are straightforward and well covered in the standard literature. I will add that the drift subtract mode of the 245PLUS software makes these tasks (and all other tasks, too) significantly easier. Before drift subtract was implemented it was a tedious task to ascertain the bias level of each frame and add/subtract a constant in order to bring them to a common level. My continued requests to Richard Berry that each frame have an "overscan" added were finally answered by going one step further. Drift subtract performs the necessary first part of the image processing for you by reading an overscan from the CCD and then setting the level to a constant value. I was initially sceptical that this would be done correctly, but after a few tests I was convinced that it was being done right. I couldn't live without it now.

Flat fields

I've just discussed the taking of flat fields. I create a master flat field by median filtering 10 to 20 sky flats. A median filter in this context is different from the median filter in some software. The latter works on a single frame, the former on large numbers of frames. If you take your sky flats each on slightly different areas of the sky (at least a few arcseconds apart) then any stars visible in one frame won't fall on the same pixels in any other frame. The median filter is like a super average: it looks at each and every frame at once and rejects any pixels that significantly differ from all the others - thus removing any stars that are in the frames. The resulting frame can then be divided into the night's observations.
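As a sketch of that median combine (NumPy for illustration; the frame size and star value are made up), note how a star that lands on different pixels in each offset flat simply vanishes from the per-pixel median:

```python
import numpy as np

def master_flat(sky_flats):
    """Median-combine offset twilight flats into a master flat.
    Each flat is normalized to unit mean first, so the changing
    twilight brightness doesn't bias the median; stars are rejected
    because the telescope was nudged between exposures."""
    stack = np.array(sky_flats, dtype=float)
    stack /= stack.mean(axis=(1, 2), keepdims=True)
    return np.median(stack, axis=0)

# five flats, each with a 'star' on a different pixel
flats = []
for k in range(5):
    f = np.ones((32, 32))
    f[k, k] = 50.0            # a star, somewhere different each time
    flats.append(f)

m = master_flat(flats)
print(m.max() < 2.0)          # the stars have been rejected
```

The same routine, pointed at a stack of aligned object frames instead of flats, is what removes cosmic rays as described below.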

The same technique can be used to remove cosmic rays from long exposures, provided that you have several frames of the same field. It is also a way of flat fielding images without a flat field. This can be done in two ways.

  1. Median filter a whole night's observations to remove all the stars, thus creating a super flat field from the night sky. This is actually the best flat field of all, but fails for most amateurs for several reasons: they always bring a bright "fuzzy" to the centre of the field, so there is always something in the centre that won't median away; and the exposures aren't long enough to get sufficient counts from the sky to get above the noise of the system.
  2. Take several exposures of the one field, but shift each one by a few pixels from the last. Then shift each frame so that the stars again line up and median combine them. The resulting image will be a little "flatter" than if the images had just been summed; and all the cosmic rays and other small scale blemishes will be removed. However, it won't fix vignetting or other large scale effects.


I usually take many frames of the field I am imaging. After the frames are calibrated comes the task of assembling the final image. This requires finding the position of a star common to all frames and then shifting them so that they coincide. Finally, all the frames are summed (or "medianed" - see above) to create the finished image.

Some software packages can do the necessary shift-and-add automatically - or at least with minimum intervention - but I do it only semi-automatically. I manually view each image just to check that it has no defects, and then find the centroid of a star and write its coordinates to a file. Each new image I scan creates a new line in the file; all I have to do is position the cursor over the selected star and press a key. This file is fed into another program which generates a script to perform the shifting. For a dozen frames this is quite quick, and it does give me the chance to check each image for problems, but when I did the 240 frames for my comet movie it took over an hour and I was having thoughts of automating the procedure.
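The shifting itself is trivial once the star coordinates are on file. A Python sketch of the integer-pixel version (illustrative; it uses np.roll, which wraps pixels around the edges - harmless for small shifts if the border is cropped afterwards):

```python
import numpy as np

def shift_and_add(frames, star_xy):
    """Align frames on a common reference star and sum them.
    star_xy[i] is the measured (x, y) centroid of the same star in
    frames[i]; every frame is shifted so that star lands where it is
    in the first frame, then all frames are summed."""
    x0, y0 = star_xy[0]
    out = np.zeros_like(frames[0], dtype=float)
    for frame, (x, y) in zip(frames, star_xy):
        dx = int(round(x0 - x))
        dy = int(round(y0 - y))
        out += np.roll(frame, (dy, dx), axis=(0, 1))
    return out
```

Replacing the final sum with a per-pixel median gives the cosmic-ray rejection described earlier.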

There is one additional step that I now perform before the shift-and-add sequence which makes the final image considerably better. Let me explain the reasoning behind it. Astrometrists know that the position of a star within a group of pixels can be determined to roughly one-tenth of a pixel accuracy. This extra information can be used when performing the shift-and-add. But I can't do sub-pixel shifts, so I have to do it another way. The solution is to duplicate the pixels by some factor, thus expanding the image. For the ultimate accuracy, one could expand each image by a factor of 10, but that would make even the tiny TC245 chip create huge files. I've settled on just a doubling in size of each image as a workable compromise.
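The "growing" step is just pixel duplication, e.g. with NumPy (a sketch - any image processing package can do the same):

```python
import numpy as np

def grow(frame, factor=2):
    """Duplicate every pixel factor x factor times.  Integer shifts of
    the enlarged image then correspond to 1/factor-pixel shifts of the
    original, so centroids measured to a fraction of a pixel are not
    wasted in the shift-and-add."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

print(grow(np.array([[1, 2], [3, 4]])))
```

Doubling quadruples the file size, which is why a factor of 10 (100× the data) is impractical for anything but the smallest chips.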

Is it worth doing? Here is a before and after comparison. On the left is how the image looks when the 15 frames are "grown" before being summed; on the right is the same area but simply added as-is. (Equal care has been taken in creating both frames.) The left-hand image shows the stars a little less square, there is just a bit more resolution in the nebulosity and between double stars, and the faintest stars are just a little easier to see. I think it's well worth the effort.

But it may not work for you. My telescope and CCD combination has pixels that are about 4 arcseconds on the sky - that's too large for my liking but that is all I had available at the time. I devised this technique to attempt to overcome the ugly, square stars that such undersampling creates. If your imaging combination produces a well sampled star size, then there will not be so much to gain from this technique (although it is well worth trying!).

Some hints on getting the most from this technique.

My greatest disappointment is that I haven't kept all the original files from my imaging and so I can't go back and improve my earlier images. That's why my later images are bigger than the earlier ones.

Display Manipulation

Once the final image has been created, only then do I bother trying to display it at its best. I do this with a display program rather than by manipulating the data files themselves. My display program can only do two things:

There are other ways to manipulate the image to good effect and I'm constantly experimenting. I now have a technique which allows very non-linear scaling in an attempt to extract a more "photographic" look from the image.

The FITS files are in turn translated to TIFF format for general displaying once I've settled on the display levels and scaling. For the image page, the TIFFs have been converted to JPEGs by standard commercial software.

Colour work

M20 M42 M17
I've only recently made a filter wheel and so I've not yet done much colour work (so many objects, so little time!). If you're interested, the plans for my filter wheel are here. The above image of M20 is the very first tri-colour image I took (even before I had the filter wheel) - click on it for a larger view. I think it is a fine image for a first attempt. The other 2 images (M42 and M17) are later attempts, but the technique was unchanged. (More images are shown on my image page.) So how do I do it?


The filters I use are photometric B,V,R and not just any old bits of coloured glass. I believe that the passbands of such filters are better suited to tri-colour work than many others which are available. Considering that all you are after is a pretty picture, why does it matter? If the object you were imaging was a continuum source then it wouldn't, but the subject is typically an emission line object and the colours are defined by only a few lines - the strongest of which are Hydrogen alpha (at 656nm), forbidden Oxygen III (at 501nm) and Hydrogen beta (at 486nm). (I've simplified the argument a little here, but it's basically correct.) So if we aim to get these lines in the red, green and blue filters then our colours will come out about right (or at least how we expect to see them!). The difficult one to get right is the crossover between [OIII] and Hbeta between the green and blue filters. Many filters not intended for this purpose don't get it right and the resulting images require additional colour balancing to make them look more as expected. While the photometric filters aren't perfect, they are good enough - plus it means that I have photometric information should I want it.

My filters are made from Schott coloured glass. Interference filters would provide better throughput, but they don't usually achieve the same passbands. Schott filters DEFINE the photometric bands, after all. Below are the transmission characteristics of my B, V & R filters, whose recipe is given in the table below. Note the overlap of the filters and the complete wavelength coverage - the 3 colour receptors (cones) in the human eye respond in a similar way.

Transmission curves for B, V, R filters

Filter   glass #1    glass #2    glass #3
  U      1mm UG1     3mm BG40
  B      2mm GG385   1mm BG12    1mm BG39
  V      2mm GG495   2mm BG39
  R      2mm OG570   2mm KG3
  I      4mm RG9

Recipe for standard photometric filters.

The filters are a slight compromise in order to keep them simple and all the same thickness (a significant advantage when focusing), but they are still an excellent match to the standard photometric passbands. The I filter really only needs to be 3mm of RG9 but instead of adding a spacer of clear glass I just use a single 4mm thick piece - the difference is tiny. The U filter is not very efficient, but then again neither are most amateur CCDs. Some improvement can be made by using 2mm of the new Schott glass called S8612 instead of the 3mm BG40, but I've not got hold of any yet and so haven't tried it. U-band photometry by amateurs is probably not worth it except for very bright stars. (S8612 is only available from US Schott.) Other combinations of Schott glass are possible. An excellent article on photometric filters by Dr. Mike Bessell appeared in CCD Astronomy for Fall 1995.

Some more information on my filters including how I design and assemble them is presented here. This includes notes pertinent to users of the TC-245 chip (Cookbook 245).

Exposure times

As I've already said, you can do CCD work with short exposures and get a noisy result, or you can take longer exposures and get a nicer image. The problem is exacerbated with colour work as the CCD chip is far less sensitive in the blue than the red, thus requiring different exposures for each colour. Typically, the green exposure needs to be something like a factor of 2 longer than the red, and the blue more like 4 to 20 times longer than the red. It depends on the chip and the filters. If you get it right, then the image processing is much easier.

My exposures are in the ratio of 1:2:4 for red:green:blue. The M20 image had exposures of 3 × 2 minutes, 3 × 4 minutes, and 6 × 4 minutes for red, green and blue respectively. The times were limited by pixel bleeding else I would have exposed for longer. For fainter objects such as galaxies where my normal exposure time is 4 minutes I would anticipate exposures of 8 to 16 minutes.

One really needs to do some work to determine the correct ratio of exposures for each filter. However, there is plenty of tolerance as my exposure ratio was only an educated guess and it worked out well. The fact that no colour correction was necessary implies that I can't be far out in my guess.


The images are processed normally (dark subtracted, flat fielded etc.) and then the individual colour frames are shifted and added. Then the R,G,B frames are re-aligned so that they are all the same size and position. I then create TIFF files from each colour by carefully examining the histograms of each colour. The minimum value is chosen by looking at the sky values, while the maximum value is determined by experimentation. If the exposures for each colour are in the right ratio, it is then a simple matter to use the same range for each colour; e.g. the scaling for red might be 120 to 1000, for green 110 to 990, and for blue 100 to 980. Once I have the three RGB TIFF files, they are then imported into a commercial colour package (Adobe Photoshop in this case) where they are merged into a colour image. That's it. If done right, then no more needs to be done to the image.
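The per-channel scaling can be sketched in a few lines of NumPy (illustrative only; the 0-255 output range and uint8 type are assumptions for an 8-bit TIFF, and the red/green/blue arrays are hypothetical calibrated frames):

```python
import numpy as np

def scale_channel(img, lo, hi):
    """Linearly map [lo, hi] to 0..255 for export, clipping outside.
    lo comes from examining the sky background in the histogram;
    hi is found by experimentation."""
    out = (img - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

# with exposures in the right ratio, the same stretch width suits all three:
# rgb = np.dstack([scale_channel(red,   120, 1000),
#                  scale_channel(green, 110,  990),
#                  scale_channel(blue,  100,  980)])
```

Because the stretch width (hi minus lo) is the same for each colour, no further colour balancing should be needed after the merge.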

Advanced processing

Here I am adding a few more tricks I have learned since I originally wrote this document. Some of the techniques have been learned from other people (credit naturally given where due) while others are things that I've done myself. Some are not particularly advanced - perhaps just changes to earlier techniques - but I've included them here anyway.

LRGB colour processing

Here is a trick I learned from the web page of Kunihiko Okano (see also Al Kelly's page for more examples). The technique is deceptively simple yet extremely powerful. It uses the "lab colour" mode of Adobe Photoshop. This splits a normal RGB image into 2 colour components (called A and B) and a "lightness" frame (L) which contains the spatial information. One simply replaces this L frame by a high s/n normal, unfiltered image and then re-combine them. (One must of course correctly align the new frame with the colour image!) That's it!
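Without Photoshop, the spirit of the trick can be approximated in NumPy: keep each pixel's colour ratios from the RGB image but take its brightness from the unfiltered frame. This is only a rough stand-in for the real Lab-mode split (a proper version would convert through Lab colour space):

```python
import numpy as np

def lrgb(rgb, lum):
    """Rescale each pixel of a (noisier) colour image so its brightness
    matches the high-s/n luminance frame while preserving the colour
    ratios.  rgb: (h, w, 3) floats in 0..1; lum: (h, w) floats in 0..1,
    already aligned with the colour image."""
    old = rgb.mean(axis=2, keepdims=True)          # crude luminance proxy
    scale = lum[..., None] / np.maximum(old, 1e-6)
    return np.clip(rgb * scale, 0.0, 1.0)
```

The mean-of-channels luminance proxy is the crudest possible choice; it demonstrates the idea but not the colour fidelity of the Lab approach.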

This composite image should be enough to convince anybody how powerful this technique can be. Here is the original tri-colour image (which I think is quite good in itself) made from 3 × 4 minute exposures for the red component, 6 × 4 minutes for the green, and 11 × 4 minutes (clouds stopped me doing the 12th, and indeed interfered with some of the others) for the blue. The unfiltered frame was made a year before from 6 × 4 minute exposures. The resulting frame has a much "smoother" look while retaining the same colours.

With this technique it appears that you can get away with a lower resolution and lower s/n colour image as a base, and then add it to a good monochrome image. I've yet to fully explore all aspects of this technique - at the time of writing this I've only done the one frame, and that was on a "good" tri-colour image - so there may be hidden problems and trade-offs that I've not discovered. However, it looks like a very powerful technique to add to one's repertoire.

Bigger images - the mosaic

I had always planned to make mosaic images - the combining of many smaller images to make a much larger one - but until recently hadn't taken any and so I hadn't written the necessary software. Well now I have. Here is my first mosaic (done with my software). It is an image of Comet Hale-Bopp on January 2nd 1998. When I saw the comet through my telescope the night before I was amazed at how big it still was (I hadn't been out observing for a while) and wanted to capture it with my CCD. A mosaic was the only way without losing resolution.

It's pretty obvious how you go about making a mosaic - just find a nice, large object and take lots of images to fully cover the whole area. But you must take all the images in exactly the same manner. Identical exposure times, obviously, or else they will all look different when put together. And you must flat field them well or else the top edge of one frame won't match the bottom edge of the next frame. So careful processing is the first step to making a good mosaic.

The next step is putting the frames together. That may or may not be easy depending on what software you have available. PCVista doesn't do this, nor does anything else I have, so I wrote my own. It is the simplest of software and can only combine two frames (but repeated iterations obviously allow for any number to be added) and relies on me doing most of the work. I must find a star that is visible in both frames and provide a pair of X,Y coordinates for that star in each frame. My existing image display program does this already, so that isn't a problem. The program then reads in each frame and shifts them to create a new, larger image.

This is where the first real problem appears - how exactly to combine the two images where they overlap. You could simply overlay the second image on the first, replacing the first frame's data with the second's. This is the simple way and for many cases is actually the best. But if you haven't done your processing perfectly, the join may well be pretty obvious. The second way is to average the data where they overlap. This can make a slightly better join but has a side-effect that is also noticeable - the area of overlap has a smoother appearance due to the averaging of the data. This is of course the reason why we average many frames, but in this case it is a slightly unwanted effect. A better way is to blend the two areas using sophisticated statistical methods so that the join becomes invisible. This is much harder to do, but well worth it. My software (written in just over 2 hours) only offers the first two options.
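A middle road between the hard join and plain averaging is "feathering": weight the two frames with a linear ramp across the overlap so each dominates on its own side. A one-dimensional NumPy sketch (illustrative; real software would apply this row by row after the frames are registered):

```python
import numpy as np

def feather_blend(a, b, overlap):
    """Join two strips that share `overlap` pixels.  A linear ramp takes
    the weight of `a` from 1 to 0 across the overlap, so neither a hard
    edge nor a uniformly-averaged (smoother-looking) band appears."""
    w = np.linspace(1.0, 0.0, overlap)             # weight for strip a
    blended = w * a[-overlap:] + (1.0 - w) * b[:overlap]
    return np.concatenate([a[:-overlap], blended, b[overlap:]])
```

This is not the "sophisticated statistical" blend, but it hides the seam far better than either simple option for very little code.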

My first mosaic was the sum of 3 × 4 minute exposures with the telescope shifted (mostly) in declination between each shot. They were processed normally and then had constants added/subtracted to their backgrounds so that they were all the same. Unfortunately they were not flat fielded. I had the telescope and CCD camera in pieces that day doing some maintenance and they were not back together and aligned until too late to take twilight flats. (I might perhaps re-process these when I've taken a suitable flat field - or perhaps not, we'll see.) They were mosaiced (the verb - "to put together in a mosaic") using the "average" option in my software. The join between the first two frames (on the left - you can see where it should be by the slight shift in RA between the frames) is essentially invisible (I think) but the join to the third frame isn't, and shows the effects I discussed above. For a start I didn't shift the telescope far enough in declination (plus it shifted in RA unexpectedly) and so there is a large overlap region. This shows up the s/n change in the overlap region due to the averaging of the two frames. Also, the tracking wasn't perfect in the final frame and there is a slight trailing of the stars. If I truncated the third frame the difference between it and the second frame would actually be more noticeable; the averaging averages the errors and makes the trailing slightly less obvious. But the fact that the frames weren't flat fielded shows in that one edge of the join is invisible while the other isn't.

There is one other problem when mosaicing. If you cover large areas and shift the telescope (or lens) in RA as well as declination, you have to rotate the frames in order for them to match. Over small distances (and closer to the equator) the effect is less obvious, but trying to cover too large an area can be a problem. I did do a mosaic a long time ago - but I processed it with IRAF and so it doesn't really count - that shows this problem. This image of the LMC was taken with a 171mm lens and my CCD. I shifted the telescope it was mounted on in RA between the 3 regions (each region is the sum of 9 × 2 minute exposures through an R-band filter) and so I had to rotate the end two frames with respect to the centre one to make them match. Rotating by a small and arbitrary amount is not common in many software packages, which is why I reverted to IRAF to perform this operation. You can mosaic more than just star images - the moon is an obvious candidate. I expect to be doing more mosaics now that I have the software.

Here is one such mosaic that I've done - M42 with my 20-cm and H-alpha filter. 12 frames were aligned to make this mosaic. The 4 arcsecond pixels don't matter when the field is so large and so the resulting image looks almost photographic - except that the exposures were taken on 3 separate nights around full moon! Note that I carefully avoided the very bright stars that border this object; the diffraction spikes of one such star are visible at the top of the frame, but there is no tell-tale bleeding from it to give away that it isn't a photograph.


There is more to do besides just taking the images...

Record keeping

Usually seen as a chore not worth bothering about, keeping a log of what you've done is very important. Despite what you think, in a couple of years you won't remember what you did when a particular image was taken - and you may want to know. You need to keep a log book and fill it in at the time of each image, so that in the future you can be certain of the exact circumstances of that image.

I keep a fixed-format printed log and fill in the columns with a pencil while observing. The log has a header for the date, telescope and CCD format. There are columns for object name, UT start, exposure duration, filter, CCD temperature and file name. For my Cookbook camera I also want to know the reset value, LDC, automatic dark subtraction and gain settings. I frequently run multiple exposures on the same object and so a column records that number, too. Finally there is space for comments. Having specific columns reminds you what you need to fill in. I find that a necessary feature.

The Cookbook software also keeps a log. Whenever a file is saved, the time and date, exposure, filename and object name (provided you've entered it!) are written to a file. Other CCD systems will write the necessary information to the image header. This is OK, but I still find it a good idea to keep a separate log. If your system is run from a super-powerful computer running a multi-tasking OS then you would be able to keep the log electronically instead of the primitive pieces of paper I use. Just be sure to do it!

You need to be aware that the system clock will be used for the start time, and this may not be correct. I have a dedicated computer for imaging and I keep its clock set to UT, but it drifts, and despite having a piece of software that tries to compensate and correct the system clock, for accurate astrometry the time needs to be precise to the second. I modify the log file (or header) so that the times are as accurate as I can make them. For pretty picture taking this may not seem important, but when you discover a comet or asteroid on the frame you might change your mind...

I also keep a log of the reduction procedures I follow - things like background of each image, shifts between each image to align, even which software I use for the reduction.


If the image was worth taking, then it should be worth keeping. Making a backup of your work is the final item in the processing chain.

I keep the raw data apart from the processed data in a separate directory. It may be that you discover a better way to process your images and you need to start afresh with the raw data. Never delete the raw data!

When processing the data I name the top level directory with the date the data were taken (e.g. 19990918), and then two sub-directories for the data, RAW and PROCESSD; their use should be obvious. I archive this directory structure.

These days there is a simple choice of the medium for archiving - CDROM. Alas, this was not always the case. I tried keeping the images on floppy disc but that was too unwieldy - in the end I just didn't bother. As expected, I now regret not having done so as I can do a much better job of processing now than I did then. These days I have almost enough room on my hard disc to keep most images, but I write them out to 100Mb ZIP discs just in case. When I have enough data to write a CDROM, then they are all assembled into one place and the disc written. CDROMs have the great advantages of permanence (or at least a very long life), random access, compactness and reliability. (An easier route would be to keep the backup on a re-writeable CDROM until it was full, then transfer it to a write-once CDROM.)

I archive all my images in FITS format. While not offering a standard compressed format, for the Cookbook images that doesn't really matter. However, it may be a consideration for larger CCDs. What does matter is that software will exist in the future to read the files. If you keep them in some proprietary format, then you may find that in 20 years you can't read them! Unlikely? Perhaps, but FITS has a better chance of surviving than other formats. You have been warned.


By the time you accumulate a few dozen images, you'll want to be able to find them again. You'll want to know which objects have been observed, when each image was taken, how good it is and where it is archived. A database is the perfect answer to this problem.

I have a routine which strips pertinent information from each FITS header in a directory and makes it available for ingestion into a database. This forms the base information to which other details are added. The database can then be searched to find exactly what you are seeking.
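Stripping the headers needn't require much: FITS headers are plain 80-character ASCII "cards". A minimal Python sketch (illustrative only - a real archive script would use a FITS library, and this simple parser would trip over string values containing a '/'):

```python
def fits_header_cards(header_text):
    """Extract KEYWORD = value pairs from a raw FITS header.
    Cards are fixed 80-character records, terminated by an END card.
    Comments follow a '/' and quoted strings lose their quotes -
    enough to pull out object names, dates and exposure times."""
    cards = {}
    for i in range(0, len(header_text), 80):
        card = header_text[i:i + 80]
        key = card[:8].strip()
        if key == 'END':
            break
        if '=' in card[8:10]:          # value indicator lives in columns 9-10
            value = card[10:].split('/')[0].strip().strip("'").strip()
            cards[key] = value
    return cards

hdr = ("OBJECT  = 'M42     '           / target name".ljust(80)
       + "EXPTIME =                240.0 / seconds".ljust(80)
       + "END".ljust(80))
print(fits_header_cards(hdr))
```

One dictionary per file, dumped as a delimited line, is all a database needs for ingestion.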


I use several software packages during the acquisition and subsequent processing of images:

If you would like to download some of my software (or links to other stuff I use) then click here.

home back to images to astrophotos

Page last updated 2006/02/12
Steven Lee