Deconvolution with PixInsight applied to humble images

OK, let’s face it: processing astroimages is a complex world, and some of the available techniques are really hard to master. But who said it was going to be easy?

One of these techniques, maybe one of the most difficult to use but also one of the most powerful, is deconvolution.

There are a lot of wonderful tutorials about deconvolution out there on the internet. In fact, most of us who struggle to improve have learnt from them.

Sometimes, these tutorials represent the state of the art. Accordingly, they work with awesome, high-quality sets of images. Some of these images are produced with top-end equipment (even professional equipment, as is the case with the extraordinary tutorials created by the PixInsight development team).

Deconvolution needs two things. The first is image quality: the improvement this technique can deliver depends on the signal-to-noise ratio (SNR). Deconvolution doesn’t like noise, so it’s hard to get the most out of it with noisy images.

The second thing deconvolution needs is patience. A lot of it. There are so many parameters that using this technique takes practice and adaptation to each situation (that is, there is no single deconvolution process that works for all images).

When I decided to post this article, I had in mind my own learning process, which, by the way, is still ongoing, and how much I would have appreciated examples with, let’s say, “mundane”, humble images (I mean, images with all the classic defects of a newbie: noise, limited guiding, etc.).

My images are precisely that. I haven’t mastered the acquisition process, and my equipment is not the best out there. To begin with, my guiding problems limit the maximum exposures I can use. All in all, my images are a long way from those used in the above-mentioned tutorials.

OK, so let’s begin.

First of all, we must understand what deconvolution is about.

If we think about the imaging process, there are many things that interfere with the incoming light and affect the images. Under ideal circumstances (which don’t exist at all!), a point of light would be imaged as a perfectly diffracted point (one showing the classic Airy pattern). In the real world (welcome!), that “point” is in fact spread out by things such as air turbulence, vibrations, bad guiding, and other nasty critters.

If we could just measure the distorting effect all these things have, we could somehow “subtract” it and generate perfect pictures. Of course, the problem is that we have no way of doing that, as the effects are many and they change by the second.

Deconvolution uses algorithms that try to reverse all those bad effects. It does so by estimating how the theoretical, perfectly diffracted scenario has been transformed into the real one; in other words, by estimating the point spread function (PSF) that smears each point of light. If we look at the shape of the stars in our image, we’ll see their light somehow spread, if not showing more severe deviations due to seeing or bad guiding: these are the effects we want to get rid of, and deconvolution will try to remove them through an iterative method.
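
To make the idea concrete, here is a minimal Python sketch of the classic Richardson-Lucy scheme, the kind of iterative algorithm deconvolution tools are built on. The Gaussian PSF is an assumption for illustration only; real seeing profiles are messier, and PixInsight’s regularized implementation is far more sophisticated:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=15, std=2.0):
    """A hypothetical PSF: a circular Gaussian (real seeing is messier)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * std**2))
    return psf / psf.sum()

def richardson_lucy(observed, psf, iterations=30):
    """Classic Richardson-Lucy: iteratively refine an estimate of the
    'true' image so that, once blurred by the PSF, it matches the data."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.clip(blurred, 1e-12, None)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```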

Deconvolution can be very useful to reveal delicate details in galaxies, for example.

The first important thing to take into account when working with deconvolution is that the image has to be linear. Following the explanation above, the blur is modelled as a convolution of the linear data recorded by the sensor, so it makes no sense to deconvolve an image that has been altered non-linearly. Stretching the image histogram means altering it exactly that way, so deconvolution must be applied BEFORE other processing techniques. Only techniques that keep the image linear are acceptable before deconvolution (dark and flat calibration, or even background neutralization, are OK).
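
A quick numerical way to see why linearity matters: convolution does not commute with a non-linear stretch, so once the histogram has been stretched the PSF no longer describes the data. A toy demonstration (all values hypothetical):

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
image = rng.random((64, 64))          # stand-in for linear sensor data
psf = np.ones((5, 5)) / 25.0          # toy box blur

stretch = np.sqrt                     # any non-linear histogram stretch

a = stretch(fftconvolve(image, psf, mode="same"))  # blur, then stretch
b = fftconvolve(stretch(image), psf, mode="same")  # stretch, then blur
print(np.abs(a - b).max())            # clearly non-zero: the model breaks
```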

For our example, I will use a cropped section of M101. This section contains the galaxy core and the inner parts of the spiral structure. The imaging set consists of LRGB subs taken with an 8″ Meade LX200 and a focal reducer delivering an f/7.7 system, plus a QSI583-WSG CCD with Astrodon filters. Guiding was done with a DSI Pro CCD. The L subs (4 of them) are binned 2×2, while the RGB subs (3 per channel) are binned 3×3 to save exposure time. Exposures are 300 seconds each.

This is the cropped section, after registration and stacking. No histogram stretch has been applied so far; only PixInsight’s ScreenTransferFunction has been used to display a meaningful range of luminosity (this function does not affect the actual image data):

M101-core-before-deconvolution1

I promised a noisy image, didn’t I?

As you can see, the stars are bloated, and one can barely see any detail in the core or the spiral arms of the galaxy. So we’ll try deconvolution on it. Don’t expect miracles: deconvolution cannot turn a very bad image into a good one.

One of the first things to do is to choose the area we want to improve with deconvolution and protect the rest of the image. Deconvolution can lead to unwanted results (artifacts and noise) if applied to the background. We want to be sure that it only affects the bright parts of the image, those with higher SNR.

In PixInsight, we use masks for this purpose. There are many ways to generate masks, and one of the easiest is to use the Lightness component. Once extracted from the image, we stretch this Lightness component until we get what we want:

M101-core-luminance-mask

Think of it this way: the image can be an excellent mask for itself if we get rid of the noise in the background and turn it “black”. That’s why we apply an aggressive stretch. Everything interesting in the image (the light) is preserved.
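
In code terms, the idea amounts to something like this rough sketch (the clip levels are hypothetical; in practice you tune the stretch by eye on your own data):

```python
import numpy as np

def lightness_to_mask(lightness, black=0.02, white=0.25):
    """Crush everything below `black` to 0 (background goes black) and
    rescale so that `white` and above map to 1 (the light is preserved)."""
    mask = (lightness - black) / (white - black)
    return np.clip(mask, 0.0, 1.0)
```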

We drag this mask to the lateral bar of our M101 image. This is how PixInsight applies masks.

M101-with-mask-applied

PixInsight shows the area that will be protected in red. As you can see, we are protecting the whole background, and some parts of the spiral arms are partially protected. They contain a lot of noise, and we don’t want deconvolution to feast on them!
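
Conceptually, an applied mask turns any process into a per-pixel blend between the processed result and the original pixels. A minimal sketch of that behaviour (not PixInsight’s actual internals):

```python
def blend_with_mask(processed, original, mask):
    """Where the mask is white (1) the processed result goes through;
    where it is black (0) the original pixel is kept, i.e. protected."""
    return mask * processed + (1.0 - mask) * original
```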

We can disable the red overlay by deselecting the “show mask” feature (right-click on the image and find this option in the popup menu). The mask will still be there, though, just as we want.

OK, now we have to face one of the biggest difficulties of the whole process. Deconvolution usually creates nasty ringing artifacts around stars and other bright spots. The deconvolution algorithms produce them wherever they find an abrupt change in luminosity:

M101-with-ringing

Look at those ugly rings around the brightest stars!

Fortunately, deconvolution in PixInsight offers a deringing feature, which can use a deringing mask. This mask is intended to protect the stars so the deconvolution algorithms can’t damage them.

But it’s not that easy. In my experience, building a good star mask is not straightforward and requires a trial-and-error approach.

We use the star mask generator in PixInsight (the StarMask process), applied to the Lightness component we created before:

Stars-Mask-Process

Some parameters deserve comment. The noise threshold prevents noise from being treated as a “star”. By increasing this parameter, one can focus only on stars (set it too high, though, and you’ll lose the dimmest ones).

Structure growth tells the process how big the mask for each star must be. Increasing it protects a larger region around the star. Of course, we want a balance here: we want to protect just what we need, and no more.

Smoothness generates a smooth transition along the edge of each star’s mask. The sketch below illustrates the general role of these three parameters.
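
Here is a toy analogue of those three parameters, just to fix the intuition; it is emphatically not StarMask’s real algorithm, and all the default values are hypothetical:

```python
import numpy as np
from scipy import ndimage

def toy_star_mask(lightness, noise_threshold=0.1, growth=3, smoothness=2.0):
    stars = lightness > noise_threshold                         # noise threshold
    grown = ndimage.binary_dilation(stars, iterations=growth)   # structure growth
    return ndimage.gaussian_filter(grown.astype(float), smoothness)  # smoothness
```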

One of the biggest difficulties in creating the star mask for my M101 image was the bright stars sitting on top of the galaxy. The masking algorithm didn’t handle them well: at some settings it simply didn’t identify them as stars, since they’re surrounded by a bright background (the galaxy). Fortunately, the mask preprocessing parameters came to my rescue.

These parameters modify the histogram before the star identification algorithms are applied. In our case, pushing the midtones balance up allows the biggest stars to be isolated from the galaxy background. Let’s see the effect of such a high midtones value by applying it to the histogram of the Lightness component, just as a check:

M101-core-stars-mask-effect-of-Midtones

You see? Almost everything has been washed out. This way, only the biggest stars will be recognized. By the way, we don’t have to worry much about the faintest stars in the image because, as we’ll see, they will benefit from deconvolution.
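
For the curious, the midtones transfer function behind that slider is, as documented by PixInsight, a simple rational function: the midtones balance m is the input level that maps to 0.5, so pushing m toward 1 darkens almost everything. A small worked example:

```python
def mtf(x, m):
    """Midtones transfer function: maps the input level m to 0.5;
    m close to 1 crushes the midtones toward black."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# With a hypothetical midtones balance of 0.9:
print(mtf(0.5, 0.9))    # 0.1   -> faint structure vanishes
print(mtf(0.95, 0.9))   # ~0.68 -> only the brightest peaks stay visible
```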

If we apply the star mask generator to the Lightness component, we obtain our beloved mask against ringing artifacts:

M101-core-stars-mask

Here we are. Most stars will be protected. Of course, the growth of the individual masks is the result of a trial-and-error procedure!

In PixInsight, we don’t need to invert this mask when it is used as a local deringing support in the Deconvolution process. In this case, white means protection.

The Deconvolution process looks like this:

Deconvolution-Process

As you can see, the star mask has been selected as the local deringing support. Remember also that our image background is globally protected by the Lightness mask we applied before.

The parameters I play with are StdDev (the width of the assumed PSF; increasing it makes deconvolution more aggressive), the number of iterations (how many times the algorithm will be applied; you’ll know the right number when you see no further improvement or when artifacts begin to appear), and the local amount parameter in the deringing section (1 means the star mask we created is used as it is; lower values soften its effect). A toy mapping of these knobs is sketched below.
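
To tie the knobs together, here is how they map onto the toy functions sketched earlier. This is only an illustration (in particular, PixInsight’s local deringing operates on the ringing correction itself, not by naively blending the original back in); `luminance` and `star_mask` are assumed to be 2-D float arrays in [0, 1]:

```python
# StdDev: the width of the assumed Gaussian PSF; wider = more aggressive.
psf = gaussian_psf(size=15, std=2.0)
sharp = richardson_lucy(luminance, psf, iterations=30)

# Crude stand-in for local deringing: pull the result back toward the
# original around the stars, weighted by the mask and the local amount.
local_amount = 0.7
protection = local_amount * star_mask
result = (1.0 - protection) * sharp + protection * luminance
```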

After trying (and trying), I arrived at the above configuration for my image. Let’s see the result. For a better comparison, move the mouse over the image to toggle between the original and the deconvolved version:

M101-core-after-deconvolution

Look how deconvolution has remarkably sharpened the image. Some of the blurring is now gone; it’s like putting a corrective lens before our eyes. See, for example, how much the faint stars close to the core have improved.

As I said before, we are limited by the original images (very bad in my case), but you’ll agree that the outcome is very good indeed.

After deconvolution, we can continue with our regular processing. In this case, for the sake of simplicity, I applied a quick-and-dirty approach: histogram adjustments, some color boosting, and noise reduction routines. This is the final result:

M101-core-after-deconvolution-and-adjustments

When you’re happy with the result, you can apply the whole process to the uncropped image. It’s wise, as we’ve done here, to work first with previews (cropped sections), since deconvolution is time-consuming.

This is my final M101, processed with deconvolution:

M101-v2

I admit it. It is not the best M101 picture in the universe… but deconvolution has made it decent, unveiling a lot of faint details and structures in the otherwise washed-out original image.