A Grand Post-Processing Thread!


A Grand Post-Processing Thread!

Post by MalVeauX »

Hey all,

I'm always interested in new ways to approach processing, and of course lots of that information gets spilled across several threads over the course of years. I think it would be great to see some of today's modern techniques in use, from small scale to large: what software, how things are being done, the whys of several aspects, and maybe even a systematic approach to things rather than just the artist's eye, so to speak. The techniques we use for single-stack limb data versus double-stack limb data, for example, are potentially totally different. Meanwhile, how we approach just the disc surface itself, without the limb, is entirely different again. There are just so many potential ways to do things, and while we keep getting better software and easier ways to share information, maybe we can all help each other squeeze more out of our data.

It would also be nice to have a centralized place to plop down some data and ask others to use their experience and skills to process it, to see what kinds of results can be had with the same or different software and different techniques. I know that in the past, folks really enjoyed getting some raw data to process to see what the potential was.

:band :seesaw :band2

Very best,



Re: A Grand Post-Processing Thread!

Post by MalVeauX »

I suppose I'll start off with a question, because it's something I waffle on myself.

The subject: how to apply deconvolution in the best way possible to the data presented after a stack. There's a lot of glorious math behind this subject; however, for mere mortals like myself, I simply understand it as a means to deblur an image. But how to do it best without bloating features or producing artifacts? How does dynamic range play into this? How does noise play into this? How do you know when you've gone too far? How do you know when you should have gone farther? Plus, different software produces different results, with different options to control things. And then there's the secondary processing that generally follows, such as unsharp masks and other sharpening/contrast post-processing techniques.

I look forward to anyone's thoughts and experience on deconvolution and how they approach it! And to offer up something to work on with the advice given, I am attaching a stack of 180 frames of raw, unprocessed Hα data of a filament from a few weeks ago, captured under good seeing with a 150mm aperture.

Very best,
Attachments
Sun_103057_lapl2_ap6359.png (3.03 MiB): Filament & Plage, 180 frames stacked, 150mm aperture



Re: A Grand Post-Processing Thread!

Post by MAURITS »

Marty, this is an incredibly clever idea and I hope one day to do my part. :bow


Regards,
Maurits

Vista del Cielo Observatory

www.vistadelcielo.be

Re: A Grand Post-Processing Thread!

Post by rsfoto »

Hi Marty,

Interesting.

Well, for me the first thing is to get the best possible raw material. Sorry if this sounds crude, but every time I teach somebody to do something I tell them, "From marmalade you can make shit, but you cannot make marmalade out of shit," and so I spend my time tweaking my etalons to be as good as possible.

Now I am lucky to have software where I can overlay, in real time, a colour palette on the black-and-white images and convert the black background around the Sun into any colour I want; that saves me the trouble of the double-exposure technique. In this way I get the prominence details and the surface details of the Sun in one shot.

The next step was to get a recipe which is useful for developing my AVIs. I have not changed this recipe in many years, as long as I can see that my raw material is within my quality standards. I work exclusively with AVIStack v2.0. RegiStax, for me, has something in the final result that I do not like, but I cannot explain what it is that I see.

After having the single images processed in AVIStack, I make the mosaic in Photoshop (thanks, Alexandra, for drawing my attention, many years ago, to the interactive layout in the Photomerge menu).

I check the image in Photoshop, sometimes apply a very soft unsharp mask, and that is it.

The variables are so different for everybody that it is nearly impossible to find a recipe which would work for all.

Many, many years ago I used to colour my images, but that is over. I do not find it natural, and I have seen that my images hold enough fine detail that it would possibly disappear with colouring.

About your question on "deconvolution", I just have to say that I do not even know what "... to deconvolute" means ... :shock:

Sorry for not being able to help you ;)

Rainer


regards Rainer

Observatorio Real de 14
San Luis Potosi Mexico

North 22° West 101°

Re: A Grand Post-Processing Thread!

Post by Bruce G »

I struggle with deconvolution too, not because I don't understand what convolution is and does, but because we are handed a pile of deconvolution kernels with no guidance as to which sort of situations they were built for. Deconvolution is an inherently unstable process. It is a process of division. In the frequency domain, it literally _is_ division. So when numbers in the denominator get small, the answer gets unstable very quickly.
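
To make that instability concrete, here is a minimal NumPy sketch (purely illustrative, not taken from any deconvolution program; the Gaussian PSF, the noise level, and the constant k are all assumptions): blur a pair of pinpoints, add a whisper of noise, then compare naive Fourier division against a Wiener-style regularized division.

Code: Select all

import numpy as np

rng = np.random.default_rng(0)
n = 64
truth = np.zeros((n, n))
truth[20, 20] = 1.0                       # two perfect pinpoint stars
truth[40, 44] = 0.7

# An assumed Gaussian stand-in for the PSF, built on the full grid
y, x = np.mgrid[0:n, 0:n] - n // 2
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

otf = np.fft.fft2(np.fft.ifftshift(psf))                # transfer function
blurred = np.fft.ifft2(np.fft.fft2(truth) * otf).real   # forward convolution
noisy = blurred + rng.normal(0.0, 1e-9, (n, n))         # a whisper of noise

# Naive textbook division: unstable wherever |otf| is tiny
naive = np.fft.ifft2(np.fft.fft2(noisy) / otf).real

# Wiener-style division: the small constant k tames tiny denominators
k = 1e-6
F = np.fft.fft2(noisy)
wiener = np.fft.ifft2(F * np.conj(otf) / (np.abs(otf)**2 + k)).real

print("max error, naive :", np.abs(naive - truth).max())   # enormous
print("max error, wiener:", np.abs(wiener - truth).max())  # far smaller

The naive error is orders of magnitude larger because noise at frequencies where the transfer function is nearly zero gets divided by those tiny values; the constant k keeps the denominator away from zero, at the cost of suppressing those same frequencies.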

Convolution is a process by which two signals become combined. Typically, one signal represents a "truth" to be measured and the second represents a system response function, usually the result of filtering and/or imperfect system response. In the case of telescopes, our "truth" would be a pinpoint star, with no atmospheric distortion (assume that we are using a space-based telescope). We know that the true response of this star should be zero (black) everywhere except the precise location of the star. What we see though, is a somewhat fuzzy blob that is the result of convolving the star's perfect pinpoint with the response function of our telescope's optics.

This imperfect response function is given the name "point spread function" (PSF); see https://en.wikipedia.org/wiki/Point_spread_function. Performing a convolution between a mathematical representation of the star (an impulse) and the mathematical representation of the telescope's PSF will result in the image produced by the telescope. The perfect pinpricks of light from stars are blurred by the PSF. If there were some way to know the characteristics of the telescope's PSF, then we could run the mathematics backward and recover the perfect pinpricks of light from the blurred image. That is what deconvolution does. An image of the Sun is simply made from a bazillion points of light, all undergoing convolution through the telescope optics.
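
As a sanity check on that description, here is a tiny sketch (NumPy/SciPy; the small Gaussian kernel is an assumed stand-in for real optics): convolving a field of impulses with a PSF leaves a scaled copy of the PSF at each star's location.

Code: Select all

import numpy as np
from scipy.signal import fftconvolve

# An assumed 17x17 Gaussian kernel standing in for the telescope's PSF
ax = np.arange(-8, 9)
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx**2 + yy**2) / (2 * 1.5**2))
psf /= psf.sum()

scene = np.zeros((64, 64))
scene[20, 20] = 1.0                  # pinpoint stars: the "truth"
scene[40, 44] = 0.5

image = fftconvolve(scene, psf, mode="same")   # what the camera records

# Each pinpoint has become a copy of the PSF, scaled by the star's brightness
print(image[20, 20] / psf[8, 8])     # ~1.0
print(image[40, 44] / psf[8, 8])     # ~0.5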

Clearly, the important part here, the "gotcha", is knowing the PSF of your telescope. It seems so beautiful in the textbooks: take the Fourier transform of your image, divide by the Fourier transform of your PSF and, voilà, you get your perfect image. It's literally as simple as that. As simple as that, _IF_ you know your PSF. And there lies the problem. Fortunately, the PSF of a simple optical device (and a refracting telescope is a simple optical device) tends to resemble an Airy pattern (the circular-aperture analogue of a sinc function), so PSFs can be approximated, producing imperfect, but still very much improved, image results from the deconvolution process.
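
And to illustrate the "PSFs can be approximated" point, the sketch below (same toy setup as above; the guessed sigma of 1.5 versus a true 2.0 is an arbitrary assumption) deconvolves with a deliberately wrong Gaussian and still re-concentrates the star's light, imperfectly but visibly:

Code: Select all

import numpy as np

n = 64
truth = np.zeros((n, n))
truth[n // 2, n // 2] = 1.0              # one pinpoint star

def gaussian_otf(sigma):
    # Transfer function of a centered, unit-sum Gaussian PSF on the n x n grid
    y, x = np.mgrid[0:n, 0:n] - n // 2
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return np.fft.fft2(np.fft.ifftshift(psf / psf.sum()))

blurred = np.fft.ifft2(np.fft.fft2(truth) * gaussian_otf(2.0)).real

# Deconvolve with a guessed PSF (sigma 1.5 instead of the true 2.0),
# using the same regularized division as before
guess = gaussian_otf(1.5)
k = 1e-4
est = np.fft.ifft2(np.fft.fft2(blurred) * np.conj(guess)
                   / (np.abs(guess)**2 + k)).real

print("star peak before:", blurred.max())   # ~0.04: badly smeared
print("star peak after :", est.max())       # noticeably higher: partly recovered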

The problem that I have with the deconvolution programs is that there is no discussion whatsoever regarding the shape of the deconvolution kernels (the PSFs that you can choose from in the program). The fastest way to screw up a deconvolution is to not understand what PSF you are using and why you are using that particular one. How can we possibly hope to produce a suitable result from a very formally described mathematical process if we do not obey the underlying assumptions of the mathematics?

As a result, I typically avoid the deconvolution programs, relying on high-pass filtering (applied using either the Overlay or Soft Light blending mode in Photoshop) along with USM in Photoshop (using multiple light USM runs applied in a Photoshop action, as described by Mark Townley). I _would_ use deconvolution if I understood the deconvolution kernels, but I have not found adequate documentation. If I am not mistaken, wavelet "deconvolution" merely takes generic PSFs of various scales and applies them empirically to achieve an improved appearance. ...But I could be mistaken. In that case, there is no real mathematical formalism to the process; you are simply applying a process that nudges the image toward your preconceived idea of what it should look like, much like applying USM.
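
For what it's worth, here is roughly what "multiple light USM runs" amounts to when sketched in Python rather than as a Photoshop action (the radius, amount, and pass count are made-up illustration values, not Mark Townley's actual settings):

Code: Select all

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius, amount):
    # Classic USM: add back a scaled copy of (image minus its blurred self)
    return img + amount * (img - gaussian_filter(img, radius))

def multi_pass_usm(img, radius=1.5, amount=0.3, passes=4):
    # Several gentle passes instead of one heavy one
    for _ in range(passes):
        img = np.clip(unsharp_mask(img, radius, amount), 0.0, 1.0)
    return img

# Demo on synthetic data standing in for a solar stack scaled to [0, 1]
rng = np.random.default_rng(1)
stack = gaussian_filter(rng.random((128, 128)), 3.0)
sharpened = multi_pass_usm(stack)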

For the occasions where I do use deconvolution, I typically resize my image upward by a factor of 2 to 4. That allows me to see detail in the result of the deconvolution, particularly when using the AstraImage Photoshop plugin, which doesn't allow you to change the deconvolution preview size. I often find deconvolved results to look sort of dirty, and the cause of that is usually overprocessing in the deconvolution, similar to how overprocessed USM in Photoshop looks ugly. The oversized image allows artifacts like "dotting" to become visible before they become a problem in the image. The resizing costs me a lot of time during the actual deconvolution, but I feel that it's worth it. Resizing the image means that you have to resize the deconvolution kernel to match and, once again, knowing nothing about the deconvolution kernels dramatically inhibits learning how to make a given method work well.
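
One detail worth spelling out here: if you upsample the image by a factor s, any blur measured in pixels grows by the same factor, so a Gaussian kernel's sigma must be multiplied by s as well. A hypothetical sketch (the factor and sigma values are arbitrary):

Code: Select all

import numpy as np
from scipy.ndimage import zoom, gaussian_filter

rng = np.random.default_rng(2)
stack = gaussian_filter(rng.random((64, 64)), 2.0)  # stand-in for a real stack

s = 2                              # upsampling factor (2x to 4x in practice)
big = zoom(stack, s, order=3)      # cubic-interpolated enlargement

sigma_small = 1.3                  # sigma that suited the original scale
sigma_big = s * sigma_small        # the kernel must scale with the image

# ...deconvolve `big` with sigma_big, inspect for "dotting" at 100%, then
# downsample the result back if desired...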



Re: A Grand Post-Processing Thread!

Post by GreatAttractor »

Bruce, it is my understanding that for (our) solar imaging, when sharpening a stack, we need to remove the blurriness caused by 1) atmospheric turbulence (even though the stacker tries to choose the sharpest image fragments, it has to use some blurrier ones too), and 2) the shift-and-add procedure (which stretches fragments to the average location of each reference point); we're not trying to (primarily) compensate for the telescope's PSF.

At least in my case (a refractor at ~f/11), when Stackistry shows me the "mosaic of best fragments" of a video, I'd be happy with that already (it's as sharp as my post-processed stacks), if it weren't, of course, noisy, low in dynamic range, and "shredded"/misaligned.

So in ImPPG I coded the simplest symmetric Gaussian kernel for deconvolution (which is mentioned in the README file) and it usually seems to work fine.
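
For anyone who wants to experiment with the same idea outside ImPPG: recent scikit-image versions ship a Lucy-Richardson routine, and the sketch below (an illustration, not ImPPG's actual code; the sigma, iteration count, and synthetic test image are assumptions) pairs it with a symmetric Gaussian kernel.

Code: Select all

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

def gaussian_psf(sigma):
    # Symmetric 2-D Gaussian kernel, odd-sized, covering about +/-3 sigma
    size = max(3, int(6 * sigma) | 1)
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

# Synthetic stand-in for a stack: smooth "solar" detail plus residual blur
rng = np.random.default_rng(3)
truth = gaussian_filter(rng.random((128, 128)), 3.0)
stack = gaussian_filter(truth, 1.3)

# sigma and num_iter play the role of the sigma and iteration settings
sharp = richardson_lucy(stack, gaussian_psf(1.3), num_iter=50)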


My software:
Stackistry — an open-source cross-platform image stacker
ImPPG — stack post-processing and animation alignment
My images

SW Mak-Cass 127, ATM Hα scopes (90 mm, 200 mm), Lunt LS50THa, ATM SSM, ATM Newt 300/1500 mm, PGR Chameleon 3 mono (ICX445)

Re: A Grand Post-Processing Thread!

Post by MalVeauX »

GreatAttractor wrote: Mon Mar 11, 2019 9:03 pm Bruce, it is my understanding that for (our) solar imaging, when sharpening a stack, we need to remove the blurriness caused by 1) atmospheric turbulence (even though the stacker tries to choose the sharpest image fragments, it has to use some blurrier ones too), and 2) the shift-and-add procedure (which stretches fragments to the average location of each reference point); we're not trying to (primarily) compensate for the telescope's PSF.

At least in my case (a refractor at ~f/11), when Stackistry shows me the "mosaic of best fragments" of a video, I'd be happy with that already (it's as sharp as my post-processed stacks), if it weren't, of course, noisy, low in dynamic range, and "shredded"/misaligned.

So in ImPPG I coded the simplest symmetric Gaussian kernel for deconvolution (which is mentioned in the README file) and it usually seems to work fine.
I use your software; ImPPG is just wonderful. :bow

Could you perhaps describe how you think it's best to approach the deconvolution sliders, and what you'd look for, whether visually, on the histogram, or by some other means, to judge how to choose the iteration count and the sigma value? Or is it strictly a visual, touch-and-feel kind of thing?

Very best,



Re: A Grand Post-Processing Thread!

Post by Bruce G »

GreatAttractor wrote: Mon Mar 11, 2019 9:03 pm Bruce, it is my understanding that for (our) solar imaging, when sharpening a stack, we need to remove the blurriness caused by 1) atmospheric turbulence (even though the stacker tries to choose the sharpest image fragments, it has to use some blurrier ones too), and 2) the shift-and-add procedure (which stretches fragments to the average location of each reference point); we're not trying to (primarily) compensate for the telescope's PSF.
Thank you. It's great to hear from someone who knows the inner workings.
Yes, that is a valid point. I failed to carry my pristine example into actual practice, so there is no single true PSF that we could ever hope to pin down. In that case, I agree that about the best we can do is assume a Gaussian type of blurring and try to tune its shape a bit. The math part remains the same. I was using ImPPG, but switched to AstraImage, though I don't remember why. AstraImage is the one with the overwhelming number of methods and kernels.

Why does the Lucy-Richardson (L-R) method seem to be preferred? What guidance can you give us in the way of parameter tuning?

Thanks

Bruce



Re: A Grand Post-Processing Thread!

Post by GreatAttractor »

I think I learned about L-R deconvolution from Michael Wilkinson on the SGL forum. It seemed like a straightforward enough (and fun) method to implement, and at that time I needed something more convenient than R6 for batch processing.

The tutorial links on ImPPG's homepage should be useful. In the first one there's an animated GIF trying to show how to adjust the "sigma".


My software:
Stackistry — an open-source cross-platform image stacker
ImPPG — stack post-processing and animation alignment
My images

SW Mak-Cass 127, ATM Hα scopes (90 mm, 200 mm), Lunt LS50THa, ATM SSM, ATM Newt 300/1500 mm, PGR Chameleon 3 mono (ICX445)

Re: A Grand Post-Processing Thread!

Post by MalVeauX »

GreatAttractor wrote: Tue Mar 12, 2019 6:48 pm I think I learned about L-R deconvolution from Michael Wilkinson on the SGL forum. It seemed like a straightforward enough (and fun) method to implement, and at that time I needed something more convenient than R6 for batch processing.

The tutorial links on ImPPG's homepage should be useful. In the first one there's an animated GIF trying to show how to adjust the "sigma".
Thanks, going through your tutorial helps explain a lot.

It does indeed seem to be a touch-and-feel thing: adjust the iterations and sigma on the deconvolution until it looks as unblurred as it can get, before it bloats and produces artifacts. I've been doing this, and I tend to back off a little on the deconvolution from that point, to soften it gently. Sometimes I use higher iterations to avoid using a large sigma value.
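
One way to make that touch-and-feel comparison a bit more systematic is a contact sheet: render a grid of sigma/iteration combinations and pick by eye. A rough sketch (reusing the hypothetical Gaussian-PSF helper from the scikit-image example earlier; the value grids and test image are arbitrary):

Code: Select all

import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

def gaussian_psf(sigma):
    # Symmetric 2-D Gaussian kernel, odd-sized, covering about +/-3 sigma
    size = max(3, int(6 * sigma) | 1)
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

rng = np.random.default_rng(4)
stack = gaussian_filter(rng.random((128, 128)), 3.0)  # stand-in for real data

sigmas = [1.0, 1.5, 2.0]
iterations = [20, 50, 80]
fig, axes = plt.subplots(len(sigmas), len(iterations), figsize=(9, 9))
for i, sigma in enumerate(sigmas):
    for j, n_it in enumerate(iterations):
        result = richardson_lucy(stack, gaussian_psf(sigma), num_iter=n_it)
        axes[i, j].imshow(result, cmap="gray")
        axes[i, j].set_title(f"sigma={sigma}, iters={n_it}", fontsize=8)
        axes[i, j].axis("off")
plt.tight_layout()
plt.show()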

I've yet to be able to use the adaptive unsharp mask really successfully. Since it was pointed out, I've been trying to make use of it mostly on limbs, where it's really useful, but finding the right values can be difficult.

Thanks again for your software, it's truly great! :bow

Very best,

