Denoise
- Montana
- Librarian
- Posts: 36146
- Joined: Mon Oct 17, 2011 5:25 pm
- Location: Cheshire, UK
- Has thanked: 20709 times
- Been thanked: 9963 times
Denoise
I heard about this on BBC Click at the weekend. This is something we will all want!!!
https://www.digitalartsonline.co.uk/new ... ur-images/
https://venturebeat.com/2018/07/09/nvid ... sy-photos/
Full paper here https://arxiv.org/pdf/1803.04189.pdf
Alexandra
- Carbon60
- Way More Fun to Share It!!
- Posts: 15055
- Joined: Wed Mar 07, 2012 12:33 pm
- Location: Lancashire, UK
- Has thanked: 9525 times
- Been thanked: 9256 times
Re: Denoise
Cool. Thanks for the links, Alexandra.
Anything to help improve image quality is always welcome. We just need this to be available for free and Windows-compatible.
Stu.
H-alpha, WL and Ca II K imaging kit for various image scales.
Fluxgate Magnetometers (1s and 150s Cadence).
Radio meteor detector.
More images at http://www.flickr.com/photos/solarcarbon60/
- Bruce G
- Ohhhhhh My!
- Posts: 177
- Joined: Mon Jul 16, 2018 8:18 pm
- Has thanked: 128 times
- Been thanked: 121 times
Re: Denoise
That's pretty impressive denoising, but it appears to need training on your subject of interest.
I use ReduceNoise V8 from Neat Image for my denoising. It takes a little practice to learn how to tweak the filter appropriately, but I _love_ the results. There are very few images that I process these days that do not get a pass through the denoising software. I also use the denoising when I apply a high pass filter. I copy the area of my image that I want to sharpen to a new layer in Photoshop and run the high pass filter only on the copy. Then I can run the denoising software on the sharpened copy to squash the noise (inherent in sharpening) while retaining the increased sharpness of significant features. It really, really works well (again, after a little practice).
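For anyone who wants to experiment with that high-pass trick outside Photoshop, here is a rough sketch of the general idea in Python with NumPy/SciPy. It is not the Photoshop/Neat Image workflow itself; the Gaussian high-pass, the median filter standing in for a dedicated denoiser, and all parameter values are purely illustrative.
```python
# Rough sketch of the "sharpen a copy with a high-pass, then denoise that copy"
# idea described above. This is NOT the Photoshop/Neat Image workflow itself:
# the Gaussian-based high-pass, the median filter standing in for a dedicated
# denoiser, and all parameter values are purely illustrative.
import numpy as np
from scipy import ndimage

def highpass_sharpen_then_denoise(img, radius=3.0, amount=0.8, denoise_size=3):
    """img: 2-D float array scaled to [0, 1]."""
    img = img.astype(np.float64)
    low = ndimage.gaussian_filter(img, sigma=radius)   # low-pass component
    high = img - low                                   # high-pass "detail" layer
    sharpened = img + amount * high                    # unsharp-mask style boost
    denoised = ndimage.median_filter(sharpened, size=denoise_size)  # squash the boosted noise
    return np.clip(denoised, 0.0, 1.0)
```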
I'll take a read of the whole paper to see what sort of tricks they're using. Thanks for the links Alexandra!
- Ibbo
- Way More Fun to Share It!!
- Posts: 1482
- Joined: Thu Feb 13, 2014 10:13 pm
- Location: North of Nottingham England
- Has thanked: 2775 times
- Been thanked: 1808 times
- ffellah
- Way More Fun to Share It!!
- Posts: 12470
- Joined: Mon Oct 27, 2014 6:46 pm
- Location: Westport, CT USA
- Has thanked: 10716 times
- Been thanked: 7166 times
Re: Denoise
Thank you, Alexandra, for the articles and starting this conversation. I have never even thought of denoising before and now I am.
Franco
- Montana
- Librarian
- Posts: 36146
- Joined: Mon Oct 17, 2011 5:25 pm
- Location: Cheshire, UK
- Has thanked: 20709 times
- Been thanked: 9963 times
Re: Denoise
It would be good if everyone could share what they do about denoising. I have terrible trouble with H-alpha pictures but none at all with white light. Thanks, Bruce, for the tip; I will have a look at that.
Alexandra
- Bruce G
- Ohhhhhh My!
- Posts: 177
- Joined: Mon Jul 16, 2018 8:18 pm
- Has thanked: 128 times
- Been thanked: 121 times
Re: Denoise
Here are some notes I made when going through the paper:
(1) "...we can learn to reconstruct signals from only corrupted examples, without ever observing clean signals, and often do this just as well as if we were using clean examples."
This is the core concept of the paper
(2) "This [mathematical observation] implies that we can, in principle, corrupt the training targets of a neural network with zero-mean noise without changing what the network learns."
The researchers learned that, provided that the noise is zero-mean*, the neural net denoising algorithms could not only learn just as well using noisy training targets, but also just as fast. This is a very counter-intuitive conclusion but is easily proved in the math. The trick is that the only things that "make sense" to the algorithms are the parts of the image that are consistent (not noise). The hidden layer weights are averaged over all pixels, so they become stable as training goes on (if you're not familiar with neural nets, the hidden layer is where the "learning" occurs and is stored). (A toy code sketch of this noisy-target training idea follows after these notes.)
(3) "For finite data, the variance of the estimate is the average variance of the corruptions in the targets, divided by the number of training samples"
There is no free lunch. If you have very noisy images, you will still need to have a lot of training samples. But at least you won't be required to have any noise-free samples, a condition that is sometimes impossible to fulfill.
(4) "... saturation (gamut clipping) does break our assumptions, as parts of the noise distribution are discarded"
Watch that histogram! If, for example, you allowed the solar disc to clip in order to obtain more detail in prominences, that would violate this (zero-mean) condition of the denoising process.
(5) "To avoid backpropagating gradients from missing pixels, we exclude them..."
Hot or dead pixels are similar to clipped data in that they represent missing data. But even worse, because hot or dead pixels represent consistent values in otherwise noisy data, the neural net would lock on to them as truth. They must be eliminated. It's possible that this exclusion process might be used to eliminate intentionally clipped data, as in my prominence example above.
(6) "... sub-Nyquist spectral samplings in magnetic resonance imaging (MRI) can be learned from corrupted observations only"
This is another unexpected observation from the researchers, but on reflection is not so surprising. In a way, it is similar to stochastic sampling in which a random distribution of sample points, rather than a regular grid, produces a better description of an object. The regular grid's information is limited by Mr. Nyquist, whereas the randomly sampled data is not. Similarly, compressed sensing uses a random distribution of sample points to achieve high levels of signal fidelity with relatively few samples, often achieving excellent fidelity to the original with far, far, sub-Nyquist sample numbers.
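To make notes (2) and (5) concrete, here is a toy training-step sketch in Python/PyTorch. It is not the authors' code: the tiny network, the mask handling and every hyperparameter are invented purely to illustrate the idea that the loss is computed against a noisy target and that masked-out pixels contribute no gradient.
```python
# Toy sketch of the paper's core idea (not the authors' code): the network is
# trained with a NOISY image as the target, and pixels flagged as bad (hot, dead,
# clipped) are masked so they contribute no gradient. Network size, optimizer and
# all hyperparameters here are invented purely for illustration.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, noisy_input, noisy_target, valid_mask):
    """noisy_input, noisy_target: (N, 1, H, W) tensors, two noisy looks at one scene.
    valid_mask: same shape, 1.0 where a pixel is usable, 0.0 for hot/dead/clipped pixels."""
    optimizer.zero_grad()
    pred = model(noisy_input)
    # L2 loss against a *noisy* target; masked pixels are excluded from the loss,
    # so no gradients are backpropagated from them.
    loss = ((pred - noisy_target) ** 2 * valid_mask).sum() / valid_mask.sum()
    loss.backward()
    optimizer.step()
    return loss.item()

# model = TinyDenoiser()
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```
Because the noise in the target is zero-mean, its contribution to the gradients averages out as training proceeds, which is also the content of note (3): the residual error falls as the number of (noisy) training samples grows.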
This paper isn't a particularly easy read, though with the crowd that comes to this forum, that might not be as true as usual. It also represents some very new work. I doubt that we will be seeing any free downloadable software to use this process in the near future** and it doesn't look like an easy path to follow on your own. But it might trigger some ideas for other researchers to follow.
I noticed that one of the authors is from MIT CSAIL. I have looked into other work that was done at CSAIL, notably the "motion microscope." They have produced some truly remarkable work there, so I would take this paper very seriously. It is likely to be a significant contribution to the art, at least for a certain, common, class of images.
Bruce G
* Any noise source that would eventually null itself out with very long averages fits this description.
** A note in Alexandra's first link states "The team's paper will be presented this Thursday at the [International Conference on Machine Learning in Sweden] - though there's no news on where this will be licensed to applications developers so it could be put to good use by photographers, designers and other creators."
The date on the web page is 10 July, so the paper should have been presented last Thursday.
(1) "...we can learn to reconstruct signals from only corrupted examples, without ever observing clean signals, and often do this just as well as if we were using clean examples."
This is the core concept of the paper
(2) "This [mathematical observation] implies that we can, in principle, corrupt the training targets of a neural network with zero-mean noise without changing what the network learns."
The researchers learned that, provided that the noise is zero-mean*, the neural net denoising algorithms could not only learn just as well using noisy training targets, but also just as fast. This is a very counter-intuitive conclusion but is easily proved in the math. The trick is that the only things that "make sense" to the algorithms are the parts of the image that are consistent (not noise). The hidden layer weights are averaged over all pixels, so they become stable as training goes on (if you're not familiar with neural nets, the hidden layer is where the "learning" occurs and is stored).
(3) "For finite data, the variance of the estimate is the average variance of the corruptions in the targets, divided by the number of training samples"
There is no free lunch. If you have very noisy images, you will still need to have a lot of training samples. But at least you won't be required to have any noise-free samples, a condition that is sometimes impossible to fulfill.
(4) "... saturation (gamut clipping) does break our assumptions, as parts of the noise distribution are discarded"
Watch that histogram! If, for example, you allowed the solar disc to clip in order to obtain more detail in prominences, that would violate this (zero-mean) condition of the denoising process.
(5) "To avoid backpropagating gradients from missing pixels, we exclude them..."
Hot or dead pixels are similar to clipped data in that they represent missing data. But even worse, because hot or dead pixels represent consistent values in otherwise noisy data, the neural net would lock on to them as truth. They must be eliminated. It's possible that this exclusion process might be used to eliminate intentionally clipped data, as in my prominence example above.
(6) "... sub-Nyquist spectral samplings in magnetic resonance imaging (MRI) can be learned from corrupted observations only"
This is another unexpected observation from the researchers, but on reflection is not so surprising. In a way, it is similar to stochastic sampling in which a random distribution of sample points, rather than a regular grid, produces a better description of an object. The regular grid's information is limited by Mr. Nyquist, whereas the randomly sampled data is not. Similarly, compressed sensing uses a random distribution of sample points to achieve high levels of signal fidelity with relatively few samples, often achieving excellent fidelity to the original with far, far, sub-Nyquist sample numbers.
This paper isn't a particularly easy read, though with the crowd that comes to this forum, that might not be as true as usual. It also represents some very new work. I doubt that we will be seeing any free downloadable software to use this process in the near future** and it doesn't look like an easy path to follow on your own. But it might trigger some ideas for other researchers to follow.
I noticed that one of the authors is from MIT CSAIL. I have looked into other work that was done at CSAIL, notably the "motion microscope." They have produced some truly remarkable work there, so I would take this paper very seriously. It is likely to be a significant contribution to the art, at least for a certain, common, class of images.
Bruce G
* Any noise source that would eventually null itself out with very long averages fits this description.
** A note in Alexandra's first link states "The team's paper will be presented this Thursday at the [International Conference on Machine Learning in Sweden] - though there's no news on where this will be licensed to applications developers so it could be put to good use by photographers, designers and other creators."
The date on the web page is 10 July, so the paper should have been presented last Thursday
Last edited by Bruce G on Thu Jul 19, 2018 4:01 pm, edited 10 times in total.
- Bruce G
- Ohhhhhh My!
- Posts: 177
- Joined: Mon Jul 16, 2018 8:18 pm
- Has thanked: 128 times
- Been thanked: 121 times
Re: Denoise
I haven't done all that many WL images, but except for Poisson noise (shot noise) in the darker areas (proms), the two types of images, produced by the same camera, should be pretty similar noise-wise. Would you mind posting an example of what you are fighting in Ha?
A lot of the image denoising work that I have done has been in the reconstruction of old photos. Besides denoising programs from Neat Image, Imagenomic and others, one other trick that I have found useful in the denoising process - with both old photos and astronomical ones - is to enlarge the photo prior to denoising. I usually blow up an image to about 4x its original dimensions. This shifts noise to lower frequencies and makes it so that the process isn't so fussy at the high frequencies. Also, enlarging the image prior to deconvolution not only greatly reduces the sensitivity of the process to the deconvolution kernel, but (for me) makes deconvolution artifacts far easier to detect and avoid. In particular, AstraImage's preview box is too small to see the subtle effects of deconvolution. Enlarging the image prior to deconvolution lets you see a lot more in the preview. But as you might suspect, the price you pay for a 4x image size increase is a 16x processing time increase. I'm willing to pay that price.
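A minimal sketch of the enlarge-then-denoise idea in Python with SciPy (the 4x factor comes from the description above; the median filter is only a stand-in for whatever denoiser you actually use):
```python
# Sketch only: enlarge 4x, denoise at the larger scale, then shrink back.
# The median filter is just a stand-in for Neat Image or any other denoiser,
# and the 4x factor is the one mentioned above.
import numpy as np
from scipy import ndimage

def denoise_at_4x(img, factor=4, denoise_size=3):
    big = ndimage.zoom(img.astype(np.float64), factor, order=3)   # cubic upsample
    big = ndimage.median_filter(big, size=denoise_size)           # denoise at the larger scale
    small = ndimage.zoom(big, 1.0 / factor, order=3)              # back down (may differ by a pixel)
    return small
```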
I have both Neat Image and AstraImage installed as Photoshop plug-ins which not only makes getting to them easier, it allows you to use Photoshop's powerful selection tools and layering to control the effects. For example, I often employ a different noise filter for darker or background objects than I use for bright or foreground objects.
If you create a new layer prior to performing a process on it, then you can modify the layer extensively before adding it back into the image. For example, sometimes sharpening that is appropriate for most of an image will oversharpen bright areas, particularly if they have abrupt edges. If the sharpening is applied to a layer, then the oversharpened areas can be softened or completely erased prior to recombining with the original image, giving a much more pleasing result.
Last edited by Bruce G on Tue Jul 17, 2018 2:11 pm, edited 3 times in total.
- Montana
- Librarian
- Posts: 36146
- Joined: Mon Oct 17, 2011 5:25 pm
- Location: Cheshire, UK
- Has thanked: 20709 times
- Been thanked: 9963 times
Re: Denoise
Thanks, Bruce, for all the time and effort you have put in to extract the golden nuggets.
I find that my WL images are always smooth and not dotty, whereas all the H-alpha ones, particularly those I took with the Quark (same camera, same settings), always have so many little dots. I always admire all the images here that are smooth as silk in H-alpha but with no loss of detail. I don't know how they do it. If I try even the smallest amount of denoising in Photoshop, all the fine details are ruined and I prefer the dotty image.
Alexandra
- Bruce G
- Ohhhhhh My!
- Posts: 177
- Joined: Mon Jul 16, 2018 8:18 pm
- Has thanked: 128 times
- Been thanked: 121 times
Re: Denoise
Alexandra,
The quick answer is that Photoshop denoising simply isn't up to the task. The more advanced denoising algorithms really make a difference. While applying a soft denoising in Photoshop had always been a step in "developing" a digital image for me, once I got the Neat Image program, I completely stopped doing denoising in Photoshop.
If things are getting dotty, I would suspect that the deconvolution is being pushed a bit too hard. Again, though costly in time, I think that performing deconvolution on a magnified image allows sufficiently improved control that it is worth the effort. You will have to determine a new favorite kernel size and iteration count.
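Along the same lines, here is a rough sketch of deconvolution on a magnified image in Python (SciPy + scikit-image). This is not AstraImage's method; the Gaussian PSF guess, the 4x factor and the iteration count are example values only, just to show where "kernel size" and "iteration count" enter.
```python
# Rough sketch (not AstraImage's method): Richardson-Lucy deconvolution run on a
# 4x-enlarged image, which makes the result less sensitive to the exact kernel.
# The Gaussian PSF guess, the enlargement factor and the iteration count are
# example values only; tune them for your own data.
import numpy as np
from scipy import ndimage
from skimage.restoration import richardson_lucy

def deconvolve_at_4x(img, factor=4, psf_sigma=2.0, iterations=20):
    big = ndimage.zoom(img.astype(np.float64), factor, order=3)
    big = (big - big.min()) / (big.max() - big.min())        # RL wants non-negative data
    size = int(6 * psf_sigma) | 1                            # odd kernel width ("kernel size")
    psf = np.zeros((size, size))
    psf[size // 2, size // 2] = 1.0
    psf = ndimage.gaussian_filter(psf, psf_sigma)            # crude Gaussian PSF estimate
    psf /= psf.sum()
    sharp = richardson_lucy(big, psf, iterations)            # third argument: iteration count
    return ndimage.zoom(sharp, 1.0 / factor, order=3)
```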
- Alex
- Almost There...
- Posts: 604
- Joined: Sun Mar 22, 2015 7:48 pm
- Has thanked: 1246 times
- Been thanked: 731 times
Re: Denoise
I tried several denoising tools some years ago. I finally settled on Neat Image.
But after some use, I was not convinced by the results and stopped using it.
For solar imaging, we look for the finest detail. The noise is generally at a very high frequency. There is no need to denoise low frequencies, so no need for a sophisticated tool.
Finally, I came back to denoising with a Gaussian filter, adjusting the size to about 0.3 pixels in general.
Often, I decrease the effect of the filtering by using a "fade" function.
Sometimes I increase the size of the filter to 0.4 or 0.5, but then the original pic does not have enough fine detail and is probably undersampled.
Sometimes I feel more comfortable keeping the "dotty" pattern, with the feeling of not having destroyed any detail.
CS
Alex
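A sketch of this Gaussian-blur-plus-fade approach in Python with SciPy. Note that Photoshop's Gaussian Blur "radius" is not exactly a Gaussian sigma; the 0.3 value echoes the post above and the fade fraction is just an example.
```python
# Sketch of the Gaussian-blur-plus-fade approach described above (Python/SciPy).
# Photoshop's Gaussian Blur "radius" is not exactly a Gaussian sigma; the 0.3
# value echoes the post and the fade fraction is just an example.
import numpy as np
from scipy import ndimage

def gaussian_denoise_with_fade(img, sigma=0.3, fade=0.7):
    img = img.astype(np.float64)
    blurred = ndimage.gaussian_filter(img, sigma=sigma)
    # "Fade" = blend the filtered result back with the original, as with
    # Edit -> Fade in Photoshop: fade=1.0 keeps the full blur, 0.0 keeps the original.
    return (1.0 - fade) * img + fade * blurred
```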
- Bruce G
- Ohhhhhh My!
- Posts: 177
- Joined: Mon Jul 16, 2018 8:18 pm
- Has thanked: 128 times
- Been thanked: 121 times
Re: Denoise
The first sentence, the second sentence and the first half of the third sentence are generally true, but the second half of the third sentence does not follow from the first half. The problem is that we need to remove high frequency noise, yet at the same time retain detail - inherently high frequency - in the image. So, while our concern is usually high frequency noise and there usually isn't a need to remove low frequencies, we are concerned about the finest detail and we still can benefit from sophisticated filtration methods. It is precisely this ability to minimize noise without simultaneously blurring detail that separates sophisticated denoisers from simple low pass filtering such as a Gaussian filter.
After writing this post, I realized that it became a lot longer than I had intended and much of it is rather meaningless to someone who is not interested in Neat Image. On the other hand, I wish I had been able to find something like this when I first started using the program, so I want to put the information out there. It would have saved me a lot of frustration. But to save most of you the agony of reading the bulk of this, I broke it into two parts. If you are not interested in the Neat Image product, you can stop reading when you get to Neat Image Specifics.
Here is why I feel that the sophisticated filters (specifically Neat Image ReduceNoise V8) are better for denoising than low pass filtering or even median filtering:
1) The filter is built specifically for the image (preset filters can also be used). Camera settings such as Gain and Gamma can change noise content and character dramatically. Filter coefficients are optimized for the exact noise conditions of the image*.
2) The filter can be adjusted across multiple frequencies. This provides much more control over the manner in which the filter affects the image.
3) Neat Image (NI) appears to work on image gradients**, an approach that I heartily support. Our eyes extract information from retinal signals using a similar approach, looking for lines, edges, corners, etc. (this all happens in the retina, not in the brain). It appears that the gradients are combined with the filter coefficients, allowing filtration to be varied locally depending on the image gradient. Thus, edges that are recognized as having a strong gradient won't be hit as hard by the filter as will areas with low gradients.
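Point 3 is speculation about Neat Image's internals, so to be clear, the following is not Neat Image's algorithm. It is only a toy illustration, in Python with SciPy, of the general "smooth flat areas hard, touch strong edges lightly" idea:
```python
# Toy illustration ONLY (this is not Neat Image's algorithm): blend between the
# original and a blurred copy according to local gradient magnitude, so strong
# edges are smoothed less than flat areas. All constants are arbitrary.
import numpy as np
from scipy import ndimage

def gradient_weighted_smooth(img, sigma=1.0, edge_scale=None):
    img = img.astype(np.float64)
    blurred = ndimage.gaussian_filter(img, sigma=sigma)
    grad = ndimage.gaussian_gradient_magnitude(img, sigma=sigma)
    if edge_scale is None:
        edge_scale = grad.mean() + 2.0 * grad.std() + 1e-12   # crude "this is an edge" level
    w = np.clip(grad / edge_scale, 0.0, 1.0)                  # ~1 near edges, ~0 in flat areas
    return w * img + (1.0 - w) * blurred                      # keep edges, smooth the rest
```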
The downside of the sophisticated filters is that they are yet one more piece of software to purchase and master.
Neat Image Specifics
Just to be clear, I have no relationship with Neat Image or any other software company. I also used Imagenomics Noiseware, but found the Neat Image product easier to use, so that's all I'm addressing. I did like Noiseware's filter adjustment, not only by frequency, but by image intensity. In other words, you could make the filter hit the dark regions really hard, while doing little to bright regions. Noise Ninja is another highly regarded product, but I have not tried it.
Neat Image starts off by having you select a rectangular patch of the image in which you say that there is no significant image detail. This provides the noise model for the image. You may also use noise models from external sources, such as from images processed in the past or specifically built for a given camera/sensor. It analyzes that patch for noise character and amplitude. This is its first advantage over simple filters. It's going to focus on noise based on what you showed it, not all possible frequencies. Once NI constructs its filter, it is up to the user to set two sliders for each of five frequency bands. This is similar to the manner in which wavelet sharpening programs have a set of sliders to allow sharpening at different spatial scales, except that there are two things to control for each band: the noise reduction amount and the noise amplitude. Adjusting these settings can be confusing for beginners and often results in overprocessing of the image and/or tiny, but annoying and often plentiful, image artifacts.
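How Neat Image actually builds its profile is proprietary, but the basic "measure the noise in a featureless patch" step can be sketched in a few lines of Python/NumPy (the patch coordinates and smoothing sigma below are made up):
```python
# Sketch of the general "measure the noise in a featureless patch" idea.
# How Neat Image actually builds its profile is proprietary; the patch
# coordinates and smoothing sigma below are made up.
import numpy as np
from scipy import ndimage

def noise_sigma_from_patch(img, y0, y1, x0, x1, smooth_sigma=2.0):
    patch = img[y0:y1, x0:x1].astype(np.float64)
    trend = ndimage.gaussian_filter(patch, smooth_sigma)   # remove any slow brightness ramp
    residual = patch - trend
    return residual.std()                                  # high-frequency noise level

# sigma = noise_sigma_from_patch(image, 100, 164, 200, 264)   # coordinates are hypothetical
```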
The adjustment by frequency is the next advantage that NI has over low pass filtering. In many, if not most, cases, we are primarily concerned with high frequency noise. But astro photographers have other problems too. For example, light pollution can create a gradient that spans the entire width of the image. Using the ultra low frequency slider would let you correct for the gradient, despite the fact that it is a very low frequency phenomenon and would be untouched by any sort of a useful low pass filter.
For a long time, I was very frustrated in trying to determine the proper setting for the noise level sliders. It was very difficult to detect what effect they were having by directly examining the image, especially since I didn't know what sort of effect I was looking for. It turns out that the image artifacts, which manifest as randomly placed little dashes throughout the image, are a direct result of improper noise level settings.
Just to confuse things further, Neat Image includes an adjustable option named Artifact Removal with a slider for Dots and another for Dashes. The little icons near the slider look just like the sort of artifacts that appear from improper noise level settings. Check the check box and problem solved, right? Well, I can't tell you how many ways I adjusted those sliders with nothing happening to my results. It turns out that the artifact removal that Neat Image is talking about has to do with dot- and dash-like artifacts in the _input_ image, not the processed image.
I won't try to present a NI tutorial here, but I have found a couple of things that have helped me gain control over the NI settings.
1) Unless you know that you want to reuse a specific filter, select Filter -> Reset Noise Filter Settings in the Device Noise Profile tab to be sure that you are starting off from a known point.
2) In the Noise Filter Settings tab, set the Quality Mode to Normal because you will be making a lot of adjustments and you will want to see the results quickly. The Quality Mode can easily be reset after filter tuning.
3) Set the Noise Reduction Amount Luminance slider to 100%. This allows you to see the full effect of the adjustments. At the end of the process, the Luminance slider is moved back to restore a pleasing amount of noise. If you run NI as a Photoshop plug-in, then Photoshop can also fade the filter to achieve the desired amount of noise reduction.
4) Choose the Y Channel Frequencies display mode. Until I discovered this display mode I was utterly lost when it came to setting the Noise Level sliders. Using this display mode allows you to directly observe the effect of your adjustments. This display mode is available in the lower left corner of the screen, next to the scale. It probably has an orange rectangle and the word Normal on it or a small pie chart with something like YCrCb beside it. Click it and choose Y Channel Frequencies from the list.
5) Set the Noise Reduction Amount frequency sliders to zero for all but the High band, which is left at 100%. Adjust the Noise Level Frequencies High band slider and watch the Y High display. Adjust the slider so that all of the little dash-like artifacts are obliterated (remember that the Noise Reduction Amount is currently set to 100% and will be reduced later). Clicking on any of the displays quickly toggles between filtered and unfiltered versions of the image. Image features that truly do have a sharp edge should remain visible, but otherwise, the entirety of the Y High display should be an even gray. You will probably find that above a certain point, increasing the setting of the Noise Level slider does not change the appearance of the Y High image much, if at all. That is where you want to set the slider.
5b) If you find that you can't eliminate all of the trash, then it's time to start increasing the Noise Level slider and/or the Noise Level Luminance slider. Increase only enough to eliminate the noise.
5c) Click back and forth between filtered and unfiltered displays and observe the image display. At this point, there may be very little effect visible and there should be no effect visible on features that you want retained in the final image. If the highest frequency band of the filter is affecting the parts of the image that you want to retain, then I suggest upsampling your image. There are other software packages that do upsampling better than Photoshop, and you should use one of those if possible***. And again, even if some of your desired image is being affected by the filter, you will be reducing the filter amount at the end of the process.
6) Set the Noise Reduction Amount Mid to 100%. Check your main image display. There may be some blurring starting to appear. If so, back off on the Noise Reduction Amount a bit until most of the blurring goes away. Adjust the Noise Level Mid slider in the same manner as for the High. This time, most of the Y Mid image will be an even gray, but there will be more features visible. You want to adjust the slider so that spurious stuff in the Y Mid image is eliminated, but some of the other disjointed, but correctly oriented features remain. Again, you'll find a point where increasing the slider doesn't change much and right at the edge of that is where you want to set it.
7) Do the same for the Low band while observing the Y Low and the main images. Don't allow much blurring of the filtered image to occur.
8) Very Low and Ultra Low can be adjusted best by observing the main image window and adjusting Noise Reduction Amount for the best visual appearance. Quite often these sliders will remain at zero or low values. I haven't yet found a need to adjust the Noise Level sliders for Very Low and Ultra Low, but once you understand how the Noise Level sliders work for the other bands, then you know what to look for in the main image.
9) After all adjustments are complete, change the Quality Mode if you wish. If you select Highest, the Preserve Details checkbox will become enabled. I think you're better off leaving the checkbox cleared. You lose very little in the way of details if you don't use it and you risk creation of artifacts if you do. Use as you see fit.
10) Most likely, the image appears a bit overprocessed. If the image being processed happened to be a portrait, an overprocessed image would make the person's skin look sort of like plastic or like computer generated imagery. I'm not quite sure what the effect might look like for a solar image. But because the Noise Reduction Amount Luminance slider was set to 100% at the beginning, it probably needs to come down some. Adjust the slider to give the main image a more pleasing appearance. A small amount of noise improves the appearance of sharpness.
Do be aware that the images have the contrast boosted to help you see the filter effects. You might find that the filter effect is not as large as you thought it would be when you get the processed image. You will probably find it best to leave the processing looking just a little heavy. If you have the Photoshop plug-in, always leave the filtration a little heavy and do the final adjustment using Edit -> Fade in Photoshop.
Alex, if you were having problems getting Neat Image to do what you wanted, especially if you simply had no idea what to do with the Noise Level, I suggest giving it one more try with my outline above.
I am by no means an expert on Neat Image ReduceNoise V8. There is still a lot I have to learn about it. I don't know how to use their plots or the fine tuning. I'm really just finally getting my feet on the ground. I feel like I'm just at the point where I can start asking some meaningful questions.
Geez, this got long. Sorry.
Bruce G
* There are cases where the noise character changes considerably from one part of the image to another. NI allows characterization of only one noise type at a time. This is not a terrible burden if running the Photoshop plug-in, as you can either make a selection or create a layer of just the stuff you want for one noise condition and apply an appropriate filter for that. Then make another selection or layer and process that differently. Repeat as necessary.
** I have found very little information on the internet regarding Neat Image's method, and they certainly aren't talking about how they do their magic. However, certain display modes within the program, as well as image artifacts left after processing with poor parameter settings, hint strongly at a gradient-based (edge detection) approach.
*** If you do use Photoshop, then in the Image Size dialog, check Resample: and click on the drop-down list to the right. I like Bicubic (smooth gradients), but you should try Preserve Details (enlargement) and Bicubic Smoother (enlargement) as well to see what preserves the integrity of the image best in the enlargement.
- Montana
- Librarian
- Posts: 36146
- Joined: Mon Oct 17, 2011 5:25 pm
- Location: Cheshire, UK
- Has thanked: 20709 times
- Been thanked: 9963 times
Re: Denoise
Thanks, Bruce, that is invaluable information; we could do with pinning this in the library if Mark or Merlin spots it.
Alexandra
- Alex
- Almost There...
- Posts: 604
- Joined: Sun Mar 22, 2015 7:48 pm
- Has thanked: 1246 times
- Been thanked: 731 times
Re: Denoise
The post is back here :-) Thank you.
Bruce, that is indeed a very long description, and I recognize the process in it.
Maybe I was not patient enough with solar imaging to find out the right settings.
I stopped using it in about 2014.
Here is my last solar pic, stacked from 100 frames, sharpened with wavelets, with contrast adjustments in PS.
It is a bit noisy.
Here is the same pic denoised with a Gaussian filter in Photoshop, probably with a setting of 0.3 px.
Here is the original non-denoised version in .tif format. I hope that you can download it.
Would you have a little time to denoise it with NI?
Alexandra is most probably interested in the result. Alexandra, you posted a pic about two weeks ago with a dotty pattern. Would you mind sharing it for a test? Thanks.
Bruce, I would appreciate it if you could share the pic after denoising with NI.
CS
Alex
- Alex
- Almost There...
- Posts: 604
- Joined: Sun Mar 22, 2015 7:48 pm
- Has thanked: 1246 times
- Been thanked: 731 times
- Bruce G
- Ohhhhhh My!
- Posts: 177
- Joined: Mon Jul 16, 2018 8:18 pm
- Has thanked: 128 times
- Been thanked: 121 times
Re: Denoise
I just wanted to add a couple of additional comments regarding the paper.
I think one question that many people might have would be something like "That looks like cool mathematics, but can we actually use it?"
I believe that we can. We will need to wait for some more development, but the theory supports denoising in a manner that could be easily adapted to our data. The paper is written in terms of neural nets, training sets, validation sets, etc., and those aren't necessarily daily conversation topics for many. How would the concepts in the paper be applied to what astrophotographers do?
I'm not exactly a world expert in neural nets either, but if I understand the paper correctly, we could train the denoiser simply by presenting individual frames of the video sequences that are typically recorded. That is the beauty of their method. The neural net software never has to be shown a "ground truth" image. As long as it is a constant scene, the method should work. Image alignment, in the manner of AutoStakkert!, should be run first. Noise such as that resulting from the use of high Gain settings clearly falls within the type of noise that can be addressed by their methods.
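As a sketch of what that could look like in practice, assuming the frames of a capture have already been registered (for example, exported from an alignment step), one plausible way to form the noisy input/target pairs is simply to pair up different frames of the same scene. This is a reading of how the method could be applied, not something taken from the paper's code:
```python
# Sketch: turn an aligned stack of noisy frames of the same scene into
# (input, target) pairs for noisy-target training. Assumes the frames are
# already registered; the random pairing scheme is just one plausible choice,
# not something taken from the paper.
import numpy as np

def make_training_pairs(frames, n_pairs, rng=None):
    """frames: array of shape (n_frames, H, W), already aligned."""
    rng = rng if rng is not None else np.random.default_rng()
    pairs = []
    for _ in range(n_pairs):
        i, j = rng.choice(len(frames), size=2, replace=False)   # two different noisy looks
        pairs.append((frames[i], frames[j]))                    # noisy input, noisy target
    return pairs
```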
It is less clear to me whether or not spatial image intensity variations, such as would result from atmospheric disturbance, fall into the same class of zero-mean noise. Over the entire image we could say that the atmospheric disturbances are zero-mean. Some parts of the image are brightened and some parts are darkened, by a total of about the same amount. But at the pixel level, I can't say that the blurring constitutes zero-mean noise. In fact, I have to believe that it doesn't.*
At the same time, the denoiser _will_ tend to find what is consistent from image to image to image in the video series and though contrast will still suffer somewhat, I would expect that the processing would result in an improvement even for atmospheric disturbances.
There is one example in the paper, the koala bear image, that probably doesn't apply to astrophotography. In that example, a high quality image was corrupted with a high level of noise. However, because the noise could take on many values while the true pixel value at any one location in the good image had only one value, then it was extremely easy to use the mode of the data at each pixel location as the correct value. It does a fantastic job, but we never have the initial high quality image as was used in this example.
Near the end of the paper the authors discussed doing the denoising in a real-time setting. They were addressing a synthetic camera moving through a 3-D modeled scene, so it's not the same application. Still, it seems that for small images (256 x 256) they were able to produce 2 frames per second. Not bad at this stage of the game and bound to improve. If small regions can be denoised in real time, that might help with better focus or scintillation measurements.
We can only hope that this research gets some attention and is continued. We are lucky in that we record our data in a manner that fits well with their approach. If this becomes publicly available it would be great to give it a try.
Bruce G
* Consider a pixel that, in truth, has a particular intensity. If noise with an average value of zero is added to the pixel's value and the pixel is repeatedly sampled, you will (with a sufficient number of samples) obtain a normal distribution of values around the true value. This is the kind of noise that the methods in the paper are addressing.
Now consider a pixel that represents a bright area that is near darker areas in an image. As the atmosphere wiggles around, the image is blurred, darkening the bright pixel and/or the darker areas are refracted such that they appear at the pixel's position instead of the correct, lighter area (which get refracted somewhere else, slightly lightening that particular pixel). In either case, the pixel is darkened and there is very little that might lighten it since it is mostly surrounded by parts of the image that are darker. There is a bias toward the darker values. Taking more and more samples does nothing to eliminate the bias. The atmospheric effects are not zero-mean.
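A tiny numerical illustration of this footnote (and of the earlier clipping caveat), in Python/NumPy; the pixel value and noise level are made up:
```python
# Tiny numerical illustration of the footnote above: averaging many samples of a
# pixel corrupted by zero-mean noise recovers the true value, but if the noise is
# clipped (the pixel saturates), the average stays biased low. Values are made up.
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.95
samples = true_value + rng.normal(0.0, 0.10, size=100_000)   # zero-mean noise
clipped = np.clip(samples, 0.0, 1.0)                         # saturation at 1.0

print(samples.mean())   # ~0.95: the bias vanishes as the sample count grows
print(clipped.mean())   # ~0.93: clipping discards part of the noise distribution
```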
Last edited by Bruce G on Mon Jul 30, 2018 12:59 pm, edited 2 times in total.
- Bruce G
- Ohhhhhh My!
- Posts: 177
- Joined: Mon Jul 16, 2018 8:18 pm
- Has thanked: 128 times
- Been thanked: 121 times
Re: Denoise
Alex,
First of all, let me say what a wonderful image you made. There is beautiful detail everywhere. Whatever processing you did up to this point has clearly been done with skill and care, with an eye toward detail and a light hand on the adjustments. Well done!
In order to demonstrate the difference between the filter methods, it's necessary to look at the image closely. I chose an area just a little above center in the image that has nice detail in both dark and light regions along with both high and low image gradients. I'll just post the results first. I'll follow up with an additional post on the details of the Neat Image processing.
Here is the initial image. This is Alex's Gaussian filter. And this is the result from Neat Image.
It will probably be best if you can download the images and open them in a viewer that will allow you to rapidly switch between the different results. Make the images the full size of your screen. The differences can be very subtle. Change the order in which you view the images from time to time. Remember that you are looking at magnified images. Many of the subtle differences that you see will be invisible at a normal image size. For the sake of this discussion though, these small differences are instructive because they provide insight to the general behavior of the filter.
Notes on the image:
1) Again, the tonal range and the processing of the initial image were superb. Because of the high quality of the image, it is possible to discern subtle features in the image and have confidence in their existence (your brain can really play tricks with patterns in images sometimes).
2) Noise in the original image is sufficiently large that it is easily visible and recognizable. It is worth applying a noise reduction method.
3) The noise in this image was almost entirely high frequency noise. This allows the Gaussian filter to be very effective since its cutoff frequency can be kept high, which avoids blurring details.
Notes on the results:
To obtain optimal results, all filters require care and understanding in their application. Non-linear filters, in particular, are easy to push too far.
1) The Gaussian filter produces a very beneficial reduction in image noise. The residual is noticeable, but not objectionable.
2) A considerably higher level of noise suppression is possible with Neat Image.
3) Careful comparison gives a slight edge to the Gaussian filter on some details. It is good to compare each of the filtered versions directly with the original as well as with each other.
4) Reducing the amount of noise suppression applied by Neat Image will restore this detail, but at the cost of increased overall image noise. Eventually, it would be very similar to the Gaussian result in terms of the amount of remaining noise. This is a very subjective adjustment. As I mentioned in a previous post, I usually allow Neat Image to overprocess the image a bit and then back off a little once I get back into Photoshop. For this test, I attempted to get the best result out of Neat Image that I could and then left it there. I did not reduce the amount of filtering applied by Neat Image.
5) In the low gradient areas toward the center right, Neat Image provides a large improvement. Only one filter was characterized for this image. The filter being used in this portion of the image is the same as the filter used for the highly detailed areas. If any true detail were present in these low gradient areas, it would have been retained, despite what appears to be a pretty heavy smoothing. A Gaussian filter could be designed to smooth these areas in a similar manner, but any high frequency information (detail) would be destroyed.
Further comments:
1) For this type of noise, a simple digital low pass filter could be designed that would suppress the noise better than the Gaussian filter. Unfortunately, most people don't have the tools to do that. (A rough frequency-domain sketch follows this list.)
2) There were some image features for which the Gaussian filter appeared to preserve better detail, or perhaps it might be more accurate to say it preserved better contrast. I believe that this small loss of contrast could be regained with a light high pass filter or a small amount of sharpening following the Neat Image filter, but I haven't tested that. Because of the additional noise suppression provided by Neat Image, a small amount of sharpening will not noticeably increase the noise.
3) If the noise in the image included lower frequencies, I believe that the results would much more strongly favor the Neat Image processing, as it is impossible to design a low pass filter to suppress the low frequencies without also squashing everything higher.
4) I have been using Neat Image for about a year and I still have a lot to learn. An expert may have been able to tweak that last bit of contrast out of the image, but I'm not there yet. Control and flexibility are often in direct opposition to ease of use.
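To illustrate comment 1, here is a rough frequency-domain sketch in Python/NumPy of what a "designed" low-pass could look like: a Butterworth response has a sharper cutoff than a Gaussian, so it can hit the highest frequencies harder while leaving mid-frequency detail less affected. The cutoff and order values are arbitrary examples.
```python
# Rough sketch of a "designed" low-pass (comment 1 above): a frequency-domain
# Butterworth filter has a sharper cutoff than a Gaussian, so it can suppress the
# highest frequencies harder while leaving mid-frequency detail less affected.
# The cutoff (as a fraction of Nyquist) and the order are arbitrary examples.
import numpy as np

def butterworth_lowpass(img, cutoff=0.5, order=4):
    img = img.astype(np.float64)
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]            # cycles/pixel; Nyquist is 0.5
    fx = np.fft.fftfreq(nx)[None, :]
    r = np.sqrt(fy ** 2 + fx ** 2) / 0.5        # radius as a fraction of Nyquist
    h = 1.0 / (1.0 + (r / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))
```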
Last edited by Bruce G on Sat Aug 04, 2018 5:22 pm, edited 1 time in total.
- Montana
- Librarian
- Posts: 36146
- Joined: Mon Oct 17, 2011 5:25 pm
- Location: Cheshire, UK
- Has thanked: 20709 times
- Been thanked: 9963 times
Re: Denoise
That is quite an incredible transformation, Bruce. Do you think it is better to buy Neat Image as standalone software, or is it better as a plug-in for Photoshop?
I would send you a Tif myself but I can't work out how to do that other than you giving me your e-mail address. I think this example says it all though.
Alexandra
- Bruce G
- Ohhhhhh My!
- Posts: 177
- Joined: Mon Jul 16, 2018 8:18 pm
- Has thanked: 128 times
- Been thanked: 121 times
Re: Denoise
Alexandra,
I have a gmail account. The email address is bgirrell, followed by the usual gmail.com suffix.
I'll be happy to work on an example for you.
If you already have and use Photoshop, I would strongly recommend the plug-in. It permits you to use Photoshop's selection tools, layering, and other methods with the filter for substantially improved control. I would only recommend the standalone version if a person is not familiar with Photoshop, due to Photoshop's steep learning curve. I have been using Photoshop since Version 3, in the 1990s, so I'm kind of biased in that direction.
- Alex
- Almost There...
- Posts: 604
- Joined: Sun Mar 22, 2015 7:48 pm
- Has thanked: 1246 times
- Been thanked: 731 times
Re: Denoise
Hello Bruce,
Thank you for your comments on my image.
To say a few words about it: it was made with a 250 mm f/7 Newtonian coupled with a custom-designed 5x telecentric. The resulting focal length is 8750 mm, f/35.
These numbers are normally too large for my area, where fairly ordinary seeing dominates. But that day, when I tested the new telecentric in Ha, the seeing was more stable than usual.
Regarding the processing, I used mostly level 3 and level 4 wavelets, and probably a bit of level 2, but I can't remember in detail. This is to say that seeing and collimation were not perfect.
I also tried to balance the low and high levels during processing. I generally try to keep a balance and not darken the picture too much, as that last technique gives a (false?) impression of strong contrast.
Thank you for taking the time to demonstrate the capabilities of denoising and Neat Image. I'm impressed!
There is potential there for a higher level of processing! I should really pay attention to it.
You have experience with it and you managed the different low/high level and low/high frequency characteristics. Congratulations!
I'll give another multi-frequency tool a try, and I'll come back to this post if I get a better result than my former Gaussian.
As I'm not very fast at processing, it could take several days.
CS
Alex
- Montana
- Librarian
- Posts: 36146
- Joined: Mon Oct 17, 2011 5:25 pm
- Location: Cheshire, UK
- Has thanked: 20709 times
- Been thanked: 9963 times
Re: Denoise
Hello Bruce,
I would like to buy Neat Image, but do I buy the Home version, which only does 8-bit images, or the Pro edition, which is much more expensive but handles 16-bit images?
Kind regards
Alexandra
- pedro
- Way More Fun to Share It!!
- Posts: 12948
- Joined: Sun May 01, 2016 8:26 pm
- Location: Portugal
- Has thanked: 16 times
- Been thanked: 7759 times
Re: Denoise
I also use a Gaussian filter in PS for denoising. It works like a charm.
Pedro Re'
https://pedroreastrophotography.com/
- rsfoto
- Way More Fun to Share It!!
- Posts: 6795
- Joined: Mon Jun 18, 2012 8:30 pm
- Location: San Luis Potosi, México
- Has thanked: 11388 times
- Been thanked: 6428 times
Re: Denoise
Hi,
My personal impression of many images is that we sometimes tend to use too much wavelet sharpening, and that introduces a grainy look, or as I call it, sandy, like the sand on the beach.
It sometimes happens to me too ...
regards rainer
regards Rainer
Observatorio Real de 14
San Luis Potosi Mexico
North 22° West 101°
- marktownley
- Librarian
- Posts: 44628
- Joined: Tue Oct 18, 2011 5:27 pm
- Location: Brierley Hills, UK
- Has thanked: 23839 times
- Been thanked: 12169 times
Re: Denoise
For the difference in price I would go with the ability to work with 16-bit data every time...
http://brierleyhillsolar.blogspot.co.uk/
Solar images, a collection of all the most up to date live solar data on the web, imaging & processing tutorials - please take a look!
- Montana
- Librarian
- Posts: 36146
- Joined: Mon Oct 17, 2011 5:25 pm
- Location: Cheshire, UK
- Has thanked: 20709 times
- Been thanked: 9963 times
Re: Denoise
Just want to bump this Neat Image tutorial for Mark to move to the software section.
Alexandra