I'm just looking into drizzling for the first time, and wondered how many people on here actually use it. Does it help with mono images? Do I actually need to dither as well? Any thoughts would be appreciated.
---
Another question regarding this topic: CFA-drizzling OSC images for better color? It seems a few people do that. Is there really a benefit?
---
David Koslicki: Great thread topic! I do dither in my subs, but have never stacked with drizzling. I use APP for stacking and really want to try this. Thanks and clear skies, Ian
---
@Ian Dixon I actually use APP too! In case it helps, I've found that the type of kernel you pick (topHat, point, square, Gauss) doesn't matter too much, as long as you don't pick point. I've tried varying the droplet size and found that smaller droplets = sharper, but with more fine-scale noise. I haven't checked the theory, though, to see if this is particular to my setup or a general fact.
---
Another point regarding this topic: in general it is advisable to drizzle CFA images at 1x to get better detail, at the expense of slightly lower SNR. Natural (i.e. drift) or imposed dithering is obviously a requirement. The same applies if you're undersampling your seeing by a meaningful factor.
---
@David Koslicki thank you very much. I am going to try this with some recent M104 data (sampled at 0.92 arcsec/pixel), for which I used a 120 mm Esprit apo @ 840 mm focal length and my 2600MC Pro with 3.76 µm pixels. I have lots of other data with the same camera and my C8 Edge, which is 2032 mm; there my sampling is 0.38 arcsec/pixel, so I am oversampling with this rig. Would drizzling help me in this case (or just inject pain)? Thanks, Ian
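(As a quick sanity check of those sampling numbers: the standard plate-scale formula, scale = 206.265 × pixel size in µm / focal length in mm, reproduces both figures. A minimal sketch:)

```python
# Plate scale in arcsec/pixel from pixel size (um) and focal length (mm).
# The constant 206.265 comes from 206265 arcsec per radian.
def plate_scale(pixel_um: float, focal_mm: float) -> float:
    return 206.265 * pixel_um / focal_mm

print(plate_scale(3.76, 840))   # ~0.92 "/px (Esprit 120 at 840 mm)
print(plate_scale(3.76, 2032))  # ~0.38 "/px (C8 Edge at 2032 mm)
```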
---
David Koslicki: Thanks!
---
Drizzling only makes sense for undersampled data. Drizzle should reduce the stepping effect you see on undersampled stars. Drizzle adds no information at all; it just smooths "gaps" along an edge of three neighboring pixels, based on anti-aliasing algorithms. Dithering makes no difference. The closer you work to oversampling, the more counterproductive drizzling gets, because in the end you have to downsample again and you only blow up processing time and storage by a factor of 4. To sum up: drizzle does not add any information, it only smooths undersampled structures.
---
@Ian Dixon I doubt that drizzle will gain you much when sampling at 0.38"/pixel. Ballparking it: I imagine your tracking would need to be under ~0.4 arcseconds RMS, with excellent seeing, for you to gain anything from drizzle, and then only if you are imaging something really small whose detail you want to bring out. My rationale is that if your tracking accuracy is any worse than this, then the photons emitted from one point source of light are already falling on multiple pixels in each exposure. From what I've looked into, drizzle is for the opposite problem: multiple point sources of light falling on a single pixel. My usual strategy, however, is to always test theory with data. Since it only costs hard drive space and CPU cycles, it never hurts to try and see what the results are! When I do this, it often improves my understanding of the underlying processing algorithms.
---
Ruediger: Thanks @Ruediger - got it
---
@Ruediger From reading the drizzle paper (https://iopscience.iop.org/article/10.1086/338393/pdf), it's more complicated than a simple anti-aliasing algorithm. While that's the most visible feature (smoothing out the jagged edges), since the shift of the image (the dither) is used to infer what smaller pixels (the drizzle "drops") would have measured, you really are dividing the input from a single pixel between several output pixels (see Section 7 of that paper). So you really are teasing out extra information. To see this in action, find a pair of stars/objects that are really close to each other and notice how the resolution changes. Using a different crop of the image I posted above, note how the faint smudge on the lower right of this star is better resolved when drizzled (hence, more than just smoothing edges):

[comparison images: no drizzle vs. drizzle]

Also, drizzle will definitely not work if images are not dithered or otherwise moved from sub to sub; the algorithm would have nothing to work with.
---
I tried drizzling tonight on my latest image: [image: M81 and M82, using very old PixelMath]. What I have noticed is that without drizzling I had really nasty sharp edges and artefacts on the stars, and with drizzling they are not too bad (still a bit rough). Is this one thing that drizzling can help with?
---
Happy to share some experience here with 1,000 mm FL and an ASI295MC. This combination is neither over- nor undersampling. I did it in APP, mostly with 2x drizzling and droplet 0.5. I found that it sometimes helps a little with resolution, but at the cost of noise. As others have stated here, the processing time and file sizes are another disadvantage. I assume you only really benefit from drizzling with undersampled data, as many others said before. You will find nice explanations of the pros and cons from Mabula and others on the web, which I highly recommend. But if you have the time, it's worth pushing some buttons and comparing the results with your own eyes 😉 Good luck, Mike
---
David Koslicki: Hi David, many thanks for your reply. There is a misunderstanding of what the term "information" means. There is no way to generate additional "information" by any algorithm. You can only generate a smoothing effect, but no actual information. Drizzling is a purely visual effect, as your example shows: a star exhibiting the effect of undersampling. Your second star shown is a blurred or smoothed shape at that zoom level. You can achieve the illusion just by zooming in indefinitely: you always get an undersampling effect, which you could mitigate with drizzle. Drizzle was originally developed for cameras with low resolution compared to screen resolution or print media. If your camera already provides high resolution, close to or even beyond oversampling, drizzle won't improve your image. Or, the other way round: it makes no sense to drizzle a full-frame image and then scale it down to 50% in order to post it on AB. Or even convert it to JPG 🫤
---
Hopefully I can clarify a few points with respect to dithering and drizzle; my comments apply to both mono and color CMOS and CCD cameras.

Dithering: a random (ideally) movement of the camera sensor pixels relative to the image target, done between subexposures. For deepsky targets you should be doing this whether or not you drizzle. If you don't dither, you will see background artifacts due to pattern noise from your sensor. If you are guiding, since the image moves very little, you will see the pattern noise burned into your final image. If not guiding, where there will be some drift between frames, you need to dither at least occasionally, otherwise you end up with a pattern called walking noise, since non-guided drift is mostly in a defined, non-random direction. Again, all this is independent of whether you want to drizzle in processing. It is important that you dither. (A sketch of what a dither sequence looks like follows this post.)

Drizzling: a processing step that basically maps the original, coarser subexposure grid, with its pixel values, to a finer pixel grid that has 4x more pixels (for 2x drizzle) or 9x more pixels (for 3x drizzle). In this way you actually do potentially capture more image detail. The method was specifically developed for the case of undersampling, when your pixels are too large for your plate scale and seeing. See Fruchter and Hook (https://www.jstor.org/stable/10.1086/338393) for the math behind this transformation. In a nutshell, this trick can recover detail that was lost in each individual subframe but is recoverable by adding the contributions of all the shifted subframes.

Note that for lucky imaging in planetary work, images are often oversampled to improve resolution, and even there drizzle can help increase the detail captured. So a case can be made that for deepsky work with short lucky-imaging exposures, drizzle should help even if you are oversampled. My own personal experience with non-guided lucky imaging (less than about 20 to 30 seconds) is that 2x drizzle reduces my FWHM even though I am in theory at the sweet spot for sampling without drizzle (plate scale at 0.5 arcsec, so x2 to 3 for Nyquist sampling gives me 1 to 1.5 arcsec, good enough for all but the absolute best seeing). Drizzle is very helpful for bringing out fine detail, such as in planetary nebulae. In my tests I see no impact of 2x drizzle on S/N, and I did not find that 3x drizzle improved my resolution over 2x.

So I do use drizzle when I can. I use DeepSkyStacker for drizzle. The only reason I don't drizzle is if I am using the full frame of my 50 MB raw images; then DSS crashes on my computer. The issue is the 4x increase in pixels, so a 50 MB image becomes 200 MB in the stacking! So for drizzle I either capture a smaller frame or set a smaller frame size in DSS.

In short: dither. Try with and without drizzle processing on different targets and different nights and compare. Everyone's setup and conditions are different; I really encourage you to experiment with it, and learn if and when to use it.

Clear skies and fast CPUs,
Rick
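(To make the dithering step concrete, here is a minimal sketch of generating the random inter-sub offsets Rick describes. This is purely illustrative; in practice your capture/guiding software, e.g. PHD2, issues these moves for you, and the `max_px` amplitude here is an arbitrary assumption.)

```python
import random

def dither_offsets(n_subs: int, max_px: float = 10.0, seed: int = 42):
    """Random (x, y) sensor offsets in pixels, one per subexposure,
    so that fixed-pattern sensor noise lands on different sky pixels
    in every frame instead of 'walking' in one direction."""
    rng = random.Random(seed)
    return [(rng.uniform(-max_px, max_px), rng.uniform(-max_px, max_px))
            for _ in range(n_subs)]

for dx, dy in dither_offsets(5):
    print(f"shift sensor by ({dx:+.1f}, {dy:+.1f}) px before next sub")
```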
---
@Ruediger Fair enough: my interpretation of "information" is the Shannon-entropy-esque definition. Hence a single (undrizzled) pixel with normalized value 1 will have entropy 0, while four (drizzled) pixels with, say, normalized values {1/2, 1/3, 2/3, 1/4} will have entropy close to Log[4]; hence, more information.
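(To make the comparison concrete, a minimal sketch of the entropy computation, with the caveat that the four values above must first be normalized into a probability distribution, since as written they don't sum to 1:)

```python
import math

def shannon_entropy(values):
    """Shannon entropy (in nats) of pixel values treated as a
    probability distribution (normalized to sum to 1)."""
    total = sum(values)
    ps = [v / total for v in values if v > 0]
    return -sum(p * math.log(p) for p in ps)

print(shannon_entropy([1.0]))                     # 0.0: one pixel, zero entropy
print(shannon_entropy([0.25] * 4))                # log 4 ~ 1.386: the maximum for 4 pixels
print(shannon_entropy([1/2, 1/3, 2/3, 1/4]))      # ~1.32: close to, but below, log 4
```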
---
I usually drizzle, particularly for smaller galaxies and for the stars I will use to replace the narrowband ones. Usually I drizzle x2; I have a native 1.41"/pixel, so that takes it to about 0.7"/pixel, which gives me visible benefits. I also find the drizzled noise easier to deal with, as it is finer. Occasionally I drizzle x3 for tiny stuff - e.g. the Waterbug galaxy and friends showed improvement at 3x over 2x: https://www.astrobin.com/itma6w/?nc=user
---
You'll find a description of the algorithm in this paper, and also in the documentation of the DrizzlePac software developed for the HST. In a nutshell, the principle is to resample each subframe on a finer grid and to replace each original pixel with a smaller 'droplet', the simplest case being a smaller square, but other shapes (e.g. a Gaussian) are possible. The result for a single frame is an image filled with droplets, with gaps in between (depending on how small the droplets are compared to the original pixels). When stacking enough dithered frames, the gaps are eventually all filled in by a 'drizzle' of droplets.

@David Koslicki, it is normal that SNR decreases with smaller droplets: on average, fewer droplets contribute to the signal at any given point of the resampled final image.

The idea of filling in gaps by dithering can be extended to the debayer process. Each color plane of a raw RGB image is made of pixels and gaps. The gaps in each plane can be filled in with dithered frames, and the process is referred to as 'bayer-drizzle'. In that case, if the objective is not to increase the resolution (though it can be), the native sampling can be used with no droplet shrinkage. I always dither my frames anyway, so bayer-drizzle comes for free.

A resolution increase can also be achieved if the images are undersampled. Here is an example to showcase my own results:

[images]

The ASI178 barely samples my Samyang 135 (5.6" measured PSF for 3.7" pixels, see revision B) at the center of the field of view. The image was bayer-drizzled with x4 resampling and 0.5 square droplets. I'm using a home-brewed Python program that is limited to square droplets, so I never experimented with other shapes.

Another example, with x4 resampling and 0.5 droplet size (left), from a simulated sequence of 200 dithered images of a resolution target (right). Note that the input was actually RGB, hence the subtle color artefacts in the drizzled image.

[images]

CS, Frédéric
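(For readers who want the square-droplet accumulation spelled out, below is a minimal toy sketch of that idea. This is not Frédéric's program; it assumes purely translational dithers, known offsets, and a `pixfrac`-style droplet shrink factor, and is written for clarity rather than speed.)

```python
import numpy as np

def drizzle_stack(frames, offsets, scale=2, pixfrac=0.5):
    """Toy square-droplet drizzle: resample each frame onto a grid
    `scale`x finer, shrinking every input pixel to a square droplet
    of side `pixfrac` (in input-pixel units) before accumulating.

    frames:  list of 2-D arrays (the dithered subexposures)
    offsets: list of (dy, dx) sub-pixel shifts in input-pixel units
    """
    h, w = frames[0].shape
    flux = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(flux)
    half = pixfrac / 2.0

    for frame, (dy, dx) in zip(frames, offsets):
        for iy in range(h):
            for ix in range(w):
                # droplet footprint in output-grid coordinates
                cy, cx = (iy + 0.5 + dy) * scale, (ix + 0.5 + dx) * scale
                y0, y1 = cy - half * scale, cy + half * scale
                x0, x1 = cx - half * scale, cx + half * scale
                for oy in range(max(0, int(y0)), min(h * scale, int(np.ceil(y1)))):
                    for ox in range(max(0, int(x0)), min(w * scale, int(np.ceil(x1)))):
                        # overlap area between droplet and this output pixel
                        a = (min(y1, oy + 1) - max(y0, oy)) * (min(x1, ox + 1) - max(x0, ox))
                        if a > 0:
                            flux[oy, ox] += a * frame[iy, ix]
                            weight[oy, ox] += a

    # weighted average; pixels never hit by a droplet stay at zero
    return np.divide(flux, weight, out=np.zeros_like(flux), where=weight > 0)
```

With `pixfrac=1` this degenerates to plain block replication of each frame; shrinking the droplets sharpens the result but leaves gaps unless enough dithered frames are stacked, which is exactly the coverage/SNR trade-off described above.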
---
David Koslicki: Hi David, I am arguing from the same definition, based on Shannon's theorem: since the information comes from a predictable algorithm, the probability = 1, hence the information equals zero. You cannot generate more information than is contained in the raw data; it would violate information theory for an information sink to contain more information than the source. But forgive me if I am wrong, my studies of information theory are 30 years in the past. 🫣 Maybe we can agree on an empirical approach? Just try it out and do what looks better 🤔
---
Ruediger: There is no information created by drizzling; information is only recovered. The information at the sub-pixel level is contained in the total set of data: the collection of subexposures that are randomly moved with respect to each other. By simply adding them pixel by pixel, we lose the extra information that the random motion produced. In drizzling we have the opportunity to recover this lost information. A (too simple) analogy would be sieving a sample of two different-sized powders: if we use a large sieve so all the particles go through, it looks like all the particles are the same size. If we use a smaller sieve, we see some go through and some stay behind. We have more information about the sample, but did not create information.
---
Is my understanding correct that with 2x drizzle you reduce SNR by a factor of 2, because on average you only have 1/4 of the signal per "drizzle pixel"? Or am I missing something? Thank you for starting this discussion - highly interesting for me, since I'm currently still doing very wide-field work with camera lenses, which is significantly undersampled. I will certainly give drizzling a try. Clear skies, Wolfgang
---
For what it's worth, I always dither my datasets and produce drizzled and non-drizzled versions of the raw stack. Unless the stack is undersampled, I almost always use the non-drizzled stack. To test for undersampling I use the PI FWHMEccentricity script and examine the median FWHM value: if it's less than 2, you're undersampled. If you are a PI user and you don't have a copy of Inside PixInsight by Warren Keller, I would strongly recommend it.
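(The same rule of thumb can be estimated without PixInsight if you know your seeing and plate scale: expected stellar FWHM in pixels is simply seeing divided by plate scale. A minimal sketch, using figures quoted earlier in the thread; the 3" seeing value is an assumption for illustration.)

```python
def fwhm_pixels(seeing_arcsec: float, plate_scale: float) -> float:
    """Expected stellar FWHM in pixels for a given seeing (arcsec)
    and plate scale (arcsec/pixel)."""
    return seeing_arcsec / plate_scale

# Ian's two rigs at assumed 3" seeing, plus Frederic's 5.6" PSF on 3.7" pixels
for seeing, scale in [(3.0, 0.92), (3.0, 0.38), (5.6, 3.7)]:
    f = fwhm_pixels(seeing, scale)
    tag = "undersampled -> drizzle may help" if f < 2 else "sampled ok"
    print(f"{scale:.2f} \"/px -> FWHM ~ {f:.1f} px ({tag})")
```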