Combining OSC with Mono [Deep Sky] Processing techniques · Coolhandjo

jrista 8.93
Hello again everyone, I'm back with more time on my hands now. And now to get into the goods.

@Freestar8n Just to clarify - is T your naming convention for what I would call L in LRGB, or is this an established thing elsewhere?

Also,
A true luminance filter would be fairly narrowly peaked in the green, and much more narrow than typical QE response curves.

How would this account for objects that are very obviously not "green"? For example, M45, SH2-136 (the Ghost Nebula near the Iris), or anything with an emission line?


Arun H:
Incidentally - today's IOTD has a lum to RGB ratio of 1:1:1:1, so roughly equal total signal between luminance and color. The overall image is excellent and the colors deep and vibrant.

If we're talking about IOTDs, there are plenty that have 3:1 ratios as well. To take it to the extreme, the 2/15/24 IOTD by zombi (https://www.astrobin.com/r16oac/B/) is roughly 10 : ~1 : ~0.7 : ~0.7 (5hrs 20mins L, 33mins R, 22mins each G&B). While I'm not recommending a 10:1 ratio, clearly it worked. This isn't exactly an isolated event either; in my brief search I found plenty that had >4:1 ratios.
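(For anyone checking that arithmetic: the ratio is just each filter's integration time normalized to the red channel. A throwaway Python sketch, with the times hard-coded from the numbers above and the helper name made up purely for illustration:

# Per-filter integration times in minutes: L=5h20m=320, R=33, G=22, B=22.
def lrgb_ratio(l, r, g, b):
    # Normalize each channel's integration time to the red channel.
    return tuple(round(t / r, 2) for t in (l, r, g, b))

print(lrgb_ratio(320, 33, 22, 22))  # -> (9.7, 1.0, 0.67, 0.67), i.e. ~10 : 1 : 0.7 : 0.7
)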

Did it work? I see signs of fairly significant artificial saturation... I also noticed that there are some tenuously thin swaths of dust that fade to a thin translucent gray as the signal gets weaker. A telltale sign of RGB exposures that were probably too short. That image is from the Taurus region, which is thickly packed with dust and plenty of interspersed stars whose light often reflects off the nebula. Most of the background dust that isn't directly reflecting a literal nearby star should generally take on more of a brownish and tan color.

Deeper integrations of the region usually depict that...so the falloff from light tan to gray here, combined with the short RGB integrations (20 to 30 minutes) and the generally brownish color of the dust, makes me think that the image, as beautiful and detailed as it is, is probably not an accurate color rendition. So it worked in one sense and not in another, in other words.
C.Sand 2.33
Jon Rista:
Did it work? I see signs of fairly significant artificial saturation... [...] So it worked in one sense and not in another, in other words.

My bad, I should have been clearer. I definitely do not think there is enough RGB data there. Artificial saturation and all. "Clearly it worked" refers to the fact that it got an IOTD, and that the image looks very good. Very much my bad on wording.

All that being said, today's IOTD is greener than I would think natural dust to be, so I would not say it is necessarily an accurate color rendition either. On top of that, it has Ha added. Once again, not a bad image; I just don't think it accurately demonstrates the points this discussion is making.
HegAstro 12.28
I have to agree with Jon here. Ani's image, although of a different region, is far cleaner, richer, and more vibrant than the image of the Taurus region (although obviously not an apples-to-apples comparison). Look in the shadows. Just because an image is an IOTD doesn't make it perfect, or without room for improvement!
C.Sand 2.33
Arun H:
I have to agree with Jon here. [...] Just because an image is an IOTD doesn't make it perfect, or without room for improvement!

I would like to emphasize that I do not intend to compare the merits of the images in terms of color or accuracy and whatnot [EDIT: Though that was not clear in my original statement]. My point here is that picking out an IOTD does little to support a point of view. Today's IOTD (3/1) has ~45hrs of exposure and the 2/15 one ~7hrs (ignoring the differences in LP and such, though they seem relatively similar).
jrista 8.93
Jon Rista:
Did it work? I see signs of fairly significant artificial saturation... [...]

C.Sand:
My bad, I should have been clearer. I definitely do not think there is enough RGB data there. [...] "Clearly it worked" refers to the fact that it got an IOTD, and that the image looks very good.

Ah! Yeah, I guess there are a variety of goals, and getting an IOTD is one of them. I guess if you know how to tweak an image to win a contest, you might as well. ;) To that end, indeed, it clearly worked!
jrista 8.93
Arun H:
I have to agree with Jon here. [...]

C.Sand:
I would like to emphasize that I do not intend to compare the merits of the images in terms of color or accuracy [...]. My point here is that picking out an IOTD does little to support a point of view.

It is also very hard to do any meaningful comparison, of IOTDs or others, due to processing. I guess you could call out three key factors that are fundamental to resulting image quality in the long run: technology, acquisition technique, and processing technique. Technology is pretty easy. How the technology is used to acquire data is a little harder to evaluate and compare. Processing technique is where it all breaks down: it gets tough to compare images that seem similar when the end results don't quite align with the technical and acquisition aspects.

With processing, you have a range of skills. Sometimes people have high-end tech, excellent acquisition skills, and no processing skills, and IQ can really suffer due to aggressive and ineffective processing. Other times an imager might not have great tech or good acquisition skills, but is phenomenal at processing and can make do.

If you want to compare actual images, it's better to follow a stricter acquisition, pre-processing, and minimal (and explicit) post-processing plan to make sure you are comparing things on a level field. With the diversity of ready-processed images hosted on image-sharing sites like ABin, you might be able to broadly classify different images...low, medium, high quality...but you aren't necessarily going to get any useful insight into why an image falls into one of those buckets.
jrista 8.93
Jon Rista:
It is also very hard to do any meaningful comparison, of IOTDs or others, due to processing. [...] If you want to compare actual images, it's better to follow a stricter acquisition, pre-processing, and minimal (and explicit) post-processing plan to make sure you are comparing things on a level field. [...]

FWIW, a lot of the LRGB and OSC talk is theoretical. There are "the ways we do things" and...that is pretty deeply ingrained within astrophotography circles.

Not a lot of people do RGB only. When I come across such images, they generally rank highest on my personal quality meter. There's not much talk about LRGB ratios that are maybe more "in balance" (boy, I've seen some crazy L:RGB ratios since this thread started, some 10:1:1:1, some even higher than that!). Not much talk about RGB ratios, for that matter...the 1:2:1 stuff comes from Frank over on CN, and I tend to agree that we probably shouldn't even be shooting 1:1:1 distributions of RGB data. Few talk about how to determine an optimal R:G:B ratio, or about the fact that doing so probably requires either stacking your first couple of hours the night of acquisition and re-weighting (if you do one-night-only imaging), or stacking what you acquire the first night and then adjusting your ratio to support the weakest or noisiest channels by the time you are done.
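A minimal sketch of that re-weighting idea, assuming you have rough per-channel stacks from the first night. The noise estimator (median absolute deviation) is standard, but the allocation rule (extra time proportional to noise variance) is just one reasonable choice, not an established convention:

import numpy as np

def robust_sigma(stack):
    # Background noise estimate via the median absolute deviation,
    # scaled to match a Gaussian standard deviation.
    med = np.median(stack)
    return 1.4826 * np.median(np.abs(stack - med))

def allocate_extra_time(stacks, budget_hours):
    # Split the remaining time budget in proportion to each channel's
    # noise variance, so the weakest/noisiest channel gets the most.
    var = np.array([robust_sigma(s) ** 2 for s in stacks])
    return budget_hours * var / var.sum()

# Usage: allocate_extra_time([r_stack, g_stack, b_stack], 10.0)
# -> hours of additional R, G, B to shoot for the rest of the project.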

There is some real-world practical experience, but it's so seldom done that it's hard to get robust statistics on what the results are actually like in the end. I know what I personally like, the results that are most impressive to me. The IOTD from a couple of days ago was an LRGB 1:1:1:1 ratio, and it had excellent, quite deep color. The details were very good, and I wonder if a super lum was used, which would have resulted in one incredibly strong L channel. Regardless, the fact that it had as much R, G and B as L is probably a key factor in the quality of the result.

Not a lot of people process OSC in the most optimal ways, either... I think there are numerous ways we could process OSC that would make the results a lot better, but there actually isn't, and hasn't been, as far as I've seen, a lot of talk about processing techniques that might step up the OSC game in general. For example, Bayer drizzle (sometimes called CFA drizzle) is probably a much better way of both demosaicing and integrating OSC data than your standard debayering algorithms. Without enough subs it can result in higher noise/lower SNR, but with enough subs (and better, plenty of subs) it will produce mono-like output for your OSC images. My experience is that monochrome or mono-like data with true per-pixel characteristics (vs. the smudged and smeared characteristics of debayered OSC) will handle noise reduction better, will often combine with other mono data (i.e. NB) better, etc.

Sometimes OSC is better debayered using a super-resolution technique that simply separates the channels and halves their resolution. You get the raw red, green and blue channels, with G generally having ~40% better SNR (there are two green photosites per 2x2 cell). This presents different processing opportunities for OSC imagers that could completely change the results. You would be in direct control of per-channel background extraction, you'd have more freedom in how each channel is scaled and aligned, more options for recombining the channels, more ways to calibrate them, etc.
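A minimal sketch of that separation, assuming an RGGB Bayer pattern (other patterns just shift the offsets). Averaging the two green sites per 2x2 cell is where the roughly 40% (i.e. sqrt(2)) green SNR advantage comes from:

import numpy as np

def split_rggb(mosaic):
    # Split a raw RGGB mosaic into half-resolution R, G, B planes.
    # No interpolation at all: each output pixel is a real photosite.
    r  = mosaic[0::2, 0::2].astype(np.float64)
    g1 = mosaic[0::2, 1::2].astype(np.float64)
    g2 = mosaic[1::2, 0::2].astype(np.float64)
    b  = mosaic[1::2, 1::2].astype(np.float64)
    return r, 0.5 * (g1 + g2), b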

Once I get my system back up and running and am able to get back out to my dark site on a regular basis, I intend to explore a lot of this stuff myself. I do like to encourage others to explore the options as well, though, as a single source of data points isn't really enough to make for meaningful trends.
C.Sand 2.33
Jon Rista:
It is also very hard to do any meaningful comparison, of IOTDs or others, due to processing. [...]

I almost entirely agree, the only exception being if we were to see a large disparity between certain techniques; that would be a point to look at. We can use equipment/technology as an example: I don't see many IOTDs taken with achromats. I wouldn't be surprised if someone found examples, but I can guarantee they'd be heavily outnumbered. But yes, overall, I don't think there are major disagreements about what will produce the "best" image.

Two notes on our L/no L disagreement here:
1. I don't think it contradicts my previous statement because we can agree that both methods can produce amazing results as long as care is taken to do so.

2. I do think there are more IOTD's with L than without. Now I don't say this as a "gotcha", just as something that's interesting. I haven't been around this debate long enough to know if RGB (no L) is pervasive enough to account for the disparity, nor do I feel like doing the statistics. I'm sure there might be other reasons for this and would be interested in hearing anyone's thoughts on why.
Jon Rista:
With processing, you have a range of skills. [...]

Absolutely. 
Jon Rista:
If you want to compare actual images, it's better to follow a stricter acquisition, pre-processing, and minimal (and explicit) post-processing plan [...]

For the most part, yes. I do think there is merit to looking at those buckets in reference to which styles most commonly fall into each. Once again using equipment as an example, I have no doubt we would see a larger ratio of CDKs in the upper echelons than towards the bottom (specifically the ratio of CDK : other scopes [Edit: Astrobin is trying to make : O an emoji lol]). But doing this comparison would require a large set of data and a number of qualifiers on how to judge an image, and unfortunately I don't have those skills. Anyone want to do the community a favor? (And probably be criticized for your methods?)
jrista 8.93
C.Sand:
I do think there are more IOTD's with L than without. Now I don't say this as a "gotcha", just as something that's interesting. [...] I'm sure there might be other reasons for this and would be interested in hearing anyone's thoughts on why. [...] But doing this comparison would require a large set of data and a number of qualifiers on how to judge an image, and unfortunately I don't have those skills. Anyone want to do the community a favor? (And probably be criticized for your methods?)

Oh, I would suspect there are probably a lot more IOTDs with L than without. As I was saying, it's just "how we do it," and so, yeah, it's how people usually process their images.

But therein lies a bit of a problem. If it's how the vast majority of people do things, how do we know it IS the best? Some people think that if a majority of people do X, then X must be best. Some studies have shown that, more likely than not, the reason most people do X is X-related group think: influence on, about, around and for X, etc. When everyone does X, how do you know what Y, Z, and A, B, C, D and E are like?

FWIW, I guess I'm less concerned about scopes here, at least in this discussion, since this thread originally started by asking about the viability of combining OSC and mono+NB data. That is a camera-hardware and processing question. The LRGB stuff came into the discussion because people were saying it was hands-down better than OSC, which I disputed. ;P When people started talking about how OSC bandpasses often actually pass more light overall (i.e. larger area under the curve...not always true, but frequently true with OSC), that got us into comparing LRGB to OSC, and also RGB to OSC (i.e. a lot of the RGB filters bundled with L filters also have LP gaps, which narrows the bandpasses a bit, thus reducing the area under the curve). We've kind of been stuck on the LRGB discussion...

At this point, all I'm trying to say is that it's tough to draw truly clear and definitive conclusions from processed data, ESPECIALLY with the advent of AI processing tools. That hides so many of the real-world facts about the data we are trying to compare that it makes any conclusions suspect, at the very least.

In the spirit of the original topic, OSC+mono/NB...
  1. I do NOT believe that OSC is inferior to mono+LRGB. In fact, there are ways OSC could be quite superior. This has been tested before, and it sounds like Frank may have recent testing.
  2. Mono+RGB has a lot to offer; however, IMHO, it's not really best done with your standard LRGB filter sets, but instead with an overlapping set (i.e. Astronomik Type-2c or J-C BVR filters).
  3. The idea of combining OSC data with mono NB data is perfectly fine. And within this point:
    • There should NOT be any registration issues. I've registered plenty of OSC images to NB images. I've registered images with wildly different image scales. Registration operates on star details, which are very bright signals; there are ZERO inherent reasons why you couldn't register NB to OSC, OSC to NB, OSC to mono RGB, etc.
    • Drizzling the OSC data to demosaic and integrate it will produce a signal and noise profile much more like mono data, further negating any concerns about point 3.1.
    • Use of a CCM to do at least an initial calibration pass of OSC data could very likely negate any green color cast issues (see the sketch after this list).
    • Performing SPCC on the data after the CCM might well produce even more accurate results, since the colors of the image have been properly redistributed to account for the exact nature of the OSC CFA bandpasses.
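As referenced in the CCM bullet above, here is a sketch of that calibration step on a linear RGB image. The 3x3 matrix is a made-up placeholder, not any real camera's matrix; a usable CCM comes from shooting a color target or from published sensor characterization:

import numpy as np

# Placeholder CCM; rows sum to 1.0 so white balance is preserved.
CCM = np.array([[ 1.60, -0.40, -0.20],
                [-0.25,  1.45, -0.20],
                [-0.05, -0.50,  1.55]])

def apply_ccm(rgb, ccm=CCM):
    # Apply the matrix to every pixel of a linear HxWx3 image.
    h, w, _ = rgb.shape
    return (rgb.reshape(-1, 3) @ ccm.T).reshape(h, w, 3)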


There are processing techniques that can be used with OSC that can completely change how it's approached. Super-resolution demosaicing can give you three high-quality but half-resolution channels. Ideally, you would probably want to apply a CCM to the super-res image first, then split the RGB channels. From that point on, you can process the image as if it were mono data. In fact, this could well be a better practice in general. Modern CMOS cameras have tons of resolution. Old-guard CCD cameras usually had much lower resolution, sometimes vastly lower; some produced only 2-3 megapixel images. We have 30, 50, 60 megapixels these days!! Even reduced by 50% in each dimension, a 30MP camera would still deliver 7.5MP images. I downsample a lot of my images, although only after the early processing that aims to maximize detail, and the final results are very high-detail images despite being 1/4 the area. For sharing pretty pictures online, it's a great way to maximize IQ at the scale most people are going to view at anyway. For print it's a different deal, and I tend to keep my full-res (or, when the data supports it, 2x drizzled) images.
vercastro 4.42
Jon Rista:
That to me is a minor point. The more important one, I think, is this: IF you do create a synthetic luminance from your RGB, you could get 30 hours of RGB and ALSO, without having to expend any additional acquisition time, effectively have a ~30 hour synthetic L by integrating the RGB channels together. (FWIW, I'm not saying extract the L*, I'm saying integrate them together...I find the latter seems to produce a better synthetic L.)

Once you have that L channel, since L is not really about SNR, and is much more about having a high-SNR undiscriminating channel to process for contrast, then you can actually DO that processing. And also still have your deep RGB. Without any additional acquisition time costs for L.

Sort of a....have your cake, and eat it too, kinda deal.

No, it's not a 100% perfect, exact replica of what separately exposing an L filter would get you. I don't think the differences are actually going to matter enough in the end, though, not once you have a super strong RGB image to work with AS WELL.


Here's the thing: 30 hours of RGB all integrated together to make a super lum is still roughly a third the amount of signal (not SNR, we'll get to that) compared to 30 hours of just lum. Of course you want colour too, so you don't actually image just lum, but you get my point.

There is no free cake here. The math simply doesn't give it. The BEST option is somewhere in the middle, always has been.

Lum is absolutely about increasing SNR with less imaging time. I understand what you're saying there, but the SNR is what makes the higher contrast possible, and REAL lum makes that possible in less imaging time. You're too hung up on this idea that more aggressive lum ratios doom an image to lack natural saturation, vibrancy and gamut. In my direct experience, that is practically not the case (judge my photos if you want). The issue is almost always a lack of OVERALL integration time (most of my projects are at least 20 hours, usually double) and less precise processing.

I will in the future try to image a target with the primary goal of demonstrating a comparison between all-RGB and lum with some fresh data.
Freestar8n 1.51
C.Sand:
@Freestar8n Just to clarify - is T your naming convention for what I would call L in LRGB, or is this an established thing elsewhere? [...] How would this account for objects that are very obviously not "green"? For example, M45, SH2-136 (the Ghost Nebula near the Iris), or anything with an emission line?

Calling it T is my convention, which I started in the CN thread, to represent Total signal across the filter bandpasses.  "Luminance" has two different meanings - one relating to total flux and the other to perceived luminance.  In the LRGB scheme you want the latter, but the "Luminance" filter delivers the former.  I don't think people realize it is fundamentally different, and that it has consequences.
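One way to make that distinction concrete: compare an unweighted sum of the channels (what a bandpass-total "Luminance" filter approximates) with a perceptually weighted luminance. The Rec. 709 luma weights below are purely illustrative; a camera's true CIE Y would require its actual spectral response:

def total_signal(r, g, b):
    # T: what a UV/IR-cut "Luminance" filter integrates.
    return r + g + b

def perceived_luminance(r, g, b):
    # Y: perception weights green far more heavily than red or blue.
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# A pure red patch: T counts it fully, Y barely registers it.
print(total_signal(1, 0, 0), perceived_luminance(1, 0, 0))  # 1  0.2126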

The problem is that "LRGB" is strictly an amateur thing and it isn't used professionally.  So the "luminance" filter is also a strictly amateur thing.  There is no "Luminance" filter on the HST.  So I think it is confusing and incorrect to call it "Luminance" when it is designed for use in LRGB imaging.  Another name for the filter is simply UV/IR cutoff - but that is longer.

As for Jon's question about the OSC measurements - they are in this thread: https://www.cloudynights.com/topic/911085-filter-comparison-actual-data-osc-sloan-jcsquare-astronomik/

It's a pain and time-consuming to do tests like this, and it requires good and steady conditions.  But I welcome others to take the time to try and compare results.  Measurements were done in Maxim with two different cameras and mapped to common units.

@Jon Rista pointed out that the 183MC is more modern and has higher QE than the 1600. I think that's true, but I also think the effect shown here is more than that difference.  The broadband object I chose for measurement is the nucleus region of a galaxy.

Frank
morefield 11.37
Arun H:
I have to agree with Jon here. [...]

C.Sand:
[...] Today's IOTD (3/1) has ~45hrs of exposure and the 2/15 one ~7hrs (ignoring the differences in LP and such, though they seem relatively similar).

The mix on today's IOTD is a bit misleading. I had targeted about 2:1:1:1 initially, and when I thought I was done collecting data I found that I had had a large obstruction on my corrector lens for most of the RGB data capture. So I shot a lot more RGB and used those clean subs to create the local normalization masters. Those clean local norm masters helped clean up the random gradients from the obstructed RGB subs, but, between the obstruction and those subs being deemphasized by weighting, I'd say the effective mix is not 1:1:1:1. I'd leave my CG30 out of the discussion one way or the other here!
jrista 8.93
As for Jon's question about the OSC measurements - they are in this thread: https://www.cloudynights.com/topic/911085-filter-comparison-actual-data-osc-sloan-jcsquare-astronomik/

Oh, yes, I forgot about that thread. (And was thinking it was the ASI1600, forgot it was both.)
jrista 8.93
vercastro:
Here's the thing: 30 hours of RGB all integrated together to make a super lum is still roughly a third the amount of signal (not SNR, we'll get to that) compared to 30 hours of just lum. [...]

There is no free cake here. The math simply doesn't give it. The BEST option is somewhere in the middle, always has been. [...]

How is integrating 30 hours of RGB together into a SYNTHETIC lum (not a super lum; that is integrating LRGB together) only 1/3rd the amount of signal?? You are adding the signal of all three channels together, which should improve the SNR accordingly. Have you actually tried this? It most definitely results in a channel that is WAY stronger than 1/3rd the signal of a separate L channel. MRS noise measurements in PI also indicate it's close to the L channel noise levels.

If you use a set of filters like the Astronomik Type-2c, you have a certain amount of overlap between the filters as well, which should improve the total signal acquired across the channels. 

I also don't see how you can improve SNR by adding signals together but not improve the signal. You can't add only noise and not add signal. You are adding signal and noise concurrently; the difference is that signal adds linearly, while noise adds in quadrature. THAT IS the math.
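A toy version of that math, under idealized assumptions (equal signal S and independent Gaussian noise sigma in each of three channels): the summed channel gains a factor of sqrt(3) in SNR over any single channel, even though no extra exposure time was spent:

import math

S, sigma = 100.0, 10.0                # arbitrary per-channel units
snr_single = S / sigma                # one channel alone: 10.0
# Signal adds linearly (3S); independent noise adds in quadrature.
snr_synth = (3 * S) / math.sqrt(3 * sigma ** 2)
print(snr_single, snr_synth)          # 10.0  17.32... (= sqrt(3) * 10)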
C.Sand 2.33
Freestar8n:
The problem is that "LRGB" is strictly an amateur thing and it isn't used professionally. [...] Another name for the filter is simply UV/IR cutoff - but that is longer.

Professional astronomers aren't setting out to get pretty pictures. If they want color accuracy they use a spectrometer. There are (almost) no scientific projects that image the way we do. Most of the space/large-telescope pretty pictures you see are data that was taken for a different purpose and happened to be in a form usable for pretty pictures. The images from those telescopes that were meant to be pretty pictures are largely for outreach, and take a fraction of a fraction of those telescopes' time. Plus, just about anything one of those large telescopes puts out (and space telescopes more so) is going to be a stunner.

There isn't a luminance filter on the HST because the purpose of the instrument is completely different.

Now, all that being said: the closest thing to a luminance or UV/IR-cut filter would be F606W or F600LP (that is, if I'm reading this chart correctly: https://wfc3.gsfc.nasa.gov/tech/filters-uvis.html).
Freestar8n 1.51
C.Sand:
Professional astronomers aren't setting out to get pretty pictures. [...] There isn't a luminance filter on the HST because the purpose of the instrument is completely different.

I agree, and my main point is in essence that "there are no adults in the room" to keep things on track.  Much of what happens in amateur imaging is backed by professional literature, and that keeps it on track - but for LRGB the community is on its own - in terms of filter makers labelling filters and imagers applying them in certain ways.

UV/IR cutoff is perfectly fine, and I don't know of any professional filters that are misleadingly named.  But to suggest the "Luminance" filter is an appropriate proxy for luminance is simply a mistake - one that the filter makers and users likely haven't realized.

It's true that LRGB is all about pretty pictures and not about science - and that is why it is strictly amateur.  But even as a pretty picture method - its reasoning is flawed - when based on a "Luminance" filter that is actually "Total signal across the filter passbands."

It's easy to dismiss this as pedantic or purist - but the fact that users like me who tried it found it to desaturate the very colors we were trying to capture - makes it a very real disconnect from how LRGB is intended to work.  LRGB is about good color and detail in less time - and T departing from L has direct impact on the good color part.

That doesn't mean some people might prefer it - just as some people prefer HaRGB to RGB.  It's different signals used in different ways with different processing to achieve different results.  As opposed to "A better way to achieve a deep and detailed RGB image in less time than you would have with just RGB data."

Frank
C.Sand 2.33
Freestar8n:
I agree, and my main point is in essence that "there are no adults in the room" to keep things on track. [...] LRGB is about good color and detail in less time - and T departing from L has direct impact on the good color part. [...]

To further my rant on HST and company not using L: those scopes don't use RGB either. I'm sure you know about Sloan and Johnson filters. In this sense astrophotography, and all of LRGB, is not about science. We might be more accurate in representing these in normal photography, but where does that leave L? Old monochrome cameras? For this reason I don't think RGB or LRGB should be entirely thought of as based on one thing or another. Astrophotography is its own thing; we borrow from photography and astronomy alike. This may be repetitive, but I don't think there is a place for a research paper based around luminance filters (or, for that matter, RGB filters in the way that we use them). There just isn't much of a market for research into pretty space pictures.


I understand your point on L =/= L*, but don't entirely agree with the renaming. If we're replacing [EDIT: replacing was a poor choice of words here] L* with L anyway, why shouldn't it be called luminance? We're using it as luminance anyway. Yes, there is a distinction to be made between the two, but in my opinion that comes down to the user understanding the purpose of the filter. This falls in line with my opinion that most LRGB images that look washed out, or have similar issues, suffer from improper processing. This isn't intended to be a dig at your or anyone's processing skill; I don't know how to effectively process LRGB, and it is more difficult than straight RGB. That's where I think the majority of the issue people have with LRGB comes from.
Freestar8n 1.51
C.Sand:
To further my rant on HST and company not using L: those scopes don't use RGB either. [...] I understand your point on L =/= L*, but don't entirely agree with the renaming. [...]

My recent comparison of filters includes Sloan and Chroma versions of JC.  I'm quite familiar with them. 

I am not implying we should be doing science.  I am saying that for the purpose of pretty pictures - T is not a good proxy for L, and this is a mistake that has been made because there are no professional references to how this stuff works.

I don't think you are following what I am saying and I will leave it at that.

Frank
C.Sand 2.33
Freestar8n:
I am not implying we should be doing science.  I am saying that for the purpose of pretty pictures - T is not a good proxy for L [...] I don't think you are following what I am saying and I will leave it at that.


I don't think I'm entirely following either. But I am trying to say that I don't intend for T to be a proxy for L. Yes, we swap out L* for L, but that is because we have applied transformations to L that improve the qualities of our image after that swap. Proxy implies representation; I'm suggesting transformation.
rockstarbill 11.02
How about all you nerds do me a favor and use this set of data here:

https://darkmattersastro.com/product/orion-nebula/

It is broken up this way:

Luminance – 224 x 2 mins – Chroma 50×50 L
Red – 50 x 2 mins – Chroma 50×50 R
Green – 53 x 2 mins – Chroma 50×50 G
Blue – 51 x 2 mins – Chroma 50×50 B

All of the data was taken under B1/2 New Mexico skies, with a pretty good level of seeing. I would not call it the best, but vs. your likely backyard conditions it is significantly better.

The ratio here of L to RGB is about 4.5:1:1:1. 

There can be no excuses about one person's conditions vs another's. The data is completely free as well (and always has been, even before this thread), and you can all download it and nerd out here with the exact same set of data.

Use some data, instead of Excel hypotheses. People watching this would likely enjoy the discussion a lot more if real data was involved in the process, and I am willing to be the one to provide it.

Quick edit: The link and package are for mastered data. You do not need to pre-process that much data. Trust me, for this sensor you would probably rather eat a glass-and-razor salad than go through that. I did that for you, so no need to worry.

Here is a link to my favorite process of this data, which was done by my good friend Mike Selby, so you know the data isn't trash: https://www.astrobin.com/7p5671/


-Bill
C.Sand 2.33
Bill Long - Dark Matters Astrophotography:
How about all you nerds do me a favor and use this set of data here:

https://darkmattersastro.com/product/orion-nebula/

[...]

-Bill

Thanks Bill, I'll try to get on it this weekend. No promises, but I will get to it eventually. I would prefer that someone more confident and experienced than I process it (ehm, @vercastro), partially as a learning experience for myself and to give a proper representation of LRGB.

There may be pushback on the veracity of this test because it doesn't have equal LRGB : RGB time, as in those 224 L subs could have been used for RGB.

To combat this slightly, I intend to process three images: a full-data LRGB; an LRGB with the data artificially decreased to match what would have gone into just RGB (as in, 35-40 subs of each of R, G and B and ~45 of L, whatever makes it equal to the total integration of the RGB-only stack); and one RGB only.
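For concreteness, the sub-count arithmetic behind those three versions, using the counts from Bill's listing (all subs are 2 minutes); the reduced split is just the proposal above, not any standard:

subs = {"L": 224, "R": 50, "G": 53, "B": 51}
full_lrgb_min = 2 * sum(subs.values())                  # 756 min total
rgb_only_min = 2 * (subs["R"] + subs["G"] + subs["B"])  # 308 min total
reduced_lrgb_min = 2 * (45 + 3 * 36)                    # ~45 L + 36 each RGB = 306 min
print(full_lrgb_min, rgb_only_min, reduced_lrgb_min)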

I do think the artificial data-reduction method will get the short end of the stick, but I suppose we'll see.

Once again thanks Bill for the data.

[Edit: I didn't realize it was M42. This puts a hamper on my excitement. I'll still process it but I expect to gain next to nothing from it in terms of the topics of this thread.]

[Edit 2: I didn't realize Bill wanted us to make an account on his site... yeah, it seems like nothing was going to come from all this anyway, but I'm not about to give you all my details for an image that won't help our reasoning anyway.]
rockstarbill 11.02
No problem man.

Check this Trapezium out! It's in the data. Do not leave it behind. 

[Attached screenshot: crop of the Trapezium from the dataset]
rockstarbill 11.02
Last comment before the Nerd Astro Data Super Bowl....

I like HaRGB a lot.

https://www.astrobin.com/rh75in/C/

That is an image I did in HaRGB, there is no Lum in the capture, none of this SuPeR lUm BrUh stuff either. 

Straight up data, and no NoiseXTerminator at all. My noise reduction tool? More data. The right one.

Check it out, or don't. I will say this, though: if some nerd showed up and told me "you should have taken Lum," I would just respond by asking for an image that is close to this one, and they will not have one.

The point, though, is that lum is useful if you are going after faint dust in a field. Probably a field I don't give a shit about, nor would anyone that would put a print on the wall. As much as this site and the IOTD staff have become infatuated with dust, customers and people outside of this hobby think looking at white lines that end in a color equivalent to feces is trash. I agree with them.

In certain fields this is a bonus to the image. When it's nothing BUT that, it's ugly as F.

At any rate, let's see what you guys come up with, using the same data.

-Bill
Die_Launische_Diva 11.14
As a nerd I'd personally prefer to understand how we as humans perceive color and luminance, and then do some math, rather than process a single "free" data set (to prove what?). There are so many ill-defined concepts in this hobby. By comparing single images and single datasets we are not getting anywhere, imho.
rockstarbill 11.02
Die Launische Diva:
As a nerd I'd personally prefer to understand how we as humans perceive color and luminance, and then do some math, rather than process a single "free" data set (to prove what?). There are so many ill-defined concepts in this hobby. By comparing single images and single datasets we are not getting anywhere, imho.



The data normalizes the argument of sky background, blah blah blah. 

They get a sterilized set of data to work with equally. The playing field is even. The cost of the data is completely irrelevant; I could have put this on a Dropbox link, and then the cost would not be mentioned. I do not do that, because people who get data from my site are legally bound by agreements not to misuse the data. It is also hosted on Amazon S3, which has far better security, redundancy, and cost for this than Dropbox.

I used to run a huge open share of data on CN, and had to stop it because of abuse and misuse of the data. 

At any rate, the equal data is useful for this discussion. It is ill-conceived to think otherwise. 


-Bill
 