I think it is very camera-specific. I can't comment much on DSLR imaging, and I only did it briefly. With both of the cameras I used at that time it was not practicable to shoot darks separately, and I was limited to 30 seconds, so I turned on the "Long exposure noise reduction" function of both, which simply shoots a dark frame after each light and subtracts it in camera. Without that, the images were notably noisy.

As for bias frames not being useful for CMOS sensors as a rule, I think that is too generalised. I moved from the DSLRs to an ASI294MC Pro, and bias frames were not useful due to the issues mentioned above; I used a range of fixed-length exposure times and fixed-length flats with matching dark flats, and never shot any bias frames. I have since replaced that camera with an ASI2600MC Pro and it is a different beast entirely: I shot 100 bias frames at 0 gain and 100 gain (the only two I use), and while I still shoot a range of dark frames to make masters with no bias-frame extraction, I shoot varying-length flats and calibrate them with the master bias only.

I did process one set of data with bias frames and no darks, as I did not have a master dark for 100 gain at that time, and when I later shot darks for that gain and reprocessed it, I was hard pressed to see a difference in the integration.

TL;DR: Bias frames do work with at least some CMOS sensors, and in almost every image I have ever produced, darks improved the image. That may or may not carry over to a DSLR, and whether it is worthwhile might come down to how difficult it is to shoot darks that match your lights.
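To make the arithmetic concrete, here is a minimal numpy sketch of the workflow described above (function and variable names are illustrative only; real data would be read from FITS or raw files, e.g. with astropy.io.fits):

```python
import numpy as np

def master(frames):
    """Average a stack of frames (shape: n_frames x H x W) into a master."""
    return np.mean(np.asarray(frames, dtype=np.float64), axis=0)

def calibrate(light, dark_frames, flat_frames, bias_frames):
    """Calibrate one light frame, mirroring the workflow described above:
    master darks matched to the lights (no bias extracted), and flats of
    arbitrary length calibrated with the master bias alone."""
    master_dark = master(dark_frames)
    master_bias = master(bias_frames)
    # Dark current in a short flat exposure is negligible, so the bias
    # alone calibrates the flats; normalize the flat to unit gain.
    flat = master(flat_frames) - master_bias
    flat /= np.median(flat)
    # Subtract the matched dark, then divide out the flat field.
    return (np.asarray(light, dtype=np.float64) - master_dark) / flat
```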

---

I’ve always used darks, bias and flats. I use a modest Canon T3i, modded full spectrum. I’m going to try without bias files and see what I come up with. Here’s one with bias files: https://www.astrobin.com/96j4kf/?nc=user

Clear skies

---

Calibration always improves an image, *if it is done correctly*. The reasons are obvious if you understand the concepts involved and the process behind it (which is surprisingly simple). There's a reason why 'professional' astronomical observations are thoroughly calibrated, and why 99% of great images by experienced imagers (including imagers far better than myself) are calibrated.

The theory is the same regardless of sensor type (DSLR/CCD/CMOS, mono/color, etc.), though some specific questions may differ. The temperature question with uncooled cameras (i.e. DSLRs) is important. Ideally it should match perfectly, but ±2 °C is fine. The objection that dark frames "waste" time is pointless: you can use twilight time, cloudy nights (with similar ambient/sensor temperature), a fridge, or whatever other way you can find to match the temperatures as well as possible. No clear-sky time wasted.

The idea that DSLR sensor temperatures are "random" is also wrong. If the ambient temperature is held constant, then after a few long sequential exposures (such as during a night's imaging) the camera settles into a steady state and the sensor temperature stays roughly the same. Besides, some software (I know BYEOS and MagicLantern do) reports the camera's internal temperature, which makes matching temperatures easier (it is not the sensor temperature itself, but it is close enough).

Now, people try to cut corners to simplify the workflow. That is a decision everyone should make. The difference might be small enough that one considers the "hassle" of calibration not worth it; if you are happy with your images, that is fine. Some points I want to note:

If you want to know more about it, I strongly recommend searching for Jon Rista's posts on Cloudy Nights, across several different threads (this is one of those questions that gets asked again and again).

Personally, I don't like the approach of just stacking light frames (and some even advocate doing that with stretched files from Camera Raw/TIFF for instance) for serious DSO work (especially for faint stuff). This approach is used by some astrophotographers (Tony Hallas, Roger Clark). Easier, yes. Best possible results, no.

Bottom line: I calibrate my DSLR images. I use darks taken in the fridge. I have never wasted clear-sky time taking darks. I have had cases where calibration didn't work, usually from human error during acquisition (especially stray light) or incorrect software setup. If you want to skip calibration (darks, or all of it), know that your results will be suboptimal, but they might be good enough. And always experiment with your gear and learn about the intricacies behind the processes.

my 2cents

Best regards to all,
Gabriel
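A note on the ±2 °C tolerance (generic numbers, not from Gabriel's post): dark current in silicon sensors roughly doubles with every 5-7 °C of warming, so a small mismatch between light and dark temperatures changes the dark signal only modestly:

```latex
D(T) \approx D(T_0)\, 2^{(T - T_0)/T_d}, \qquad T_d \approx 5\text{--}7~^{\circ}\mathrm{C}
```

With T_d = 6 °C, a 2 °C mismatch scales the dark signal by 2^(2/6) ≈ 1.26, so the uncorrected residual after subtraction is roughly a quarter of an already small term.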

---

> Personally, I don't like the approach of just stacking light frames (and some even advocate doing that with stretched files from Camera Raw/TIFF for instance) for serious DSO work (especially for faint stuff). This approach is used by some astrophotographers (Tony Hallas, Roger Clark). Easier, yes. Best possible results, no.

Fully agree. When I started out, I followed Roger Clark's methods. The methods are nice when getting started, but very limiting if one wants to image deep and faint objects. Now I am a very firm believer in calibration.

---

Gabriel R. Santos (grsotnas): More like an entire dollar! Thank you :-)

---

I will reiterate that he is not saying to just stack light frames, though. He says to do a great deal of very sophisticated preprocessing at the raw development stage, which may even include applying a flat frame. (He is also using f/2 lenses that cost as much as a small car, and relatively short subs; under those conditions dark current truly is insignificant.) Nothing simple about manually aligning the black points of all your subs one by one, for example, which is an extremely crucial step. I've done it, and it can take whole days if you have 200 subs. Or figuring out a light fall-off pattern that matches your gear (though you'll only do that once). The subs you end up stacking are not "just the light frames" at all. They are color corrected, flattened, sharpened, denoised, normalized, and have problematic pixels removed, chromatic aberration cancelled and highlights reconstructed.
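For reference, the additive part of that manual alignment can be sketched in a few lines (an illustration of the idea, not the poster's exact procedure):

```python
import numpy as np

def align_black_points(subs, pedestal=100.0):
    """Additively align the background level of a set of linear subs.

    Uses each frame's median as a crude background estimate and shifts
    the frame so its background sits at a common pedestal. Real tools
    use far more robust background estimators than a plain median.
    """
    aligned = []
    for sub in subs:
        sub = np.asarray(sub, dtype=np.float64)
        aligned.append(sub - np.median(sub) + pedestal)
    return aligned
```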

---

It's a dumb argument: if I don't use darks, hot pixels often destroy the image, even with dithering.

---

> He is also using f/2 lenses that cost as much as a small car, and relatively short subs; under those conditions dark current truly is insignificant.

The contribution from dark current depends on overall integration time; splitting it into small subs has nothing to do with it. Using a fast lens like he does simply means you are collecting more light in a given period of time, which reduces the relative contribution of all noise components other than photon shot noise.

In the end it is worth looking at results and making a judgment. Roger Clark’s images are of bright objects, and if you compare them to the same objects shot by good imagers here (Gabriel is an example), you’ll see that imagers here routinely take far better and deeper images than he does, even comparing the same bright objects. And they all use dark frame calibration and flats. There is an excellent reason for that: it just makes things consistent, simple and easy.

In the end it is everyone’s individual choice. I was recently able to capture the IFN in an image for the first time. It would have been a lot harder to get it to come out without accurate calibration. I know; I have struggled with poorly calibrated images in the past, even with bright objects such as M33. And if you want to image deep, Jon Rista’s methods will work out much better than Roger Clark’s. I’ve used both.
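To spell the first point out (notation mine): with dark current D in electrons per second, N subs of t seconds each accumulate the same total dark signal, and therefore the same dark shot noise, as one exposure of Nt seconds:

```latex
S_{\mathrm{dark}} = D \, N t, \qquad \sigma_{\mathrm{dark}} = \sqrt{D \, N t}
```

Sub length drops out entirely; only total integration time matters. (Read noise is the component that does accumulate per sub, growing as the square root of N.)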

---

> Nothing simple about manually aligning the black points of all your subs one by one, for example, which is an extremely crucial step.

If you (or the video poster, or anyone else) are doing that manually, then you are using improper techniques on improper software. Any serious preprocessing software (APP, PI, probably DSS, MaximDL, ImagesPlus... you name it) has a form of NORMALIZATION, which does exactly that. And beyond just matching additive effects (which is what you would do with manual black point correction), these algorithms can be much more advanced in terms of statistics, local normalization, etc., so as to match frames from different nights or places and make them statistically comparable so integration works best.

> Or figuring out a light fall-off pattern that matches your gear.

If you are doing that in Camera Raw or Photoshop, on NON-LINEAR data (which is what Hallas does), then calibration will never work as intended, integration will be suboptimal, and there is no point in discussing it further; mathematically, dark frame subtraction and flat fielding will not be performed correctly. They must be applied to raw LINEAR images. And why would you figure out (model) a light fall-off pattern when a FLAT FRAME DOES EXACTLY THAT, measuring your actual data, PERFECTLY, and accounting for dust motes and PRNU?

> The subs you end up stacking are not "just the light frames" at all. They are color corrected, flattened, sharpened, denoised, normalized, and have problematic pixels removed, chromatic aberration cancelled and highlights reconstructed.

You seem to be using the Hallas/Clark workflow. It inverts the order of things, makes proper calibration mathematically impossible, and doesn't do preprocessing in the "standard way" (i.e. the standard workflow and calibration frames amateurs and professionals use). It can lead to good results, but not the best ones. It depends ultimately on your goals, but also on your processing skill and your targets (the fainter, the harder).

I stand by my arguments, and agree with what @Arun H. has added. Again, I do not discourage anyone from trying new techniques and seeing what works well enough for you. But all the points of my previous post are valid. "Learn about the intricacies behind the processes" (which includes calibration, registration, normalization and integration; pre-processing, post-processing, and more), and the best possible results come from correctly calibrated (and exposed, and framed... and carefully processed) data. And all of that is a decision every imager should make, based on your goals and your enjoyment of the hobby. =)

Best regards,
Gabriel
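At its simplest, such a normalization step looks like this (a bare-bones sketch; APP's and PixInsight's implementations are far more sophisticated, with robust rejection, local normalization, etc.):

```python
import numpy as np

def normalize_to_reference(frame, reference):
    """Match a linear frame to a reference frame, both additively and
    multiplicatively, using robust statistics (median for location,
    MAD for scale). A minimal stand-in for the normalization step that
    integration software performs before stacking.
    """
    frame = np.asarray(frame, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)

    def mad(a):
        return np.median(np.abs(a - np.median(a)))

    scale = mad(reference) / mad(frame)                        # multiplicative term
    offset = np.median(reference) - scale * np.median(frame)   # additive term
    return scale * frame + offset
```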

---

Stuart Taylor: Yes, Stuart, you can do the pre-processing (calibration, aligning, stacking) and the post-processing of your images in PixInsight. It's not free software, however.

CS, Thilo

---

> If you are doing that in Camera Raw or Photoshop, on NON-LINEAR data (which is what Hallas does), then calibration will never work as intended, integration will be suboptimal, and there is no point in discussing it further; mathematically, dark frame subtraction and flat fielding will not be performed correctly. They must be applied to raw LINEAR images.

It is this point that really needs to be emphasized. Especially for complex lenses, as opposed to optically simpler telescopes, the light fall-off pattern will depend on focus position, since the optical components internal to the lens move. I've had enormous trouble with a 70-200 f/2.8L II lens trying to just use the lens profiles in Lightroom to correct for light fall-off. Flats do it very, very well. And if you happen to have dust on your sensor and are trying to image a frame with weak signal (empty space or IFN), it is an absolute nightmare to correct without flats, but simplicity itself with a well-matched set of flats. It is now part of my routine to take flats after every session. Yes, a bit of work, but it saves a ton of headache during post-processing.

Finally, the point made by @Gabriel R. Santos (grsotnas) about linear data is important. Flat and dark correction convert your raw data, in a mathematical way, to purely linear form: the data become directly proportional to the photons recorded. Without calibration with darks and flats, this is impossible.
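Written out (notation mine, following the standard sensor model): each raw pixel combines the photon signal, modulated by the pixel-level response, with additive dark and bias terms, and only calibration undoes both effects:

```latex
\mathrm{raw}(x,y) \approx F(x,y)\, g\, N_{\gamma}(x,y) + D(x,y) + B(x,y)
\quad\Longrightarrow\quad
\frac{\mathrm{raw} - D - B}{F} \propto N_{\gamma}
```

Here F is the normalized flat response (vignetting, dust motes, PRNU), g the gain, N_γ the photon count, D the dark signal and B the bias, which is why the subtraction and division must happen on linear data.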

---

I always shoot dark frames... as well as flats, biases and dark flats. I stack with and without the darks and I also sometimes play around with the bias. I also keep a library with groupings of darks at different sensor Temps. Aggressive dithering helps a lot but I find dark frames give me just a little bit more.

---

All the images have a very low black point and the color balance needs work, but I think it is safe to say that flats do help!

---

Die Launische Diva: Sorry, I am very new to all this. Can you explain what you mean by a low black point? Did I move the left-hand arrow too far to the right in the levels adjustment? I do adjust the 'offset' in the Exposure panel too, as it makes the sky darker. Maybe that is my error?

---

Basically, what it means is that you've made the background way too dark. That does hide a lot of defects, but it will also hide meaningful signal, to the extent there is some in the background. It also makes the transitions between signal and background way too jarring, because some of the weaker DSO signal gets clipped too.

And again, this is the point. When you're starting out, all you care about is getting a DSO image at all. But as you gain experience you also want to bring out the more interesting stuff. That's where longer integrations, better calibration and better technique become critical, and where the Clark/Hallas methods reach their limits.

---

> Gabriel R. Santos (grsotnas): Nothing simple about manually aligning the black points of all your subs one by one, for example, which is an extremely crucial step.

Like I said, it's a heated subject. To answer your three questions:

1) The normalisation routines offered by the various integrators align the peaks of the histograms, not the black points. Thereby they reduce chroma, which is something I care about a lot. Also, when I set the black point I know what is supposed to look black, thereby starting the integration with a great deal fewer gradients. But that is immaterial. My argument was that the subs you are supposed to "just stack" are normalized, and normalized better than they would be by an automatic routine. They are not "plain lights".

2) I use the Roger Clark method when imaging with a DSLR, but I don't mind throwing in a dark or a flat if deemed appropriate. The core of the method, for me, is letting the raw converter do all the heavy lifting; it's not so much about whether darks are used or not. On the other hand, I use traditional calibration when imaging with the astrocam, simply because RawTherapee does not support my astrocam. I very much wish it did.

3) I would correct the light fall-off algorithmically if I didn't have dust motes (or if I could get rid of them with massive dithering), because a clean mathematical transformation is much simpler than dividing every set by a master flat that has its own noise, and its own patterns if it hasn't been taken correctly; and in my experience that is not the most unthinkable of "ifs" with a CMOS sensor and a setup that must be disassembled at the end of the imaging session, which is usually much earlier than dusk. You should see me taking flats for the astrocam: capping the Newt with a 4 kg laptop that I hold with my right hand, while trying to start shooting by pointing the mouse of a second laptop at the "play" button with my left hand. I am right-handed. It's a very miserable situation. So I am not religious about it. I have objectively (well, objectively in my subjective setup…)

The way I see it, you are going to do extremely non-linear stuff to your integrated image anyway. Why does it matter (mathematically) if you do some of it to the individual subs before integration, provided the individual subs are not stretched (i.e. do not have a tone curve applied)? Is it such a crime to increase SNR by 5% by applying a gentle noise reduction to the raw data? Why use a dark frame or dithering to correct hot pixels (in particular) when they are the one class of imperfection that can be detected algorithmically with very good precision? Etc. And what you are doing is actually more advanced, not less. It is by no means "just stacking the lights". I would not recommend the workflow to a beginner.

It is true that the full power of a raw converter probably falls apart when imaging a very faint object, and it is completely useless with an unsupported sensor, but when imaging something visible with a stock DSLR there is simply no way you can do better color rendition, noise reduction and all the other stuff that has been optimised by professionals for your camera. PixInsight, in particular, does not have AMaZE and the other extremely capable demosaicing algorithms, purely for licensing reasons; nor does it support DCP profiles. Those two things alone can very well make a tremendous difference, definitely larger than a dark frame, if you are imaging a reasonably visible target and care about colors, dynamic range and CA around stars more than about hot pixels and whether noise reduction happens before stretching.

BUT, one more time, I feel I must repeat my core point: whether you believe that method to be superior or not (and it is, I think, established that the answer varies), it is not "stacking just the light frames". The way I perceive it, what you are stacking are subs that are as close as possible to fully preprocessed while still remaining linear (linear, not raw).

Cheers, Dimitris
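On the point that hot pixels can be detected algorithmically: a common approach (sketched below; this is a generic method, not RawTherapee's or PixInsight's exact algorithm) flags pixels that stand far above their local median and replaces them:

```python
import numpy as np
from scipy.ndimage import median_filter

def repair_hot_pixels(frame, k=5.0):
    """Detect and replace hot pixels by comparison to a local median.

    A pixel is flagged when it exceeds its 3x3 median neighborhood by
    more than k robust standard deviations of the residual image.
    """
    frame = np.asarray(frame, dtype=np.float64)
    local = median_filter(frame, size=3)
    residual = frame - local
    # Robust sigma of the residual via the median absolute deviation.
    sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))
    hot = residual > k * sigma                 # only positive outliers
    repaired = frame.copy()
    repaired[hot] = local[hot]                 # replace with local median
    return repaired, hot
```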

---

> Basically, what it means is that you've made the background way too dark. That does hide a lot of defects, but it will also hide meaningful signal, to the extent there is some in the background. It also makes the transitions between signal and background way too jarring, because some of the weaker DSO signal gets clipped too.

But surely the sky between stars is dark, black in fact. I moved the black point there because otherwise there was still some coloured glow in it. Not sure what the Clark/Hallas method is. Perhaps someone can point me to a good beginner tutorial on processing stacked images?

---

The best description I have seen of a good background is that in a dark area the image background should have a texture like an old-school chalkboard, not just flat black. When processing, I usually crop the image so that there is no black space around the edges from dithering (assuming you dither), then examine the histogram in Photoshop to see where the real data starts. I set the black point a couple of levels below the first level that has at least one pixel, once the black border has been removed. Then you can darken the background somewhat using curves if you want to; that might suppress some faint detail, but setting the black point higher eliminates it completely.
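The same recipe in code form (a sketch; the two-level margin below the first populated bin follows the rule just described, assuming 8-bit data after the border crop):

```python
import numpy as np

def black_point(image_8bit, margin=2):
    """Black point per the recipe above: the lowest populated histogram
    level minus a small margin, computed after the empty border from
    dithering has been cropped away."""
    hist = np.bincount(np.asarray(image_8bit, dtype=np.uint8).ravel(),
                       minlength=256)
    first_populated = int(np.argmax(hist > 0))   # lowest level with pixels
    return max(first_populated - margin, 0)
```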

---

I started with Mitch: https://m.youtube.com/watch?v=HIXJJqew6rQ

Then moved on to Kayron: https://www.lightvortexastronomy.com/tutorials.html

The Light Vortex tutorials are broken up into different parts of the process, but near the bottom of the list there is an M31 DSLR tutorial that strings most of it together. It doesn’t include calibration and integration, but those first steps are fully explained in the other tutorials.

---

To calibrate the background after stacking, I set the black point to all 35's on R, G and B. Not pure black, but chalkboard-like, as was stated.

---

I might be crazy, but I choose the background the way I like it best. At least for me, AP is fun. :-)

---

Olaf Fritsche: And of course you are entirely free to do so!

---

@Blue - I am not dithering. I don't control the scope via my PC, just with the hand controller, so I am not sure how I would dither, unless doing it manually with little nudges of the slew buttons between exposures (which would be super tedious).

@Die Launische Diva - I think you have hit upon exactly what I was doing: setting the sky too dark to reduce that colour glow. I tend to find it is a circular shape in the centre of the image, and as I reduce the exposure offset it gradually shrinks and disappears. But by the time that has happened, I have lost a number of stars! :-(

Interesting that you say it's important to get good flats. I am not sure mine are looking quite right (here is an example). I place white paper over the aperture and point the scope straight up at the daytime sky, adjusting the exposure to get the histogram peak about half way along the x-axis.

[attached: example flat frame]
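For what it's worth, the "histogram peak about half way" rule can be sanity-checked numerically (a sketch; the acceptable band here is an assumption, not a universal standard):

```python
import numpy as np

def check_flat_exposure(flat, full_scale, lo=0.3, hi=0.7):
    """Check that a flat's signal sits in a sensible band.

    flat:       2-D array of raw flat data
    full_scale: saturation level, e.g. 2**14 - 1 for a 14-bit DSLR raw
    lo, hi:     acceptable band as fractions of full scale (assumed limits)
    """
    level = float(np.median(flat)) / full_scale
    return level, lo <= level <= hi
```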

---

Stuart Taylor: Getting good flats and using them well is the alpha and omega of AP. The way you do it is not something I would do in a thousand years. You need to get proper flats, either with a flat box, a flat-field panel, or by shooting dusk flats (or, even better, sky flats when it is cloudy and you have evenly distributed LP). Bigger telescopes would go the latter way 100% of the time. And even with the best of flats and the darkest skies on Earth, I still have to deal with gradients in the end. Moonlight is one of the major bugbears when dealing with flats. Well, unless you only shoot with no moon in the sky.