Should AstroBin lead by example in requiring disclosure of AI-based sharpening? [Deep Sky] Processing techniques · Doug Summers

This topic contains a poll.
Should AstroBin require users to divulge the use of AI-based sharpening, to help keep "deep fake" technology from taking over AP?
Yes
No
dmsummers 6.80 · 10 likes
I ask this question as a follow-up to a TopazAI sharpening demonstration and subsequent discussion on ethics in my club (and after viewing some ethics presentations online from TAIC, Richard Wright, etc.).  

The gist of the issue is that we're probably not very far away from "deep astroimage AI fakes" that can't be discerned from the real thing. Differentiating real images from deep fakes is already hard enough that there's a group at the University of California, Riverside using pixel-level math to figure it out (not for AP, but for other images and videos). Technology measures and countermeasures likely mean that reality will soon enough slip away if we don't figure out how to discriminate what is real from what is fake. I've heard a lot of arguments recently along the lines of "well, everything in post-processing is manipulating the data, so nothing is real anyway" as justification for using AI sharpening. I also hear a lot of "all our work is art, so it doesn't matter what we do since it's not science".
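As an aside, the pixel-level idea is easy to illustrate with a toy sketch. This is purely illustrative (it is not the Riverside group's actual method): heavy, opaque processing tends to flatten the fine sensor grain, so comparing high-frequency residual statistics can flag a frame that looks "too clean".

```python
import numpy as np

def highpass_residual(img):
    """Subtract a 3x3 local mean (a crude high-pass filter): what's left
    is the fine 'grain' that heavy processing tends to homogenize."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    local_mean = sum(
        padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)
    ) / 9.0
    return img - local_mean

# Toy frames: 'real' keeps its sensor noise; 'smoothed' mimics an
# over-processed image whose fine grain has been averaged away.
rng = np.random.default_rng(0)
real = rng.normal(0.5, 0.1, size=(64, 64))
smoothed = (real + np.roll(real, 1, axis=0) + np.roll(real, -1, axis=0)) / 3.0

# A detector would flag the frame whose residual statistics look "too clean".
residual_gap = np.var(highpass_residual(real)) - np.var(highpass_residual(smoothed))
```

Real forensic detectors are far more sophisticated, but the principle is the same: processing leaves statistical fingerprints at the pixel level.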

Quite a few in the hobby have objectives that might be harmed by the attitudes and technology discussed above. In the extreme, if not addressed, the whole hobby could be damaged by dilution of public trust and loss of credibility as deep fakes take over. So, would it be wise for AB to take a preemptive stand and require folks to divulge AI-based sharpening (at least until verification of methods/data brings the technique(s) to the acceptance level of other PP software)? Would AB and its users be doing a service to the hobby by helping to form guidance in this regard? Art is highly subjective, and there's quite a bit of art in what we do, but there are also some who are trying to stay "faithful" to gear and captured data. These are hard thoughts with difficult choices, but to do nothing seems like a road to a confused, degraded hobby. What do others think? I'd be interested to hear!
Snjór 11.96 · 17 likes
To each their own, I say. As I don't have the time or great skills, my images are as they are. Others do it differently, with huge acquisition times and varied software processes. Personally I enjoy seeing it all.

People should be free to do as they wish, and viewers can make their own judgement.

Best wishes,
Sigga
dmsummers 6.80 · 3 likes
Hi Sigga,   Just so I understand, are you saying you don't care to know whether images you look at are real or not...you'll just enjoy seeing them regardless?
barnold84 10.79 · 16 likes
Fake doesn't begin with the use of AI. One could also claim star removal is fake, and for that you can use algorithms which are fully transparent and understandable in every detail.

Overall, I think it's very important to be as transparent as possible, like the basic rule of scientific publication: publish in such a way that another person could (in principle) repeat what you've done. Select one object and look it up in AstroBin and you'll find dozens of different representations of the same object. Which of them is the "true" one? The one in grayscale shot only in Ha, or the one in SHO?

Only through transparency about the data gathering and processing can one judge what one sees. Enough philosophy; to your answer:
I think AstroBin should encourage people to fill in the information fields that already exist as completely as possible. With that I can compare and judge.

Cheers,
Björn

PS: (not so serious answer) with the current weather, deep-faked, AI-boosted, machine-learned and otherwise manipulated photos will probably become the only way of creating astrophotos here
Snjór 11.96 · 15 likes
Doug Summers:
Hi Sigga,   Just so I understand, are you saying you don't care to know whether images you look at are real or not...you'll just enjoy seeing them regardless?

Define real, Doug.

From my experience, people who use tools to process images list them. I read that and take it into account.

Is Photoshop AI? Lightroom? Both are quite sophisticated software with the ability to materially change an image (see the Adobe site for examples). PixInsight is the same.

AstroBin doesn't need to be the arbiter; viewers can decide for themselves!

If you wish for a "real" image, get a telescope or binoculars or your eyeballs and look at the sky. No filters, software or processing. Real.

JimLindelien · 12 likes
In general I encourage the use of AstroBin's metadata features to document how one's image was obtained, whether AI was used or not. It's ethical (in the event some new technique is under discussion by the community), and it really helps those new to AP to be guided in their early attempts at imaging and post-processing.

That said, I understand that there are those who feel competitive and/or seek to commercialize their images and fund their AP activities into the future. We all know AP is an expensive passion. For them, maintaining some degree of proprietary knowledge about how they process their images strikes me as rational from their point of view and not ipso facto unethical.

So the underlying questions seem to be: who is harmed by AI in AP, and what is the extent of the harm? At the moment I don't see much to panic about with AI in AP, in contrast to terrestrial AI frauds such as identity theft and defamation. Can you provide some examples of severe harms that may occur from AI use in AP?

Some points either way to ponder:

1. Is AI fundamentally different from other generally accepted super-resolution techniques like deconvolution and drizzle integration? Exactly how and why?

2. Is it fakery to use local contrast enhancement or unsharp masking, since these are changing the apparent "natural" structures we perceive in the result?

These techniques make mathematical predictions about what the image "should" look like, but they too, like AI, are mathematical estimates in the absence of perfect knowledge.
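To make that concrete, here is a minimal numpy sketch of unsharp masking (using a box blur where a Gaussian is more typical): it is a fully transparent, deterministic estimate, and the overshoot it creates at edges is contrast the raw data never recorded.

```python
import numpy as np

def box_blur(img, k=3):
    """k x k box blur via shifted sums (edge-padded)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    return sum(
        padded[dy:dy + h, dx:dx + w] for dy in range(k) for dx in range(k)
    ) / (k * k)

def unsharp_mask(img, amount=1.0):
    """Classic unsharp mask: add back the difference between the image
    and its blurred copy, exaggerating edges."""
    return img + amount * (img - box_blur(img))

# A soft edge (horizontal ramp): unsharp masking overshoots at its ends,
# creating contrast that was never in the data.
edge = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
sharpened = unsharp_mask(edge, amount=1.5)
```

Unlike an opaque network, the same input always produces the same output, and every step can be checked by hand.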

One unique concern I have about AI in AP is something of a Catch-22 that does not occur in the examples I mentioned above. If the AI network was not trained from an astronomical images training set, how can we know if the details produced are a quality prediction? But if the training set was predominantly astro-imagery, then this introduces an ethical question: Are the rendered details in essence an amalgam derived from the uncredited use of the works of other astrophotographers? This argues that AI use should be disclosed, in the same manner that a member today states that they used Hubble raw data, for example.

As for me I document in my metadata any use of AI, just as I mention any unusual method, for example an atypical approach to creating a synthetic G or L layer, and what my specific motivation was in doing so. In general, that intent is to make it easier for the viewer's eye-brain system to better perceive and appreciate the beauty of nature. I don't see the harm in that.

Regards,

Jim
dmsummers 6.80 · 2 likes
Hi Björn, regarding the comment along the lines of "which is the "true" image": I think all will agree that there are many, many "true" representations of an image's data, based on what people choose to highlight or emphasize from within the dataset. The AI issue is that the method inserts fake detail to make the image more appealing, based on proprietary (unknown) training on external data of unknown quality. A line seems to be crossed with the fake-detail insertion and the proprietary, unvetted methods.

As to the second part of your response, voluntary form completion will likely not allow for discrimination of deep fakes from real data as measures and countermeasures develop over time. Although I'm sure your skills are far better than mine, I'm not so sure that in 5-10 years you'll be able to tell a deep fake from a real image! In that sense, the question is: does anything need to be done to prevent confusion and possible damage to the hobby? Cheers, Doug
dmsummers 6.80 · 2 likes
Hi Sigga,  Ok, I'll attempt to give a definition of "real" (among many possible definitions...this one given to help this particular discussion).   I'd generally claim that for any given telescope, mount, sensor and filter set combination, the "real" data is what is captured during an observing session(s).   That ground truth may be significantly different from user to user, all of whom use different gear, better (or worse) tracking, different optics, etc.   Within some reasonable processing to address concerns of sensor limitations, atmospheric convolution(s), and other methodic issues, there is a spectrum of reality when processed with care.    I'm not suggesting there is only 1 true/real image (that would be boring).   There does seem however to be a line crossed when fake details are generated and inserted into the dataset from other external data using proprietary methods that haven't yet been vetted by the community.   Hopefully this isn't too wishy washy to see where I'm going...   Tough question; maybe lame answer, but it's an attempt.   cheers,  Doug

edit:  BTW, I'd say for example that of the specific products you mention, any one of them can transform "real" data into something not real by insertion.    "Paint" comes to mind.   I think we'd all agree that if someone wanted to paint in or cut/paste some external detail into an image, that might cross a line (or maybe not?).   Many of us might be tempted to clonestamp correct a star artifact, but we kind of know instinctively that there's a line where we shouldn't go if we want to suggest the product is a photograph and not an abstract art piece?
hbastro · 8 likes
For me an image crosses the line to fake when processing artifacts become details, and when details are added. Processing to enhance or contrast an aspect of an image, be it the use of color, tone, contrast, mapping methods, etc., isn't fake in my opinion, and it often renders relationships between objects not otherwise easily seen.

There are many ways of adding fake details: excessive spatial filtering, creative masking, AI algorithms, spider-diffraction algorithms, star replacement, adding little people silhouettes. Their use, in most cases, produces strikingly beautiful images. Eye candy for the community.

I think a grip on reality is important, even in a pretty-picture environment like AstroBin, so I answered YES to the survey...
Dave
barnold84 10.79 · 6 likes
Hi Doug,

Trust me, in no way would I claim that I'm a good astrophotographer. What I am sure about is that I am very successful in my hobby, simply because it gives me fun and satisfaction.

Instead of answers, I have a few questions: what specific fake details do you have in mind (not to claim that they don't exist, but to have something more concrete to talk about), and what specific threats to the hobby do you have in mind?

I agree with Jim and a lot of AI researchers that this technology, like many other things, can be used for good and for harm, so fakery concerns me with regard to politics and other realms, but not necessarily AP, as long as I am free to choose which way to go.

I think it was Huawei that added a feature to its photography app that replaces the Moon in the view with a high-quality image of the Moon. I don't know what I should think about it, but in the end that boils down to my world view, and that certainly differs from others'.

Therefore, if I shall make a prediction: there will be some reorganization within the field, and maybe new clusters will emerge, with people who do it for business purposes, people who want to be artistic in some way, and people who are more purist and try to use as little automation as possible. To some extent this is already there, and the new tech will move the cluster boundaries somehow, but I don't see a game changer here.

Cheers,
Björn
dmsummers 6.80 · 3 likes
I think Dave (hbastro) gives a good set to consider (and there are more). As for what damage occurs, it's subtle. We're in the early innings of this game. When the public begins to realize that any astrophoto they look at may not represent the reality of the actual gear used, but rather is some algorithmic artistic creation, we as a group of APers will lose trust and possibly some credibility/visibility. Subtle, I realize, but a harm no less for those who care about this aspect of the hobby.

Edit: There is another kind of damage that occurs when liberties are taken too far. As an example, when the public sees full-moon pictures 2x the size of the real moon superimposed on a landscape photo, I think we can all agree that the public is harmed too. Folks involved in outreach then need to spend extra time to correct those deceptions....
andreas1969 6.02 · 1 like
Interesting! Following...
Snjór 11.96 · 5 likes
barnold84:
Trust me, in no way would I claim that I'm a good astrophotographer. What I am sure about is that I am very successful in my hobby, simply because it gives me fun and satisfaction.

This^^^^
Snjór 11.96 · 4 likes
Doug Summers:
Hi Sigga,  Ok, I'll attempt to give a definition of "real" (among many possible definitions...this one given to help this particular discussion).   I'd generally claim that for any given telescope, mount, sensor and filter set combination, the "real" data is what is captured during an observing session(s).   That ground truth may be significantly different from user to user, all of whom use different gear, better (or worse) tracking, different optics, etc.   Within some reasonable processing to address concerns of sensor limitations, atmospheric convolution(s), and other methodic issues, there is a spectrum of reality when processed with care.    I'm not suggesting there is only 1 true/real image (that would be boring).   There does seem however to be a line crossed when fake details are generated and inserted into the dataset from other external data using proprietary methods that haven't yet been vetted by the community.   Hopefully this isn't too wishy washy to see where I'm going...   Tough question; maybe lame answer, but it's an attempt.   cheers,  Doug

edit:  BTW, I'd say for example that of the specific products you mention, any one of them can transform "real" data into something not real by insertion.    "Paint" comes to mind.   I think we'd all agree that if someone wanted to paint in or cut/paste some external detail into an image, that might cross a line (or maybe not?).   Many of us might be tempted to clonestamp correct a star artifact, but we kind of know instinctively that there's a line where we shouldn't go if we want to suggest the product is a photograph and not an abstract art piece?

I might also add: how would @Salvatore be expected to police this requirement? He is a very nice man; this would have him jumping off a bridge or taking to strong drink.
dmsummers 6.80
There's no need for police or jumping off bridges (but alcohol can be nice in moderation!). I'd suggest that if consensus occurred (and we're far from that!), then a form checkbox during upload would do the trick. As I recall, there is at least one entry on the upload form now that is "required"; this would just be another. Currently (again, as I recall), you can't progress in the form until the required fields are entered.
andreatax 8.51
When people say they like looking at pretty pictures in the context of AP I always think they are actually after cute kittens/puppy pictures...
Stargazer66207 1.81 · 6 likes
Hi, all,
In my opinion, if you really want to get "picky picky", then ALL images that have been extensively processed using the various programs (APP, PixInsight, Photoshop (all versions), etc.) are not exactly "REAL". Just go on the web, pick out one NGC object, and scan all of the various images of it, and you quickly see that various processing procedures produce a vast variety of "looks" for the same object. For me, I prefer the ones that look more "natural", i.e., not extremely color-saturated, nor with garish features such as exaggerated Ha regions in the spiral arms of galaxies. Just my humble opinion.

Stargazer66207
profbriannz 17.22 · 3 likes
A stimulating question and great responses so far. I voted yes - not because I fear AB will be overrun by deep fakes, but because this information is helpful to me as I learn the hobby from being inspired by the great photos on AB.  

The more information I have, the more I can try to emulate - or know whether I am ever going to be able to achieve it. For that reason I find photos that have little information on exposure times or processing packages less useful.

That doesn't mean to say that I don't appreciate such photos, but I do like to know what equipment, what observing plan and what processing were used. That helps me appreciate the skills of the photographer all the more.

If AI were required as an additional declaration in the post-processing, that would be fine with me - and I could evaluate the photograph correspondingly.

But I wouldn't expect this to be policed; after all, photos already appear here missing details of observing (e.g. integration time) or processing that are important (for me).
Alan_Brunelle · 14 likes
There have been a lot of great comments to this thread already, so I want to add just a little, including opinion.  

I think Dave makes a good point. The two AI features that I know of, StarNet (which I normally only use for mask generation) and Topaz (which I do not use), seem to have a tendency to remove available information from the image. In fact, the denoise features of PI do as well. Used poorly, they can also create detail that is not really present. As I process, and it may be just me, it is a constant struggle to decrease noise while at the same time trying to preserve detail! In the end it is rare that I get the noise level where I want it, and I often have to accept the loss of some detail. But if the viewer has to enlarge the image 10X to see that loss, it is irrelevant. Having said all that, in the end it will be up to the user as to how they apply the tools. This is a strange hobby, and I need to take some care not to insult the total viewership and what they do, but I am not in this to make the very best image that can be made of objects that have been imaged and presented here a thousand times before me, some using telescopes many times the size and cost of mine. I really do it for me. My personal ethic is to try to preserve and bring to the forefront those details that I think are there and make them more obvious to the viewer.

But what is contrived? False color is clearly contrived. It's mostly not real and not representative of what a human eye can see, yet it's used all the time. Well, you say, it's nice to know where the hydrogen is, and the nitrogen and oxygen, but that is not why most publish the image as such. They do so because it looks beautiful. But beyond art, why false color? Well, I guess it is because "real" science itself has promoted it, often without full disclosure. Yes, we all now know that all the hugely popular Hubble images are false color, but I can remember that back in the beginning, and for quite some time, these photos were released without full disclosure of that fact. That may not be completely true, since it may have been the media that picked them up and rebroadcast them that left out those details. But even those images take artistic license in presentation.

I would normally think that AB should not get involved in demanding such details.  And as stated above, just using an AI feature does not necessarily remove "fact" from an image and add "fake" to it.  So who draws the line and where?  Maybe AB has some responsibility, only because of the designations of awards to images such as Image of the Day, etc.  But even then I am not sure I am on that side.   I am not even aware that we have to divulge how we process our images.  We can list the tools we use, but is it a requirement?  

Other than forgeries, can deep fakes even be a thing in art? And on AB, this is mostly art. There is some science, but it is clear when it is presented here. Some cataloguing, but that is clear as well, and when done, it is in the service of letting others see what they should find if they try to locate the same objects. So faking is counterproductive.

What is the end game of faking astroimages? To become an AstroBin hero? To get recognized by some astro-publishing magnate and get hired away on a six-figure salary to make pictures for publication? Would someone create an image of a fake nebula and then name it after themselves? And who, exactly, does faking an astroimage hurt? I think deep fakes have the potential to cause great harm in our society today. But I do not think that here is where that damage will be done.

I will also suggest that many visual amateur astronomers might take issue that we are even having this rather smug discussion. From their perspective, we may be part of the problem! We present images using tools that the vast majority of the planet would never have access to. These tools, including our creative software, create images that are highly colored, highly contrasted and heavily promoted in the media. Yet when a young kid who has seen all of these images their whole life walks up to a real telescope and looks through the eyepiece, they often leave less than impressed. "Where is the color?" "I can't see any of the spiral arms in Andromeda!" Many are no longer interested in getting their own telescope one day, because the only way to really see these things is by flipping through a Hubble picture album.

Alan
jeffweiss9 2.33 · 2 likes
My problem with AI in astrophotography, as in other slightly more scientific fields, is that no one seems to be able to tell you exactly what it is doing. That is a major difference from most AP tools in PI, PS, or whatever. I can understand a size 3, 1-of-1 erosion morphology transformation in PI or an unsharp mask in PS and reproduce their results independently. I cannot do that with StarNet or Topaz. For that reason, I see those tools (even though I have used them on occasion, labeling them with "very light use") as crossing the line scientifically or documentarily - to use Juan Conejero's term, defined as 'presenting factual material with little or no fictional additions.' People should do what they feel like doing (it is a hobby for enjoyment, after all), but I think it's a good idea to be sure to mention their use with the rest of the software used in the processing. Whether that should be a box to click on AstroBin, I honestly don't know, but I'd be happy to go along with that if that's what eventually happened.
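The reproducibility point can be sketched in a few lines. This toy grayscale erosion (loosely analogous in spirit to PI's morphology tool, not its actual implementation) is specified completely by code anyone can re-run and verify, which is exactly what a black-box network does not offer:

```python
import numpy as np

def erode3(img):
    """Grayscale morphological erosion with a 3x3 structuring element:
    each pixel becomes the minimum of its neighborhood. Fully
    deterministic, so two runs are bit-identical."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = np.stack([
        padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)
    ])
    return stack.min(axis=0)

# A bloated 3x3 "star": erosion predictably shrinks it to its core.
star = np.zeros((9, 9))
star[3:6, 3:6] = 1.0
shrunk = erode3(star)
```

Every output pixel can be traced back to a minimum over nine input pixels; nothing is inferred from data outside the image.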

Jeff
jzholloway 2.97 · 4 likes
I'm not an expert, but I do not know very many people who stack their images manually, apply calibration frames manually, or process everything manually rather than through the use of available programs. Not many of us use old-school SLR cameras with Kodak film anymore. Many of us use computer software to align our scopes, find and frame our targets, guide, track and capture. Technology is only increasing, and therefore encroaching more and more into this wonderful hobby. I believe we should all strive to use the products available to us in a way that attempts to preserve the targets we are gathering data on, but at the same time, I believe we should have no fear in using them either. Personally, I try to at least record what techniques I use and mention them, though I know I have also become a little lax in that.

I think in many ways one can tell when an AI or a program is over done. I also think that people's preferences on how their images look are a very personal thing. Some people like images to appear in a certain way over others. Some like over saturation, some don't. Some like stars, some don't. Some like noise, some don't. Again, looking at all the images on AB will show you we each have different preferences when it comes to astrophotography. Saying that, I have no issue in having the disclosure available to share - we have that in many ways already. I can list what software, what telescopes, what cameras I use. I do not agree on policing how people process their images, mainly because all it will do is serve to discourage people who are new to the hobby from continuing.

An example of this is a picture of M45 that someone posted on IG, mentioning they had used an NB filter with an OSC. Obviously, we all know that M45 shouldn't be taken with an NB filter, but this guy, who was new, didn't. Unfortunately, two people commented saying just that and railed on the poster because he dared use a filter, instead of saying "good job on your first light; how about next time trying without the filter to see how it comes out." I say this because images that are "overcooked" to many of us, my images included, are obvious, and trying to police that, imo, is the wrong approach. If you think someone should process a little lighter, say so in an encouraging way. I see no reason why AB should police images processed with certain means.
minhlead 2.41 · 2 likes
Should you disclose the use of StarNet as well?
jeffweiss9 2.33
I would say yes. No one knows exactly what astronomical images were used for the training set and, even if you did, not even the developers can explain exactly what the AI algorithm is doing to get its results.
-Jeff
Alan_Brunelle · 12 likes
The problem is how the poll is worded: essentially, force user disclosure of the use of AI to save astrophotography from the evils of fakes.

1. AI is a tool that can be used to do essentially the same things as other tools. It can be applied judiciously, and it can be applied to make fakes.

2. Most other tools can also be used to create fakes. They too can be applied judiciously to make "acceptable" images.

So are we going to be required to disclose the full processing path of each image and to what degree?

I personally would disclose any unique processing action if I felt it to be interesting or profoundly impactful.  

The fact that there are essentially zero novel images to be recorded any more (artistic license aside!), makes the policing of fakes by any means essentially moot.  All fakes should be easily identifiable.
whwang 11.80 · 9 likes
In principle, I agree that AI-based processing should be disclosed.  In reality, I am not sure if there are good and easy ways to implement this in Astrobin, other than the already existing data field for processing software.

To me, people are free to do whatever they want with their images. That's art creation. On the other hand, astrophotography is not just about art; it's also amateur science. To maintain the science aspect of astrophotography, I think being honest about what we did is the minimum requirement. We are still free to do whatever we want, but we need to disclose it. We can't take a shot of the Milky Way in June and a shot of a total lunar eclipse in July, and blend them together without telling people this is a blend of two independent shots. In other forms of photographic art, this is allowed. But in astrophotography, we have to tell people about it.

But of course, we don't tell people every exact processing step. Many processes can be considered "standard" in astrophotography (dark/bias subtraction, flat-fielding, stacking, nonlinear curve stretching, saturation boost, etc.), and it's not necessary to disclose them. Can AI-based processes be considered standard? Personally I think not, especially those involving sharpening. If the sharpening just "recovers" the details lost in imaging to atmospheric turbulence or optical aberrations, then it is fine. But if the sharpening has a high tendency to add details that do not actually exist in nature, then it is like blending multiple unrelated images together. This should be disclosed. I think AI-based sharpening (and noise reduction, which often comes with sharpening) falls into this category.
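The distinction drawn above can be illustrated with classical deconvolution, which "recovers" detail using only the observed data and an assumed point-spread function, never an external training set. A minimal 1-D Richardson-Lucy sketch (illustrative only, not any particular product's implementation):

```python
import numpy as np

def richardson_lucy(observed, psf, iters=50):
    """Richardson-Lucy deconvolution (1-D): iteratively re-attributes
    blurred flux using only the observed data and a known PSF --
    no external training data is ever consulted."""
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1]
    for _ in range(iters):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# A point source blurred by a known PSF: deconvolution re-concentrates
# the flux, sharpening without inventing structure.
truth = np.zeros(32)
truth[16] = 1.0
psf = np.array([0.25, 0.5, 0.25])
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

Everything the algorithm "knows" is in `observed` and `psf`; a generative sharpener, by contrast, also draws on whatever its training set contained.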
 
This topic was closed by a moderator.