Distinguish between transparency and extra alpha channels? - transparency

I apologize in advance if I don't explain my question well; I'm not fully familiar with the subject.
Let's say I have some CMYK TIFF files. Is there a way to distinguish between transparency and an extra alpha channel?
I ran exiftool in the terminal with the command exiftool -G -S filename.tif and got a tag (ExtraSamples) that provides information about the alpha channel. Can I use this to tell the difference?
Thanks.

See the ExtraSamples TIFF tag description. According to the specification, the tag can have one of three values (one per sample beyond the "natural" number of samples for the color model, i.e. 1 for grayscale, 3 for RGB, or 4 for CMYK):
0 = Unspecified data
1 = Associated alpha data (with pre-multiplied color)
2 = Unassociated alpha data
Now, what you mean by "the difference between transparency and extra alpha channel" isn't really clear to me, as I often use the terms "transparency" and "alpha channel" interchangeably. Perhaps you just mean the above (1 "associated" vs 2 "unassociated" alpha).
Any other extra samples will use 0 ("unspecified"). Note that these extra samples are not used for transparency or alpha information; their meaning is application-specific and requires further context to interpret. So if your file contains unspecified ExtraSamples, they are most likely not alpha channels or transparency at all.
The link in your comment makes it somewhat clearer what you mean by "the difference between transparency and extra alpha channel". However, that link talks about the difference between an alpha channel and a (bit) mask, which are just two kinds of transparency.
ExtraSamples in a TIFF is typically not used for bit masks, instead a separate IFD with SubFileType "mask" (4) is used.
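If you want to inspect this programmatically rather than via exiftool, here is a sketch using Pillow to read the ExtraSamples tag (TIFF tag 338) directly. The demo file it creates is just for illustration; the value-to-meaning mapping follows the TIFF 6.0 specification quoted above:

```python
from PIL import Image

# Meaning of each ExtraSamples value, per the TIFF 6.0 specification
EXTRASAMPLES_MEANING = {
    0: "Unspecified data",
    1: "Associated alpha data (pre-multiplied color)",
    2: "Unassociated alpha data",
}

def describe_extra_samples(path):
    """Return a human-readable description for each extra sample in a TIFF."""
    with Image.open(path) as im:
        values = im.tag_v2.get(338, ())  # 338 = ExtraSamples
    if isinstance(values, int):          # Pillow may return a bare int
        values = (values,)
    return [EXTRASAMPLES_MEANING.get(v, "Unknown (%d)" % v) for v in values]

# Demo: an RGBA TIFF, whose fourth sample is an alpha channel
Image.new("RGBA", (4, 4)).save("demo.tif")
print(describe_extra_samples("demo.tif"))
```

An empty list means the file declares no extra samples at all, which is yet another case distinct from both "associated" and "unassociated" alpha.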


Get EXR image luminance

As far as I know, to get the luminance of an .exr image we have to render the image at different exposures and combine them into one?
Here is my image, made in an Adobe tool, whose luminance varies from 0 (left) to 10 (right), saved with the .exr extension.
exr_to_png_forViewingPurpose
FIRST TEST :
so I tried ImageMagick with this command line from this link:
magick .\img.exr -evaluate multiply 2 img1EV.png
I did the same for ±2 EV and ±3 EV, but the negative EV values gave me empty images.
SECOND TEST :
I saw on this link an approach that uses different exposure values such as 0.1, 0.2, 0.4, etc.
I did the same with a software tool to get the same exposure values as in that example.
Next, I implemented a Python program to get the luminance from a PNG (as suggested here).
In the end, with the first test, I get the same result for every image even though they have different exposures.
For the second test, I do get different images, but I don't know what to do with them afterwards.
I don't know if this is the right way to do it; I hope someone knows.
FirstTest Result
SecondTest Result
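For what it's worth, if the EXR data is already linear (EXR files normally store linear light), relative luminance can be computed directly per pixel with the Rec. 709 weights, without rendering multiple exposures. A minimal NumPy sketch; the array here stands in for data you would load from the EXR file (e.g. with imageio or the OpenEXR bindings):

```python
import numpy as np

def relative_luminance(rgb):
    """Relative luminance of linear RGB data, using Rec. 709 weights."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

# Stand-in for a loaded EXR: a 1x3 strip of black, mid-gray, and pure green
img = np.array([[[0.0, 0.0, 0.0],
                 [0.5, 0.5, 0.5],
                 [0.0, 1.0, 0.0]]])
print(relative_luminance(img))  # each pixel reduced to one luminance value
```

Multiplying the image by 2 (as in the -evaluate multiply command above) scales this luminance by exactly 2, which is why +1 EV corresponds to a multiply by 2 and -1 EV to a multiply by 0.5.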

ImageJ - how to ensure two images are directly comparable (what scripting command?)

I need to present some microscopy images side by side and ensure they are directly comparable by eye in a presentation. The pictures were all taken with the same exposures, gain et cetera so the underlying pixel values should be comparable.
However, the microscopy software has a nasty habit of saving the files with one of the colour channels saturated (for some reason), so I have to process the images for presentations.
Previously I'd been using a macro which processes through a folder and calls the scripting command
run("Enhance Contrast", "saturated=0.35");
But on reflection I don't think this is the proper command to call. I don't think it would produce images that are directly comparable, by eye, to each other.
I had thought that the command
run("Color Balance...");
resetMinAndMax();
would be best as it should show the full display range. But the display values shown on the histogram do vary depending on the image.
Is this appropriate for making directly comparable images, or would a command like
setMinAndMax();
with 0 as the minimum and an arbitrary figure as the maximum be more appropriate? This is driving me mad: I keep getting asked whether my images are directly comparable, but I simply don't know!
Usually, resetMinAndMax(); is the best way to ensure that your images are displayed consistently.
Note however that it also depends on the bit depth of your images.
8-bit grayscale images and RGB color images are displayed with a range of 0-255 upon resetMinAndMax();
For 16-bit (short) and 32-bit (float) images, the display range is calculated from the actual minimum and maximum values of the image (see the documentation of the Brightness/Contrast dialog and the resetMinAndMax() macro function)
So for 16-bit and 32-bit images, you can use the Set Display Range dialog (Set button in the B&C window) with one of the default Unsigned 16-bit range options to ensure consistent display, or the macro call:
setMinAndMax(0, 65535);
If you want to use the images in a presentation, copy them using Edit > Copy to System, or convert them to either 8-bit or RGB before saving them and inserting them in a presentation.
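The same idea (mapping every image through one fixed display range rather than each image's own min/max) can be sketched outside ImageJ with NumPy; the 0-65535 range here mirrors the setMinAndMax(0, 65535) call above:

```python
import numpy as np

def to_display_8bit(img16, lo=0, hi=65535):
    """Map a 16-bit image to 8-bit using a FIXED display range,
    so every image is scaled identically and remains comparable."""
    scaled = (img16.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255).round().astype(np.uint8)

a = np.array([0, 32768, 65535], dtype=np.uint16)
print(to_display_8bit(a))  # -> [  0 128 255]
```

If you instead derived lo and hi from each image's own minimum and maximum, two images with different intensity distributions would be stretched differently, which is exactly what breaks by-eye comparability.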

I have hand-drawn some work on grid paper and scanned it, how can I use Photoshop to remove the gridlines

The grid is a blue/green. The work is in a black ink, and has a fair bit of variety of pressures, which I want to retain.
Here's a link to a small selection.
I have Photoshop v3
My attempts have involved using Select, Color Range, and sampling some grid, then inverting.
Is there a better way?
I also have some experience with Python and PIL, if that's a useful alternative.
This is a Photoshop answer, rather than a programming answer, but that seems to match your question's needs.
I applied a Black and White filter, and enabled a Blue filter, then set the Blue channel sensitivity to 300%, like this in Photoshop CC.
and got pretty good results like this:
In an older version of Photoshop, you may need to go to Image > Mode > Lab Color, then in the Channels palette deselect Lab, leaving just the a and b channels selected, then use Select > Color Range to get the blues (or maybe the blacks!) before going back to RGB mode.
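Since you mention Python and PIL: a rough programmatic alternative is to whiten any pixel whose blue or green component clearly dominates red (the grid) while keeping dark, neutral pixels (the ink). The dominance threshold here is a guess you would tune per scan:

```python
import numpy as np
from PIL import Image

def remove_grid(path_in, path_out, dominance=30):
    """Whiten bluish/greenish grid pixels; keep dark neutral ink.
    'dominance' is how much B or G must exceed R to count as grid
    (a guessed threshold -- tune it for your scans)."""
    rgb = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    grid = (b - r > dominance) | (g - r > dominance)
    out = rgb.copy()
    out[grid] = 255  # paint grid pixels white
    Image.fromarray(out.astype(np.uint8)).save(path_out)
```

Because black ink has all three channels low and roughly equal, it never trips the dominance test, so the pressure variation in the linework survives untouched.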

iOS: Is there a way to alter the color of every pixel on the screen?

How does Apple alter the color of every single pixel on the screen (e.g. grayscale or color inversion), regardless of which object the color belongs to? It obviously isn't reading background color properties, since it affects images as well.
How would one approach this?
To clarify my question, how can I change the intensity / hue of every pixel on the screen, similar to how f.lux does it?
How does apple alter the color of every single pixel on the screen?
Apple probably uses an API called CGSetDisplayTransferByTable which is not publicly available on iOS.
The display transfer table controls how each possible value in each of the three RGB channels is displayed on screen and can convert it to a different value. It works similar to Photoshop's "Curves" tool. By using the right transfer table it's possible to invert the screen, adjust the hue or enhance contrast.
Since the transfer table is part of the graphics hardware and is always active, there's zero performance overhead involved. On Mac OS there are actually two transfer tables: one for the application and one for the OS.
how can I change the intensity / hue of every pixel on the screen
Without jailbreak, you can't.

ImageMagick - Transparent background - Act like Photoshop's "Magic wand"

I'm trying to convert hundreds of images that
Have an unknown subject centered in the image
Have a white background
I've used ImageMagick's convert utility in the following way
convert ORIGINAL.jpg -fuzz 2% -matte -transparent "#FFFFFF" TRANSPARENT.png
The problem is that some of my subjects contain near-white areas, so, just like a weatherman wearing a green tie in front of a green screen, some of my subjects seem to be disintegrating.
Is there any way to solve this via ImageMagick? Are there any alternative solutions? Scripting GIMP?
As you said, GIMP has a magic wand tool that can be used to select contiguous areas of the same color, so it avoids the "green tie syndrome". The problem is that it may fail if something like a strand of hair crosses the image (separating some of the white areas). Another common problem, especially with pictures of people, is a small hole of background between an arm and the body when the hand rests next to it.
Basically, it is not too hard to create a GIMP script that opens in batch many images, uses the magic wand to select the pixel at some corner (or if desired, in several known fixed places, not just one) and then removes the selection.
If it's hard to find a white area at a fixed spot, it is possible to search inward: the script looks for a white pixel on the borders and gradually moves inward in a spiral until it finds one. But this is very inefficient in the basic scripting engine, so I hope you don't need it.
If any of the suggested options above is OK, tell me and I'll create a GIMP script for it. It would be even better if you could post some sample images, but I'll try to help even without them.
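The corner flood-fill described above can also be sketched in Python with Pillow, whose ImageDraw.floodfill fills only the contiguous region reachable from a seed point; the thresh value plays the same role as -fuzz 2% and is a guess to tune:

```python
from PIL import Image, ImageDraw

def corner_to_transparent(path_in, path_out, thresh=16):
    """Flood-fill from the top-left corner, replacing the contiguous
    white background with transparency. Only pixels connected to the
    corner are affected, so white areas inside the subject survive."""
    im = Image.open(path_in).convert("RGBA")
    ImageDraw.floodfill(im, (0, 0), (0, 0, 0, 0), thresh=thresh)
    im.save(path_out)
```

For the "hole between the hand and the body" case, you would seed additional floodfill calls at several known background points, just as the answer suggests for the GIMP script.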
