Get EXR image luminance - image-processing

As far as I know, to get the luminance of an .exr image we have to render the image at different exposures and combine them into one?
Here is my image, made in Adobe, which has luminance values ranging from 0 (left) to 10 (right), in the .exr format.
[image: exr_to_png_forViewingPurpose]
FIRST TEST:
So I tried ImageMagick with this command line from this link:
magick .\img.exr -evaluate multiply 2 img1EV.png
Then I did the same for ±2EV and ±3EV, but the negative EVs give me an empty image.
SECOND TEST :
I saw on this link that they use different exposures such as 0.1, 0.2, 0.4, etc.
I did the same with a piece of software to get the same exposure values as theirs.
Next, I implemented a Python program to get the luminance from a PNG (as suggested here).
In the end, with the first test, I get the same result for every image despite the different exposures.
For the second test, I do get different images, but I don't know what to do with them afterwards.
I don't know if this is the right way to do it; I hope someone knows.
[image: FirstTest result]
[image: SecondTest result]
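Since EXR already stores linear floating-point values, one way to sidestep the multi-exposure round-trip entirely is to read the floats and compute the relative luminance of each pixel directly. A minimal Python sketch, assuming an OpenCV build with OpenEXR support (recent builds also require the OPENCV_IO_ENABLE_OPENEXR environment variable) and that the EXR holds linear RGB:

import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # must be set before importing cv2
import cv2

# Read the EXR as float32 BGR, with no exposure or tonemapping applied.
img = cv2.imread("img.exr", cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)
b, g, r = cv2.split(img)

# Relative luminance with Rec. 709 weights on linear RGB.
luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b
print("min:", float(luminance.min()), "max:", float(luminance.max()))  # should span roughly 0..10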

Related

image similarity algorithm for charts

For automating the tests of a legacy Windows application, I need to compare screenshots of charts.
Pixel comparison works fine as long as the Windows session opens with the same resolution, DPI, colour depth, font size, font family, etc.; otherwise the screenshot taken during the test may differ slightly from the one recorded during the development of the test.
Therefore, I am looking for a method that allows slight variations and produces a score rather than a boolean.
I started by scaling the retrieved screenshot to match the size of the recorded one. Of course, pixel comparison then fails.
Then I tried using SSIM to get a similarity score (using https://github.com/rhys-e/structural-similarity). It definitely does not work for my case -- see the simplified experiment below.
Any suggestions?
Thanks in advance,
Adrian.
SSIM experiments
This is the reference picture:
This one contains a black line slightly above the reference --> getting 0.9447093986742424
This one, completely different --> getting 0.9516260505445076
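For comparison, here is the same experiment sketched with scikit-image's SSIM instead of the Java library, since it exposes the knobs (window size, Gaussian weighting) that strongly affect how much a thin chart line moves the score. The file names are placeholders for the pictures above:

import cv2
from skimage.metrics import structural_similarity as ssim

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
test = cv2.imread("candidate.png", cv2.IMREAD_GRAYSCALE)
test = cv2.resize(test, (ref.shape[1], ref.shape[0]))  # match sizes first

# A smaller window with Gaussian weighting makes thin lines count for more
# than the defaults do.
score = ssim(ref, test, win_size=7, gaussian_weights=True)
print(score)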

Scale down large image and keep the quality

I have one huge logo with dimensions 1770x1770 and need to scale it down to 80x80. The logo is not complicated and shouldn't lose quality, but when I scale it down it looks really bad: the quality is drastically decreased and the text is not readable at all.
I know that beginning dimensions look huge but the logo is simple so it should be possible to keep good quality.
The best results I got were when I scaled the image down in a couple of phases of 20-30% each.
Thoughts? Thank you :)
It is in the nature of raster images that they cannot change size with impunity.
If you have your logo in a vector format, like SVG or PostScript, you are better off using another program to generate a low-resolution version of it.
Otherwise, if the original is raster, one way to try to reduce it is to downscale it a bit at a time (around 10% per step) and run enhance filters at each step. That will yield a better result, but still far from perfect.
You can do that programmatically - open up a Python console in Filters->Python-Fu->Console.
Then, retrieve a reference to your image with:
img = gimp.image_list()[0] (the 0 refers to the last image tab opened - use 1 for the one before that, and so on).
And then just type:
while img.width > 80:
    # Scale down about 10% per pass...
    pdb.gimp_image_scale(img, int(img.width * 0.9), int(img.height * 0.9))
    # ...then re-sharpen the top layer to recover edge contrast.
    pdb.plug_in_unsharp_mask(img, img.layers[0], 3, 0.5, 4)
(If you know nothing about Python: beware of the indentation - the two lines inside the while must share a common whitespace prefix - and hit Enter on a blank line to actually execute it.)
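The same stepwise idea can also be scripted outside GIMP; here is a rough Pillow equivalent (the resampling filter and unsharp-mask settings are guesses to tune, not GIMP's exact behaviour, and the file names are placeholders):

from PIL import Image, ImageFilter

img = Image.open("logo.png")
while img.width > 80:
    # Shrink about 10% per pass, never overshooting the 80px target...
    w = max(int(img.width * 0.9), 80)
    h = max(int(img.height * 0.9), 80)
    img = img.resize((w, h), Image.LANCZOS)
    # ...then re-sharpen to keep edges and small text legible.
    img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=80, threshold=3))
img.save("logo_80.png")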

How does gravatar adjust the colour in your images automatically?

I decided I wanted to change my gravatar to be circular. I have it circular on my blog with CSS and prefer the effect, so I decided to use a bit of ImageMagick to give my image a circular alpha channel (so I could have it on SO, etc. as well). A couple of quick commands later, I was sorted:
# crop to square 422x422 with 0x0 offset
convert mike_gravatar.jpeg -crop 422x422+0+0 mike_gravatar_square.jpeg
# give circular alpha channel around it NOTE this is for a 422x422 image (note the 211x211)
convert mike_gravatar_square.jpeg \( +clone -threshold -1 -negate -fill white -draw "circle 211,211,211,0" \) -alpha off -compose copy_opacity -composite mike_gravatar_circle.png
Brilliant: now we just upload this to gravatar, I will have a nice circular cropped image, and all will be well with the world.
As you have probably guessed from the question, all is not well:
OK, I must have clearly messed up my ImageMagick and not checked that the before and after images are the same. I reopen the images next to one another and see that they are indeed the same. I try uploading to gravatar again and notice that they seem to process the images after the "cropping" stage; here is what it looks like in the browser after the file upload (before the cropping messes it up):
Alright, let's do some digging: someone else must have stumbled upon this before, so I have a look around, and one lone soul in a desolate forum wasteland cries out. There is no response to it, but the relevant text is here:
It seems that if a photo or picture uploaded to Gravatar's cropper doesn't have jet black, it will auto-level the nearest grey to black and darken the whole image, including darkening whites into greys. Can confirm that this occurs with any PNG image that has a grey background or has a large enough proportion of it, whether or not it has 255 whites and regardless of whether it has alpha-blending or not.
So it seems like I can fix this by putting in a single black pixel. That sounds alright, so I try adding a black pixel, then a single black and a single white pixel. Result:
So basically now I'm out of ideas:
Does anyone have any idea what post-processing gravatar does, so I can undo it or counteract its effects with pre-processing?
Is this "feature" documented anywhere? Can it be turned off, or worked around?
I think it would be quite cool to preprocess the image to counteract the darkening they would apply, but that would require knowing exactly what they do, and obviously it might not be possible (it depends on the relative movement of each colour, I suppose).
EDIT:
I tried making an inverse image to see whether the processing was based on the average or on the extreme values; that was also darkened, so it seems more likely to be the average:
Alright, I've got a solution that "worked for me". Unfortunately it is just empirical and "good enough" (I'd quite like to check what's actually happening, but I don't have any more time to devote to being nerd-sniped). I'm going to post what I did, what I think might have happened, and how I think this should be solved if someone has the time.
First off, what I did: I simply tried whitening the image by random amounts, and what ended up working was a gamma of around 2:
convert mike_gravatar_circle.png -gamma 2 mike_gravatar_circle_light_2.png
Here is what the picture looks like both before and after processing by gravatar:
I feel it's pretty ridiculous that I need to clobber my picture like I do on the left just to make it look normal, so I'm going to leave this question open to see if anyone can show me a better/cleaner way of doing this.
EDIT: I forgot to mention my (completely unfounded) guesses as to how this should be solved. My guess is that gravatar tries to make the average colour of the image some mid-range value (as that might seem sensible, I suppose) and picks up the alpha as being all white. Trying some experiments to determine this could be interesting, but only if they had an API to automate uploading and downloading the images; otherwise it would be a painful effort. I'm looking forward to any suggestions as to what people think is happening.
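To make that guess concrete: if Gravatar really does pull the mean of the visible pixels towards mid-grey, the compensating gamma could in principle be computed rather than found by trial and error. A purely speculative Python sketch with Pillow and NumPy (the mid-grey target and the alpha handling are assumptions, not anything Gravatar documents):

import numpy as np
from PIL import Image

img = np.asarray(Image.open("mike_gravatar_circle.png").convert("RGBA")) / 255.0
rgb, alpha = img[..., :3], img[..., 3:4]

# Mean brightness of the visible pixels only (the transparent ring is excluded).
mean = float((rgb * alpha).sum() / (3 * alpha.sum()))

# ImageMagick's -gamma G maps v -> v**(1/G); choose G so the current mean
# would land on mid-grey, i.e. solve mean**(1/G) = 0.5 for G.
gamma = float(np.log(mean) / np.log(0.5))
out = img.copy()
out[..., :3] = rgb ** (1.0 / gamma)
Image.fromarray((out * 255).round().astype(np.uint8)).save("pre_compensated.png")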

Programmatically get non-overlapping images from MP4

My ultimate goal is to get meaningful snapshots from MP4 videos that are either 30 min or 1 hour long. "Meaningful" is a bit ambitious, so I have simplified my requirements.
The image should be crisp - non-overlapping, and ideally not blurry. Initially, I thought getting a keyframe would work, but I had no idea that keyframes could have overlapping images embedded in them like this:
Of course, some keyframe images look like this and those are much better:
I was wondering if someone might have source code to:
Take a sequence of say 10-15 continuous keyframes (jpg or png) and identify the best keyframe from all of them.
This must happen entirely programmatically. I found this paper: http://research.microsoft.com/pubs/68802/blur_determination_compressed.pdf
and felt that I could "rank" a few images based on the above paper, but then I was dissuaded by this link: Extracting DCT coefficients from encoded images and video, given that my source video is an MP4. Of course, this confuses me, because the input into the system is just a sequence of jpg images.
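As a crude stand-in for the paper's DCT-based measure (it is not the same algorithm), the usual quick way to rank a handful of frames by sharpness is the variance of the Laplacian. A sketch, assuming the candidate keyframes have already been exported as images:

import glob
import cv2

def sharpness(path):
    # Higher variance of the Laplacian means more high-frequency detail, i.e. sharper.
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

frames = sorted(glob.glob("keyframes/*.jpg"))
print("sharpest candidate:", max(frames, key=sharpness))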
Another link that is interesting is:
Detection of Blur in Images/Video sequences
However, I am not sure if this will work for "overlapping" images.
The first pic is from an interlaced video at a scene change. The two fields belong to different scenes. De-interlacing the video will help; try the ffmpeg filter -filter:v yadif. I am not sure exactly how yadif works, but if it extracts the two fields and scales them to the original size, it will work. Another approach is to detect whether the two fields (extract alternate lines to form images of half the height, then diff them) are very different from each other, and ignore those images.
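A sketch of that second suggestion: split each exported frame into its two fields and measure how much they disagree; a large difference suggests a combed scene-change frame to skip. The threshold is a guess to tune, and the file names are placeholders:

import cv2
import numpy as np

def field_difference(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    even, odd = gray[0::2], gray[1::2]  # the two interlaced fields
    n = min(len(even), len(odd))
    return float(np.mean(np.abs(even[:n] - odd[:n])))

# Keyframes exported beforehand, e.g. with:
#   ffmpeg -i input.mp4 -vf "select=eq(pict_type\,I)" -vsync vfr kf_%03d.png
if field_difference("kf_001.png") > 12.0:
    print("fields disagree strongly - likely an interlaced scene change, skip it")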

Getting the average color of frames in a video

End goal: Get the average colour of each frame in a video.
My plan so far is this:
Export the frames of the video as individual images.
Batch process the images, resizing each to 1px x 1px, as I believe this will produce the average colour.
Get the RGB value of that one pixel and record it as text.
Where I'm stuck is step 3. I've no idea how one would go about this programmatically.
I only need to do this once or twice, so it doesn't need to be completely automatic, I'm just keen to avoid copy pasting colour values manually.
EDIT: The first two steps don't require any programming, so I am pretty open to using whatever language your solution requires. My forte is PHP, and this is for an Arduino project, so C-like languages are fine, but whatever will get the job done. I use a Mac, but Windows or Linux is not a problem either.
Since you are doing image processing, I assume you are doing this in MATLAB.
So you can read the image first:
A = imread('filename.jpg');
To get the Red color you can use:
A(1,1,1);
To get green and blue, just change the third index to 2 and 3 respectively.
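Since the question is language-agnostic, step 3 is also a few lines of Python with Pillow: shrink each exported frame to one pixel using box (plain average) resampling and record its RGB value. The frames directory is assumed to come from step 1, e.g. ffmpeg's image output:

import glob
from PIL import Image

with open("colours.txt", "w") as out:
    for path in sorted(glob.glob("frames/*.png")):
        # BOX resampling averages all covered pixels, which is exactly the
        # "resize to 1x1 = average colour" trick from step 2.
        r, g, b = Image.open(path).convert("RGB").resize((1, 1), Image.BOX).getpixel((0, 0))
        out.write(f"{path}\t{r},{g},{b}\n")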
