In ImageMagick, is it possible to -auto-gamma based on a region?

I'm using ImageMagick to prepare a set of ~20,000 photos for a timelapse video. A common problem in timelapse videos is flickering, due to changing lighting conditions, passing clouds, hue changes, etc.
I've used IM to convert each image to greyscale and -auto-gamma, which is a drastic improvement in lighting "stability". Very good, but not yet perfect.
I would now like to do the following, but can't figure out how.
1. determine ideal auto gamma based only on the lower 30% of the image
2. apply that ideal gamma to the entire image
Each of my images has sky above and buildings below. The sky changes dramatically as clouds pass by, but the buildings' lighting is fairly stable.
I tried -region, but as expected, it only applies the gamma to the region specified. Is it possible to do what I'm hoping for? Thanks for any advice!

Yes, I think so.
You can crop the bottom 30% of the image like this:
convert image.jpg -gravity south -crop x30%+0+0 bottom.jpg
so that means you can get the mean of the bottom 30% of the image like this:
convert image.jpg -gravity south -crop x30%+0+0 -format "%[mean]" info:
and you can get the quantum range too, all in one go, if you add that in:
convert image.jpg -gravity south -crop x30%+0+0 -format "%[mean] %[fx:quantumrange]" info:
Now, the gamma is defined as the logarithm of the mean divided by the logarithm of the midpoint of the dynamic range, where both numbers are first normalized to the range [0-1]:
log(mean/quantumrange) / log(0.5)
so, with the mean in $mean30 and the quantum range in $qr, we'll let bc work that out for us like this:
echo "scale=4; l($mean30/$qr)/l(0.5)" | bc -l
and we can use the result of that to apply a gamma correction to the entire image. So, I have put all that together in a single script, which I call b30gamma. You save it under that name and then type:
chmod +x b30gamma
to make it executable. Then you can run it on an image like this and the result will be saved as out.jpg so as not to destroy the input image:
./b30gamma input.jpg
Here is the script:
#!/bin/bash
# Pick up image name as parameter
image=$1
# Get mean of bottom 30% of image, and quantum range (65535 for Q16, or 255 for Q8)
read mean30 qr < <(convert "$image" -gravity south -crop x30%+0+0 -format "%[mean] %[fx:quantumrange]" info:)
# Gamma = log(mean)/log(dynamic range centre point)
gcorr=$(echo "scale=4;l($mean30/$qr)/l(0.5)" | bc -l)
# Debug
echo Image: $image, Mean: $mean30, quantum range: $qr, Gamma: $gcorr
# Now apply this gamma to entire image
convert "$image" -gamma $gcorr out.jpg

Related

ImageMagick: Divide AE distortion by total pixels in fx output info format

I am trying to use ImageMagick 7 to detect if a specific channel in an image is largely pure black and pure white (plus a little antialiasing, and there's a chance the image could be pure black). This is to distinguish from another kind of image that shares a naming convention but has photographic-like image data in the r/g/b channels.
(Basically both image types are specular maps from different engines. The one I'm trying to differentiate here is more modern and has the metallic map in the blue channel; the other is much older and just has the specular colour in the RGB channels and the gloss map in the alpha.)
Currently I'm comparing the channel to a clone of itself that has had a 50% threshold applied, using the AE metric to see if it's largely the same apart from a small amount of antialiasing, and a fuzz of 1% to account for occasional aberration from pure black/white. This command works, but of course at the moment it only returns the number of distorted pixels:
magick ( "file.png" -channel b -separate ) ^
( +clone -channel b -separate -threshold 50% ) ^
-fuzz 1% -metric AE -compare ^
-format "%[distortion]" info:
Because the input image sizes will vary, I want to divide the distortion by the total number of pixels in the image to get the relative amount of the image that's not pure black/white -- under 10% has seemed good so far in my manual testing -- but I can't get the format syntax right. Everything I've tried -- for example "%[fx:%[distortion]/w*h]" -- has given this error: magick: undefined variable `[distortion]' @ error/fx.c/FxGetSymbol/1169.
What syntax should I use? (And if there's a better way to do what I'm doing, I always appreciate it!)
I believe the following is what you want in ImageMagick. Basically, you save the distortion in a -set option: argument and then use it in an fx expression later.
However, +clone gives you just the b channel, so there should be no need for -channel b -separate in your second line.
magick ( "file.png" -channel b -separate ) ^
( +clone -threshold 50% ) ^
-fuzz 1% -metric AE -compare ^
-set option:distort "%[distortion]" ^
-format "%[fx:distort/(w*h)]" info:
Fred (@fmw42) has already provided an excellent method. There is another method for differentiating pure black-and-white images from greyscale images with a fuller tonal scale which may interest you. Credit to Anthony Thyssen for the technique described here.
If you use -solarize 50% in ImageMagick it inverts all the highlights, so it effectively folds your histogram in half and all the whites become pure black and all the near-whites become near blacks. The command looks like this:
magick INPUT -solarize 50% OUTPUT
So, if I apply that to a couple of input images - the first one pure black and near-white, the second a fuller greyscale - the effect shows up clearly in the statistics.
If you now inspect the mean and standard deviation of the two solarised images:
magick {a,b}-sol.jpg -format "%f, mean: %[mean], stdev: %[standard-deviation]\n" info:
a-sol.jpg, mean: 2328.91, stdev: 3175.67
b-sol.jpg, mean: 16319.5, stdev: 9496.04
you can see that the mean and standard deviation of the first (pure black and white) image are low because all the bright whites have folded to near-blacks, whereas the mean and standard deviation of the greyscale image are both higher because the tones are more spread out.
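Putting that together, a hypothetical classifier could solarize and then test the normalized mean; the 0.1 threshold is a guess that you would tune on real samples:
#!/bin/bash
# Solarize, then read the mean on a normalized 0-1 scale
mean=$(magick "$1" -solarize 50% -format "%[fx:mean]" info:)
if [ "$(echo "$mean < 0.1" | bc -l)" -eq 1 ]; then
    echo "$1: essentially pure black/white"
else
    echo "$1: fuller greyscale tonal range"
fi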

ImageMagick: get biggest square out of image

I have thousands of images of different sizes; I would like to get the biggest possible square out of each, without any transparent/black background. Of course, the image must not be distorted: if it's e.g. a landscape image, the full height should end up in the destination image but the left and right should be cropped; for portrait images, the other way round.
How's that possible?
I think you mean this. If you start with a landscape image bean.jpg:
magick bean.jpg -gravity center -extent "%[fx:h<w?h:w]x%[fx:h<w?h:w]" result.jpg
If you start with a portrait image, scooby.jpg:
magick scooby.jpg -gravity center -extent "%[fx:h<w?h:w]x%[fx:h<w?h:w]" result2.jpg
The part inside the double-quotes is the interesting part. It is basically setting the extent of the image, like:
-extent 100x100
where 100 is the width and the height. Rather than that though, I am using a calculated expression which tests whether the height (h) is less than the width (w) using a ternary operator. That results in taking whichever is smaller of the current height and width as the new height and width. So there are really two calculated expressions in there, with an x between them, similar to 100x100.
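If you want to see what the expression evaluates to for a particular image before cropping, you can print it on its own, e.g. with bean.jpg from above:
magick bean.jpg -format "%[fx:h<w?h:w]" info:
which prints the length of the shorter side, i.e. the side of the resulting square.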
Note that this method requires ImageMagick v7 or better - i.e. it uses the magick command rather than v6's convert command. If you have v6, you need to use more steps. First, get the width and the height of the image, then choose the smaller of the two and then issue a convert command with the gravity and extent both set. In bash:
# Get width and height
read w h < <(identify -format "%w %h" scooby.jpg)
# Check them
echo $w,$h
272,391
# Set `n` to lesser of width and height
n=$w
[ $h -lt $n ] && n=$h
# Now do actual crop
convert scooby.jpg -gravity center -extent "${n}x${n}" result.jpg
If you have thousands to do, I would suggest using GNU Parallel if you are on macOS or Linux. If you are on Windows, sorry, you'll need a loop and be unable to easily use all your CPU cores.
I have not tested the following, so only try it out on a small, COPIED, sample of a few files:
# Create output directory
mkdir output
# Crop all JPEG files, in parallel, storing result in output directory
parallel --dry-run magick {} -gravity center -extent "%[fx:h<w?h:w]x%[fx:h<w?h:w]" output/{} ::: *.jpg
If the commands look good, remove the --dry-run part to do it for real.
If you're using ImageMagick v7, Mark Setchell has provided a simple method above (or below). If you're using IMv6, you can crop the largest center square from any image using a command like this...
convert input.png -set option:distort:viewport "%[fx:min(w,h)]x%[fx:min(w,h)]" \
-distort affine "%[fx:w>h?(w-h)/2:0],%[fx:w<h?(h-w)/2:0] 0,0" output.png
That sets the output viewport size to the largest square you can crop from the input image. Then it adjusts the position of the input image so it is centered within that square viewport.
This command should work from a command prompt or script on most *nix systems. If you're using Windows, replace that continued line backslash "\" with a caret "^". If you're using a BAT script in Windows you'll also have to make all the single percent signs "%" into doubles "%%".
You can also simply change "convert" to "magick" to run this command using IMv7.
I find this easier to remember:
convert in.png -gravity Center -extent 1:1 out.png

How to detect frames to keep or discard based on luminosity levels?

I have a series of images from a slow-motion capture of pulsing electrical discharges. Many of the frames are nearly black. I would like to selectively keep the frames that are more interesting; e.g. those with more luminosity.
I've considered using ImageMagick or GraphicsMagick (or any other; not married to any tool - I'm up for more efficient suggestions).
How would I go about selecting such images and then discarding the other images without appreciable luminosity levels? I'm assuming that I have to establish a baseline first of "black" and then perhaps visually find the least luminous frame image and then use that as the lower limit to use for getting meaningful images / frames...
Example of DISCARD ("empty" frame):
Example of KEEP (frame with "data"):
I would suggest using ImageMagick to reduce the data to a monochrome binary image, Erode it to clean up noise, and print the statistical mean of the result.
convert 5HzsV.jpg -format "%[mean]" -monochrome -morphology Erode Diamond info:
# => 0
convert lLZFX.jpg -format "%[mean]" -monochrome -morphology Erode Diamond info:
# => 149.992
So a bash script might be as easy as...
for image in *.jpg
do
  L=$(convert "$image" -format "%[mean]" -monochrome -morphology Erode Diamond info:)
  # %[mean] is often fractional (e.g. 149.992), and [[ -gt ]] only handles
  # integers, so compare the integer part
  if [[ ${L%.*} -gt 0 ]]; then
    echo "Image $image is not empty! # $L"
  fi
done
Of course that can be adjusted to meet your needs.
The way images are encoded, you'll likely find that the 'interesting' images are bigger, because the uniformly dark background compresses better than a random spark. For instance, your empty Jpeg is 21K while the interesting one is 39K.
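If you want to exploit that, a rough file-size pre-filter is essentially free to run before any pixel-level check; the 30000-byte threshold below is purely illustrative (GNU stat shown first, with the BSD/macOS form as a fallback):
for f in *.jpg; do
    size=$(stat -c%s "$f" 2>/dev/null || stat -f%z "$f")
    [ "$size" -gt 30000 ] && echo "$f: probably worth keeping"
done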

Autolevels over an entire image while only sampling from a smaller area

I have some poorly scanned images (around 0.25M) that I need to adjust automatically. The outside edges (about 20% of the area) are throwing off the auto levels commands.
Is there a way to calculate the image histogram from a subset of the image data while executing autolevels over the entire area?
e.g.:
convert -samplearea [box] -autolevels infile.png
ImageMagick's -auto-level operator calculates the min/max of the image and passes them to the -level operator. Calculating what the auto-level values would be for a region of the image is just a matter of -crop, -format, and info:.
convert source.png -crop $geometry -format "%[min],%[max]" info:
where $geometry defines the region to sample.
So if I wanted to apply level correction with respect to an area in the top-left corner of the image, my commands would read:
# Grab the area to analyze (top-left, but not on edge)
geometry="50x50+5+5"
# Grab values
levels=$(convert source.jpeg -crop $geometry -format "%[min],%[max]" info:)
# Apply to whole image
convert source.jpeg -level $levels out.jpeg

Fitting images by specifying the new aspect ratio

Can ImageMagick's convert append white or black bars to maintain aspect ratio after specifying just the aspect ratio?
More concretely
Suppose I have a 2000x1000 widescreen image and I would like to compute a new image that has an aspect ratio of 4:3 to fit, say, a TV. I can do
convert input.png -background black -extent 2000x1500 -gravity center output.jpg
But here I have manually chosen 2000x1500 to produce an extra 250 pixels of black at the top and bottom. Can I ask convert to:
1. change the aspect ratio to 4:3
2. not lose any pixels; not interpolate any pixels
3. center the image?
If it's also possible to choose the background color as the dominant color in the image (as in iTunes 11), do mention how.
Convert does not have the built-in capability to pad an image out to a given aspect ratio, so you will need to script this. Here is how this might be done in bash:
#!/bin/bash -e
im="$1"
targetaspect="$2"
read W H <<< $(identify -ping -format "%w %h" "$im")
curaspect=$(bc <<< "scale=10; $W / $H")
echo "current-aspect: $curaspect; target-aspect: $targetaspect"
comparison=$(bc <<< "$curaspect > $targetaspect")
if [[ "$comparison" = "1" ]]; then
targeth=$(bc <<< "scale=10; $W / $targetaspect")
targetextent="${W}x${targeth%.*}"
else
targetw=$(bc <<< "scale=10; $H * $targetaspect")
targetextent="${targetw%.*}x$H"
fi
echo convert "$im" -background black \
-gravity center -extent "$targetextent" \
output.jpg
Call this script with the input image and the target aspect ratio given as a floating point number (for example, 4/3 = 1.333):
$ do-aspect input.png 1.333
Notes:
bc is used for floating point math, because bash itself has only integer arithmetic.
Note that -gravity center is on the final command line before -extent. This is because gravity is a setting while extent is an operator. Settings should always precede the operators that they affect, or else convert will do unexpected things when your commands start to get more complicated.
When you're happy with the results of the program, you can either copy and execute its output, or just remove the echo from before the final convert.
To your question about finding the dominant color of an image, there are different ways of doing that, but one common method is to resize the image to 1x1 and output the color of the resultant pixel - it will be the average color of the whole image:
convert input.png -resize 1x1 -format '%[pixel:p[0,0]]' info:
You can do other processing on the image before resizing to get a different heuristic.
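For example, one common variant is to quantize the image down to a small palette and take the most frequent histogram entry, which gives a dominant rather than an average colour; the palette size of 5 here is arbitrary:
convert input.png +dither -colors 5 -format %c histogram:info: | sort -rn | head -1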
