ImageMagick - calculate perspective without knowing image dimensions

How can I get the maximum width and height of an image, perform some maths on it, then use it in my perspective distortion?
I have a bunch of images to which I want to apply a perspective distortion.
The only problem is, each image is a different size.
This code works on an image where I know the size (1440 * 900).
convert test.jpg -matte \
-virtual-pixel transparent \
-distort Perspective '0,0 75,0 \
0,900 0,450 \
1440,0 1440,200 \
1440,900 1200,900' \
distorted.jpg
I know I can get the maximum values by using %h and %w - but I can't find a way to multiply those numbers.
Essentially, what I want to do is define the points like this:
-distort Perspective '0,0 75,0 \
0,%h 0,(%h/2) \
%w,0 %w,200 \
%w,%h (%w*0.75),%h'
For bonus points, I'd like to be able to call the perspective using -distort Perspective '@points.txt'

You can use ImageMagick's built-in fx operator to do maths for you, without involving bash mathematics, bc or eval.
Like this:
persp=$(convert image.jpg -format "0,0 75,0 0,%h 0,%[fx:int(h/2)] %w,0 %w,200 %w,%h %[fx:int(w*0.75)],%h" info:)
echo $persp
0,0 75,0 0,900 0,450 1440,0 1440,200 1440,900 1080,900
Then do:
convert image.jpg ... -distort Perspective "$persp" ... distorted.jpg
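Since there are a whole bunch of differently-sized images, the two steps can be wrapped in a small shell loop. A minimal sketch, assuming the inputs are the JPEGs in the current directory and that output names of the form distorted-foo.jpg (my invention, not from the question) are acceptable:
# compute per-image control points with fx, then apply the distortion
for f in *.jpg; do
  persp=$(convert "$f" -format "0,0 75,0 0,%h 0,%[fx:int(h/2)] %w,0 %w,200 %w,%h %[fx:int(w*0.75)],%h" info:)
  convert "$f" -matte -virtual-pixel transparent -distort Perspective "$persp" "distorted-$f"
done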
Oh, for those bonus points... ;-)
convert image.jpg -format "0,0 75,0 0,%h 0,%[fx:int(h/2)] %w,0 %w,200 %w,%h %[fx:int(w*0.75)],%h" info: > points.txt
convert image.jpg ... -distort Perspective @points.txt distorted.jpg

Related

How to preserve red color only using ImageMagick command line

I have the following image:
What I want is to preserve only red color and desaturate
every other color into grayscale. Resulting in this:
How can I do that with the ImageMagick command line?
I tried this but failed:
convert original.png \( -clone 0 -transparent red -alpha extract -transparent black \) redonly.png
Here is one way, though not perfect, using ImageMagick. I specify hue=0 degrees for red and a tolerance of 25 degrees, both in the range 0 to 360.
Input:
hue=0          # target hue in degrees (0 = red)
tolerance=25   # hue tolerance in degrees (range 0-360)
toler=`convert xc: -format "%[fx:(100*$tolerance/360)]" info:`    # tolerance rescaled from degrees to a percentage of the hue circle (not used in the command below)
hueval=`convert xc: -format "%[fx:50-$hue]" info:`                # shift applied to the hue channel with AddModulus below
thresh=`convert xc: -format "%[fx:100-$tolerance]" info:`         # threshold percentage used in the -threshold step below
convert tomato.jpg \
\( -clone 0 -colorspace gray -colorspace sRGB \) \
\( -clone 0 -colorspace HSL -channel 0 -separate +channel \
-evaluate AddModulus ${hueval}% \
-solarize 50% -level 0x50% \
-threshold $thresh% \) \
-swap 0,1 -alpha off -compose over -composite \
result.jpg
Also not perfect, but fairly easy to understand. Basically, you could use the fx operator to inspect the Hue of each pixel and, depending on its Hue/colour, return either the original pixel or its greyscale equivalent.
So, as a first stab, you might do this to replace all pixels exhibiting a high Hue value with their greyscale (lightness) equivalent:
magick ripe.jpg -fx "(u.hue<0.1)? u : u.lightness" result.jpg
Then you might realise that red Hues wrap around at 0/360 degrees on the Hue circle, so you could do:
magick ripe.jpg -fx "(u.hue<0.1)||(u.hue>0.9)? u : u.lightness" result.jpg
Explanation
There are a couple of things going on here. Firstly, the -fx operator is a very low-level, extremely powerful (and sadly rather slow because it is interpreted) way of running a piece of "code" on every pixel in the image. Secondly, I am running a ternary operator with the format:
condition? valueA : valueB
so I am testing a condition for every pixel: if true I return valueA, and if false I return valueB. When I refer to u, u.hue and u.lightness, the u means the first image in my command - I could load two images and use features of the first to select features of the second, in which case I would use u and v to differentiate them. Finally, the values are scaled onto the range [0,1], so I don't test for "Hue>350 in the range [0,360]"; instead I test for Hue>0.9 as a sloppy equivalent - I could have used Hue>(350/360) for the exact value. Note that you can make the expression arbitrarily complicated, and also put it in a separate file and re-use it like this:
magick ripe.jpg -fx @DeSatNonRed.fx result.jpg
DeSatNonRed.fx might look something like this:
hueMin=350/360;
hueMax=20/360;
(hue>hueMin) || (hue<hueMax) ? u : u.lightness
Note that, in the general case, you should also really consider Saturation when looking at Hues, which you can add in above, but which I omitted for clarity, and because your image is almost fully saturated anyway.
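As a rough sketch of that idea, you could gate the hue test on saturation and use the exact fractions at the same time - the 0.2 saturation floor is just an assumed starting point to tune for your own images:
# 0.2 is an assumed minimum saturation, adjust to taste
magick ripe.jpg -fx "(u.saturation>0.2) && ((u.hue<20/360)||(u.hue>350/360)) ? u : u.lightness" result.jpg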
Keywords: Image processing, ImageMagick, low-level fx operator, ternary, pixel-by-pixel, evaluate, expression.

ImageMagick convert: naming tiles as row/column doesn't work as expected with -extent

I have an image, 5120x4352, that I crop into 2048x2048 tiles. I want to name my cropped tiles like this:
tile_0_0.png
tile_0_1.png
tile_0_2.png
tile_1_0.png
tile_1_1.png
tile_1_2.png
...
But this command:
convert image.png -crop 2048x2048 -gravity northwest \
-extent 2048x2048 -transparent white \
-set 'filename:tile' '%[fx:page.x/2048]_%[fx:page.y/2048]' \
+repage +adjoin 'tile_%[filename:tile].png'
Gives me this result:
tile_0_0.png
tile_0_1.png
tile_0_16.png
tile_1_0.png
tile_1_1.png
tile_1_16.png
tile_4_0.png
tile_4_1.png
tile_4_16.png
I suspect it has to do with the tiles in the last row and column not being fully 2048x2048. The -extent option makes the end result 2048 anyway, but how can I use it together with the tiling and the file names?
My current workaround is to first resize the original image like this, and then run the above command:
convert image.png -gravity northwest \
-extent 2048x2048 -transparent white bigger.png
But it would be nice to do it in one swoop :)
Using ImageMagick you could set a viewport that is just enough larger than the input image so it divides evenly by 2048. Then a no-op distort will enlarge the viewport to that size. That way the "-crop 2048x2048" will create pieces that are already 2048 square.
Here's a sample command I worked up in Windows, and I'm pretty sure I translated it to work correctly as a *nix command.
convert image.png \
-set option:distort:viewport '%[fx:w-(w%2048)+2048]x%[fx:h-(h%2048)+2048]' \
-virtual-pixel none -distort SRT 0 +repage -crop 2048x2048 \
-set 'filename:tile' '%[fx:page.x/2048]_%[fx:page.y/2048]' \
+repage +adjoin 'tile_%[filename:tile].png'
The "-distort SRT" operation does nothing except expand the viewport to dimensions that divide evenly by 2048, with a result just like doing an "-extent" before the crop. And "-virtual-pixel none" will leave a transparent background in the overflow areas.
Edited to add: The formula for extending the viewport in the above command will incorrectly add another 2048 pixels even if the dimension is already divisible by 2048. It also gives an incorrect result if the dimension is less than 2048. Consider using a formula like this for setting the viewport to handle those conditions...
'%[fx:w+(w%2048?2048-w%2048:0)]x%[fx:h+(h%2048?2048-h%2048:0)]'
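For completeness, dropping that corrected formula into the earlier command would look something like this (untested sketch, same tile size and naming as above):
# same as the command above, with the improved viewport formula
convert image.png \
-set option:distort:viewport '%[fx:w+(w%2048?2048-w%2048:0)]x%[fx:h+(h%2048?2048-h%2048:0)]' \
-virtual-pixel none -distort SRT 0 +repage -crop 2048x2048 \
-set 'filename:tile' '%[fx:page.x/2048]_%[fx:page.y/2048]' \
+repage +adjoin 'tile_%[filename:tile].png'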

Batch trim noisy images

I have a massive set of noisy images of drawings that people have created. I'd like to have some function to trim them down to ONLY the drawing.
Here are some examples:
Because of the noise, -trim doesn't work.
I also tried to use the example linked here (www.imagemagick.org/Usage/crop/#trim_blur), but it was ineffective because of differing noise levels both within and between images.
Lastly, I tried to increase the contrast to increase the likelihood of the lines of the actual drawing being identified, but for similar reasons to the above (differing noise levels), it only sharpened the lines in part of each image.
If anyone has any ideas, I'd love to hear them!
Not sure if this will work for all your images, as there are quite a few problems with them:
artefacts around the edges
uneven lighting, or shadows
noise
low-contrast
but you should get some ideas for addressing some of the issues.
To get rid of the artefacts around the edge, you could reduce the extent of the image by 2.5% on all sides - essentially a centred crop, like this:
convert noisy1.jpg -gravity center -extent 95x95% trimmed.png
To see the shadows/uneven lighting, I will normalise your image to a range of solid black to solid white and you will see the shadow at bottom left:
convert noisy1.jpg -normalize result.png
To remove this, I would clone your image and calculate the low frequency average over a larger area and then subtract that so that slowly changing things are removed:
convert noisy1.jpg \( +clone -statistic mean 25x25 \) -compose difference -composite -negate result.png
That gives this, and then you can try normalising it yourself to see that the shadow is gone:
If I now apply a Canny Edge Detection to that, I get this:
convert noisy1.jpg \( +clone -statistic mean 25x25 \) -compose difference -composite -normalize -negate -canny 0x1+10%+30% result.png
Here is a very crude, but hopefully effective, little script to do the whole lot. It doesn't do any checking of parameters. Save as $HOME/cropper.
#!/bin/bash
# Crude cropper - no parameter checking. Usage: cropper image.jpg -> writes cropped-image.jpg
src=$1
dst="cropped-$1"
tmp="tmp-$$.mpc"
# Centre-crop to 95%, save that intermediate to a temporary MPC file, then flatten the
# lighting (difference with a 25x25 local mean), edge-detect with Canny and ask
# ImageMagick for the trim bounding box (%@) of the edge image
trimbox=$(convert "$src" -gravity center -extent 95x95% -write "$tmp" \( +clone -statistic mean 25x25 \) -compose difference -composite -normalize -negate -canny 0x1+10%+30% -format %@ info:)
# Crop the saved intermediate to that bounding box
convert "$tmp" -crop $trimbox "$dst"
rm tmp-$$.*
Make the script executable with:
chmod +x $HOME/cropper
And run with a single image like this:
cd /path/to/some/images
$HOME/cropper OneImage.jpg
If you have hundreds of images, I would make a backup first, and then do them all in parallel with GNU Parallel
parallel $HOME/cropper {} ::: *.jpg
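If GNU Parallel isn't installed, a plain (serial) shell loop does the same job:
for f in *.jpg; do $HOME/cropper "$f"; done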

Removing background using imagemagick, on a white product

Here's the original image I'm trying to remove background from:
I am trying to use ImageMagick to remove the background from an image. When the image has a white product, my script doesn't work well: it removes the white from inside the product as well. In brief, I'm trying to do the following:
create a mask image (replace background pixels with white with fuzz and threshold)
apply the mask over the original image to generate the output
If I use a fuzz factor of 0, as shown below, I get the background removed, but it creates a nasty halo around it. What can be done here?
I would take advantage of HSL colorspace, and create an alpha mask from the lightness channel.
convert tshirt.jpg \( \
+clone -colorspace HSL -separate \
-delete 0,1 -fx 'u>0.975?0:1' \) \
-compose CopyOpacity -composite \
out.png
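Incidentally, the -fx step above is just a hard threshold on the lightness channel, so an equivalent and usually faster variation would be to threshold and negate instead - a sketch, not tested against the original image:
# -threshold 97.5% -negate replaces the -fx 'u>0.975?0:1' step
convert tshirt.jpg \( \
+clone -colorspace HSL -separate \
-delete 0,1 -threshold 97.5% -negate \) \
-compose CopyOpacity -composite \
out.png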
I would go for a simple threshold to pick out the white, and then some sort of filtering to remove the noise/ragged edges. So, for example:
convert shirt.jpg -threshold 99.99% -negate result.jpg
which gives this:
Then apply some median filtering to smooth it:
convert shirt.jpg -threshold 99.99% -median 5 -negate result.jpg
or maybe a bigger filter:
convert shirt.jpg -threshold 99.99% -median 11 -negate result.jpg
which gives this
Alternatively, you may get on better with an erosion and a dilation...
convert shirt.jpg -threshold 99.99% -negate \
-morphology erode diamond:3 \
-morphology dilate diamond:3 result.jpg
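Whichever mask you settle on, it still needs to be copied back onto the original as its alpha channel. A sketch using the median-filtered mask from above, written to PNG (my choice, since JPEG cannot store transparency):
# build the mask in-line and copy it into the alpha channel of the original
convert shirt.jpg \( +clone -threshold 99.99% -median 5 -negate \) \
-alpha off -compose CopyOpacity -composite result.png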
You may like to use Anthony Thyssen's flicker_compare script to flicker between the input and result images to see what you have got.
./flickercompare -o flick.gif shirt.jpg result.jpg

Perspective distortion rendering - ImageMagick

Whenever we apply some transformation using the ImageMagick convert command, it tries to ensure that the resulting image is the same size as the original image. Is there a way to get the whole rendered image, with a transparent/white background?
convert -verbose maanavulu_GIST-TLOTKrishna.tif \
-alpha set -matte -virtual-pixel transparent \
-distort perspective-projection '1.06,0,0.0,0,2.066,0.0,0.0,0.0' \
1.jpg
There are some tricks with fx & repage hinted at in the Distorting Usage documentation. I've found the easiest approach is to set the distort:viewport option to something large enough to capture the whole distortion, then -trim the result down to the final size.
convert -verbose maanavulu_GIST-TLOTKrishna.tif \
-alpha set -matte -virtual-pixel transparent \
-set option:distort:viewport 1000x1000 \
-distort perspective-projection '1.06,0,0.0,0,2.066,0.0,0.0,0.0' \
-trim 1.jpg
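Note that JPEG has no alpha channel, so writing 1.jpg throws away the transparent background again. A sketch that keeps it, assuming PNG output is acceptable, with +repage to reset the canvas offset after the trim:
# same projection as above, but PNG output preserves the transparency
convert -verbose maanavulu_GIST-TLOTKrishna.tif \
-alpha set -virtual-pixel transparent \
-set option:distort:viewport 1000x1000 \
-distort perspective-projection '1.06,0,0.0,0,2.066,0.0,0.0,0.0' \
-trim +repage 1.png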
