Avoid double optimization of images - ImageMagick

We have an image store where some images are optimized and some are not.
I'm working on a script that will go through every image in the store and run the optimization process.
I'm wondering:
Is there a way to check whether an image has already been optimized?
Will I lose quality if an image has been optimized with -quality 85% a few times?
I have tried running -quality 85% a few times on the same image and could not see any loss in quality (after the third run the image's size no longer changed). However, I could not find confirmation of this in the official documentation.
Thanks!

You can check if the quality setting is already 75 before optimising:
identify -format %Q fred.jpg
92
Then optimise and check again:
convert fred.jpg -quality 75 optimised.jpg
identify -format %Q optimised.jpg
75
If you are using bash, this is easy:
q=$(identify -format %Q fred.jpg)
[ $q -ne 75 ] && mogrify -quality 75 fred.jpg
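For a whole folder of JPEGs, that check can be wrapped in a loop, something like this (a sketch assuming the images sit in the current directory):
for f in *.jpg; do
    q=$(identify -format %Q "$f")
    [ "$q" -ne 75 ] && mogrify -quality 75 "$f"
done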
Another way to mark images as optimised might be to set a comment in the file to the word "optimised", like this:
# Set comment
mogrify -set comment "optimised" fred.jpg
# Get comment
identify -format %c fred.jpg
optimised
So, you would test if an image comment contains the word "optimised", and if not, optimise the image and change the comment to show as much:
[[ $(identify -format %c fred.jpg) != *optimised* ]] && { echo Optimising fred.jpg; mogrify -set comment "optimised" -quality 75 fred.jpg; }
Another possibility might be to use Extended File Attributes to tag files (with setfattr) as you reduce their quality, and then use getfattr to check for that tag rather than processing them again.
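That might look something like this, a rough sketch assuming a Linux filesystem with the attr tools installed and using a made-up attribute name user.optimised:
for f in *.jpg; do
    # Skip files already tagged with the user.optimised attribute
    if ! getfattr -n user.optimised --only-values "$f" >/dev/null 2>&1; then
        mogrify -quality 75 "$f"
        # Tag after optimising, because mogrify rewrites the file
        setfattr -n user.optimised -v yes "$f"
    fi
done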
If you had hundreds or thousands of images to process, I would suggest GNU Parallel. Please try the example below on a small copy of a few files in a directory:
parallel '[[ $(identify -format "%c" {}) != *optimised* ]] && { echo Optimising {}; mogrify -set comment "optimised" -quality 75 {}; }' ::: *jpg
You will find it will process them all in parallel the first pass, and then do nothing on the second pass.
If you have too many images and get "ERROR: Too many args", you can pass the filenames in on stdin instead of globbing after the :::, like this:
find /some/where -iname \*.jpg -print0 | parallel -0 ...
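Put together, that is the same command as above, just fed from find instead of a glob:
find /some/where -iname \*.jpg -print0 | parallel -0 '[[ $(identify -format "%c" {}) != *optimised* ]] && { echo Optimising {}; mogrify -set comment "optimised" -quality 75 {}; }'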

Related

How can I merge images from two folders together side-by-side with ImageMagick?

I have two folders, A and B, with image files that have corresponding names.
For example, each contains files labelled 01.png, 02.png, 03.png, etc.
How can I merge the corresponding files so that I end up with a third folder C containing all the merged photos, with the two originals side by side?
I am on Linux, if that changes anything.
I am not near a computer to thoroughly test, but this seems easiest to me:
#!/bin/bash
# Goto directory A
cd A
# For each file "f" in A
for f in *.png; do
    # Append corresponding file from B and write to AB
    convert "$f" ../B/"$f" +append ../AB/"$f"
done
Or use GNU Parallel and do them all at once!
cd A
parallel convert {} ../B/{} +append ../AB/{} ::: *.png
Using ImageMagick version 6, if your images are all the same dimensions, and if your system memory can handle reading all the input images into a single command, you can do that with a command like this...
convert FolderA/*.jpg -set filename:f "%[f]" \
-set option:distort:viewport %[fx:w*2] -distort SRT 0 null: \
FolderB/*.jpg -gravity east -layers composite FolderC/"%[filename:f]"
That starts by reading in all the images from FolderA and extending their viewport to double their width to the right.
Then it adds the special built-in "null:" to separate the lists of images before reading in the second list. Then it reads in all the images from FolderB.
Then after setting the gravity to "east", it composites each image from FolderB over the extended right half of each corresponding image from FolderA. That creates the effect of appending the images side by side.
The command sets a variable at the beginning to hold the filenames of the first list of input files, then uses those as the names of the output files and writes them to FolderC.
If you're using ImageMagick version 7, use the command "magick" instead of "convert".
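For example, the command above becomes:
magick FolderA/*.jpg -set filename:f "%[f]" \
    -set option:distort:viewport %[fx:w*2] -distort SRT 0 null: \
    FolderB/*.jpg -gravity east -layers composite FolderC/"%[filename:f]"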
You can do that with some bash scripting code. Assume you have two folders A and B with the corresponding image names in them. Also you have an empty folder AB to hold the results. Then using ImageMagick with the bash looping code, you can do something like this:
Collect the names of all the files in folder A and put into an array
Collect the names of all the files in folder B and put into an array
Loop over the number of images in the folders
Process them with ImageMagick +append and save to folder AB
outdir="/Users/fred/desktop/AB"
aArr=(`find /Users/fred/desktop/A -type f \( -iname "*.jpg" -o -iname "*.png" \)`)
numA="${#aArr[*]}"
bArr=(`find /Users/fred/desktop/B -type f \( -iname "*.jpg" -o -iname "*.png" \)`)
numB="${#bArr[*]}"
if [ $numA -eq $numB ]; then
   for ((i=0; i<numA; i++)); do
      # %t gives the filename without directory or suffix
      nameA=`convert "${aArr[$i]}" -format "%t" info:`
      nameB=`convert "${bArr[$i]}" -format "%t" info:`
      convert "${aArr[$i]}" "${bArr[$i]}" +append ${outdir}/${nameA}_${nameB}.jpg
   done
fi

Can ImageMagick be prevented from overwriting an existing image?

When converting an image, ImageMagick's default behavior seems to be to overwrite any existing file. Is it possible to prevent this? I'm looking for something similar to Wget's --no-clobber download option. I've gone through ImageMagick's list of command-line options, and the closest option I could find was -update, but this can only detect if an input file is changed.
Here's an example of what I'd like to accomplish: On the first run, convert input.jpg output.png produces an output.png that does not already exist, and then on the second run, convert input.jpg output.png detects that output.png already exists and does not overwrite it.
Just test if it exists first, assuming bash:
[ ! -f output.png ] && convert input.jpg output.png
Or slightly less intuitively, but shorter:
[ -f output.png ] || convert input.jpg output.png
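If you are converting a whole batch, the same test drops straight into a loop, something like this (a sketch assuming .jpg inputs in the current directory):
for f in *.jpg; do
    out="${f%.jpg}.png"
    [ -f "$out" ] || convert "$f" "$out"
done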
Does something like this solve your problem?
It will write to output.png, but if the file already exists a new file will be created with a random 5-character suffix (e.g. output-CKYnY.png, then output-hSYZC.png, etc.).
convert input.jpg -resize 50% $(if test -f output.png; then echo "output-$(head -c16 /dev/urandom | base64 | tr -dc 'A-Za-z0-9' | head -c5).png"; else echo "output.png"; fi)

Add Ken Burns effect on video from list of images

I have created a video from a list of images using ffmpeg:
system("ffmpeg -framerate 1 -pattern_type glob -i '*.jpg' -c:v libx264 out.mp4")
Now I want to add a Ken Burns effect. Can I do it with ffmpeg, ImageMagick, or any other command-line tool on Linux?
I can't speak to Ruby on Rails, Linux, or ffmpeg. But if you would like to create the panning effect made popular by Ken Burns, you would extract regions of an image and animate them together.
#!/bin/bash
# A 16:10 ratio
WIDTH=64
HEIGHT=40
# Extract parts of an image with -extent operator
for index in $(seq 40)
do
    TOP=$(expr 120 + $index)
    LEFT=$(expr 150 + $index)
    FILENAME=$(printf /tmp/wizard-%02d.jpg $index)
    convert wizard: -extent "${WIDTH}x${HEIGHT}+${LEFT}+${TOP}" $FILENAME
done
# Replace this with your ffmpeg script
SLICES=$(ls /tmp/wizard-*.jpg)
RSLICES=$(ls /tmp/wizard-*.jpg | sort -rn)
convert $SLICES $RSLICES -set delay 15 -layers Optimize /tmp/movie.gif
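If you would rather produce an MP4 than a GIF, the extracted frames can be fed back to ffmpeg much as in the question; a minimal sketch (tweak the framerate and codec options to taste):
ffmpeg -framerate 25 -start_number 1 -i /tmp/wizard-%02d.jpg -c:v libx264 -pix_fmt yuv420p /tmp/movie.mp4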
Edited by Mark Setchell beyond this point... (just trying to help out)
Much as I hate editing other people's posts, the first part of Eric's code can equally be written this way if you find that easier to understand:
# Extract parts of an image with -extent operator
for index in {1..40}
do
    ((TOP=120 + $index))
    ((LEFT=150 + $index))
    FILENAME=$(printf /tmp/wizard-%02d.jpg $index)
    convert wizard: -extent "${WIDTH}x${HEIGHT}+${LEFT}+${TOP}" $FILENAME
done

Capture information on images generated by ImageMagick

I'm considering using ImageMagick to extract images from the individual pages of a PDF file. How can I capture the names of the files it generates? It seems that the -verbose option includes information on the files being generated, but is that a reliable way of gathering that? Any other alternatives?
Depending on what you want to do, I think you have at least 3 options...
Option 1
If you want to know the number of pages up front, a priori, you can get ImageMagick to tell you like this:
identify -format %n FreddyFrog.pdf
8
or if you want it in a variable,
pages=$(identify -format %n FreddyFrog.pdf)
echo $pages
8
Option 2
You can tell ImageMagick how to format the names of the output files, like this, where the %04d says to use 4 digits and front-pad with zeroes:
convert -density 72 FreddyFrog.pdf FreddyFrog-%04d.tif
Then you will automatically know the names of the output files.
ls FreddyFrog-*
FreddyFrog-0000.tif FreddyFrog-0003.tif FreddyFrog-0006.tif
FreddyFrog-0001.tif FreddyFrog-0004.tif FreddyFrog-0007.tif
FreddyFrog-0002.tif FreddyFrog-0005.tif
Option 3
You can use the shell to draw a line in the sand before you convert the files and then find everything newer afterwards:
> before # or "touch before" if you prefer
convert FreddyFrog..... # strut your IM stuff
for f in *; do [[ "$f" -nt before ]] && echo $f; done # list files newer than line in sand
rm before # clean up
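If you prefer, find can make the same comparison with its -newer test:
find . -maxdepth 1 -type f -newer before -print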

ImageMagick pdf to black and white pdf

I would like to convert a pdf file to a Black and White PDF file with ImageMagick. But I've got two problems:
I use this command:
convert -colorspace Gray D:\in.pdf D:\out.pdf
But this command converts only the FIRST page... How can I convert all pages?
After using this command the resolution is terrible... but if I use the -density 300 option the file size more than doubles. So I would like to use the same DPI setting as the original, but how?
Thanks a lot
Assuming you have all the necessary command line tools installed you can do the following:
Split and join PDF using pdfseparate and pdfunite (Poppler tools).
Extract the original density using pdfinfo plus grep/egrep and, for instance, sed. This will not guarantee the same size of the PDF file, just the same DPI.
Putting it all together you can have a series of bash commands as following:
pdfseparate in.pdf temp-%d.pdf; for i in $(seq $(ls -1 temp-*.pdf | wc -l)); do mv temp-$i.pdf temp-$(printf %03d $i).pdf; done
for f in temp-*.pdf; do convert -density $(pdfinfo $f | egrep -o 'Page size:[[:space:]]*[0-9]+(\.[0-9]+)?[[:space:]]*x[[:space:]]*[0-9]+(\.[0-9]+)?' | sed -e 's/^Page size:\s*//'| sed -e 's/\s*x\s*/x/') -colorspace Gray {,bw-}$f; done
pdfunite bw-temp-*.pdf out.pdf
rm {bw-,}temp-*.pdf
Note 1: there is a dirty workaround (for/wc/seq/printf) for proper ordering of 10-999 page PDFs (I did not figure out how to make pdfseparate emit leading zeros).
Note 2: I guess ImageMagick treats each PDF page as just another raster image, so for mostly-text files this will result in huge PDFs. Thus, this is a very bad method for converting text-based PDFs to B&W.
