Script for formatting image's EXIF output from Imagemagick - imagemagick

I need to get a string that consists of the information from the image's EXIF meta data. For example, I would like:
CameraModel, Focal Length in 35mm Format: 24mm, F11, 1/500, ISO 200
I could see all the information present from
identify -format '%[EXIF:*]' image.jpg
However, I'm having trouble consolidating the output and generating that string.
The first problem: while '%[EXIF:*]' prints all the EXIF data, replacing the star with a specific EXIF tag prints nothing. I know I could print all the EXIF data a few times and use grep to pull out each tag I need, then combine them, but it feels better to retrieve just the value I'm looking for.
The second problem: the aperture value FNumber comes out in a form like "63/10", but I need it as 6.3; annoyingly, the shutter speed ExposureTime comes out like 10/5000 and I need it as 1/500. What conversion do I need in each case?
Thanks!

Here is something to get you started. It uses awk to look for the EXIF keywords and save the corresponding settings as they go past; at the END it prints everything it found:
#!/bin/bash
identify -format '%[EXIF:*]' a.jpg | awk -F= -v OFS=, '
/^exif:Model/                 {model=$2}
/^exif:FocalLengthIn35mmFilm/ {focal=$2}
/^exif:FNumber/               {f=$2}
/^exif:ExposureTime/          {t=$2}
/^exif:ISOSpeedRatings/       {ISO=$2}
END {
    # If FNumber is a rational number (e.g. 63/10), convert it to decimal
    n=split(f,k,"/")
    if(n==2){
        f=sprintf("%.1f",k[1]/k[2])
    }
    # If ExposureTime is a rational number (e.g. 10/5000), normalise it to 1/N
    n=split(t,k,"/")
    if(n==2){
        m=int(k[2]/k[1])
        t=sprintf("1/%d",m)
    }
    print model,focal,f,t,ISO
}'
Sample Output
iPhone 4,35,2.8,1/914,80
I have not tested the conversion from rational numbers too extensively... between the festive whiskies...
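To sanity-check the two rational conversions without an image to hand, the same awk arithmetic can be run on the sample values from the question (63/10 and 10/5000); this is just a standalone sketch of the logic above:

```shell
# FNumber 63/10 should become F6.3; ExposureTime 10/5000 should become 1/500
printf '63/10\n10/5000\n' | awk -F/ '
NR==1 {printf "F%.1f\n", $1/$2}      # decimal aperture
NR==2 {printf "1/%d\n", int($2/$1)}  # reciprocal shutter speed
'
```

This prints F6.3 and 1/500, matching what the END block computes.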

Even simpler is to use EXIFTOOL directly.
infile="P1050001.JPG"
exiftool -model -FocalLengthIn35mmFormat -FNumber -ExposureTime -ISO "$infile"
Camera Model Name : DMC-FZ30
Focal Length In 35mm Format : 35 mm
F Number : 2.8
Exposure Time : 1/8
ISO : 200
or
infile="P1050001.JPG"
exiftool -csv -model -FocalLengthIn35mmFormat -FNumber -ExposureTime -ISO "$infile"
P1050001.JPG,DMC-FZ30,35 mm,2.8,1/8,200

jzxu wrote: This is fine, too; however, I noticed the parameter
-csv. How do I get rid of the commas and replace them with spaces?
One way on Unix is simply to use tr to replace the commas with spaces as follows:
exiftool -csv -model -FocalLengthIn35mmFormat -FNumber -ExposureTime -ISO "$infile" | tr "," " "
SourceFile Model FocalLengthIn35mmFormat FNumber ExposureTime ISO
P1050001.JPG DMC-FZ30 35 mm 2.8 1/8 200
Or if you do not want the header line:
exiftool -csv -model -FocalLengthIn35mmFormat -FNumber -ExposureTime -ISO "$infile" | tail -n +2 | tr "," " "
P1050001.JPG DMC-FZ30 35 mm 2.8 1/8 200
There may be other internal EXIFTOOL formatting options. See https://sno.phy.queensu.ca/~phil/exiftool/exiftool_pod.html
I am not an expert on EXIFTOOL, so perhaps I have missed something, but I do not see a space-delimited output format. However, the following gives tab-delimited output:
infile="P1050001.JPG"
exiftool -s3 -T -model -FocalLengthIn35mmFormat -FNumber -ExposureTime -ISO "$infile"
DMC-FZ30 35 mm 2.8 1/8 200
So one could use tr to replace tabs with spaces.
exiftool -s3 -T -model -FocalLengthIn35mmFormat -FNumber -ExposureTime -ISO "$infile" | tr "\t" " "
DMC-FZ30 35 mm 2.8 1/8 200
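The tr substitutions above can be sanity-checked without exiftool installed by piping a literal line of its CSV output through them (the line here is copied from the sample output above):

```shell
# Mock exiftool -csv output line; tr turns every comma into a space
echo 'P1050001.JPG,DMC-FZ30,35 mm,2.8,1/8,200' | tr ',' ' '
```

Note that tr replaces every comma, so this only behaves as expected when no field itself contains a comma.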

This works fine for me in Imagemagick 6.9.9.29 Q16 Mac OSX.
infile="P1050001.JPG"
cameramodel=`identify -ping -format "%[EXIF:model]" "$infile"`
focallength35=`identify -ping -format "%[EXIF:FocalLengthIn35mmFilm]" "$infile"`
fnumber1=`identify -ping -format "%[EXIF:FNumber]" "$infile" | cut -d/ -f1`
fnumber2=`identify -ping -format "%[EXIF:FNumber]" "$infile" | cut -d/ -f2`
exptime1=`identify -ping -format "%[EXIF:ExposureTime]" "$infile" | cut -d/ -f1`
exptime2=`identify -ping -format "%[EXIF:ExposureTime]" "$infile" | cut -d/ -f2`
isospeed=`identify -ping -format "%[EXIF:ISOSpeedRatings]" "$infile"`
fnumber=`echo "scale=1; $fnumber1/$fnumber2" | bc`
exptime=`echo "scale=3; $exptime1/$exptime2" | bc`
echo "CameraModel=$cameramodel, FocalLengthIn35mmFilm: $focallength35 mm, F$fnumber, $exptime sec, ISO $isospeed"
CameraModel=DMC-FZ30, FocalLengthIn35mmFilm: 35 mm, F2.8, .125 sec, ISO 200
Or alternately,
infile="P1050001.JPG"
declare `convert -ping "$infile" -format "cameramodel=%[EXIF:model]\n focallength35=%[EXIF:FocalLengthIn35mmFilm]\n fnumber=%[EXIF:FNumber]\n exptime=%[EXIF:ExposureTime]\n isospeed=%[EXIF:ISOSpeedRatings]\n" info:`
fnumber1=`echo $fnumber | cut -d/ -f1`
fnumber2=`echo $fnumber | cut -d/ -f2`
fnumber=`echo "scale=1; $fnumber1/$fnumber2" | bc`
exptime1=`echo $exptime | cut -d/ -f1`
exptime2=`echo $exptime | cut -d/ -f2`
exptime=`echo "scale=0; $exptime2/$exptime1" | bc`
echo "CameraModel=$cameramodel, FocalLengthIn35mmFilm: $focallength35 mm, F$fnumber, 1/$exptime sec, ISO $isospeed"
CameraModel=DMC-FZ30, FocalLengthIn35mmFilm: 35 mm, F2.8, 1/8 sec, ISO 200

Related

Batch append images in groups of two with Imagemagick

I have a directory of images and need to merge those images horizontally in groups of two, then save the output of each to a new image file:
image-1.jpeg
image-2.jpeg
image-3.jpeg
image-4.jpeg
image-5.jpeg
image-6.jpeg
Using Imagemagick via command line, is there a way to loop through every other image in a directory and run magick convert image-1.jpeg image-2.jpeg +append image-combined-*.jpg?
So the result would be combined pairs of images:
image-1.jpeg image-2.jpeg -> image-combined-1.jpg
image-3.jpeg image-4.jpeg -> image-combined-2.jpg
image-5.jpeg image-6.jpeg -> image-combined-3.jpg
Get them all appended succinctly and in parallel with GNU Parallel and actually use all those lovely CPU cores you paid Intel for!
parallel -N2 convert {1} {2} +append combined-{#}.jpeg ::: *jpeg
where:
-N2 says to take two files at a time
{1} and {2} are the first two parameters
{#} is the sequential job number, and
::: demarcates the start of the parameters
If your CPU has 8 cores, GNU Parallel will run 8 converts at once, unless you limit it, say to 4 jobs at a time, by adding -j4.
If you are learning and just finding your way with GNU Parallel add:
--dry-run so you can see what it would do without actually doing anything
-k to keep the outputs in order
So, I mean:
parallel --dry-run -k -N2 convert {1} {2} +append combined-{#}.jpeg ::: *jpeg
Sample Output
convert image-1.jpeg image-2.jpeg +append combined-1.jpeg
convert image-3.jpeg image-4.jpeg +append combined-2.jpeg
convert image-5.jpeg image-6.jpeg +append combined-3.jpeg
On macOS, you can simply install GNU Parallel with:
brew install parallel
If you have thousands, or hundreds of thousands of files, you may run into an error Argument list too long - although this is pretty rare on macOS because the limit is 262,144 characters:
sysctl kern.argmax
kern.argmax: 262144
If that happens, you can use this syntax to pipe the filenames in GNU Parallel instead:
find /somewhere -iname "*.jpeg" -print0 | parallel -0 -N2 convert {1} {2} +append combined-{#}.jpeg
If the images are all the same size and orientation, and if your system has the memory to read in all the images in the directory, it can be done as simply as this...
magick *.jpeg -set option:doublewide %[fx:w*2] \
+append +repage -crop %[doublewide]x%[h] +repage image-combined-%02d.jpg
This can be scripted easily using ImageMagick; I can show you how in Unix. Note that if you have more than 9 images, you may have to rename them with leading zeros, since alphabetically image-10 sorts before image-2. You do not mention your IM version or platform, and scripting will differ depending upon OS.
Here is a Unix solution. I have images rose-01.jpg ... rose-06.jpg in folder test on my desktop (Mac OSX). Each image has a label under it with its filename so we can keep track of the files.
cd
cd desktop/test
arr=(`ls *.jpg`)
num=${#arr[*]}
for ((i=0; i<num; i=i+2)); do
j=$((i+1))
k=$((i+2))
magick ${arr[$i]} ${arr[$j]} +append newimage_${j}_${k}.jpg
done
Note that bash arrays start at index 0, so I use j=i+1 and k=i+2 to get the 1,2 3,4 5,6 numbering that matches the filenames returned by ls.
The result is (newimage_1_2.jpg, newimage_3_4.jpg, newimage_5_6.jpg)
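The index arithmetic can be seen in isolation by substituting a mock array for the ls output; only the echo differs from the real loop above:

```shell
# Mock file list standing in for arr=(`ls *.jpg`)
arr=(rose-01.jpg rose-02.jpg rose-03.jpg rose-04.jpg rose-05.jpg rose-06.jpg)
num=${#arr[*]}
for ((i=0; i<num; i=i+2)); do
  j=$((i+1)); k=$((i+2))
  echo "${arr[$i]} + ${arr[$j]} -> newimage_${j}_${k}.jpg"
done
```

This prints three lines pairing 01+02, 03+04, and 05+06, named newimage_1_2.jpg, newimage_3_4.jpg, newimage_5_6.jpg.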
An alternate solution is to montage all the images together two-by-two as an array of 2x3 and then equally crop them into 3 sections vertically. So in ImageMagick, this also works since these images are all the same size.
cd
cd desktop/test
arr=(`ls *.jpg`)
num=${#arr[*]}
num2=`magick xc: -format "%[fx:ceil($num/2)]" info:`
magick montage ${arr[*]} -tile 2x -geometry +0+0 miff:- | magick - -crop 1x3# +repage newimage.jpg
The results are: newimage-0.jpg, newimage-1.jpg, newimage-2.jpg
Ole Tang wrote:
Fails on filenames like My summer photo.jpg
So here is the solution using ImageMagick as modified from my original post.
Images:
rose 1.png
rose 2.png
rose 3.png
rose 4.png
rose 5.png
rose 6.png
OLDIFS=$IFS
IFS=$'\n'
arr=(`ls *.png`)
for ((i=0;i<6;i++)); do
echo "${arr[$i]}"
done
IFS=$OLDIFS
num=${#arr[*]}
for ((i=0; i<num; i=i+2)); do
j=$((i+1))
k=$((i+2))
magick "${arr[$i]}" "${arr[$j]}" +append newimage_${j}_${k}.jpg
done
This produces:
newimage_1_2.jpg
newimage_3_4.jpg
newimage_5_6.jpg
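The IFS change is what lets space-containing names survive the array assignment; here is a minimal check with printf standing in for ls (the filenames are the mock ones from above):

```shell
OLDIFS=$IFS
IFS=$'\n'                                  # split on newlines only, not spaces
arr=($(printf 'rose 1.png\nrose 2.png\n'))
IFS=$OLDIFS
echo "${#arr[*]} elements: '${arr[0]}' and '${arr[1]}'"
```

With the default IFS the array would contain 4 broken pieces; with IFS set to newline it contains the 2 whole filenames.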

Imagemagick parallel conversion

I want to get screenshot of each page of a pdf into jpg. To do this I am using ImageMagick's convert command in command line.
I have to achieve the following -
Get screenshots of each page of the pdf file.
resize the screenshot into 3 different sizes (small, med and preview).
store the different sizes in different folders (small, med and preview).
I am using the following command, which works; however, it is slow. How can I improve its execution time, or run the conversions in parallel?
convert -density 400 -quality 100 /input/test.pdf -resize '170x117>' -scene 1 /small/test_%d_small.jpg & convert -density 400 -quality 100 /input/test.pdf -resize '230x160>' -scene 1 /med/test_%d_med.jpg & convert -density 400 -quality 100 /input/test.pdf -resize '1310x650>' -scene 1 /preview/test_%d_preview.jpg
Splitting the command for readability (note that the > in each resize geometry must be quoted or escaped, or the shell treats it as output redirection):
convert -density 400 -quality 100 /input/test.pdf -resize '170x117>' -scene 1 /small/test_%d_small.jpg
convert -density 400 -quality 100 /input/test.pdf -resize '230x160>' -scene 1 /med/test_%d_med.jpg
convert -density 400 -quality 100 /input/test.pdf -resize '1310x650>' -scene 1 /preview/test_%d_preview.jpg
Updated Answer
I see you have long, multi-page documents and while my original answer is good for making multiple sizes of a single page quickly, it doesn't address doing pages in parallel. So, here is a way of doing it using GNU Parallel which is available for free for OS X (using homebrew), installed on most Linux distros and also available for Windows - if you really must.
The code looks like this:
#!/bin/bash
shopt -s nullglob
shopt -s nocaseglob
doPage(){
# Expecting filename as first parameter and page number as second
# echo DEBUG: File: $1 Page: $2
noexten=${1%.*}   # strip the extension (everything after the last dot)
convert -density 400 -quality 100 "$1[$2]" \
-resize 1310x650 -write "${noexten}-p-$2-large.jpg" \
-resize 230x160 -write "${noexten}-p-$2-med.jpg" \
-resize 170x117 "${noexten}-p-$2-small.jpg"
}
export -f doPage
# First, get list of all PDF documents
for d in *.pdf; do
# Now get number of pages in this document - "pdfinfo" is probably quicker
p=$(identify "$d" | wc -l)
for ((i=0;i<$p;i++));do
echo $d:$i
done
done | parallel --eta --colsep ':' doPage {1} {2}
If you want to see how it works, remove the | parallel ... from the last line and you will see that the preceding loop just echoes a list of filenames and page numbers into GNU Parallel. It will then run one process per CPU core, unless you add, say, -j 8 to run 8 processes in parallel. Remove the --eta if you don't want an estimate of when the command will finish.
In the comment I allude to pdfinfo being faster than identify, if you have that available (it's part of the poppler package under homebrew on OS X), then you can use this to get the number of pages in a PDF:
pdfinfo SomeDocument.pdf | awk '/^Pages:/ {print $2}'
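If you want to check that awk extraction without a PDF to hand, you can feed it a mock pdfinfo header; the field layout here is an assumption based on typical pdfinfo output (label, colon, value):

```shell
# Mock pdfinfo output; the awk keeps only the number after "Pages:"
pages=$(printf 'Title:          Some Document\nPages:          50\nEncrypted:      no\n' |
        awk '/^Pages:/ {print $2}')
echo "$pages"
```

This prints 50. Anchoring the pattern with ^ avoids accidental matches if "Pages:" ever appeared inside another field's value.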
Original Answer
Something along these lines so you only read it in once and then generate successively smaller images from the largest one:
convert -density 400 -quality 100 x.pdf \
-resize 1310x650 -write large.jpg \
-resize 230x160 -write medium.jpg \
-resize 170x117 small.jpg
Unless you mean you have, say, a 50 page PDF, and you want to do all 50 pages in parallel. If you do, say so, and I'll show you that using GNU Parallel when I get up in 10 hours...

Imagemagick GraphicsMagick image mean command

I use the following command that works in imagemagick to get the mean of a picture
identify -format "%[mean]" photo.jpg
the same command does not work under graphicsmagick. Is there an equivalent I can use?
You can do this, for example:
gm identify -verbose photo.jpg | grep -E "Mean|Red|Green|Blue"
Or, if you want Red, Green and Blue as 3 separate integers
gm identify -verbose photo.jpg | awk '/Mean:/{s=s int($2) " "} END{print s}'
0 29 225
Or, if you want the average of all channels, like this:
gm identify -verbose photo.jpg | awk '/Mean:/{n++;t+=$2} END{print int(t/n)}'
85
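Both awk one-liners only depend on lines of the form "Mean: N", so they can be checked against a mock fragment of gm identify -verbose output (the values below are chosen to reproduce the sample outputs above):

```shell
mock='      Mean:                     0.5
      Mean:                     29.2
      Mean:                     225.9'
echo "$mock" | awk '/Mean:/{s=s int($2) " "} END{print s}'   # per-channel integers
echo "$mock" | awk '/Mean:/{n++;t+=$2} END{print int(t/n)}'  # overall average
```

The first prints "0 29 225" and the second 85, i.e. int(255.6/3).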

How can I batch apply different watermarks for horizontal and vertical pictures in ImageMagick?

How can I mass apply different watermarks (horizontal and vertical) on different images (horizontal and vertical)?
I have folder tree with hundreds of PNG files like this (as example):
modern
classic
balance
I have two watermarks:
watermark-horizontal.png
watermark-vertical.png
How can I apply horizontal watermark on horizontal photos and vertical watermark on vertical photos?
I can do it for a single photo like this:
convert watermark-horizontal.png some-horizontal.png result.png
How can I do the same for many?
Like this:
#!/bin/bash
for f in *.png
do
read w h <<< $(convert -ping "$f" -format "%w %h" info: )
if [ $w -gt $h ]; then
echo "$f is $h tall and $w wide (landscape)"
convert watermark-horizontal.png "$f" "wm-$f"
else
echo "$f is $h tall and $w wide (portrait)"
convert watermark-vertical.png "$f" "wm-$f"
fi
done
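The branch itself is just an integer comparison on the two numbers that read pulls apart; here it is in isolation, with made-up dimensions standing in for the convert output:

```shell
# Mock "%w %h" output for a 4000x3000 photo
read w h <<< "4000 3000"
if [ $w -gt $h ]; then orientation=landscape; else orientation=portrait; fi
echo "$orientation"
```

This prints landscape; swapping the two numbers gives portrait. Note that a square image falls into the portrait branch with this test.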
If you want to recurse, you can do this:
#!/bin/bash
find . -name "*.png" -print0 | while read -d $'\0' f
do
read w h <<< $(convert -ping "$f" -format "%w %h" info: )
if [ $w -gt $h ]; then
echo "$f is $h tall and $w wide (landscape)"
else
echo "$f is $h tall and $w wide (portrait)"
fi
done
Save in a file called go, then type the following in a Terminal
chmod +x go # Do this just ONCE to make script executable
./go # Do this any number of times to run it
By the way, I use the following command for my watermarking:
composite -dissolve 50% -resize "1x1<" -gravity center copyright.png x.png x.png

How to export -verbose results for multiple files from ImageMagick

I successfully used -verbose to find median RGB values for 1 .jpg file.
Next step is finding median RGB values for about 2000 .jpg files.
I'd like to figure out how to do this automatically rather than one at a time.
I'd also like to figure out how to export the data resulting from -verbose over 2000 files to something like .csv or .txt.
Does anyone know how to best approach this?
This will give you all details of all .jpg images in the current working directory:
identify -verbose *.jpg >verbose.txt
If you are on a Unix system, this one will dump to verbose.txt lines with filename => overall RGB mean value:
for f in *.jpg; do echo "$f => `identify -verbose "$f" | grep mean | tail -n1 | cut -d':' -f2 | xargs`"; done >verbose.txt
This one will dump to verbose.txt lines with filename => R = mean value, G = mean value, B = mean value:
for f in *.jpg; do echo "$f => `identify -verbose "$f" | grep mean | head -n3 | cut -d':' -f2 | xargs | awk '{print "R = "$1" "$2", G = "$3" "$4", B = "$5" "$6}'`"; done >verbose.txt
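The grep/cut/xargs part of the pipeline can be verified against a mock block of identify -verbose output; the channel layout and numbers below are made up for illustration:

```shell
mock='      Red:
        mean: 101.3 (0.397)
      Green:
        mean: 80.1 (0.314)
      Blue:
        mean: 74.2 (0.291)
      Overall:
        mean: 85.2 (0.334)'
# tail -n1 keeps the Overall line; cut takes the value; xargs trims whitespace
echo "$mock" | grep mean | tail -n1 | cut -d':' -f2 | xargs
```

This prints "85.2 (0.334)", i.e. the overall mean, which is what the first for-loop above puts after the => in verbose.txt.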