I have a bunch of PNG files named foo<bar>.png that I wish to convert into a TIF animation. <bar> is a number that varies from 0 to 25 in steps of five. ImageMagick places foo5.png last in the animation, while it is supposed to be second. Is there a way, apart from renaming the file to foo05.png, to place it in the right position?
If you have more input images than are convenient to type (say, foo0..foo100.png), you could do this (on Linux, Unix and Mac OS X):
convert \
-delay 10 \
$(for i in $(seq 0 5 100); do echo foo${i}.png; done) \
-loop 0 \
animated.gif
Simple and easy: list your images and sort them:
convert -delay 10 -loop 0 $(ls -1 *.png | sort -V) animated.gif
You just give the order of your PNG files as they should appear in the animation. Use:
foo0.png foo5.png foo10.png foo15.png foo20.png foo25.png
instead of
foo*.png
After all, it's only 6 different file names which should be easy enough to type:
convert \
-delay 10 \
foo0.png foo5.png foo10.png foo15.png foo20.png foo25.png \
-loop 0 \
animated.gif
You can use "find" with "sort":
convert -delay 10 $(find . -name "*.png" -print0 | sort -zV | xargs -r0 echo) -loop 0 animated.gif
Even easier than ls and sort is to use the built-in -v option of ls:
convert -delay 10 -loop 0 `ls -v *.png` animated.gif
with the backticks `...` causing the command to be executed and its output substituted, rather than being interpreted as a literal string.
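If you prefer the $(...) form of command substitution, the same command reads:
convert -delay 10 -loop 0 $(ls -v *.png) animated.gif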
Or, if you know a bit of Python, you can easily do this from the Python shell.
Hit up the Python shell by typing python in your terminal, and apply the following magic spells:
# Suppose your files are like 1.jpeg, 2.jpeg etc. upto 100.jpeg
files = []
for i in range(1, 101):
    files.append('{}.jpeg'.format(i))
command = 'convert -delay 10 {} -loop 0 animated.gif'.format(' '.join(files))
from subprocess import call
call(command, shell=True)
Your job should be done!
Actually, you can do something like:
convert $(ls -v *.png) animated.gif
I have a sequence of images that are blocks of a larger image, which together make up the whole image. The blocks are the result of splitting the original image along evenly spaced horizontal and vertical lines, so they don't have weird dimensions.
Is there a way to combine them with FFmpeg (or something else like ImageMagick) without re-encoding the images?
This answer suggests the hstack or vstack FFmpeg filter, but my image blocks aren't necessarily the full width or the full height of the original image.
Perhaps this could be achieved with multiple FFmpeg commands using hstack or vstack (I'd prefer just one command though). Or with a complex filter?
Edit: I tried using filter_complex with FFmpeg:
ffmpeg -i 0.jpg -i 1.jpg -i 2.jpg -i 3.jpg -i 4.jpg -i 5.jpg \
-filter_complex "[0][1]hstack=inputs=2[row 0]; \
[2][3]hstack=inputs=2[row 1];
[4][5]hstack=inputs=2[row 2];
[row 0][row 1][row 2]vstack=inputs=3[out]" \
-map "[out]" -c copy out.jpg
but it can't filter and copy streams at the same time.
You can do that with ImageMagick. However, it will decompress and recompress your jpg files. You will lose some quality.
Unix Syntax for IM 6:
convert \
\( 0.jpg 1.jpg +append \) \
\( 2.jpg 3.jpg +append \) \
\( 4.jpg 5.jpg +append \) \
-append \
mona_lisa.jpg
If using Windows, remove the backslashes from the parentheses and replace the end-of-line \ with ^.
If using IM 7, replace convert with magick.
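For example, on Windows with IM 7, following the two notes above, the command would look roughly like this (untested sketch):
magick ^
( 0.jpg 1.jpg +append ) ^
( 2.jpg 3.jpg +append ) ^
( 4.jpg 5.jpg +append ) ^
-append ^
mona_lisa.jpg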
I found an interesting patch from 2010 against libjpeg which adds this feature to jpegtran. It won't split blocks, so your images will need to be multiples of 8 or even 16 pixels in each axis.
Unfortunately it's against libjpeg 6b as distributed with Ubuntu 10.04. This includes some patches from 8d and doesn't really correspond neatly to any official libjpeg version (as far as I can see).
You need to download the Ubuntu 10.04 sources, extract their libjpeg, and patch that. The steps are:
wget http://old-releases.ubuntu.com/releases/releases/10.04/release/source/ubuntu-10.04-src-1.iso
Now open that ISO and pull out the files from ubuntu/pool/main/libj/libjpeg6b. Archive Manager can do this, perhaps there's some scriptable tool as well.
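For example, on a Linux box you could loop-mount the ISO and copy the files out (a sketch, assuming you can use sudo; the path inside the ISO is the one mentioned above):
sudo mount -o loop ubuntu-10.04-src-1.iso /mnt
cp /mnt/ubuntu/pool/main/libj/libjpeg6b/* .
sudo umount /mnt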
Now run:
# original sources
tar xf libjpeg6b_6b.orig.tar.gz
cd jpeg-6b
# apply ubuntu patches
zcat ../libjpeg6b_6b-15ubuntu1.diff.gz | patch
# apply jpegtran patches
patch < ../append-jpeg6b.patch
./configure
make
sudo make install
That will install the modified jpegtran to /usr/local.
Given 0.jpg and 1.jpg, you can run it like this:
jpegtran -appright 1.jpg 0.jpg > join.jpg
to produce join.jpg, with the two images appended side by side.
I'm running
$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
I'm also running ImageMagick 6.9.
I'd like to convert a PDF image into WebP. AFAIK, out of the box, ImageMagick on Linux cannot convert to WebP, so I ran sudo apt-get install webp, which installs cwebp.
cwebp lets you specify the -q parameter, and ImageMagick lets you specify the -quality parameter.
When I run $ cwebp -q 90 image.png -o image.webp, it takes cwebp around 8 seconds to convert it. If I run convert image.png -quality 90 image.webp, it takes ImageMagick around 30 seconds to convert it. It seems like the -quality parameter is not passed through to cwebp. It also may be the case that convert attempts to run a lossless conversion, which in cwebp is achieved with an explicit -lossless flag.
I ran the test commands on a 10 MB test PNG image.
I would like to achieve 8 second conversion times with convert command. How can I do it?
I realize you want ImageMagick, but if you are able to consider alternatives, libvips can do PDF -> WebP quickly at the command line, and without any configuring.
For example, with this PDF (Audi R8 brochure) on my 2015 laptop, I see:
$ time convert -density 600 r8.pdf[3] -quality 90 x.webp
real 0m36.699s
user 0m23.787s
sys 0m1.628s
$ vipsheader x.webp
x.webp: 9921x4961 uchar, 3 bands, srgb, webpload
I think that's broadly in line with the times you are seeing.
With libvips, I see:
$ time vips copy r8.pdf[dpi=600,page=3] x.webp[Q=90]
real 0m7.195s
user 0m6.861s
sys 0m0.505s
$ vipsheader x.webp
x.webp: 9921x4961 uchar, 3 bands, srgb, webpload
The same result, but within your 8s time budget.
You can set a lot of other webp options if you want more control over compression.
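For example, assuming a reasonably recent libvips (check vips webpsave --help for the exact option names in your version), you can pass extra save options in the brackets:
# lower quality, smaller file
vips copy r8.pdf[dpi=600,page=3] x.webp[Q=80]
# lossless WebP instead of lossy
vips copy r8.pdf[dpi=600,page=3] x.webp[lossless=true]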
It turns out that the delegates are invoked using the rules in /etc/ImageMagick-6/delegates.xml.
It lists a bunch of rules on how to convert between different types of images.
For my case, the png->webp conversion, I needed the string:
<delegate decode="png" encode="webp" command="&quot;cwebp&quot; -quiet %Q &quot;%i&quot; -o &quot;%o&quot;"/>
Within this file, I don't know the value of the -quality parameter that was passed to convert, and there seems to be no way to capture it.
However, if you wish to control the value of the -q parameter for cwebp, you have the option of hard-coding -q $YOUR_VALUE right into the command inside the delegate tag.
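For example (a sketch using an arbitrary value of 90; adjust to taste), the delegate line would become:
<delegate decode="png" encode="webp" command="&quot;cwebp&quot; -quiet %Q -q 90 &quot;%i&quot; -o &quot;%o&quot;"/>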
This solution is still slower than invoking cwebp directly, since ImageMagick can take up to 8 seconds before invoking the delegate.
I have a directory of images and need to merge those images horizontally in groups of two, then save the output of each to a new image file:
image-1.jpeg
image-2.jpeg
image-3.jpeg
image-4.jpeg
image-5.jpeg
image-6.jpeg
Using ImageMagick via the command line, is there a way to loop through every other image in a directory and run magick convert image-1.jpeg image-2.jpeg +append image-combined-*.jpg?
So the result would be combined pairs of images:
image-1.jpeg image-2.jpeg -> image-combined-1.jpg
image-3.jpeg image-4.jpeg -> image-combined-2.jpg
image-5.jpeg image-6.jpeg -> image-combined-3.jpg
Get them all appended succinctly and in parallel with GNU Parallel and actually use all those lovely CPU cores you paid Intel for!
parallel -N2 convert {1} {2} +append combined-{#}.jpeg ::: *jpeg
where:
-N2 says to take two files at a time
{1} and {2} are the first two parameters
{#} is the sequential job number, and
::: demarcates the start of the parameters
If your CPU has 8 cores, GNU Parallel will run 8 converts at once, unless you specify say 4 jobs at a time by adding -j4.
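For example, to limit it to 4 simultaneous convert jobs:
parallel -j4 -N2 convert {1} {2} +append combined-{#}.jpeg ::: *jpeg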
If you are learning and just finding your way with GNU Parallel add:
--dry-run so you can see what it would do without actually doing anything
-k to keep the outputs in order
So, I mean:
parallel --dry-run -k -N2 convert {1} {2} +append combined-{#}.jpeg ::: *jpeg
Sample Output
convert image-1.jpeg image-2.jpeg +append combined-1.jpeg
convert image-3.jpeg image-4.jpeg +append combined-2.jpeg
convert image-5.jpeg image-6.jpeg +append combined-3.jpeg
On macOS, you can simply install GNU Parallel with:
brew install parallel
If you have thousands, or hundreds of thousands of files, you may run into an error Argument list too long - although this is pretty rare on macOS because the limit is 262,144 characters:
sysctl -a kern.argmax
kern.argmax: 262144
If that happens, you can use this syntax to pipe the filenames into GNU Parallel instead:
find /somewhere -iname "*.jpeg" -print0 | parallel -0 -N2 convert {1} {2} +append combined-{#}.jpeg
If the images are all the same size and orientation, and if your system has the memory to read in all the images in the directory, it can be done as simply as this...
magick *.jpeg -set option:doublewide %[fx:w*2] \
+append +repage -crop %[doublewide]x%[h] +repage image-combined-%02d.jpg
This can be scripted easily using ImageMagick. I could show you how in Unix. But if you have more than 9 images, then you may have to rename with leading zeros, since alphabetically image-10 will come before image-2. You do not mention your IM version or platform and scripting will differ depending upon OS.
Here is a Unix solution. I have images rose-01.jpg ... rose-06.jpg in folder test on my desktop (Mac OSX). Each image has a label under it with its filename so we can keep track of the files.
cd
cd desktop/test
arr=(`ls *.jpg`)
num=${#arr[*]}
for ((i=0; i<num; i=i+2)); do
    j=$((i+1))
    k=$((i+2))
    magick ${arr[$i]} ${arr[$j]} +append newimage_${j}_${k}.jpg
done
Note that arrays start with index 0, so I use j=i+1 and k=i+2 to make the output names match the 1,2 / 3,4 / 5,6 pairs in the filenames listed by ls.
The result is (newimage_1_2.jpg, newimage_3_4.jpg, newimage_5_6.jpg)
An alternate solution is to montage all the images together two-by-two as an array of 2x3 and then equally crop them into 3 sections vertically. So in ImageMagick, this also works since these images are all the same size.
cd
cd desktop/test
arr=(`ls *.jpg`)
num=${#arr[*]}
num2=`magick xc: -format "%[fx:ceil($num/2)]" info:`
magick montage ${arr[*]} -tile 2x -geometry +0+0 miff:- | magick - -crop 1x${num2}# +repage newimage.jpg
The results are: newimage-0.jpg, newimage-1.jpg, newimage-2.jpg
Ole Tang wrote:
Fails on filenames like My summer photo.jpg
So here is the solution using ImageMagick as modified from my original post.
Images:
rose 1.png
rose 2.png
rose 3.png
rose 4.png
rose 5.png
rose 6.png
OLDIFS=$IFS
IFS=$'\n'
arr=(`ls *.png`)
for ((i=0;i<6;i++)); do
    echo "${arr[$i]}"
done
IFS=$OLDIFS
num=${#arr[*]}
for ((i=0; i<num; i=i+2)); do
    j=$((i+1))
    k=$((i+2))
    magick "${arr[$i]}" "${arr[$j]}" +append newimage_${j}_${k}.jpg
done
This produces:
newimage_1_2.jpg
newimage_3_4.jpg
newimage_5_6.jpg
I am using a custom batch script to make resized copies (33% and 66%) of all PNG images in a folder. Here is my code:
for f in $(find /myFolder -name '*.png');
do
sudo cp -a $f "${f/%.png/-3x.png}";
sudo convert $f -resize 66.67% "${f/%.png/-2x.png}";
sudo convert $f -resize 33.33% $f;
done
It works fine, except when the original image is indexed. In that case, the smaller version of the image is RGB (so an even larger file size than the original image).
I have tried several versions, but none worked. One that I guessed was supposed to sort this out was the following:
for f in $(find /myFolder -name '*.png');
do
sudo cp -a $f "${f/%.png/-3x.png}";
sudo convert $f -define png:preserve-colormap -resize 66.67% "${f/%.png/-2x.png}";
sudo convert $f -define png:preserve-colormap -resize 33.33% $f;
done
But it doesn't work.
EDIT:
This is the updated code, but it still doesn't work as it is supposed to (see the attached image: left is the original, right is the resized version):
for f in $(find /myFolder -name '*.png');
do
sudo cp -a $f "${f/%.png/-3x.png}";
numberOfColors=`identify -format "%k" $f`
convert "$f" \
\( +clone -resize 66.67% -colors $numberOfColors -write "${f/%.png/-2x.png}" +delete \) \
-resize 33.33% -colors $numberOfColors "$f"
done
Original image:
Scaled version:
Use "-sample" instead of "-resize" to preserve the color set. This causes the resizing to be done by nearest-neighbor color selection rather than any kind of interpolation.
Otherwise, the colormap ends up with more than 256 colors and the png encoder can't preserve it, due to the 256-color limit on the size of a PNG PLTE chunk. I cannot guarantee that you'll like the appearance of the result, though.
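Applied to the loop from the question, that is just a one-word swap (untested sketch):
for f in $(find /myFolder -name '*.png'); do
    sudo cp -a $f "${f/%.png/-3x.png}"
    sudo convert $f -sample 66.67% "${f/%.png/-2x.png}"
    sudo convert $f -sample 33.33% $f
done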
Also, be sure you are using a recent version of ImageMagick.
I'm not observing this problem with the current release (6.9.3-7). Your script works fine and produces clean -2x and -3x images.
There are several things to address here...
find vs glob
You say you want to process all files in a folder, but then you use find, which will search down into sub-directories as well. If you just want to process files in the current directory, you can let bash do the globbing directly for you. So, instead of
for f in $(find . -name "*.png"); do
you can just do:
shopt -s nullglob
for f in *.png; do
Performance
You run convert twice and load the original image twice, and that is not very efficient. You can run a single process that loads a single image and resizes to two different sizes and writes both to disk. So, instead of
for ...; do
convert ...
convert ...
done
you can write the following to start one convert, read the image once, clone it in memory and write it out, delete the spare copy in memory and then resize the original image and re-save that.
for ...; do
    convert "$f" \
        \( +clone -resize 66.67% -write "${f/%.png/-2x.png}" +delete \) \
        -resize 33.33% "$f"
done
Palette
It seems you actually only want to output palettised (indexed) images with "any" colormap rather than with a "specific" colormap. Glenn's answer is perfect if you want to retain a specific colormap. However, if any colormap is ok, you can use -colors to reduce the colours in the resulting image to a level where the PNG library can make the decision to create a palettised image. Glenn knows a lot more than me about that as he wrote it! However, I think if you reduce the colours to 250 (or so) you will probably get a 256 entry colormap and if you reduce the colours to around 60 or so, you will get a 64 entry colourmap. So, you would do:
shopt -s nullglob
for f in *.png; do
    sudo cp ... ...
    convert "$f" \
        \( +clone -resize 66.67% -colors 250 -write "${f/%.png/-2x.png}" +delete \) \
        -resize 33.33% -colors 250 "$f"
done
You can try experimenting with other numbers of colours and see how that affects filesize - the number you need will depend on your images.
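A quick way to experiment is a small loop like this (a sketch, assuming a test image called input.png; adjust the list of colour counts to taste):
for c in 32 64 128 250; do
    convert input.png -resize 66.67% -colors $c "test-${c}.png"
    ls -l "test-${c}.png"
done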
I want to get a screenshot of each page of a PDF as a JPG. To do this, I am using ImageMagick's convert command on the command line.
I have to achieve the following:
Get screenshots of each page of the PDF file.
Resize each screenshot into 3 different sizes (small, med and preview).
Store the different sizes in different folders (small, med and preview).
I am using the following command, which works; however, it is slow. How can I improve its execution time or execute the commands in parallel?
convert -density 400 -quality 100 /input/test.pdf -resize "170x117>" -scene 1 /small/test_%d_small.jpg & convert -density 400 -quality 100 /input/test.pdf -resize "230x160>" -scene 1 /med/test_%d_med.jpg & convert -density 400 -quality 100 /input/test.pdf -resize "1310x650>" -scene 1 /preview/test_%d_preview.jpg
Splitting the command for readability
convert -density 400 -quality 100 /input/test.pdf -resize "170x117>" -scene 1 /small/test_%d_small.jpg
convert -density 400 -quality 100 /input/test.pdf -resize "230x160>" -scene 1 /med/test_%d_med.jpg
convert -density 400 -quality 100 /input/test.pdf -resize "1310x650>" -scene 1 /preview/test_%d_preview.jpg
Updated Answer
I see you have long, multi-page documents and while my original answer is good for making multiple sizes of a single page quickly, it doesn't address doing pages in parallel. So, here is a way of doing it using GNU Parallel which is available for free for OS X (using homebrew), installed on most Linux distros and also available for Windows - if you really must.
The code looks like this:
#!/bin/bash
shopt -s nullglob
shopt -s nocaseglob
doPage(){
    # Expecting filename as first parameter and page number as second
    # echo DEBUG: File: $1 Page: $2
    noexten=${1%%.*}
    convert -density 400 -quality 100 "$1[$2]" \
        -resize 1310x650 -write "${noexten}-p-$2-large.jpg" \
        -resize 230x160 -write "${noexten}-p-$2-med.jpg" \
        -resize 170x117 "${noexten}-p-$2-small.jpg"
}
export -f doPage
# First, get list of all PDF documents
for d in *.pdf; do
    # Now get number of pages in this document - "pdfinfo" is probably quicker
    p=$(identify "$d" | wc -l)
    for ((i=0;i<$p;i++)); do
        echo $d:$i
    done
done | parallel --eta --colsep ':' doPage {1} {2}
If you want to see how it works, remove the | parallel ... from the last line and you will see that the preceding loop just echoes a list of filenames and page-number counters into GNU Parallel. GNU Parallel then runs one process per CPU core, unless you specify, say, -j 8 to run 8 processes in parallel. Remove the --eta if you don't want any updates on when the command is likely to finish.
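For instance, with a hypothetical 3-page document called report.pdf, the bare loop (with the | parallel ... part removed) would simply print:
report.pdf:0
report.pdf:1
report.pdf:2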
In the comment I allude to pdfinfo being faster than identify. If you have that available (it's part of the poppler package under homebrew on OS X), you can use this to get the number of pages in a PDF:
pdfinfo SomeDocument.pdf | awk '/^Pages:/ {print $2}'
Original Answer
Something along these lines, so you only read the PDF in once and then generate successively smaller images from the largest one:
convert -density 400 -quality 100 x.pdf \
-resize 1310x650 -write large.jpg \
-resize 230x160 -write medium.jpg \
-resize 170x117 small.jpg
Unless you mean you have, say, a 50 page PDF, and you want to do all 50 pages in parallel. If you do, say so, and I'll show you that using GNU Parallel when I get up in 10 hours...