I have created a video from a list of images using ffmpeg:
system("ffmpeg -framerate 1 -pattern_type glob -i '*.jpg' -c:v libx264 out.mp4")
Now I want to add a Ken Burns effect. Can I do it with ffmpeg, ImageMagick, or any other command-line tool on Linux?
I can't speak to Ruby on Rails, Linux, or ffmpeg. But if you would like to create the panning effect made popular by Ken Burns, you would extract regions of an image and animate them together.
#!/bin/bash
# A 16:10 ratio
WIDTH=64
HEIGHT=40
# Extract parts of an image with -extent operator
for index in $(seq 40)
do
TOP=$(expr 120 + $index)
LEFT=$(expr 150 + $index)
FILENAME=$(printf /tmp/wizard-%02d.jpg $index)
convert wizard: -extent "${WIDTH}x${HEIGHT}+${LEFT}+${TOP}" $FILENAME
done
# Replace this with your ffmpeg script
SLICES=$(ls /tmp/wizard-*.jpg)
RSLICES=$(ls /tmp/wizard-*.jpg | sort -r)
convert $SLICES $RSLICES -set delay 15 -layers Optimize /tmp/movie.gif
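If you would rather end up with an MP4 than a GIF, the extracted slices can be fed straight back into an ffmpeg command much like the one in the question. A minimal sketch (the framerate and output path are just examples):
ffmpeg -framerate 25 -pattern_type glob -i '/tmp/wizard-*.jpg' -c:v libx264 -pix_fmt yuv420p /tmp/pan.mp4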
Edited by Mark Setchell beyond this point... (just trying to help out)
Much as I hate editing other people's posts, the first part of Eric's code can equally be written this way if you find that easier to understand:
# Extract parts of an image with -extent operator
for index in {1..40}
do
((TOP=120 + $index))
((LEFT=150 + $index))
FILENAME=$(printf /tmp/wizard-%02d.jpg $index)
convert wizard: -extent "${WIDTH}x${HEIGHT}+${LEFT}+${TOP}" $FILENAME
done
I have two folders, A and B, with image files that have corresponding names.
For example, each contain files labelled 01.png, 02.png, 03.png, etc.
How can I merge the corresponding files into a third folder, C, so that each merged photo shows the two originals side by side?
I am on Linux, if that changes anything.
I am not near a computer to thoroughly test, but this seems easiest to me:
#!/bin/bash
# Go to directory A
cd A
# Make sure the output directory exists
mkdir -p ../AB
# For each file "f" in A
for f in *.png; do
   # Append the corresponding file from B to the right and write the result to AB
   convert "$f" ../B/"$f" +append ../AB/"$f"
done
Or use GNU Parallel and do them all at once!
cd A
parallel convert {} ../B/{} +append ../AB/{} ::: *.png
Using ImageMagick version 6, if your images are all the same dimensions, and if your system memory can handle reading all the input images into a single command, you can do that with a command like this...
convert FolderA/*.jpg -set filename:f "%[f]" \
  -set option:distort:viewport %[fx:w*2]x%[h] -distort SRT 0 null: \
  FolderB/*.jpg -gravity east -layers composite FolderC/"%[filename:f]"
That starts by reading in all the images from FolderA and extending their viewport to double their width to the right.
Then it adds the special built-in "null:" to separate the lists of images before reading in the second list. Then it reads in all the images from FolderB.
Then after setting the gravity to "east", it composites each image from FolderB over the extended right half of each corresponding image from FolderA. That creates the effect of appending the images side by side.
The command sets a variable at the beginning to hold the filenames of the first list of input files, then uses those as the names of the output files and writes them to FolderC.
If you're using ImageMagick version 7, use the command "magick" instead of "convert".
You can do that with some bash scripting. Assume you have two folders, A and B, with corresponding image names in them, and an empty folder AB to hold the results. Then, using ImageMagick inside a bash loop, you can do something like this:
Collect the names of all the files in folder A and put into an array
Collect the names of all the files in folder B and put into an array
Loop over the number of images in the folders
Process them with ImageMagick +append and save to folder AB
outdir="/Users/fred/desktop/AB"
aArr=(`find /Users/fred/desktop/A -type f \( -iname "*.jpg" -o -iname "*.png" \)`)
numA="${#aArr[*]}"
bArr=(`find /Users/fred/desktop/B -type f \( -iname "*.jpg" -o -iname "*.png" \)`)
numB="${#bArr[*]}"
if [ $numA -eq $numB ]; then
   for ((i=0; i<numA; i++)); do
      # %t gives the filename with directory and extension stripped
      nameA=`convert "${aArr[$i]}" -format "%t" info:`
      nameB=`convert "${bArr[$i]}" -format "%t" info:`
      convert "${aArr[$i]}" "${bArr[$i]}" +append "${outdir}/${nameA}_${nameB}.jpg"
   done
fi
When converting an image, ImageMagick's default behavior seems to be to overwrite any existing file. Is it possible to prevent this? I'm looking for something similar to Wget's --no-clobber download option. I've gone through ImageMagick's list of command-line options, and the closest option I could find was -update, but this can only detect if an input file is changed.
Here's an example of what I'd like to accomplish: On the first run, convert input.jpg output.png produces an output.png that does not already exist, and then on the second run, convert input.jpg output.png detects that output.png already exists and does not overwrite it.
Just test if it exists first, assuming bash:
[ ! -f output.png ] && convert input.jpg output.png
Or slightly less intuitively, but shorter:
[ -f output.png ] || convert input.jpg output.png
Does something like this solve your problem?
It will write to output.png, but if that file already exists a new file will be created with a random 5-character suffix (e.g. output-CKYnY.png, then output-hSYZC.png, etc.).
convert input.jpg -resize 50% $(if test -f output.png; then echo "output-$(head -c32 /dev/urandom | base64 | tr -dc 'A-Za-z0-9' | head -c5).png"; else echo "output.png"; fi)
We have a storage of images where some images are optimized and some are not.
I'm working on a script that will go through every image in the storage and run an optimization process.
I'm wondering:
Is there a way to check whether an image has already been optimized?
Will I lose quality if an image has been run through -quality 85% a few times?
I have tried running -quality 85% a few times on the same image and could not see any loss in quality (after the third run the image's size no longer changed). However, I did not find confirmation of this in the official documentation.
Thanks!
You can check if the quality setting is already 75 before optimising:
identify -format %Q fred.jpg
92
Then optimise and check again:
convert fred.jpg -quality 75 optimised.jpg
identify -format %Q optimised.jpg
75
If you are using bash, this is easy:
q=$(identify -format %Q fred.jpg)
[ $q -ne 75 ] && mogrify -quality 75 fred.jpg
Another way to mark images as optimised might be to set a comment in the file to the word "optimised", like this:
# Set comment
mogrify -set comment "optimised" fred.jpg
# Get comment
identify -format %c fred.jpg
optimised
So, you would test if an image comment contains the word "optimised", and if not, optimise the image and change the comment to show as much:
[[ $(identify -format %c fred.jpg) != *optimised* ]] && { echo Optimising fred.jpg; mogrify -set comment "optimised" -quality 75 fred.jpg; }
Another possibility might be to use Extended File Attributes to tag files (with setfattr) as you reduce their quality and then use getfattr to check their quality rather than doing them again.
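A minimal sketch of that idea, assuming a Linux filesystem with user extended attributes available (the attribute name user.optimised is arbitrary):
# mark a file once it has been optimised
setfattr -n user.optimised -v 1 fred.jpg
# optimise only files that do not carry the attribute yet
getfattr -n user.optimised fred.jpg &>/dev/null || { mogrify -quality 75 fred.jpg; setfattr -n user.optimised -v 1 fred.jpg; }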
If you had hundreds or thousands of images to process, I would suggest GNU Parallel. Please try the example below on a small copy of a few files in a directory:
parallel '[[ $(identify -format "%c" {}) != *optimised* ]] && { echo Optimising {}; mogrify -set comment "optimised" -quality 75 {}; }' ::: *jpg
You will find it processes them all in parallel on the first pass, and then does nothing on the second pass.
If you have too many images and get "ERROR: Too many args", you can pass the filenames in on stdin instead of globbing after the :::, like this:
find /some/where -iname \*.jpg -print0 | parallel -0 ...
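Putting the two parts together, with the same parallel body as shown above, the combined command would look something like this:
find /some/where -iname \*.jpg -print0 | parallel -0 '[[ $(identify -format "%c" {}) != *optimised* ]] && { echo Optimising {}; mogrify -set comment "optimised" -quality 75 {}; }'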
I have 600 TIFF files in a directory, c:\temp.
The file names are like:
001_1.tif,
001_2.tif,
001_3.tif
002_1.tif,
002_2.tif,
002_3.tif
....
....
200_1.tif,
200_2.tif,
200_3.tif
The combined files should be placed in the same directory and named like:
1_merged.tif
2_merged.tif
.....
.....
200_merged.tif
I am looking for a single command line or batch file to do this, using ImageMagick's convert/mogrify or any other command/tool.
Please note the overall time taken should not be more than 5 seconds.
Assuming you want to combine the 600 single-page TIFFs into one single multi-page TIFF (per set of 3), it is as simple as:
convert 001_*.tif 1_merged.tif
convert 002_*.tif 2_merged.tif
[....]
convert 200_*.tif 200_merged.tif
Please note that nobody will be able to guarantee any timing/performance benchmarks, at least not while we have no idea how your input TIFFs are constituted. (Are they 10000x10000 pixels or 20x20 pixels? Are they color or grayscale? etc.)
This is different from Mark's answer, because he seems to have assumed you want to combine the input files all into a 1-page image, where the originals are tiled across a larger page...
This should do it - I will leave you to do error checking in case you haven't actually got all the images you suggest!
@ECHO OFF
setlocal EnableDelayedExpansion
FOR /L %%A IN (1,1,200) DO (
set "formattedValue=000000%%A"
set "x=!formattedValue:~-3!"
convert !x!_*.tif +append !x!_merged.tif
echo !x!
)
So, if your images look like this
001_1.tif
001_2.tif
001_3.tif
you will get the three pages appended side by side in 001_merged.tif.
If you change +append to -append, they will be stacked vertically in 001_merged.tif instead.
If you remove +append altogether, you will get 200 multi-page TIFs with 3 pages each - same as Kurt's answer.
I would like to convert a PDF file to a black-and-white PDF file with ImageMagick, but I've got two problems:
I use this command:
convert -colorspace Gray D:\in.pdf D:\out.pdf
But this command converts only the FIRST page... How do I convert all pages?
After using this command the resolution is terrible, but if I use the -density 300 option the file size more than doubles. So I would like to keep the same DPI setting, but how do I do that?
Thanks a lot
Assuming you have all the necessary command line tools installed you can do the following:
Split and join PDF using pdfseparate and pdfunite (Poppler tools).
Extract the original density using pdfinfo plus grep/egrep and, for instance, sed. This will not guarantee the same size of the PDF file, just the same DPI.
Putting it all together you can have a series of bash commands as following:
pdfseparate in.pdf temp-%d.pdf; for i in $(seq $(ls -1 temp-*.pdf | wc -l)); do mv temp-$i.pdf temp-$(printf %03d $i).pdf; done
for f in temp-*.pdf; do convert -density $(pdfinfo $f | egrep -o 'Page size:[[:space:]]*[0-9]+(\.[0-9]+)?[[:space:]]*x[[:space:]]*[0-9]+(\.[0-9]+)?' | sed -e 's/^Page size:\s*//'| sed -e 's/\s*x\s*/x/') -colorspace Gray {,bw-}$f; done
pdfunite bw-temp-*.pdf out.pdf
rm {bw-,}temp-*.pdf
Note 1: there is a dirty workaround (for/wc/seq/printf) for the proper ordering of PDFs with 10-999 pages (I did not figure out how to get leading zeros out of pdfseparate).
Note 2: I guess ImageMagick treats each PDF page as just another raster image, so for mainly-text files this will result in huge PDFs. Thus, this is a very bad method for converting text-based PDFs to B&W.
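If your PDFs are mainly text, a sketch of an alternative would be to let Ghostscript (not ImageMagick) do the grayscale conversion, which keeps text and vector content intact instead of rasterising every page; the pdfwrite flags below are the commonly documented ones:
gs -sDEVICE=pdfwrite -sColorConversionStrategy=Gray -dProcessColorModel=/DeviceGray -dCompatibilityLevel=1.4 -dNOPAUSE -dBATCH -sOutputFile=out-bw.pdf in.pdf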