Edit 26/04/2016: This is a bug that has been fixed in ImageMagick-6.9.3-5 and 7.0.0-0.
When converting DNG files, i.e. when I run
convert "my_img.dng" "my_img_converted.jpg"
ImageMagick changes the modification date of the original my_img.dng to the time the conversion happened.
Do you know how to avoid that and keep the original file intact?
I would say that is a bug! A workaround is as follows:
convert dng:- result.jpg < original.dng
Or, equivalently, if you prefer reading left-to-right and don't mind a superfluous process:
cat original.dng | convert dng:- result.jpg
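If you are on an affected ImageMagick version and still want to read the file directly, another workaround is to record the original's modification time before converting and restore it afterwards - a minimal sketch, assuming a POSIX shell with touch -r and mktemp available:
ref=$(mktemp)                   # scratch file used only to carry a timestamp
touch -r original.dng "$ref"    # copy the original's modification time onto it
convert original.dng result.jpg
touch -r "$ref" original.dng    # restore the original's modification time
rm "$ref"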
When converting an image, ImageMagick's default behavior seems to be to overwrite any existing file. Is it possible to prevent this? I'm looking for something similar to Wget's --no-clobber download option. I've gone through ImageMagick's list of command-line options, and the closest option I could find was -update, but this can only detect if an input file is changed.
Here's an example of what I'd like to accomplish: On the first run, convert input.jpg output.png produces an output.png that does not already exist, and then on the second run, convert input.jpg output.png detects that output.png already exists and does not overwrite it.
Just test if it exists first, assuming bash:
[ ! -f output.png ] && convert input.jpg output.png
Or slightly less intuitively, but shorter:
[ -f output.png ] || convert input.jpg output.png
Does something like this solve your problem?
It will write to output.png, but if that file already exists, a new file will be created with a random 5-character suffix (e.g. output-CKYnY.png, then output-hSYZC.png, etc.).
convert input.jpg -resize 50% $(if test -f output.png; then echo "output-$(head -c5 /dev/urandom | base64 | tr -dc 'A-Za-z0-9' | head -c5).png"; else echo "output.png"; fi)
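If you want the same no-clobber behaviour across a whole directory, a minimal sketch, assuming bash and .jpg inputs converted to same-named .png outputs:
for f in *.jpg; do
  out="${f%.jpg}.png"                   # derive the output name from the input name
  [ -f "$out" ] || convert "$f" "$out"  # skip inputs whose output already exists
done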
Heyho,
I am using ImageMagick 6.8 (2015-03-20) and I tried to convert a .pgm file to a .jp2 file. The output file should be smaller by the factor specified in the jp2:rate=x option, but that does not seem to work.
I am using this command
convert input.pgm -define jp2:rate=20.0 output.jp2
But the resulting file, output.jp2, is bigger than expected (a compression rate of only 6 instead of the expected 20).
Could somebody explain this to me please?
The following works for ImageMagick Version 6.7.7-10 2017-07-31 Q16 http://www.imagemagick.org
convert input.pgm -define jp2:rate=0.05 output.jp2 # 0.05 = 1/20, not 20.0
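In other words, this build appears to interpret jp2:rate as a fraction of the original size rather than as a compression factor. A hedged sketch for an arbitrary target factor N (here N=20; the awk call merely computes 1/N):
factor=20
rate=$(awk -v n="$factor" 'BEGIN { printf "%.4f", 1/n }')  # 1/20 = 0.0500
convert input.pgm -define jp2:rate="$rate" output.jp2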
I have 600 TIFF files in a directory, c:\temp.
The file names are like:
001_1.tif,
001_2.tif,
001_3.tif
002_1.tif,
002_2.tif,
002_3.tif
....
....
200_1.tif,
200_2.tif,
200_3.tif
The combined files should be placed in the same directory and should be named like:
1_merged.tif
2_merged.tif
.....
.....
200_merged.tif
I am looking for a single command line/batch file to do this with ImageMagick's convert/mogrify or any other command/tool.
Please note the overall time taken should not be more than 5 seconds.
Assuming you want to combine the 600 single-page TIFFs into one multi-page TIFF (per set of 3), it is as simple as:
convert 001_*.tif 1_merged.tif
convert 002_*.tif 2_merged.tif
[....]
convert 200_*.tif 200_merged.tif
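Rather than typing 200 such commands by hand, here is a sketch of a bash loop that generates them (assuming bash and GNU seq are available; the 10#$i arithmetic strips the leading zeros so the output names match the requested 1_merged.tif style):
for i in $(seq -w 1 200); do                    # seq -w zero-pads: 001 ... 200
  convert "${i}"_*.tif "$((10#$i))"_merged.tif  # e.g. 001_*.tif -> 1_merged.tif
done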
Please note that nobody will be able to guarantee any timing/performance benchmarks, at least not while we have no idea how your input TIFFs are constituted. (Are they 10000x10000 pixels or 20x20 pixels? Are they color or grayscale? etc.)
This is different from Mark's answer: he seems to have assumed you want to combine the input files into a single 1-page image, with the originals tiled across a larger page...
This should do it - I will leave you to do error checking in case you haven't actually got all the images you suggest!
@ECHO OFF
setlocal EnableDelayedExpansion
FOR /L %%A IN (1,1,200) DO (
    set "formattedValue=000000%%A"
    set "x=!formattedValue:~-3!"
    convert !x!_*.tif +append !x!_merged.tif
    echo !x!
)
So, if your images look like this:
001_1.tif
001_2.tif
001_3.tif
you will get the three images appended side-by-side in 001_merged.tif.
If you change +append to -append, the images in 001_merged.tif will be stacked vertically instead.
If you remove +append altogether, you will get 200 multi-page TIFFs with 3 pages each - same as Kurt's answer.
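To sanity-check a merge, identify prints one line per page/frame, so a 3-page result should print three lines (a quick optional check, not part of the script above):
identify 001_merged.tif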
I'm considering using ImageMagick to extract images of the individual pages of a PDF file. How can I capture the names of the files it generates? It seems that the -verbose option includes information on the files being generated, but is that a reliable way of gathering that? Any other alternatives?
Depending on what you want to do, I think you have at least 3 options...
Option 1
If you want to know the number of pages up front, you can get ImageMagick to tell you like this:
identify -format %n FreddyFrog.pdf
8
or if you want it in a variable,
pages=$(identify -format %n FreddyFrog.pdf)
echo $pages
8
Option 2
You can tell ImageMagick how to format the names of the output files, like this, where the %04d says to use 4 digits and front-pad with zeroes:
convert -density 72 FreddyFrog.pdf FreddyFrog-%04d.tif
Then you will automatically know the names of the output files.
ls FreddyFrog-*
FreddyFrog-0000.tif FreddyFrog-0003.tif FreddyFrog-0006.tif
FreddyFrog-0001.tif FreddyFrog-0004.tif FreddyFrog-0007.tif
FreddyFrog-0002.tif FreddyFrog-0005.tif
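Since you chose the output pattern yourself, you can also collect the generated names programmatically, e.g. into a bash array (a minimal sketch, assuming no other files match the FreddyFrog-*.tif pattern):
convert -density 72 FreddyFrog.pdf FreddyFrog-%04d.tif
files=( FreddyFrog-*.tif )      # the glob expands in sorted order
printf '%s\n' "${files[@]}"     # one generated file name per line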
Option 3
You can use the shell to draw a line in the sand before you convert the files and then find everything newer afterwards:
> before # or "touch before" if you prefer
convert FreddyFrog..... # strut your IM stuff
for f in *; do [[ "$f" -nt before ]] && echo $f; done # list files newer than line in sand
rm before # clean up
I would like to convert a PDF file to a black and white PDF file with ImageMagick. But I've got two problems:
I use this command:
convert -colorspace Gray D:\in.pdf D:\out.pdf
First, this command converts only the FIRST page... How do I convert all pages?
Second, the resulting resolution is terrible... but if I use the -density 300 option the file size more than doubles. I would like to keep the same DPI as the original, but how?
Thanks a lot
Assuming you have all the necessary command line tools installed you can do the following:
Split and join PDF using pdfseparate and pdfunite (Poppler tools).
Extract the original density using pdfinfo plus grep/egrep and, for instance, sed. This will not guarantee the same size of the PDF file, just the same DPI.
Putting it all together you can have a series of bash commands as follows:
pdfseparate in.pdf temp-%d.pdf; for i in $(seq $(ls -1 temp-*.pdf | wc -l)); do mv temp-$i.pdf temp-$(printf %03d $i).pdf; done
for f in temp-*.pdf; do convert -density $(pdfinfo $f | egrep -o 'Page size:[[:space:]]*[0-9]+(\.[0-9]+)?[[:space:]]*x[[:space:]]*[0-9]+(\.[0-9]+)?' | sed -e 's/^Page size:\s*//'| sed -e 's/\s*x\s*/x/') -colorspace Gray {,bw-}$f; done
pdfunite bw-temp-*.pdf out.pdf
rm {bw-,}temp-*.pdf
Note 1: there is a dirty workaround (the for/wc/seq/printf part) to get proper ordering for PDFs with 10-999 pages (I did not figure out how to make pdfseparate emit leading zeros).
Note 2: I guess ImageMagick treats a PDF as just another binary image format, so for mainly-text documents this will result in huge PDFs. Thus, this is a very bad method for converting text-based PDFs to B&W.