convert image.png -crop 256x256 +repage +adjoin out_%d.png
For a large image of approximately 20000x8000 pixels, this takes about 114 seconds.
(About 4000 256x256 PNGs are generated.)
Are there any imagemagick options to improve the speed of cropping?
One thing you can do is make sure to use a "Q8" build of ImageMagick rather than the default "Q16", if your original images have 8-bit samples. In a Q16 build each pixel occupies 8 bytes (16-bit R, G, B, A) even if it's just a black-and-white drawing; using Q8 cuts that in half. You can also adjust the caching behavior with the "-limit" option, so that more memory is used instead of disk.
The -limit option is described in the "options" documentation for ImageMagick.
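For example, assuming you have a few gigabytes of RAM to spare, raising the limits along these lines should keep the pixel cache in memory rather than on disk (the exact values are only illustrative):
convert image.png -limit memory 4GB -limit map 8GB -crop 256x256 +repage +adjoin out_%d.png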
You can improve the speed of PNG compression by using "-quality 1" which selects Huffman-only compression and the "sub" PNG filter. If you know that the image has fewer than 256 colors, "-quality 0" might be more effective (the time consumed will be about the same but the resulting file will be smaller).
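For instance, combining that with the command from the question (or with -quality 0 if the tiles really do have fewer than 256 colors):
convert image.png -crop 256x256 +repage +adjoin -quality 1 out_%d.png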
You could also consider libvips. It can do this kind of operation quickly and without using much memory.
I tried a benchmark. With a 20,000 x 8,000 pixel RGB PNG on this machine (four-core / eight-thread i7), I see:
$ vipsheader big.png
big.png: 20000x8000 uchar, 3 bands, srgb, pngload
$ /usr/bin/time -f %M:%e convert big.png -crop 256x256 +repage +adjoin out_%d.png
2582228:61.78
$ echo out* | wc
1 2528 31754
%M:%e means display peak memory and elapsed time, so that's 2.5 GB of RAM and 62s of real time to make 2528 PNG tiles.
libvips has a command called dzsave (DeepZoom save) which can write a set of tiles in parallel, and will stream the image rather than loading the whole thing into memory. With a couple of options you can make it produce the same output as your convert command.
I see:
$ /usr/bin/time -f %M:%e vips dzsave big.png x --depth one --tile-size 256 --overlap 0 --suffix .png
161952:9.20
$ echo x_files/0/*.png | wc
1 2528 49450
So 161 MB of RAM and 9.2s of real time to make the same 2528 files.
There's a chapter in the docs about dzsave explaining how to use it:
https://libvips.github.io/libvips/API/current/Making-image-pyramids.md.html
As Glenn says, most time is being spent in PNG encode and decode. libpng is very slow, and it uses deflate compression, which is even slower. If you switch to TIFF instead, it gets much quicker:
$ /usr/bin/time -f %M:%e vips dzsave big.tif x --depth one --tile-size 256 --overlap 0 --suffix .tif
163476:1.34
Now it's just 1.34s. JPG would be faster still (since there would be less IO), but I guess that wouldn't work well for floor plans.
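In case it's useful, the big.tif used above can be made from the original PNG with a one-liner; the filenames are just the ones from this example:
vips copy big.png big.tif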
How can I create a large blank image with ImageMagick without using gigabytes of disk space?
convert -debug All -size 100000x100000 canvas:white big_white.png
This apparently needs more than 8 GB of disk space, since it won't run with the ImageMagick disk policy set to 8 GB.
If your aim is to create a 100,000x100,000 blank PNG, I would recommend libvips to you. It is very frugal with resources compared to ImageMagick.
So, to create a 100000x100000 pixel black PNG in Terminal:
vips black black.png 100000 100000 --bands 3
takes 300s on my machine and uses 227MB of RAM. I tested with:
/usr/bin/time -l vips black black.png 100000 100000 --bands 3
If you insist on making that white, you can invert it:
vips invert black.png white.png
which takes 360s on my machine and uses 406MB of RAM.
By comparison, ImageMagick needs 3,300 seconds and ?? GB of RAM to do the same:
/usr/bin/time -l convert -size 100000x100000 canvas:white big_white.png
By the way, you should be more specific about whether you want a greyscale or a colour PNG, whether a palettised image is acceptable or not, and whether you want 8 bits/sample or 16 bits/sample.
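For example, if greyscale is acceptable, a single-band image should be smaller and faster still; this is only a sketch (vips black defaults to one band when --bands is omitted, and I have not timed this variant at that size):
vips black grey.png 100000 100000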
I have a collection of analog video recordings. About 10% of the files are entirely static. How could I programmatically look at all files and delete the files that contain mostly static?
The following utilities have command-line options to analyze video; however, none has built-in functionality to detect the absence of video content.
ffmpeg
ffprobe
HandBrake
I've tried using ffmpeg to export still images and then using ImageMagick to compare the difference between those images. Unfortunately, the difference between an image of static and an image of actual video content comes out nearly the same (9.2 vs 7.7).
ffmpeg -ss 00:30 -i PICT0050.AVI -vframes 1 -q:v 2 output1.jpg
magick compare -metric PSNR output1.jpg output2.jpg diff.jpg
9.2191
magick compare -metric PSNR output1.jpg output3.jpg diff.jpg
7.70127
Comparing sample 1 with sample 2 gives 9.2.
Comparing sample 1 with sample 3 gives 7.7.
(Sample frames 1 through 7 were attached as images.)
It looks like the static images are black and white without any colour, so I would look at the mean saturation: if it is low or zero, I would assume the frame is static. So, in bash, and assuming your images are named sampleXXX.jpg:
for f in sample*jpg; do
  convert "$f" -colorspace HSL -channel S -separate -format '%M avg sat=%[fx:int(mean*100)]\n' info:
done
Sample Output
sample1.jpg avg sat=0
sample2.jpg avg sat=0
sample3.jpg avg sat=21
sample4.jpg avg sat=0
sample5.jpg avg sat=39
sample6.jpg avg sat=31
which suggests that samples 1,2 and 4 are static.
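Putting the two pieces together (the ffmpeg frame grab from the question and the saturation test above), a minimal sketch of the whole job might look like this; the 5% threshold, the *.AVI glob, and the frame.jpg temporary name are assumptions you would want to adjust:
for v in *.AVI; do
  # grab one frame 30 seconds in, overwriting the previous temporary frame
  ffmpeg -loglevel error -y -ss 00:30 -i "$v" -vframes 1 -q:v 2 frame.jpg
  # mean saturation of that frame, as an integer percentage
  sat=$(convert frame.jpg -colorspace HSL -channel S -separate -format '%[fx:int(mean*100)]' info:)
  if [ "$sat" -lt 5 ]; then
    echo "probably static: $v"   # swap echo for rm "$v" once you trust the threshold
  fi
done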
Another way is to use ImageMagick to measure the amount of edges, in order to rank how noisy each image is. The noise images will have many more edges.
for img in *; do
  edginess=$(convert "$img" -edge 1 -scale 1x1! -format "%[fx:mean]" info:)
  echo "$img $edginess"
done
1bidV.jpg 0.0472165
3FJUJ.jpg 0.275502 <---- noise image
QpQvA.jpg 0.332296 <---- noise image
b4Gxy.jpg 0.0428422
gQcXP.jpg 0.0437578
vF1YZ.jpg 0.322911 <---- noise image
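If you want a yes/no answer rather than a ranking, you could fold a threshold into the fx expression and substitute this for the convert line in the loop above; the 0.2 cut-off is only a guess based on the numbers shown here (it prints 1 for noisy images and 0 otherwise):
convert "$img" -edge 1 -scale 1x1! -format "%[fx:mean>0.2]" info: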
I tried to 'montage' 15,000 small PNG images (about 10 KB each) into one big image, but midway through the process I get a warning that I have no disk space left. I have 30 GB free on my SSD.
The command I'm running:
montage -mode concatenate -background none -tile "101x" "${X}_*.png" out.png
Why does this happen and how much disk space would I need for such a task?
I had the same problem of running out of disk space, with 3 x 9 GB temp files produced during the process. I had 100 PNG files of 1800x1800 pixels to tile.
That happened without defining the width of the tile array. Fixing the width to 10 tiles solved the problem for me, restricting the image to 18,000 px wide and producing a 10 MB output file.
montage -background red -tile 10x -geometry +1+1 *png montage.png
I guess there is a bug somewhere with extremely wide output images, but I don't know exactly. Try a small width first, then go wider until it fails.
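If you hit the disk limit again, it may also help to check ImageMagick's resource limits and to point its temporary pixel-cache files at a drive with more free space; both of these are standard ImageMagick facilities, though the path here is only an example:
identify -list resource
MAGICK_TMPDIR=/path/with/space montage -mode concatenate -background none -tile "101x" "${X}_*.png" out.png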
I am trying to convert a BMP from 24 bits/pixel to 16 bit/pixel Mode in ImageMagick.
convert /tmp/a/new/37.bmp -depth 5 -define bmp:format=bmp2 /tmp/a/new/37_v2_16bit.bmp
convert /tmp/a/new/37.bmp -depth 5 -define bmp:format=bmp3 /tmp/a/new/37_v3_16bit.bmp
The result still has 8 bits per R, G, and B sample, according to the output of:
identify -verbose
What am I doing wrong? How do I get 16-bit color in a BMP?
Thank you!
P.S.
-depth value
depth of the image. This is the number of bits in a pixel. The only acceptable values are 8 or 16.
http://linux.math.tifr.res.in/manuals/html/convert.html
=(
Official Documentation says (no restrictions mentioned):
-depth value
depth of the image.
This the number of bits in a color sample within a pixel. Use this option to specify the depth of raw images whose depth is unknown such as GRAY, RGB, or CMYK, or to change the depth of any image after it has been read.
convert /tmp/a/new/37.bmp -colors 256 /tmp/a/new/37_256.bmp
makes the file smaller, but visually it is the same! What is going on?
convert /tmp/a/new/37.bmp -colors 65536 /tmp/a/new/37_64k.bmp
same size, same visual picture.
convert /tmp/a/new/37.bmp -dither None -colors 256 /tmp/a/new/37_256_nd.bmp
a bit smaller again, but it does not look like a 256-color image. Is this a bug? A 256-color 800x600 BMP should be about 800x600x1 bytes (without headers), roughly 480,000 bytes, but this file is about 650,000 bytes.
The documentation you quoted from linux.math... is pretty old (2001) and is incorrect about -depth. The "-depth 16" option does not mean 16-bit pixels (like R5G6B5 or R5G5B5A1); -depth 16 means 48-bit/pixel R16, G16, B16 or 64-bit/pixel R16, G16, B16, A16 pixels. The "Official documentation" that you quoted (2015) is correct.
ImageMagick doesn't support those 16-bit/pixel formats, so you'll need to store the image in an 8-bit/channel format and live with the larger file size.
It also appears that for images with 256 or fewer colors, ImageMagick will write a colormapped BMP with 1-, 4-, or 8-bit indices. You don't have to make any special request; it does that automatically. Use "-compress none" for uncompressed BMPs. The current ImageMagick (version 6.9.2-8) gives me the expected 480 KB file if I start with an 800x600 image with more than 256 colors and use
convert im.bmp -colors 256 -compress none out.bmp
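To check what you actually got, identify -verbose on the output (filtered with grep purely for brevity) will show the depth, type, and palette size:
identify -verbose out.bmp | grep -E 'Depth|Type|Colors'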
ImageMagick does support a 16-bit "bitfields" BMP format when reading, but I don't see any indication that it can write one, and I haven't tried either reading or writing such images.
It's not ImageMagick, but ffmpeg (more commonly associated with video) can create a 16-bit BMP image, if you are referring to the 565 format:
ffmpeg -i ffmpeg-logo.png -sws_flags neighbor -sws_dither none -pix_fmt rgb565 -y ffmpeg-logo-16bit-nodither.bmp
That intentionally disables dithering; if you do want dithering, just omit the sws options, e.g.
ffmpeg -i ffmpeg-logo.png -pix_fmt rgb565 -y ffmpeg-logo-16bit-dithered.bmp
If your images are inherently from an rgb565 source then it should not dither them, but I'd always be cautious and inspect a few closely before doing any batch conversions.
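For that kind of spot check, ffprobe can report the pixel format it detects; I would expect something like rgb565le for a 565 BMP, though I have not verified this on BMP output specifically:
ffprobe -v error -show_entries stream=pix_fmt -of default=noprint_wrappers=1 ffmpeg-logo-16bit-nodither.bmp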
Based on the discussion in the comments, it sounds like PNG would be a good format for preserving old screenshots verbatim, since it uses lossless compression, but maybe that's not applicable due to use with vintage software?
I am converting various PDFs uploaded by end users into images, using the following command:
-density 140 -limit memory 64MB -limit map 128MB [pdffile] page.png
Here is the result: on the right is the original PDF and on the left the output image. As you can see, the colors are quite noticeably different.
What could be causing this and how can I fix it?
Try the following command:
-density 140 -limit memory 64MB -limit map 128MB -colorspace RGB [pdffile] page.png
Edit: I later discovered that ImageMagick can do it fine; I just needed to use -colorspace sRGB.
My final command was:
convert -density 560 -limit memory 64MB -limit map 128MB \
-colorspace sRGB [pdffile] -scale 25% page.png
The oversampling and scaling down was to counter the poor anti-aliasing mentioned below.
Before I discovered that, here was my earlier solution...
In my case the colors produced by ImageMagick's convert were oversaturated, quite like those in the question. I was trying to convert this file using IM 6.7.7.10-6ubuntu3.
-resample 100 made no difference.
-colorspace RGB seemed to produce more accurate saturations, but the entire image was darker than it should have been.
Curiously, this suggestion to use Ghostscript instead of ImageMagick for the conversion produced colors very close to correct:
gs -q -sDEVICE=png16m -dSubsetFonts=true -dEmbedAllFonts=true \
-sOutputFile=page.png -r200 -dBATCH -dNOPAUSE [pdffile]
(The original suggestion passed the -dUseCIEColor option, but in my case this appeared to reduce the gamma: light pixels were fine, but the dark pixels were too dark, so I removed it.)
After that, the only thing that bothered me was that the anti-aliasing/edges were a little off in places (especially visible on curves passing 45 degrees). To improve that, I created the output at four times the required resolution, and then scaled down afterwards, rendering those errors almost imperceptible. Note that I had to use ImageMagick's -scale for this, and not -geometry or -resize, in order to avoid bicubic ringing effects.
Use the -resample option:
-density 140 -resample 100 -limit memory 64MB -limit map 128MB [pdffile] page.png
The open-source MuPDF utility mutool retains color and size using the default parameters below; note that you need to list the pages, separated by commas, at the end of the command.
mutool draw -o draw%d.png abook.pdf 1,2
Otherwise, if you are using Linux, try Windows for better RGB colorspace interpretation when using ImageMagick's convert.
The following images show how anti-aliasing improves if you sample at a higher resolution and then scale down.
Although 1120 was slightly better quality than 560, it took a long time to convert, so I would probably choose 560 for a good time:quality trade-off.
-colorspace sRGB -density 140
-colorspace sRGB -density 280 -scale 50%
-colorspace sRGB -density 420 -scale 33.3333%
-colorspace sRGB -density 560 -scale 25%
-colorspace sRGB -density 1120 -scale 12.5%
(It is easier to see the difference if you download the last two images and flip between them in your favourite image viewer, or scroll up this list of images instead of down; you should see them becoming progressively uglier.)