How to tile a single cloned image with GraphicsMagick - image-processing

I would like to use GraphicsMagick to create an image in which a single source image is tiled in width and height.
I tried this command, which works:
gm montage -geometry 2x2 mypic.png mypic.png mypic.png out.png
However, I would like to repeat this pattern image a great number of times (over 100x100).
Is it possible to do that without repeating mypic.png 10000 times?

I do not know GraphicsMagick. But I assume it is similar to ImageMagick, since it was a spin-off from ImageMagick. In ImageMagick, you can do that easily in two ways:
montage lena.jpg -duplicate 24 -tile 5x5 -geometry +0+0 result.jpg
convert -size 1280x1280 tile:lena.jpg result2.jpg
See the various ways to do tiling at https://imagemagick.org/Usage/canvas/#tile
I am not sure if GraphicsMagick has -duplicate, since that was introduced in ImageMagick 6.6.8-10 (2011-03-27), long after the two projects split.
ImageMagick has many more features than GraphicsMagick, but may be slightly slower. You may want to consider using ImageMagick rather than GraphicsMagick.
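If you would rather stay with GraphicsMagick, a hedged sketch assuming its TILE coder behaves like ImageMagick's tile: pseudo-format (gm convert -list format should confirm whether it is available in your build):
# 100x100 copies of a 128x128 source tile -> a 12800x12800 output; set -size to tile-width*columns x tile-height*rows
gm convert -size 12800x12800 tile:mypic.png out.png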

You don't say how large the images you are planning to make are, but if they are very large, you could run into a couple of issues.
First, JPEG is limited to 65535 x 65535 pixels, so you'll need something like BigTIFF or PNG if you need larger than that.
Secondly, you may need huge amounts of memory to compose large images. For example, on this laptop I can run:
$ time convert -size 50000x50000 tile:k2.jpg result.jpg
real 6m11.366s
user 1m19.671s
sys 0m20.836s
to make a 50k x 50k pixel JPG in about six minutes.
convert will assemble the whole image before it starts writing the result. If you don't have bucketloads of RAM, it'll use a huge temporary file instead. If I look in /tmp during processing, I see:
$ ls -l /tmp
total 1199684
-rw------- 1 john john 20000000000 Dec 1 15:56 magick-9559WtN2jwPlvrMm
A 20gb temporary file. That's 50000 * 50000 * 4 * 2 bytes, so it's making a 16-bit, four-channel temporary image. Because convert is spending all its time blocked in disc IO, it's rather slow.
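If you do have plenty of RAM, you can raise ImageMagick's resource limits so that the pixel cache stays in memory instead of spilling to /tmp; a hedged sketch (the right values depend entirely on your machine):
# allow up to 24GiB of heap before ImageMagick falls back to memory-mapped or on-disc caches
convert -limit memory 24GiB -limit map 32GiB -size 50000x50000 tile:k2.jpg result.jpg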
You could consider other systems -- libvips is a streaming image processing library, so it can execute commands like this without having to make complete intermediate images. I see:
$ time vips replicate k2.jpg result.jpg 35 25
real 0m13.592s
user 0m16.383s
sys 0m1.426s
$ vipsheader result.jpg
result.jpg: 50750x51200 uchar, 3 bands, srgb, jpegload
That's copying k2.jpg 35 times horizontally and 25 times vertically to make an image slightly larger than 50k x 50k. It does not make a temporary file, and finishes in about 15 seconds. It'll have no problems going to very, very large output images -- I regularly process images of 300,000 x 300,000 pixels (though not in jpg format, obviously).
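If you also need to go past JPEG's size limit, a hedged sketch writing the same replication to a tiled BigTIFF instead (the bracketed save options are libvips tiffsave parameters; exact support depends on your libvips version):
vips replicate k2.jpg result.tif[tile,compression=jpeg,bigtiff] 35 25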

Related

Imagemagick resized pictures are different using a single command or two commands

I can't understand why those two scripts seem to produce a different result, given that the second one is like the first one but separated into two commands.
First script:
convert lena_std.tif -compress None -resize 160x160 -compress None -resize 32x32 test1.bmp
Second script:
convert lena_std.tif -compress None -resize 160x160 test2.bmp
convert test2.bmp -compress None -resize 32x32 test3.bmp
I use the following command to check the difference between the results:
convert test1.bmp test3.bmp -metric AE -compare diff.bmp
I use ImageMagick on Ubuntu 22.04. My convert -version indicates: Version: ImageMagick 6.9.11-60 Q16 x86_64 2021-01-25.
Because when you scale, you interpolate pixels.
Roughly, the code considers the pixel at (x,y) in the result and computes where it comes from in the source. This is usually not an exact pixel: more like an area when you scale down, or part of a pixel when you scale up. So to make up the color of the pixel at (x,y), some math is applied: if you scale down, some averaging of the source area; if you scale up, something that depends on how close the source is to the edge of the pixel and how different the colors of neighboring pixels are.
This math can be very simple (the color of the closest pixel), simple (a linear average), a bit more complex (bicubic interpolation) or plain magic (sinc/Lanczos), the more complex forms giving better results.
So, in one case you obtain the result directly from the source, and in the other you obtain the final result from an approximation of what the image would look like at the intermediate size.
Another way to see it is that each interpolation has a spatial frequency response (like a filter in acoustics), and in one case you apply a single filter and in the other one you compose two filters.
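To see the effect numerically, a hedged sketch comparing a one-step resize against the two-step pipeline from the question (the output filenames here are placeholders; a non-zero error confirms the two filter chains differ):
# one step: straight from the source to 32x32
convert lena_std.tif -resize 32x32 direct.bmp
# two steps: via the 160x160 intermediate
convert lena_std.tif -resize 160x160 intermediate.bmp
convert intermediate.bmp -resize 32x32 twostep.bmp
# print the RMSE between the two results; null: discards the difference image
compare -metric RMSE direct.bmp twostep.bmp null: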

Eliminate hairlines from a vector graphic by converting to an oversampled bitmap and then downscaling - how with ImageMagick?

I used Apple Numbers (a Spreadsheet app with styling options) to create a UX flowchart of various user interfaces of an app.
Apple Numbers has a PDF export option.
The problem is that even though some border lines in the table have been set to "none", the exported PDF nevertheless shows small visible hairlines.
I want to eliminate the hairlines by image processing before creating a flyover video over the graphics.
My basic idea is:
Convert vector to bitmap with very high resolution (oversampling, e.g. to 600 or 1200 DPI)
Then downsample to the target resolution (e.g. 150 DPI) with an algorithm which eliminates the hairlines (letting them disappear in the dominance of neighboring pixels) while the result overall remains as crisp and sharp as possible.
Step 1 I have already figured out; there are two possibilities:
a. Apple Preview has a PDF to PNG export option where you can specify the DPI.
b. ImageMagick convert -density 600 source.pdf export.png
But for step 2 there are so many possibilities:
-resample <DPI>, or -filter <FilterName> -resize 25%, or -scale 12.5% (when going from 1200 to 150)
Please tell me which of the methods (resample, resize, scale) and which of the interpolation algorithms or filters I should use to achieve my goal of eliminating the hairlines by dissolving them into their neighboring pixels, while the rest (normal 1px lines, rendered text and symbols, etc.) remains as crisp as possible.
ImageMagick PDF to PNG conversion with different DPI settings:
convert -density XXX flowchart.pdf flowchart-ImageMagick-XXX.png
flowchart-ImageMagick-150.png ; flowchart-ImageMagick-300.png ; flowchart-ImageMagick-600.png
Apple Preview PDF to PNG export with different DPI settings:
flowchart-ApplePreview-150.png ; flowchart-ApplePreview-300.png ; flowchart-ApplePreview-600.png
Different downscaling approaches
a) convert -median 3x3 -resize 50% flowchart-ApplePreview-300.png flowchart-150-from-ApplePreview-300-median-3x3.png (thanks to the hint from @ChristophRackwitz)
b) convert -filter Box -resize 25% flowchart-ImageMagick-600.png flowchart-150-from-ImageMagick-600-resize-box.png
Comparison
flowchart-ApplePreview-150.png
flowchart-150-from-ApplePreview-300-median-3x3.png
✅ Hairlines gone
❌ But the font is not as crisp anymore; the median filter destroyed that.
flowchart-150-from-ImageMagick-600-resize-box.png
🆗 Overall still quite crisp
🆗 Hairlines very, very faint; only faintly visible even when zoomed in
Both variants are somehow good enough for my Ken Burns / dolly cam ride over them. Still, I wish there were an algorithm that keeps crispness but still eliminates 1px lines in very high DPI bitmaps. But I guess that exists only in my fantasy.
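One direction I have not benchmarked: a morphological close (dilate, then erode) removes dark features narrower than its kernel before the downscale, which is roughly the "eliminate the thinnest lines, keep the rest" behaviour described above. A hedged sketch, assuming the hairlines are darker than their surroundings and only a pixel or two wide at 600 DPI; it will also eat wanted dark detail that is equally thin, such as very fine text strokes (the output filename is a placeholder):
# close thin dark lines with a 3x3 kernel, then box-downscale 600 DPI -> 150 DPI
convert flowchart-ImageMagick-600.png -morphology Close Square:1 -filter Box -resize 25% flowchart-150-from-ImageMagick-600-close-box.png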
Processing Durations
MacBook Pro 15'' (Mid 2014, 2,5 GHz Quad-Core Intel Core i7)
ImageMagick PDF to PNG
PDF source Ca. 84x60cm (33x23'')
300dpi -> 27s
600dpi -> 1m58s
1200dpi -> 37m34s
ImageMagick Downscaling
time convert -filter Box -resize 25% 1#600.png 1#150-from-600.png
# PNG # 39700 × 28066: 135.57s user 396.99s system 109% cpu 8:08.08 total
time convert -median 3x3 -resize 50% 2#300.png 2#150-from-300-median3x3.png
# PNG # 19850 × 14033: 311.48s user 9.42s system 536% cpu 59.76 total
time convert -median 3x3 -resize 50% 3#300.png 3#150-from-300-median3x3.png
# PNG # 19850 × 14033: 237.13s user 8.33s system 544% cpu 45.05 total

libvips rotate is throwing "No space left on device"

I am using libvips to rotate images. I am using a VM that has 3002 MB of RAM and 512 MB of temp storage (an AWS Lambda machine).
The command I am running to rotate images is:
vips rot original.jpg rotated.jpg d90
It throws the following error:
Exit Code: 1, Error Output: ERROR: wbuffer_write: write failed unix error: No space left on device
The JPEG image is around 10 MB.
Here's how libvips will rotate your jpg image.
A 90-degree rotate requires random access to the image pixels, but JPEG images can only be read strictly top-to-bottom, so as a first step libvips has to unpack the JPG to a random-access format. It uses the vips (.v) format for this, which is pretty much a C array with a small header.
For images under 100mb (you can change this value, see below) decompressed, it will unpack to a memory buffer. For images over 100mb decompressed, it will unpack to a temporary file in /tmp (you can change this, see below).
Next, it does the rotate to the output image. It can do this as a single streaming operation, so it will typically need enough memory for 256 scanlines on the input image, and 256 on the output, so around another 30mb or so in this case, plus some more working area for each thread.
In your specific case, the input image is being decompressed to a temporary file of 30,000 x 10,000 x 3 bytes, or about 900mb. This is way over the 512mb you have in /tmp, so the operation fails.
The simplest solution is to force the loader to load via a memory buffer. If I try:
$ vipsheader x.jpg
x.jpg: 30000x10000 uchar, 3 bands, srgb, jpegload
$ time vips rot x.jpg y.jpg d90 --vips-progress --vips-leak
vips temp-3: 10000 x 30000 pixels, 8 threads, 128 x 128 tiles, 256 lines in buffer
vips x.jpg: 30000 x 10000 pixels, 8 threads, 30000 x 16 tiles, 256 lines in buffer
vips x.jpg: done in 0.972s
vips temp-3: done in 4.52s
memory: high-water mark 150.43 MB
real 0m4.647s
user 0m5.078s
sys 0m8.418s
The leak and progress flags make vips report some stats. You can see the initial decompress to the temporary file is taking 0.97s, the rotate to the output is 4.5s, it needs 150mb of pixel buffers and 900mb of disc.
If I raise the threshold, I see:
$ time VIPS_DISC_THRESHOLD=1gb vips rot x.jpg y.jpg d90 --vips-progress --vips-leak
vips temp-3: 10000 x 30000 pixels, 8 threads, 128 x 128 tiles, 256 lines in buffer
vips x.jpg: 30000 x 10000 pixels, 8 threads, 30000 x 16 tiles, 256 lines in buffer
vips x.jpg: done in 0.87s
vips temp-3: done in 1.98s
memory: high-water mark 964.79 MB
real 0m2.039s
user 0m3.842s
sys 0m0.443s
Now the second rotate phase is only 2s since it's just reading memory, but memory use has gone up to around 1gb.
This system is introduced in the libvips docs here:
http://jcupitt.github.io/libvips/API/current/How-it-opens-files.md.html
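Applied to the question's Lambda setup, a hedged sketch: with roughly 3 GB of RAM available and a decompressed image of about 900 MB, raising the threshold keeps the temporary image out of the 512 MB /tmp entirely (on Lambda, VIPS_DISC_THRESHOLD can also be set as an environment variable on the function):
VIPS_DISC_THRESHOLD=1gb vips rot original.jpg rotated.jpg d90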

Image conversion to PNG vs JPG using ImageMagick

I'm trying to convert some images into either PNG or JPG and trying to find out which format will result in the smaller file size. In most cases PNG gives me the best compression, but for some odd images I get better compression out of JPG. I have two questions:
What characteristics of an image will cause it to give better results in one format or the other?
Is there a way to pre-determine which format will give me better results without converting them first?
This photo gives a better compression result using PNG
This photo gives substantially better compression using JPG
I have absolutely no time to develop this line of thought further but the image entropy is probably a good discriminant for selecting JPEG or PNG - see my earlier comment on your question.
If you use ImageMagick, you can calculate the Entropy easily like this:
identify -verbose -features 1 image.jpg | grep -i -A1 entropy
Your top image gives output like this:
identify -verbose -features 1 t.jpg | grep -i -A1 entropy
Sum Entropy:
0.703064, 0.723437, 0.733147, 0.733015, 0.723166
Entropy:
1.01034, 1.12974, 1.14983, 1.15122, 1.11028
Difference Entropy:
0.433414, 0.647495, 0.665738, 0.671079, 0.604431
and your bottom image gives output like this:
identify -verbose -features 1 b.jpg | grep -i -A1 entropy
Sum Entropy:
1.60934, 1.62512, 1.65567, 1.65315, 1.63582
Entropy:
2.19687, 2.33206, 2.44111, 2.43816, 2.35205
Difference Entropy:
0.737134, 0.879926, 0.980157, 0.979763, 0.894245
I suspect images with a higher entropy will compress better as JPEGs and those with a lower entropy will fare better as PNGs - but I have to dash now :-)
There are 5 values for each type of entropy - horizontal, vertical, left diagonal, right diagonal and overall. I think the last value is the only one you need to consider.
Updated
Ok, I have had a little more time to spend on this now. I do not have a pile of sample images to test my theory on, so I did it a different way. I made a little script to calculate the following for a given input file:
ratio of JPEG size to PNG size
entropy
Here it is:
#!/bin/bash
f="$1"
# Sizes of the image encoded as JPEG and as PNG, in bytes
jsize=$(convert "$f" -strip JPG:- | wc -c)
psize=$(convert "$f" PNG:- | wc -c)
# JPEG size as a percentage of the PNG size
jpratio=$(echo "$jsize*100/$psize" | bc)
# Make greyscale version for entropy calculation
rm temp*.jpg 2> /dev/null
convert "$f" -colorspace gray temp.jpg
# Grab the overall (5th) value on the line after "Entropy:"
entropy=$(identify -verbose -features 1 temp.jpg | grep -A1 " Entropy:" | tail -n 1 | awk -F, '{print $5}')
echo "$jpratio:$entropy"
So, for a given image, you would do this:
./go image.jpg
8:3.3 # JPEG is 8% of the PNG size and the entropy is 3.3
Then I took your image and added different amounts of noise to it to increase its entropy, like this
for i in {1..99}; do convert bottom.jpg +noise Gaussian -evaluate add ${i}% xx${i}.jpg;done
that gives me files called xx1.jpg with 1% noise, xx2.jpg with 2% noise and so on, up to xx99.jpg with 99% noise.
Then I ran each of the files through the first script, like this:
for f in xx*.jpg; do ./go $f;done > data.txt
to give me data.txt.
Then I created the following gnuplot command file plot.cmd:
set title 'Plotted with Gnuplot'
set ylabel 'Entropy'
set xlabel 'JPEG size/PNG Size'
set grid
set terminal postscript color landscape dashed enhanced 'Times-Roman'
set output 'file.eps'
plot 'data.txt'
and ran it with
gnuplot plot.cmd
And I got a plot which shows that as ImageMagick's entropy number increases, the ratio of JPEG size to PNG size improves in favour of JPEG... not very scientific, but something at least. Maybe you could run the script against the type of images you normally use and see what you get.
That depends very much on your use case.
1) JPG is usually not as good for text because the artifacts tend to "smear" or blur the image. For photos, this is usually not a problem; also for high-resolution textual images the problem will be much less pronounced (because the blur radius is smaller relative to image size).
Note that PNG is usually used to losslessly compress images, while JPG is inherently lossy. With a higher compression ratio, JPG files will be much smaller, but the artifacts will be more pronounced. Note also that there are programs that are able to do lossy compression in PNG (which could well beat JPG compression in some cases).
In short: PNGs will work well with computer-generated images, because those tend to be quite regular and thus easy to deflate. JPGs will fare better with photos, which tend to have more jitter that is hard to compress. When you move away from ImageMagick and libpng, there are other possibilities.
2) While it would be possible to train a neural network to decide whether JPG or PNG would compress better, it would probably take longer and be less exact than just trying both and looking at the output. Note also that there are some approximative measurements that can tell you if an image is too blurry (which may help you setting the correct compression level if you want to tune further).
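In that spirit, a hedged sketch that simply encodes both ways and reports which file comes out smaller (the filenames and the JPEG quality are placeholders to adjust):
#!/bin/bash
f="$1"
# Encode the input both ways
convert "$f" -strip -quality 85 candidate.jpg
convert "$f" candidate.png
# Compare the resulting sizes in bytes
jsize=$(wc -c < candidate.jpg)
psize=$(wc -c < candidate.png)
if [ "$jsize" -lt "$psize" ]; then
    echo "JPEG is smaller: $jsize vs $psize bytes"
else
    echo "PNG is smaller or equal: $psize vs $jsize bytes"
fi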
One big difference is that PNG allows for alpha transparency, so you can see parts of what is behind the image. JPEG will block out a rectangle.

ImageMagick JPEG quality/size

I'm using the following command to generate a thumbnail:
mogrify -resize 128x128 -quality 75 "some thumb file"
For a sample file:
If I don't specify -quality 75, I get a 40 KB file
If I specify -quality 75, I get a 36 KB file and it looks awful
The same file resized in Photoshop is < 10 KB - and it looks awesome!
Is it possible to use ImageMagick to resize a thumbnail to such a low file size so that the resulting image doesn't suck?
Maybe I'm missing some other setting here?
What you've got without -quality was probably quality 92, or the quality of the input image (which, if large, could look OK despite low quality setting).
https://imagemagick.org/script/command-line-options.php#quality
JPEG quality depends mostly on two things:
the quantization matrix used (or separate QMs: one for Y, another for Cb and Cr)
whether there is chroma subsampling, i.e. whether the image uses one 8x8 block (coding unit) to store the color information for a 16x16 block of pixels (in the 4:2:0 case)
Your preferred quality, 90, is the lowest for which there is no subsampling. It may be that for small images, like thumbnails, high-res color information is important.
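If that is the issue, the two knobs can be set independently: a hedged sketch that keeps -quality 75 but switches chroma subsampling off and strips metadata (whether this looks or weighs better depends on the image, so compare the results):
mogrify -resize 128x128 -strip -quality 75 -sampling-factor 1x1 "some thumb file"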
Final note - Photoshop has its own choice of quantization matrices for its "quality" settings. These are different from mogrify's and libjpeg's in general.
You should find the correct quality level in mogrify and not rely on the number from Photoshop.
If you want to emulate the Photoshop compression, you can get its QMs:
$ djpeg -v -v saved_by_photoshop.jpg >/dev/null
And then compress some image using these matrices. cjpeg can do it using -qtables file_with_QMs.txt.
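A hedged sketch of that round trip; djpeg's trace output goes to stderr, and the dumped tables will almost certainly need hand-editing into the plain-text table layout that cjpeg's -qtables option expects (all filenames here are placeholders):
# dump Photoshop's quantization tables from the trace output
djpeg -v -v saved_by_photoshop.jpg >/dev/null 2>ps_trace.txt
# trim ps_trace.txt down to just the tables, save as ps_tables.txt, then re-encode:
djpeg my_source.jpg > my_source.ppm
cjpeg -qtables ps_tables.txt -outfile out.jpg my_source.ppm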
