How to stitch back a cropped image with ImageMagick?

I have a very big image; let's call it orig-image.tiff.
I want to cut it into smaller pieces, apply some processing to each piece, and then stitch the newly created little images back together.
I cut it into pieces with this command:
convert orig-image.tiff -crop 400x400 crop/parts-%04d.tiff
Then I'll generate many images by applying a treatment to each parts-XXXX.tiff image, ending up with images from parts-0000.png to parts-2771.png.
Now I want to stitch the pieces back into one big image. Can ImageMagick do that?

If you were using PNG format, the tiles would "remember" their original position, as @Bonzo suggests, and you could take them apart and reassemble them like this:
# Make 256x256 black-red gradient and chop into 1024 tiles of 8x8 as PNGs
convert -size 256x256 gradient:red-black -crop 8x8 tile-%04d.png
and reassemble:
convert tile*png -layers merge BigBoy.png
That is because the tiles "remember" their original position on the canvas - e.g. +248+248 below:
identify tile-1023.png
tile-1023.png PNG 8x8 256x256+248+248 16-bit sRGB 319B 0.000u 0:00.000
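So for the original question, a minimal round trip might look like this (a sketch, assuming your per-tile treatment preserves the PNG offset chunk; the processed/ directory and the no-op copy are placeholders for your real processing):
# Crop to PNGs so each tile keeps its +X+Y offset on the canvas
convert orig-image.tiff -crop 400x400 crop/parts-%04d.png
mkdir -p processed
for f in crop/parts-*.png; do
    convert "$f" "processed/$(basename "$f")"   # replace with your real treatment
done
# Merge the processed tiles back onto a single canvas
convert processed/parts-*.png -layers merge restored.png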
With TIFs, you could do:
# Make 256x256 black-red gradient and chop into 1024 tiles of 8x8 as TIFs
convert -size 256x256 gradient:red-black -crop 8x8 tile-%04d.tif
and reassemble with the following, but sadly you need to know the layout of the original image:
montage -geometry +0+0 -tile 32x32 tile*tif BigBoy.tif
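If the tile size is fixed, you can derive that layout from the original image instead of working it out by hand; a small sketch, assuming 400x400 tiles as in the question:
# Compute the tile grid from the original dimensions, rounding up
read w h < <(identify -format "%w %h" orig-image.tiff)
tiles_x=$(( (w + 399) / 400 ))
tiles_y=$(( (h + 399) / 400 ))
montage -geometry +0+0 -tile "${tiles_x}x${tiles_y}" tile*tif BigBoy.tif
Note that if the original dimensions are not exact multiples of the tile size, the edge tiles come out smaller and montage will pad them, so the offset-based merge above is safer in that case.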
Regarding Glenn's comment below, here is the output of pngcheck showing the "remembered" offsets:
pngcheck tile-1023*png
Output
OK: tile-1023.png (8x8, 48-bit RGB, non-interlaced, 16.9%).
iMac:~/tmp: pngcheck -v tile-1023*png
File: tile-1023.png (319 bytes)
chunk IHDR at offset 0x0000c, length 13
8 x 8 image, 48-bit RGB, non-interlaced
chunk gAMA at offset 0x00025, length 4: 0.45455
chunk cHRM at offset 0x00035, length 32
White x = 0.3127 y = 0.329, Red x = 0.64 y = 0.33
Green x = 0.3 y = 0.6, Blue x = 0.15 y = 0.06
chunk bKGD at offset 0x00061, length 6
red = 0xffff, green = 0xffff, blue = 0xffff
chunk oFFs at offset 0x00073, length 9: 248x248 pixels offset
chunk tIME at offset 0x00088, length 7: 13 Dec 2016 15:31:10 UTC
chunk vpAg at offset 0x0009b, length 9
unknown private, ancillary, safe-to-copy chunk
chunk IDAT at offset 0x000b0, length 25
zlib: deflated, 512-byte window, maximum compression
chunk tEXt at offset 0x000d5, length 37, keyword: date:create
chunk tEXt at offset 0x00106, length 37, keyword: date:modify
chunk IEND at offset 0x00137, length 0
No errors detected in tile-1023.png (11 chunks, 16.9% compression).

Related

Speed up ImageMagick file conversion to monochrome image

$ file in.jp2
in.jp2: JPEG image data, JFIF standard 1.01, aspect ratio, density 1x1, segment length 16, baseline, precision 8, 3560x4810, components 3
I use the following command to convert a .jp2 file to a monochrome PDF file. But it takes 20 seconds to convert a file.
convert in.jp2 -threshold 75% -type bilevel -monochrome -compress Zip out.pdf
Is there a way to speed up the conversion without losing any resolution or increasing the output file size?

How can I display a Digital Elevation Model (DEM) (.raw) using Python?

I want to display a DEM file (.raw) using Python, but there may be something wrong with the result.
Below is my code:
import numpy as np
import cv2

img1 = open('DEM.raw', 'rb')
rows = 4096
cols = 4096
f1 = np.fromfile(img1, dtype = np.uint8, count = rows * cols)
image1 = f1.reshape((rows, cols)) #notice row, column format
img1.close()
image1 = cv2.resize(image1, (image1.shape[1]//4, image1.shape[0]//4))
cv2.imshow('', image1)
cv2.waitKey(0)
cv2.destroyAllWindows()
And I got this result:
display result
The original DEM file is placed here: DEM.raw
There's nothing wrong with your code; that's what's in your file. You can convert it to a JPEG or PNG with ImageMagick at the command line like this:
magick -size 4096x4096 -depth 8 GRAY:DEM.raw result.jpg
And you'll get pretty much the same:
The problem is elsewhere.
Taking the hint from Fred (@fmw42) and playing around, oops I mean "experimenting carefully and scientifically", I can get a more likely-looking result if I treat your image as 4096x2048 pixels with 16 bpp and MSB-first endianness:
magick -size 4096x2048 -depth 16 -endian MSB gray:DEM.raw -normalize result.jpg
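Incidentally, the file size alone cannot settle this: 4096 x 4096 x 1 byte and 4096 x 2048 x 2 bytes are both exactly 16,777,216 bytes, which is why the two readings are equally plausible. A quick sanity check (a sketch; stat flags differ between GNU and BSD/macOS):
expected=$((4096 * 2048 * 2))   # same total as 4096 * 4096 * 1
actual=$(stat -c%s DEM.raw)     # GNU stat; use `stat -f%z` on macOS
echo "expected $expected bytes, file has $actual bytes"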

Tools run from unix command line to decrease bit depth of grayscale images in PDFs

My workplace scanner creates exorbitantly large PDFs from low-resolution grayscale scans of hand-written notes. I currently use Acrobat Pro to extract PNG images from the PDF, then use Matlab to reduce the bit depth, then use Acrobat Pro to combine them back into PDFs. I can reduce the PDF file size by one to two orders of magnitude.
But is it ever a pain.
I'm trying to write scripts to do this, composed of Cygwin command-line tools. Here is one PDF that was shrunk using my byzantine scheme:
$ pdfimages -list bothPNGs.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 550 558 gray 1 2 image no 25 0 72 72 6455B 8.4%
2 1 image 523 519 gray 1 2 image no 3 0 72 72 5968B 8.8%
I had used Matlab to reduce the bit depth to 2. To test the use of Unix tools, I re-extract the PNGs using pdfimages, then use convert to recombine them into a PDF, specifying a bit depth in doing so:
$ convert -depth 2 sparseDataCube.png asnFEsInTstep.png bothPNGs_convert.pdf
# Results are the same regardless of the presence/absence of `-depth 2`
$ pdfimages -list bothPNGs_convert.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 550 558 gray 1 8 image no 8 0 72 72 6633B 2.2%
2 1 image 523 519 gray 1 8 image no 22 0 72 72 6433B 2.4%
Unfortunately, the bit depth is now 8. My bit depth argument doesn't actually seem to have any effect.
What would be the recommended way to reduce the bit depth of PNGs and recombine them into a PDF? Whatever tool is used, I want to avoid antialiasing filtering. In non-photographic images, that just causes speckle around the edges of text and lines.
Whatever solution is suggested, it will be hit-or-miss whether I have the right Cygwin packages. I work in a very controlled environment, where upgrading is not easy.
This looks like another similar sounding question, but I really don't care about any alpha layer.
Here are two image files, with bit depths of 2, that I generated for testing:
Here are the tests, based on my initial (limited) knowledge, as well as on respondent Mark's suggestions:
$ convert -depth 2 test1.png test2.png test_convert.pdf
$ pdfimages -list test_convert.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 100 100 gray 1 8 image no 8 0 72 72 3204B 32%
2 1 image 100 100 gray 1 8 image no 22 0 72 72 3221B 32%
$ convert -depth 2 test1.png test2.png -define png:color-type=0 -define png:bit-depth=2 test_convert.pdf
$ pdfimages -list test_convert.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 100 100 gray 1 8 image no 8 0 72 72 3204B 32%
2 1 image 100 100 gray 1 8 image no 22 0 72 72 3221B 32%
The bit depths of images within the created PDF file are 8 (rather than 2, as desired and specified).
Thanks to Mark Setchell's and Cris Luengo's comments and answers, I've come up with some tests that may reveal what is going on. Here are the 2-bit and 8-bit random grayscale test PNGs, created using Matlab:
im = uint8( floor( 256*rand(100,100) ) );
imwrite(im,'rnd_b8.png','BitDepth',8);
imwrite(im,'rnd_b2.png','BitDepth',2);
The 2-bit PNGs have much less entropy than the 8-bit PNGs.
The following shell commands create PDFs with and without compression:
convert rnd_b2.png rnd_b2.pdf
convert rnd_b2.png -depth 2 rnd_b2_d2.pdf
convert rnd_b2.png -compress LZW rnd_b2_lzw.pdf
convert rnd_b8.png rnd_b8.pdf
convert rnd_b8.png -depth 2 rnd_b8_d2.pdf
convert rnd_b8.png -compress LZW rnd_b8_lzw.pdf
Now check file sizes, bit depth, and compression (I use bash):
$ ls -l *.pdf
8096 rnd_b2.pdf
8099 rnd_b2_d2.pdf
7908 rnd_b2_lzw.pdf
22523 rnd_b8.pdf
8733 rnd_b8_d2.pdf
29697 rnd_b8_lzw.pdf
$ pdfimages -list rnd_b2.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 100 100 gray 1 8 image no 8 0 72 72 3178B 32%
$ pdfimages -list rnd_b2_d2.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 100 100 gray 1 8 image no 8 0 72 72 3178B 32%
$ pdfimages -list rnd_b2_lzw.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 100 100 gray 1 8 image no 8 0 72 72 3084B 31%
$ pdfimages -list rnd_b8.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 100 100 gray 1 8 image no 8 0 72 72 9.78K 100%
$ pdfimages -list rnd_b8_d2.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 100 100 gray 1 8 image no 8 0 72 72 3116B 31%
$ pdfimages -list rnd_b8_lzw.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 100 100 gray 1 8 image no 8 0 72 72 13.3K 136%
Essentially, convert does not create PNGs of user-specified bit depths to put into PDFs; it converts 2-bit PNGs to 8-bit. This means that PDFs created from 2-bit PNGs have much less entropy than the maximum for 8-bit images. I confirmed this by extracting the PNGs and checking that there are only 4 grayscale levels in the data.
The fact that rnd_b8_d2.pdf is comparable in size to the PDFs created from 2-bit PNGs reveals how convert handles a -depth 2 that precedes the output file specification: it seems it does reduce the dynamic range to 2 bits at some point, but expands it back out to 8 bits for incorporation into the PDF.
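One way to check this yourself (a sketch; pdfimages writes PNM files by default, and the exact output name may vary):
# Pull the embedded image back out of the PDF...
pdfimages rnd_b2.pdf extracted
# ...and count distinct gray levels; grayscale images come out as .pgm
identify -format "%k distinct levels\n" extracted-000.pgm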
Next, compare file sizes with their compression ratios, taking the uncompressed 8-bit random grayscale as the baseline, i.e., rnd_b8.pdf:
rnd_b2.pdf 8096 / 22523 = 36%
rnd_b2_d2.pdf 8099 / 22523 = 36%
rnd_b2_lzw.pdf 7908 / 22523 = 35%
rnd_b8.pdf 22523 / 22523 = 100%
rnd_b8_d2.pdf 8733 / 22523 = 39%
rnd_b8_lzw.pdf 29697 / 22523 = 131%
It seems that the ratio from pdfimages is the amount of space taken by the image compared to a maximum entropy 8-bit image.
It also seems that compression is done by convert regardless of whether it is specified in the switches. This follows from the fact that the rnd_b2*.pdf files are all of similar size and ratio.
I assume that the 31% increase of rnd_b8_lzw.pdf is overhead due to the attempt at compression when no compression is possible. Does this seem reasonable to "you" image-processing folks? (I am not an image-processing person.)
Based on the assumption that compression happens automatically, I don't need Matlab to reduce the dynamic range. The -depth 2 specification to convert will decrease the dynamic range, and even though the image sits in the PDF as 8 bits, it is automatically compressed, which is almost as efficient as using 2-bit images.
There is only one big concern. According to the above logic, the following files should all look comparable:
rnd_b2.pdf
rnd_b2_d2.pdf
rnd_b2_lzw.pdf
rnd_b8_d2.pdf
The first 3 do, but the last does not. It is the one that relies on the -depth 2 specification to convert to reduce the dynamic range. Matlab shows that only 4 grayscale levels from 0 to 255 are used, but the middle two levels occur twice as often as the edge levels. Using -depth 4, I found that the minimum and maximum grayscale levels always occur half as often as the other grayscale levels. The reason for this became apparent when I plotted the mapping of gray levels in rnd_b8.pdf against the 4-bit-depth counterpart:
The "bins" of 8-bit gray level values that map to the minimum and maximum 4-bit gray levels are half as wide as those for the other 4-bit gray levels. It might be because the bins are symmetrically defined such that (for example) the values that map to zero include both negative and positive values. This wastes half the bin, because it lies outside the range of the input data.
The take-away is that one can use the -depth specification to convert, but for small bit depths, it is not ideal because it doesn't maximize the information in the bits.
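You can see those half-width end bins directly with ImageMagick's histogram output; a small sketch using a full 256-level gradient:
# Reduce an 8-bit gradient to 16 levels and show how many input values
# landed on each output level
convert -size 1x256 gradient: -depth 4 -format %c histogram:info:-
The first and last levels collect roughly half as many input values as the interior ones, matching the mapping plotted above.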
AFTERNOTE: An interesting beneficial effect that I observed, which is obvious in hindsight, especially in light of Cris Luengo's comment: if the images in the PDF do indeed have limited bit depth, e.g., 4 bits, then you can extract them with pdfimages and re-package them in a PDF without worrying too much about specifying the right -depth. In the re-packaging into PDF, I noticed that the results of -depth 5 and -depth 6 did not increase the PDF file size much over -depth 4, because the default compression squeezes out any space wasted in the 8-bit image within the PDF. Subjectively, the quality remains the same too. If I specify -depth 3 or below, however, the PDF file size decreases more noticeably, and the quality declines noticeably too.
Further helpful observations: After the better part of a year, I had a need to package scanned files into a PDF file again, but this time I used a scanner that created a PNG file for each page. I had no desire to re-spend the time taken above to reverse-engineer the behaviour of the ImageMagick tools. Not being bogged down in the weeds, I was able to notice three helpful code idioms, at least for me, and I hope they help someone else. For context, assume that you want to downgrade the grayscale depth to 2 bits, which allows for 4 levels. I found this to be plenty for scanned text documents, with negligible loss in readability.
First, if you scanned in (say) 200 dpi grayscale and you want to downgrade to 2 bits, you need to specify the -density prior to the first (input) file: convert -density 200x200 -depth 2 input.png output.pdf. Not doing so yields extremely coarse resolution, even though pdfimages -list shows 200x200.
Second, you want to use one convert statement to convert a collection of PNG files to a single depth-limited PDF file. I found this out because I initially converted multiple PNG files into one PDF file, then converted that to a depth of 2. The file size shrinks, but not nearly as much as it could; in fact, when I had only one input file, the size actually increased by a third. So the ideal pattern for me was convert -density 200x200 -depth 2 input1.png input2.png output.pdf.
Third, documents manually scanned one page at a time often need page rotation adjustments, and web searching yields the recommendation to use pdftk rather than (say) convert (well discussed here). The rationale is that convert rasterizes. Even though scans are already rasterized, I elected to use pdftk to avoid the possibility of re-rasterizing and the associated possibility of degraded fidelity. pdfjam might also do nicely, but starting code patterns for page-specific rotations were already given for pdftk. From experimentation, the pattern for me was (say) pdftk input.pdf cat 1west 2east 3east output output.pdf.
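Putting those three observations together, the pattern that worked for me condenses to something like this (file names hypothetical):
# One convert call: -density before the inputs, -depth 2, all pages at once
convert -density 200x200 -depth 2 page1.png page2.png page3.png notes.pdf
# Rotate individual pages afterwards with pdftk, without re-rasterizing
pdftk notes.pdf cat 1west 2east 3east output notes-rotated.pdf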
Updated Answer
I am still looking at this. One thing I have noticed is that it does appear to honour compression when writing PDFs...
# Without compression
convert -depth 2 -size 1024x768 gradient: a.pdf
pdfimages -list a.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 1024 768 gray 1 8 image no 8 0 72 72 12.1K 1.6%
# With compression
convert -depth 2 -size 1024x768 gradient: -compress lzw a.pdf
pdfimages -list a.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 1024 768 gray 1 8 image no 8 0 72 72 3360B 0.4%
You can list the available types of compression with:
identify -list compress
It seems to accept the following for PDF output:
JPEG
LZW
ZIP
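For instance, a Zip variant of the gradient test above (a sketch; Zip is lossless and often suits scanned text):
convert -depth 2 -size 1024x768 gradient: -compress zip a.pdf
pdfimages -list a.pdf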
Note that your test images do not achieve very good compression, but then again, consider how representative they really are of your documents - they look very random and such things always compress poorly.
Initial Answer
Please try adding:
-define png:bit-depth=2
and/or
-define png:color-type=X
where X is either 0 (grayscale) or 3 (indexed, i.e. palettised)
So, specifically:
convert image1.png image2.png -define <AS ABOVE> output.pdf
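Concretely, with both defines spelled out (the same form tried in the question above):
convert image1.png image2.png -define png:color-type=0 -define png:bit-depth=2 output.pdf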

Convert a PNG with 4 bands to any format with 1 band and a color table

I have a PNG with 4 bands but I want only 1 band with a colour table. I tried saving it as a 256-color-bitmap in MS Paint and it worked.
But I need to do it automatically. I tried ImageMagick: convert E8.png E8256.bmp but it did not work.
So this is the original picture:
ImageMagick identify:
E8.png PNG 8250x4090 8250x4090+0+0 8-bit sRGB 231KB 0.000u 0:00.000
gdalinfo:
Driver: PNG/Portable Network Graphics
Files: E8.png
Size is 8250, 4090
Coordinate System is `'
Image Structure Metadata:
INTERLEAVE=PIXEL
Corner Coordinates:
Upper Left ( 0.0, 0.0)
Lower Left ( 0.0, 4090.0)
Upper Right ( 8250.0, 0.0)
Lower Right ( 8250.0, 4090.0)
Center ( 4125.0, 2045.0)
Band 1 Block=8250x1 Type=Byte, ColorInterp=Red
Mask Flags: PER_DATASET ALPHA
Band 2 Block=8250x1 Type=Byte, ColorInterp=Green
Mask Flags: PER_DATASET ALPHA
Band 3 Block=8250x1 Type=Byte, ColorInterp=Blue
Mask Flags: PER_DATASET ALPHA
Band 4 Block=8250x1 Type=Byte, ColorInterp=Alpha
I want to have a picture with one band and a color table so I opened E8.png with MS Paint and saved it as 256-color-bitmap. The result:
ImageMagick identify:
E8256.bmp BMP3 8250x4090 8250x4090+0+0 8-bit sRGB 256c 33.75MB 0.265u 0:00.138
gdalinfo:
Driver: BMP/MS Windows Device Independent Bitmap
Files: E8256.bmp
Size is 8250, 4090
Coordinate System is `'
Origin = (-1890.000000000000000,1890.000000000000000)
Pixel Size = (3780.000000000000000,-3780.000000000000000)
Corner Coordinates:
Upper Left ( -1890.000, 1890.000)
Lower Left ( -1890.000,-15458310.000)
Upper Right (31183110.000, 1890.000)
Lower Right (31183110.000,-15458310.000)
Center (15590610.000,-7728210.000)
Band 1 Block=8250x1 Type=Byte, ColorInterp=Palette
Color Table (RGB with 256 entries)
0: 0,0,0,255
1: 128,0,0,255
...
255: 255,255,255,255
But when I try convert E8.png E8imagemagick.bmp I get:
ImageMagick identify:
E8imagemagick.bmp BMP 8250x4090 8250x4090+0+0 8-bit sRGB 135MB 0.406u 0:00.409
gdalinfo:
Driver: BMP/MS Windows Device Independent Bitmap
Files: E8imagemagick.bmp
Size is 8250, 4090
Coordinate System is `'
Corner Coordinates:
Upper Left ( 0.0, 0.0)
Lower Left ( 0.0, 4090.0)
Upper Right ( 8250.0, 0.0)
Lower Right ( 8250.0, 4090.0)
Center ( 4125.0, 2045.0)
Band 1 Block=8250x1 Type=Byte, ColorInterp=Red
Band 2 Block=8250x1 Type=Byte, ColorInterp=Green
Band 3 Block=8250x1 Type=Byte, ColorInterp=Blue
Edit: Here (uploaded.net -- Dropbox) is the original PNG and here (uploaded.net -- Dropbox) is the BMP I obtained using MS Paint.
Maybe this command:
convert E8.png -colors 256 E8-256colors.bmp
gets you closer to what you want? It is a bit large, though, this bitmap... (129 MByte). So this one should be smaller:
convert E8.png -type palette -colors 256 E8-palette-256colors.bmp
The last one is only 16 MByte.
Your headline says 'any format', so PNG may be in order too? It creates much smaller output:
convert E8.png -type palette -colors 256 E8-palette-256colors.png
(The size now is only 122 kByte.)
Your original image consists of 6 colors only, and so does your new output:
identify -format "%f: %k\\n" E8.png E8-palette-256colors.png
E8.png: 6
E8-palette-256colors.png: 6
Or
identify E8.png E8-palette-256colors.png
E8.png PNG 8250x4090 8250x4090+0+0 8-bit sRGB 231KB 0.000u 0:00.000
E8-palette-256colors.png[1] PNG 8250x4090 8250x4090+0+0 8-bit sRGB 6c 125KB 0.000u 0:00.000
Not sure what you want exactly...
convert E8256.bmp -separate -type palette PNG8:out%d.png
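If the goal is simply a single-band PNG with a colour table, the PNG8: output prefix may be the key piece; a minimal sketch on the original file:
convert E8.png -colors 256 PNG8:E8-paletted.png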

Strange block texture background in ImageMagick conversion from PDF to JPG

I'm using this command to convert PDF to JPG:
exec("convert -scale 772x1000 -density 150 -trim \"".$toc_path.$filename."[0]\" -background white -flatten -quality 100 \"".$img_path. "covers/". $img_filename ."\"");
The random fuzzy black & white background gets converted into huge squares:
The trick was to increase the density up to 300, and then scale down to whatever size you like, e.g.:
exec("convert -scale 772x1000 -density 300 -trim \"".$toc_path.$filename."[0]\" -background white -flatten -quality 80 \"".$img_path. "covers/". $img_filename ."\"");
# density 75, no scale: size = 1.3 MB - smaller blocks, looked bad
# density 150, no scale: size = 2.0 MB - large blocks, looked bad
# density 300, no scale: size = 2.4 MB - no blocks, looked like the original
# density 300, scaled to 1000 lines: size = 170 kB - no blocks, looked like the original
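Stripped of the PHP string plumbing, the working command looks roughly like this (paths hypothetical; putting -density before the input file matters, because it controls the resolution at which the PDF is rasterized, and the scaling can then happen afterwards):
convert -density 300 -trim "input.pdf[0]" -background white -flatten -scale 772x1000 -quality 80 covers/input.jpg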
