Gnuplot shifted x-axis - alignment

Hello, I have the following gnuplot example data in 3 files:
format kompression avgcover avgdiff1 avgdiff2
jp2 10 95.68 3.74 4.02
jp2 20 95.63 3.79 4.01
jp2 30 95.62 3.80 3.92
jp2 40 95.81 3.61 3.79
jp2 50 96.13 3.29 3.72
jp2 60 96.59 2.83 3.64
jp2 70 96.76 2.66 3.25
jp2 80 97.05 2.37 2.99
jp2 90 97.17 2.25 2.83
jp2 100 97.24 2.18 2.52
format kompression avgcover avgdiff1 avgdiff2
jpg 10 95.12 2.25 2.83
jpg 20 95.23 3.79 4.01
jpg 30 95.34 2.66 3.25
jpg 40 95.23 3.61 3.79
jpg 50 96.16 3.64 3.72
jpg 60 96.86 2.83 2.37
jpg 70 96.23 2.66 3.25
jpg 80 97.12 2.37 3.64
jpg 90 97.44 2.25 2.83
jpg 100 97.24 2.18 2.52
format kompression avgcover avgdiff1 avgdiff2
jxr 10 95.12 2.25 2.83
jxr 20 95.23 3.79 4.01
jxr 30 95.34 2.66 3.25
jxr 40 95.23 3.61 3.79
jxr 50 96.16 3.64 3.72
jxr 60 96.86 2.83 2.37
jxr 70 96.23 2.66 3.25
jxr 80 97.12 2.37 3.64
jxr 90 97.44 2.25 2.83
jxr 100 97.24 2.18 2.52
Now I have the following gnuplot command:
set term postscript eps enhanced color
set xlabel "Kompression"
set ylabel "Wert"
set size 1,1
set yrange[90:100]
set y2range[0:10]
set y2tics
set ytics nomirror
set key outside right top
set style data histogram
set style fill solid border -1
set output 'test.eps'
set key outside right top
set style line 1 lt 1 lw 3 pt 2 lc rgb "red"
set style line 2 lt 1 lw 1 pt 1 lc rgb "orchid"
set style line 3 lt 1 lw 3 pt 2 lc rgb "blue"
set style line 4 lt 1 lw 1 pt 1 lc rgb "light-turquoise"
set style line 5 lt 1 lw 3 pt 2 lc rgb "dark-green"
set style line 6 lt 1 lw 1 pt 1 lc rgb "greenyellow"
plot './ergebnisse/outputFDSQL_jp2.dat' using 3:xtic(2) ls 1 title 'JP2(cov)' axis x1y1, \
     '' u 4 ls 2 title 'JP2(notc)' axis x1y2, \
     '' u 5 ls 2 title 'JP2(morc)' axis x1y2, \
     './ergebnisse/outputFDSQL_jpg.dat' using 3:xtic(2) ls 3 title 'JPG(cov)' axis x1y1, \
     '' u 4 ls 4 title 'JPG(notc)' axis x1y2, \
     '' u 5 ls 4 title 'JPG(morc)' axis x1y2, \
     './ergebnisse/outputFDSQL_jxr.dat' using 3:xtic(2) ls 5 title 'JXR(cov)' axis x1y1, \
     '' u 4 ls 6 title 'JXR(notc)' axis x1y2, \
     '' u 5 ls 6 title 'JXR(morc)' axis x1y2
This results in the following output file:
Here you can see that the values from 0 to 100 on the x-axis are shifted and not aligned correctly. Does anybody see the error in the code?

There doesn't seem to be anything wrong with this code; the data you posted and your code give me this:
Can you think of anything else you are doing?
Edit:
The file headers, which were not posted originally with the data, were messing up your graph. You can comment them out rather than delete them: put a # before the header text.
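For example, after commenting out the header, the top of ./ergebnisse/outputFDSQL_jp2.dat would look like this:
# format kompression avgcover avgdiff1 avgdiff2
jp2 10 95.68 3.74 4.02
jp2 20 95.63 3.79 4.01
...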

ImageMagick Linux mimic ezgif.com result

Ezgif.com is a great page but can ImageMagick get the same results?
Ezgif settings:
Delay: 200 per image
Crossfade frames
Fader delay: 6
Frame Count: 10
My attempt via Linux Terminal:
convert -delay 200 -loop 0 *.jpg myimage.gif
As I told you on FL, you simply need to use:
convert -resize (Smallest size) -delay 200 -morph 200 (source) (destination)
e.g.:
convert -resize 200x200 -delay 200 -morph 200 /var/home/user1/pictures/*.jpg /var/home/user1/myresult.gif
On my side, I can show you the result of this approach on Windows, run in the Windows Pictures folder with this command:
C:\Users\Public\Pictures\Sample Pictures>convert -resize 20% -delay 20 -loop 0 *.jpg -morph 5 myimage.gif
Here is a Unix shell script to create a faded animation of 4 images, each transition with 10 intermediate fades. It loops over each successive pair of images and creates the faded intermediate images by blending the pair at different percentages.
Input:
(
imgArr=(lena.jpg mandril3.jpg zelda1.jpg peppers.jpg)
for ((i=0; i<4; i++)); do
    img1=${imgArr[$i]}
    j=$((i+1))
    jj=$((j%4))    # wrap around so the last image fades back to the first
    img2=${imgArr[$jj]}
    for ((k=0; k<11; k++)); do
        pct=$((10*k))    # blend percentage: 0, 10, ..., 100
        convert $img1 $img2 -define compose:args=$pct -compose blend -composite miff:-
    done
done
) | convert -delay 20 - -loop 0 anim.gif
Animation:
Note that I had to shrink the image to 75% dimensions to make the file size small enough for upload here.
A complete one-liner solution with increased delay for the main frames and a last-to-first transition, with the same settings the OP asked for:
convert -loop 0 *.jpg -morph 10 -set delay '%[fx:(t%11!=0)?6:200]' -duplicate 1,-2-1 out.gif
-morph 10 inserts 10 intermediate frames between each pair of original frames, so we need to increase the delay for every 11th frame; the fx expression sets a different delay for those frames.
-duplicate 1,-2-1 handles the transition from the last frame back to the first.
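As a quick sanity check, ImageMagick's identify can list the delay each frame of the result ended up with (%T prints the per-frame delay in ticks); something like:
identify -format "%T " out.gif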

imagemagick: Remove all occurrences of a stray pixel surrounded by transparency

Hello! I'd like to remove all occurrences of stray pixels from a transparent image.
Below is an example image, enlarged for convenience:
Following is that same image but how I desire it to look after processing:
The best way I can think to describe what I'm looking to achieve is that every pixel whose surrounding pixels are fully transparent should be removed. Think of the selector as a 3x3 grid, with the middle of the grid being the pixel operated on.
I took a look at Morphology in the IM manual, but it doesn't appear to provide a fine enough method for this.
Is this possible with ImageMagick? Is there any other command line software that could achieve this if not?
In ImageMagick, you can use -connected-components to remove those isolated blocks of pixels. They appear to be 5x5 pixels (25 pixels in area), so we set the area threshold to 26, which merges away any region smaller than that. We remove those blocks in the alpha channel and then copy the cleaned alpha back into the image. (Note that we need to use 8-connected rather than 4-connected region detection to preserve your other regions.)
Since you say your image was enlarged, I presume your isolated regions are really 1x1 pixels; in that case, change the area-threshold to 2 to remove single-pixel regions.
Input:
Unix Syntax:
convert img.png \
\( -clone 0 -alpha extract -type bilevel \
-define connected-components:mean-color=true \
-define connected-components:area-threshold=26 \
-connected-components 8 \) \
-alpha off -compose copy_opacity -composite \
result.png
Windows Syntax:
convert img.png ^
( -clone 0 -alpha extract -type bilevel ^
-define connected-components:mean-color=true ^
-define connected-components:area-threshold=26 ^
-connected-components 8 ) ^
-alpha off -compose copy_opacity -composite ^
result.png
See -connected-components
ADDITION:
If you only want to remove the small isolated color pixels and not any transparent pixels inside the color ones, then there is no trivial way to do that. That is an enhancement I would like to have. However, it can be done.
Here is your image modified so that the top-left red block has a single transparent center pixel. I added a red line to its right so that the region stays larger than 25 pixels once the center is made transparent, and so that you can see which block has the transparent center. You will have to download and zoom in on this image to see the missing pixel.
4x Zoom:
The method is to find all white regions in the alpha channel, make a list of all regions smaller than 26 pixels, and then reprocess the image to remove those regions by ID.
Get ID List
id_list=""
OLDIFS=$IFS
IFS=$'\n'
arr=(`convert img2.png -alpha extract -type bilevel \
    -define connected-components:mean-color=true \
    -define connected-components:verbose=true \
    -connected-components 8 null: | grep "gray(255)" | sed 's/^[ ]*//'`)
echo "${arr[*]}"
num=${#arr[*]}
IFS=$OLDIFS
for ((i=0; i<num; i++)); do
    id=`echo "${arr[$i]}" | cut -d' ' -f1 | sed 's/[:]*$//'`
    count=`echo "${arr[$i]}" | cut -d' ' -f4`
    if [ $count -lt 26 ]; then
        id_list="$id_list $id"
    fi
done
echo "$id_list"
echo "$id_list"
Here is what is printed
12: 5x5+120+70 122.0,72.0 25 gray(255)
14: 5x5+30+85 32.0,87.0 25 gray(255)
15: 5x5+110+85 112.0,87.0 25 gray(255)
16: 5x5+75+90 77.0,92.0 25 gray(255)
17: 5x5+40+100 42.0,102.0 25 gray(255)
18: 5x5+110+110 112.0,112.0 25 gray(255)
19: 5x5+140+110 142.0,112.0 25 gray(255)
21: 5x5+15+130 17.0,132.0 25 gray(255)
22: 5x5+40+140 42.0,142.0 25 gray(255)
23: 5x5+85+140 87.0,142.0 25 gray(255)
24: 5x5+120+140 122.0,142.0 25 gray(255)
2: 5x5+55+5 57.0,7.0 25 gray(255)
5: 5x5+100+20 102.0,22.0 25 gray(255)
7: 5x5+65+30 67.0,32.0 25 gray(255)
8: 5x5+125+30 127.0,32.0 25 gray(255)
9: 5x5+105+50 107.0,52.0 25 gray(255)
11: 5x5+25+65 27.0,67.0 25 gray(255)
12 14 15 16 17 18 19 21 22 23 24 2 5 7 8 9 11
Reprocess to remove regions by ID
convert img2.png \
\( -clone 0 -alpha extract -type bilevel \
-define connected-components:mean-color=true \
-define connected-components:remove="$id_list" \
-connected-components 8 -background black -flatten +write tmp.png \) \
-alpha off -compose copy_opacity -composite \
result2.png
4x Zoom:
fmw42's excellent answer uses connected regions, but I think it is possible with just a morphology. Use:
0 0 0
0 1 0
0 0 0
As the structuring element with erode, and it will detect pixels with no 8-connected neighbours (i.e. isolated pixels). Now EOR (exclusive-or) that with your alpha and it will make those pixels fully transparent (i.e. remove them).
I don't know IM well enough to make you a command to do this :-( But with the libvips command-line it would be this to make a test image:
size=256
# sparse speckles for the alpha
vips gaussnoise t1.v $size $size
vips relational_const t1.v a.v more 200
# RGB noise
vips gaussnoise r.v $size $size
vips gaussnoise g.v $size $size
vips gaussnoise b.v $size $size
# assemble and save
vips bandjoin "r.v g.v b.v a.v" x.png
Then to remove your stray pixels:
# make mask.mor, a file containing our structuring element
cat >mask.mor <<EOF
3 3
0 0 0
0 255 0
0 0 0
EOF
# pull out the alpha channel
vips extract_band x.png a.v 3
# find isolated pixels
vips morph a.v t1.v mask.mor erode
# EOR with alpha to zap those pixels
vips boolean t1.v a.v t2.v eor
# extract rgb from original, then attach our modified alpha
vips extract_band x.png rgb.v 0 --n 3
vips bandjoin "rgb.v t2.v" x2.png
Here's before and after:
The libvips CLI is fast but a bit clumsy. It's neater if you use something like Python to script it.
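For completeness, here is an untested ImageMagick sketch of the same idea, for anyone who wants to stay within IM. It assumes IM's hit-and-miss morphology (HMT) with a user-defined kernel to pick out foreground pixels whose 8 neighbours are all background, relies on -compose Difference acting as an XOR on a binary alpha to knock those pixels out, and follows the same clone/copy_opacity pattern as the first answer:
convert img.png \
  \( -clone 0 -alpha extract -type bilevel \
     \( +clone -morphology HMT '3x3: 0,0,0 0,1,0 0,0,0' \) \
     -compose Difference -composite \) \
  -alpha off -compose copy_opacity -composite \
  result.png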

Tools run from unix command line to decrease bit depth of grayscale images in PDFs

My workplace scanner creates exorbitantly large PDFs from low-resolution grayscale scans of hand-written notes. I currently use Acrobat Pro to extract PNG images from the PDF, then use Matlab to reduce the bit depth, then use Acrobat Pro to combine them back into PDFs. I can reduce the PDF file size by one to two orders of magnitude.
But is it ever a pain.
I'm trying to write scripts to do this, composed of cygwin command line tools. Here is one PDF that was shrunk using my byzantine scheme:
$ pdfimages -list bothPNGs.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 550 558 gray 1 2 image no 25 0 72 72 6455B 8.4%
2 1 image 523 519 gray 1 2 image no 3 0 72 72 5968B 8.8%
I had used Matlab to reduce the bit depth to 2. To test the use of unix tools, I re-extract the PNGs using pdfimages, then use convert to recombine them to PDF, specifying a bit depth in doing so:
$ convert -depth 2 sparseDataCube.png asnFEsInTstep.png bothPNGs_convert.pdf
# Results are the same regardless of the presence/absence of `-depth 2`
$ pdfimages -list bothPNGs_convert.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 550 558 gray 1 8 image no 8 0 72 72 6633B 2.2%
2 1 image 523 519 gray 1 8 image no 22 0 72 72 6433B 2.4%
Unfortunately, the bit depth is now 8. My bit depth argument doesn't actually seem to have any effect.
What would be the recommended way to reduce the bit depth of PNGs and recombine them into a PDF? Whatever tool is used, I want to avoid anti-aliasing filtering; in non-photographic images, that just causes speckle around the edges of text and lines.
Whatever solution is suggested, it will be hit-or-miss whether I have the right Cygwin packages. I work in a very controlled environment, where upgrading is not easy.
This looks like another similar sounding question, but I really don't care about any alpha layer.
Here are two image files, with bit depths of 2, that I generated for testing:
Here are the tests, based on my initial (limited) knowledge, as well as on respondent Mark's suggestions:
$ convert -depth 2 test1.png test2.png test_convert.pdf
$ pdfimages -list test_convert.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 100 100 gray 1 8 image no 8 0 72 72 3204B 32%
2 1 image 100 100 gray 1 8 image no 22 0 72 72 3221B 32%
$ convert -depth 2 test1.png test2.png -define png:color-type=0 -define png:bit-depth=2 test_convert.pdf
$ pdfimages -list test_convert.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 100 100 gray 1 8 image no 8 0 72 72 3204B 32%
2 1 image 100 100 gray 1 8 image no 22 0 72 72 3221B 32%
The bit depths of images within the created PDF file are 8 (rather than 2, as desired and specified).
Thanks to Mark Setchell and Cris Luengo's comments and answers, I've come up with some tests that may reveal what is going on. Here are the 2-bit and 8-bit random grayscale test PNG's created using Matlab:
im = uint8( floor( 256*rand(100,100) ) );
imwrite(im,'rnd_b8.png','BitDepth',8);
imwrite(im,'rnd_b2.png','BitDepth',2);
The 2-bit PNGs have much less entropy than the 8-bit PNGs.
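If Matlab is not available, roughly comparable test images can probably be generated with ImageMagick alone; a sketch (the exact gray-level distribution and quantization will differ somewhat from Matlab's imwrite):
# 8-bit random grayscale test image
convert -size 100x100 xc:gray +noise Random -colorspace Gray -depth 8 rnd_b8.png
# quantize the same image down to 2 bits (4 gray levels) and force a true 2-bit grayscale PNG
convert rnd_b8.png -depth 2 -define png:color-type=0 -define png:bit-depth=2 rnd_b2.png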
The following shell commands create PDFs with and without compression:
convert rnd_b2.png rnd_b2.pdf
convert rnd_b2.png -depth 2 rnd_b2_d2.pdf
convert rnd_b2.png -compress LZW rnd_b2_lzw.pdf
convert rnd_b8.png rnd_b8.pdf
convert rnd_b8.png -depth 2 rnd_b8_d2.pdf
convert rnd_b8.png -compress LZW rnd_b8_lzw.pdf
Now check file sizes, bit depth, and compression (I use bash):
$ ls -l *.pdf
8096 rnd_b2.pdf
8099 rnd_b2_d2.pdf
7908 rnd_b2_lzw.pdf
22523 rnd_b8.pdf
8733 rnd_b8_d2.pdf
29697 rnd_b8_lzw.pdf
$ pdfimages -list rnd_b2.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 100 100 gray 1 8 image no 8 0 72 72 3178B 32%
$ pdfimages -list rnd_b2_d2.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 100 100 gray 1 8 image no 8 0 72 72 3178B 32%
$ pdfimages -list rnd_b2_lzw.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 100 100 gray 1 8 image no 8 0 72 72 3084B 31%
$ pdfimages -list rnd_b8.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 100 100 gray 1 8 image no 8 0 72 72 9.78K 100%
$ pdfimages -list rnd_b8_d2.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 100 100 gray 1 8 image no 8 0 72 72 3116B 31%
$ pdfimages -list rnd_b8_lzw.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 100 100 gray 1 8 image no 8 0 72 72 13.3K 136%
Essentially, convert does not create PNGs of user-specified bit depths to put into PDFs; it converts 2-bit PNGs to 8-bit. This means that PDFs created from 2-bit PNGs have much less entropy than the maximum for 8-bit images. I confirmed this by extracting the PNGs and confirming that there are only 4 grayscale levels in the data.
The fact that rnd_b8_d2.pdf is comparable in size to the PDFs created from 2-bit PNGs reveals how convert handles -depth 2 that precedes the output file specification. It seems that it does reduce dynamic range to 2 bits at some point, but expands it out to 8 bits for incorporation into the PDF.
Next, compare the file sizes with their compression ratios, taking the uncompressed 8-bit random grayscale, i.e., rnd_b8.pdf, as the baseline:
rnd_b2.pdf 8096 / 22523 = 36%
rnd_b2_d2.pdf 8099 / 22523 = 36%
rnd_b2_lzw.pdf 7908 / 22523 = 35%
rnd_b8.pdf 22523 / 22523 = 100%
rnd_b8_d2.pdf 8733 / 22523 = 39%
rnd_b8_lzw.pdf 29697 / 22523 = 131%
It seems that the ratio from pdfimages is the amount of space taken by the image compared to a maximum entropy 8-bit image.
It also seems that compression is applied by convert regardless of whether it is specified in the switches; this follows from the fact that the rnd_b2*.pdf files are all of similar size and ratio.
I assume that the 31% increase of rnd_b8_lzw.pdf is overhead due to the attempt at compression when no compression is possible. Does this seem reasonable to "you" image processing folk? (I am not an image processing folk).
Based on the assumption that compression happens automatically, I don't need Matlab to reduce the dynamic range. The -depth 2 specification to convert will decrease the dynamic range, and even though the image is in the PDF as 8-bits, it is automatically compressed, which is almost as efficient as 2-bit images.
There is only one big concern. According to the above logic, the following files should all look comparable:
rnd_b2.pdf
rnd_b2_d2.pdf
rnd_b2_lzw.pdf
rnd_b8_d2.pdf
The first 3 do, but the last does not. It is the one that relies on the -depth 2 specification to convert to reduce the dynamic range. Matlab shows that only 4 grayscale levels from 0 to 255 are used, but the middle two levels occur twice as often as the edge levels. Using -depth 4, I found that the minimum and maximum grayscale levels always occur half as often as the uniformly distributed levels in between. The reason for this became apparent when I plotted the mapping of gray levels in rnd_b8.pdf against its 4-bit-depth counterpart:
The "bins" of 8-bit gray-level values that map to the minimum and maximum 4-bit gray levels are half as wide as the bins for the other 4-bit gray levels. It might be because the bins are symmetrically defined such that (for example) the values that map to zero include both negative and positive values; this wastes half the bin, because it lies outside the range of the input data.
The take-away is that one can use the -depth specification to convert, but for small bit depths, it is not ideal because it doesn't maximize the information in the bits.
AFTERNOTE: An interesting beneficial effect that I observed, which is obvious in hindsight, especially in light of Cris Luengo's comment: if the images in the PDF do indeed have limited bit depth, e.g., 4 bits, then you can extract them with pdfimages and re-package them in a PDF without worrying too much about specifying the right -depth. In the re-packaging into PDF, I noticed that the result of -depth 5 and -depth 6 did not increase the PDF file size much over -depth 4, because the default compression squeezes out any space wasted in the 8-bit image within the PDF. Subjectively, the quality remains the same too. If I specify -depth 3 or below, however, the PDF file size decreases more noticeably, and the quality declines noticeably too.
Further helpful observations: After the better part of a year, I needed to package scanned files into a PDF again, but this time I used a scanner that created PNG files for each page. I had no desire to re-spend the time taken above to reverse-engineer the behaviour of the ImageMagick tools. Not being bogged down in the weeds, I was able to notice three helpful code idioms, at least for me, and I hope they help someone else. For context, assume that you want to downgrade the grayscale depth to 2 bits, which allows for 4 levels; I found this to be plenty for scanned text documents, with negligible loss in readability.
First, if you scanned in (say) 200 dpi grayscale and you want to downgrade to 2 bits, you need to specify the -density prior to the first (input) file: convert -density 200x200 -depth 2 input.png output.pdf. Not doing so yields extremely coarse resolution, even though pdfimages -list shows 200x200.
Second, you want to use one convert statement to convert a collection of PNG files to a single depth-limited PDF file. I found this out because I initially converted multiple PNG files into one PDF file and then converted that to a depth of 2; the file size shrinks, but not nearly as much as it could. In fact, when I had only 1 input file, the size actually increased by a third. So the ideal pattern for me was convert -density 200x200 -depth 2 input1.png input2.png output.pdf.
Third, documents manually scanned one page at a time often need page-rotation adjustments, and web searching yields the recommendation to use pdftk rather than (say) convert (well discussed here). The rationale is that convert rasterizes. Even though scans are already rasterized, I elected to use pdftk to avoid the possibility of re-rasterizing and the associated possibility of degraded fidelity. pdfjam might also do nicely, but starting code patterns for page-specific rotations were already given for pdftk. From experimentation, the pattern for me was (say) pdftk input.pdf cat 1west 2east 3east output output.pdf.
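Collected as plain commands, those three idioms look like this (file names are placeholders):
# 1) put -density before the input so the PDF keeps the scan resolution
convert -density 200x200 -depth 2 input.png output.pdf
# 2) reduce depth and combine all pages in a single convert call
convert -density 200x200 -depth 2 input1.png input2.png output.pdf
# 3) fix page orientation afterwards with pdftk, avoiding re-rasterizing
pdftk input.pdf cat 1west 2east 3east output output.pdf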
Updated Answer
I am still looking at this. One thing I have noticed is that it does appear to honour compression when writing PDFs...
# Without compression
convert -depth 2 -size 1024x768 gradient: a.pdf
pdfimages -list a.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 1024 768 gray 1 8 image no 8 0 72 72 12.1K 1.6%
# With compression
convert -depth 2 -size 1024x768 gradient: -compress lzw a.pdf
pdfimages -list a.pdf
page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
1 0 image 1024 768 gray 1 8 image no 8 0 72 72 3360B 0.4%
You can list the available types of compression with:
identify -list compress
It seems to accept the following for PDF output:
JPEG
LZW
ZIP
Note that your test images do not achieve very good compression, but then again, consider how representative they really are of your documents - they look very random and such things always compress poorly.
Initial Answer
Please try adding:
-define png:bit-depth=2
and/or
-define png:color-type=X
where X is either 0 (grayscale) or 3 (indexed, i.e. palettised)
So, specifically:
convert image1.png image2.png -define <AS ABOVE> output.pdf

How to stitch back cropped image with imageMagick?

I have a very big image; let's name it orig-image.tiff.
I want to cut it in smaller pieces, apply things on it, and stitch back together the newly created little images.
I cut it into pieces with this command:
convert orig-image.tiff -crop 400x400 crop/parts-%04d.tiff
then I'll generate many images by applying a treatment to each part-XXXX.tiff image and end up with images from part-0000.png to part-2771.png
Now I want to stitch back the images into a big one. Can imagemagick do that?
If you were using PNG format, the tiles would "remember" their original position, as #Bonzo suggests, and you could take them apart and reassemble like this:
# Make 256x256 black-red gradient and chop into 1024 tiles of 8x8 as PNGs
convert -size 256x256 gradient:red-black -crop 8x8 tile-%04d.png
and reassemble:
convert tile*png -layers merge BigBoy.png
That is because the tiles "remember" their original position on the canvas - e.g. +248+248 below:
identify tile-1023.png
tile-1023.png PNG 8x8 256x256+248+248 16-bit sRGB 319B 0.000u 0:00.000
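Applied to the file names from the question, and assuming the tiles were cropped to PNG in the first place and your processing step preserves that offset (oFFs) information, the reassembly would be along the lines of:
convert part-*.png -layers merge stitched.png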
With TIFs, you could do:
# Make 256x256 black-red gradient and chop into 1024 tiles of 8x8 as TIFs
convert -size 256x256 gradient:red-black -crop 8x8 tile-%04d.tif
and reassemble with the following but sadly you need to know the layout of the original image:
montage -geometry +0+0 -tile 32x32 tile*tif BigBoy.tif
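If the original image is still available, the tile grid for montage can be computed from its dimensions rather than guessed; a sketch for the question's 400x400 crop (if the dimensions are not exact multiples of 400, the last row and column of cells are padded, so you may need to crop the result back to the original size):
w=$(identify -format "%w" orig-image.tiff)
h=$(identify -format "%h" orig-image.tiff)
cols=$(( (w + 399) / 400 ))   # number of tile columns, rounded up
rows=$(( (h + 399) / 400 ))   # number of tile rows, rounded up
montage -geometry +0+0 -tile ${cols}x${rows} part-*.png stitched.png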
Regarding Glenn's comment below, here is the output of pngcheck showing the "remembered" offsets:
pngcheck tile-1023*png
Output
OK: tile-1023.png (8x8, 48-bit RGB, non-interlaced, 16.9%).
iMac:~/tmp: pngcheck -v tile-1023*png
File: tile-1023.png (319 bytes)
chunk IHDR at offset 0x0000c, length 13
8 x 8 image, 48-bit RGB, non-interlaced
chunk gAMA at offset 0x00025, length 4: 0.45455
chunk cHRM at offset 0x00035, length 32
White x = 0.3127 y = 0.329, Red x = 0.64 y = 0.33
Green x = 0.3 y = 0.6, Blue x = 0.15 y = 0.06
chunk bKGD at offset 0x00061, length 6
red = 0xffff, green = 0xffff, blue = 0xffff
chunk oFFs at offset 0x00073, length 9: 248x248 pixels offset
chunk tIME at offset 0x00088, length 7: 13 Dec 2016 15:31:10 UTC
chunk vpAg at offset 0x0009b, length 9
unknown private, ancillary, safe-to-copy chunk
chunk IDAT at offset 0x000b0, length 25
zlib: deflated, 512-byte window, maximum compression
chunk tEXt at offset 0x000d5, length 37, keyword: date:create
chunk tEXt at offset 0x00106, length 37, keyword: date:modify
chunk IEND at offset 0x00137, length 0
No errors detected in tile-1023.png (11 chunks, 16.9% compression).

Combining images that are "cut off" in ImageMagick?

I would like to combine 2 images which are identical in width but vary in height. The bottom of one is identical to the top of the other, but it's unknown by how much.
1) Identify identical parts
2) Combine the images so the identical parts match
Example:
Part 1: http://i.imgur.com/rZtAk2c.png
Part 2: http://i.imgur.com/CQaQbr8.png
1. Determine the image dimensions
Use identify to get width and height of each image:
identify \
http://i.imgur.com/rZtAk2c.png \
http://i.imgur.com/CQaQbr8.png
CQaQbr8.png PNG 701x974 720x994+10+0 8-bit sRGB 256c 33.9KB 0.000u 0:00.000
rZtAk2c.png PNG 701x723 720x773+10+46 8-bit sRGB 256c 25.6KB 0.000u 0:00.000
2. Interpret the results
The results from the above command are these:
Both images are 701 pixels wide.
One image has 974 rows of (visible) pixels, the other 723.
But the two images use different 'canvas' sizes:
The first image uses a 720x994 pixel canvas (the offset of the shown part is +10+0).
The second image uses a 720x773 pixel canvas (the offset of the shown part is +10+46).
3. Normalize the canvas to be identical with the shown pixels
We use the +repage image operator to normalize the canvas for both images:
convert CQaQbr8.png +repage img1.png
convert rZtAk2c.png +repage img2.png
4. Check both new images' dimensions again
identify img1.png img2.png
img1.png PNG 701x974 701x974+0+0 8-bit sRGB 256c 33.9KB 0.000u 0:00.000
img2.png PNG 701x723 701x723+0+0 8-bit sRGB 256c 25.5KB 0.000u 0:00.000
5. Learn how to extract a single row from an image.
As an example, we extract row number 3 from img1.png (numbering starts with 0):
convert img1.png[701x1+0+3] +repage img1---row3.png
identify img1---row3.png
img1---row3.png PNG 701x1 701x1+0+0 8-bit sRGB 256c 335B 0.000u 0:00.000
6. Learn how to extract that same row in ImageMagick's 'txt' format:
convert img1.png[701x1+0+3] +repage img---row3.txt
If you are not familiar with the 'txt' format, here is an extract:
cat img---row3.txt
# ImageMagick pixel enumeration: 701,1,255,gray
0,0: (255,255,255) #FFFFFF gray(255)
1,0: (255,255,255) #FFFFFF gray(255)
2,0: (255,255,255) #FFFFFF gray(255)
3,0: (255,255,255) #FFFFFF gray(255)
4,0: (255,255,255) #FFFFFF gray(255)
5,0: (255,255,255) #FFFFFF gray(255)
6,0: (255,255,255) #FFFFFF gray(255)
7,0: (255,255,255) #FFFFFF gray(255)
8,0: (255,255,255) #FFFFFF gray(255)
9,0: (255,255,255) #FFFFFF gray(255)
[...skipping many lines...]
695,0: (255,255,255) #FFFFFF gray(255)
696,0: (255,255,255) #FFFFFF gray(255)
697,0: (255,255,255) #FFFFFF gray(255)
698,0: (255,255,255) #FFFFFF gray(255)
699,0: (255,255,255) #FFFFFF gray(255)
700,0: (255,255,255) #FFFFFF gray(255)
The 'txt' output file describes every pixel via a text line.
In each line the first column indicates the respective pixel's coordinates.
The second, third and fourth columns indicate the pixel's color in different ways (each of them carrying the same information).
7. Convert each row into its 'txt' format and create its MD5 sum
This command also creates 'txt' output. But this time the 'target' file is given as txt:-. This means that the output is streamed to <stdout>.
for i in {0..973}; do \
convert img1.png[701x1+0+${i}] txt:- \
| md5sum > md5sum--img1--row${i}.md5 ; \
done
This command creates 974 different files containing the MD5 sum of the 'txt' representation for the respective rows.
We can also write all MD5 sums into a single file:
for i in {0..973}; do \
convert img1.png[701x1+0+${i}] txt:- \
| md5sum >> md5sum--img1--all-rows.md5 ; \
done
Now do the same thing for img2.png:
for i in {0..722}; do \
convert img2.png[701x1+0+${i}] txt:- \
| md5sum >> md5sum--img2--all-rows.md5 ; \
done
8. Use sdiff to determine which lines of the .md5 files match
We can use sdiff to compare the two .md5 files line by line and write the output to a log file. The nl -v 0 part of the following command automatically inserts the line number, starting with 0 into the result:
sdiff md5sum--img{1,2}--all-rows.md5 | nl -v 0 > md5sums.log
9. Check the md5sums.log for identical lines
cat md5sums.log
0 > 38c6cd70c39ffc853d1195a0da6474f8 -
1 > 85100351b390ace5a7caca11776666d5 -
2 > 66e2940dbb390e635eeba9a2944960dc -
3 > 8e93c1ed5c89aead8333f569cb768e4a -
4 > 8e93c1ed5c89aead8333f569cb768e4a -
[... skip many lines ...]
172 > f9fece874b60fa1af24516c4bcee7302 -
173 > edbe62592a3de60d18971dece07e3beb -
174 > 18a28776cc64ead860a99213644b0574 -
175 0d0753c587dc3c46078ac265895a3f6c - | 0d0753c587dc3c46078ac265895a3f6c -
176 5ecc2b5a61af4120151fed4cd2c3d305 - | 5ecc2b5a61af4120151fed4cd2c3d305 -
177 3f2857594fe410dc7fe42b4bef724a87 - | 3f2857594fe410dc7fe42b4bef724a87 -
178 2fade815d804b6af96550860602ec1ba - | 2fade815d804b6af96550860602ec1ba -
[... skip many lines ...]
719 127e6d52095db20f0bcb1fe6ff843da0 - | 127e6d52095db20f0bcb1fe6ff843da0 -
720 aef15dde4909e9c467f11a64198ba6d2 - | aef15dde4909e9c467f11a64198ba6d2 -
721 6320863dd7d747356f4b23fb7ba28a73 - | 6320863dd7d747356f4b23fb7ba28a73 -
722 2e32ceb7cc89d7bb038805e484dc7bc9 - | 2e32ceb7cc89d7bb038805e484dc7bc9 -
723 f9fece874b60fa1af24516c4bcee7302 - <
724 f9fece874b60fa1af24516c4bcee7302 - <
725 f9fece874b60fa1af24516c4bcee7302 - <
726 f9fece874b60fa1af24516c4bcee7302 - <
[... skip many lines ...]
1146 3e18a7db0aed8b6ac6a3467c6887b733 - <
1147 62866c8ef78cdcd88128b699794d93e6 - <
1148 7dbed48a0e083d03a6d731a6864d1172 - <
From this output we can conclude that rows 175 -- 722 in the sdiff-produced file all do match.
This means that there is a match in the following rows of the original images:
row 0 of img1.png matches row 175 of img2.png (begin of match).
img1.png has a total of 974 rows of pixels.
row 547 of img1.png matches row 722 of img2.png (end of match).
img2.png has a total of 723 rows of pixels.
(Remember, we used 0-based row numbering...)
10. Put it all together now
From the above investigation we can conclude that we need only the first 175 rows of img2.png (its rows 0-174, which have no counterpart in img1.png) and then the full img1.png appended below them in order to get the correct result:
convert img2.png[701x175+0+0] img1.png -append complete.png
NOTES:
There are many possible solutions (and methods to arrive there) to the problem posed by the OP. For example:
Instead of converting the rows to 'txt' format we could have used any other ImageMagick-supported format also (PNG, PPM, ...) and created the MD5 sums for comparison.
Instead of using -append to concatenate the two image parts, we could also have used -composite to superimpose them (with an appropriate offset, of course).
As #MarkSetchell says in his comment: instead of piping the 'pixel-rows' output to md5sum, one could also use -format '%#' info:- to directly generate a hash value from the respective pixel row. I had already forgotten about that option because (years ago) I tried to use it for a similar purpose and somehow it didn't work as I needed it, which is why I became used to my 'piping to md5sum' approach...
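For reference, that variant would look roughly like this, reusing the row extraction from step 7 but letting ImageMagick compute the hash itself via its %# signature escape (output file name is arbitrary):
for i in {0..973}; do \
convert img1.png[701x1+0+${i}] -format '%#\n' info:- >> sig--img1--all-rows.txt ; \
done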
