Use ImageMagick to retrieve the original mask from two composited images

There once was an image, possibly with alpha transparency, that was overlaid onto both a white background and a black background. I have access to the two resulting images, but not the original, and I want to retrieve the original.
I have some Ruby code written up to do this but, I think simply by nature of being in Ruby, it's not as fast as it needs to be. Here's the basic logic, iterating pixel by pixel:
if pixel_on_black == pixel_on_white
  # Matching pixels indicate 100% opacity in the original.
  original_pixel = pixel_on_black
elsif pixel_on_black == BLACK && pixel_on_white == WHITE
  # Black on black and white on white indicate 0% opacity in the original.
  original_pixel = TRANSPARENT
else
  # Since it's not one of the simple cases, we'll do some math.
  # Fancy algebra tells us the following. (MAX_VALUE is the largest value
  # a channel can have. So, in most cases, 255.)
  # First, find the alpha value. This equation follows from the equations
  # for composing on black and composing on white.
  alpha = pixel_on_black.red - pixel_on_white.red + MAX_VALUE
  # Now that we know the alpha value, undo its multiplicative effect on the
  # pixel on black. By dividing. Ta da.
  # (Note the .to_f: integer division here would truncate the ratio to 0.)
  alpha_ratio = alpha.to_f / MAX_VALUE
  original_pixel = Pixel.new
  original_pixel.red   = pixel_on_black.red / alpha_ratio
  original_pixel.green = pixel_on_black.green / alpha_ratio
  original_pixel.blue  = pixel_on_black.blue / alpha_ratio
  original_pixel.alpha = alpha
end
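(For the curious, here is the "fancy algebra" spelled out per channel, with alpha and the channel values all on the [0, MAX_VALUE] scale. Compositing a color c at opacity alpha over a background b gives (alpha * c + (MAX_VALUE - alpha) * b) / MAX_VALUE, so:
pixel_on_black = (alpha * c) / MAX_VALUE                        # b = 0
pixel_on_white = (alpha * c) / MAX_VALUE + (MAX_VALUE - alpha)  # b = MAX_VALUE
pixel_on_black - pixel_on_white = alpha - MAX_VALUE
alpha = pixel_on_black - pixel_on_white + MAX_VALUE
which is exactly the equation used in the code above.)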
So that's nice, and it works and all. However, this code needs to end up running blazing-fast, and iterating pixels in Ruby is not acceptable. It looks like, unless this function already exists somewhere, it would be in my best interest to come up with a series of ImageMagick options that would do the trick.
I'm researching ImageMagick's command line tool now, because it looks really, really powerful, and it looks like either -fx or a series of fancy -function arguments would do the same thing as my code above. I'll keep trying, too, but are there any ImageMagick pros out there who already know how to put all that together?
EDIT: I have an -fx version running now :)
convert image_on_black.png image_on_white.png -matte -channel alpha -fx "u.r + 1 - v.r" -channel RGB -fx "(u.a == 0) ? 1 : (u / u.a)" output.png
Almost an exact translation of the original code, broken into channels. Iterate over the alpha channel and set the correct alpha values; then iterate over the RGB channels and divide each channel by the alpha value (unless it's zero, in which case we can set it to anything, since dividing by zero would throw an error; here I chose 1, for white).
Now time to work on converting these into more explicit arguments, since the -fx expression is reevaluated for each pixel, which isn't great.

Mkay, I surprised myself here, and I think I found an answer. It's four ImageMagick commands, though maybe they could be worked into one somehow…though I doubt it.
convert input_on_white.png -channel RGB -negate /tmp/convert_negative.png && \
convert input_on_black.png /tmp/convert_negative.png -alpha Off -compose Plus -composite /tmp/convert_alpha.png && \
composite input_on_black.png /tmp/convert_alpha.png -alpha Off -channel RGB -compose Divide /tmp/convert_division.png && \
convert /tmp/convert_division.png /tmp/convert_alpha.png -compose CopyOpacity -composite output.png
(Obviously, when done, clean up those temporary files. Also, use an actual tempfile system, rather than using hardcoded paths.)
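(For instance, here's a minimal sketch of the same pipeline using mktemp instead of hardcoded paths; it assumes GNU mktemp for the --suffix option:)
negative=$(mktemp --suffix=.png)
alpha_mask=$(mktemp --suffix=.png)
division=$(mktemp --suffix=.png)
convert input_on_white.png -channel RGB -negate "$negative" && \
convert input_on_black.png "$negative" -alpha Off -compose Plus -composite "$alpha_mask" && \
composite input_on_black.png "$alpha_mask" -alpha Off -channel RGB -compose Divide "$division" && \
convert "$division" "$alpha_mask" -compose CopyOpacity -composite output.png
rm -f "$negative" "$alpha_mask" "$division"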
The strategy is to first create a grayscale image that represents the alpha mask. We'll be emulating the line of code alpha = pixel_on_black.red - pixel_on_white.red + MAX_VALUE, which can be rewritten as alpha = pixel_on_black.red + (MAX_VALUE - pixel_on_white.red).
So, line 1: we create an image that represents the second term of that equation, a negated version of the RGB channels of image-on-white. We save it as a temporary file.
Then, line 2: we want to add that negative image to the image-on-black. Use ImageMagick's Plus composition, and save that as the temporary alpha mask. The result is a grayscale image where white represents areas that should have 100% opacity in the final image, and black represents areas that will later be fully transparent.
Then, line 3: bring the image-on-black back to the original RGB colors. Since the image-on-black was created by multiplying the RGB channels by the alpha ratio, we divide by the alpha mask image to undo that effect.
Finally, line 4: take the color-corrected image from line 3 and apply the alpha mask from line 2, using ImageMagick's CopyOpacity composition function. Ta da!
My original strategy took anywhere from 5 to 10 seconds. This strategy takes less than a second. Much, much, much better.
Unsurprisingly, asking for help is what drove me to find the answer myself. Regardless, I'll leave this question open for 48 hours to see if anyone finds a slightly more optimal solution. Thanks!

Related

Create fixed-size montage of images with missing files

Setting
Suppose we have a list of N elements, each of which is either a path to an image (e.g. a.jpg) or NULL, indicating that a file is missing.
Example (N = 6): a.jpg,NULL,c.jpg,NULL,NULL,f.jpg
All mentioned images (a.jpg, c.jpg, f.jpg) are guaranteed to have the same resolution.
Task
Create a fixed-width montage (e.g. out.jpg) in which NULL values are replaced with black images whose resolutions are consistent with the common resolution of a.jpg, c.jpg, f.jpg. I would like to abstain from creating an actual black.jpg and would prefer to create the image on-the-fly as needed.
You can use ImageMagick's "montage" command, provided your images have known dimensions (so you can include them in the command) and you can generate a text file "list.txt" of the image files, with "xc:black" on each line that has no image, like this...
image00.png
image01.png
image02.png
image03.png
image04.png
xc:black
image06.png
image07.png
xc:black
xc:black
image10.png
image11.png
You can run the ImageMagick "montage" command something like this...
magick montage @list.txt -tile 3x4 -geometry 160x160+3+3! out.png
The "@" in front of the name of the text file tells IM to read the input images from there. The "-tile" setting describes how many columns and rows will be in the result. The "-geometry" setting is where you put the dimensions of the images and the spacing between columns and rows. The "xc:black" images are single black pixels, but the exclamation point forces them to the W and H dimensions in the "-geometry" argument.
That will create black images everywhere you have "xc:black" in the list. If you want to fill the spaces between the images with black as well, add "-background black" to the command.
That works for me with IMv7 and "magick montage ...". For IMv6 you just use "montage". I'm pretty sure everything else about the command would work the same way.
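(To generate that list from the comma-separated form in the question, something like this would do; the variable name is illustrative:)
list="a.jpg,NULL,c.jpg,NULL,NULL,f.jpg"
echo "$list" | tr ',' '\n' | sed 's/^NULL$/xc:black/' > list.txt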

How can I automatically determine whether an image file depicts a photo or a 'graphic'?

How can I automatically determine whether an image file depicts a photo or a 'graphic'?
For example, using ImageMagick?
I am somewhat at the limits of my knowledge here, but I read a paper and have worked out a way to calculate image entropy with ImageMagick - some clever person might like to check it!
#!/bin/bash
image=$1
# Get number of pixels in image
px=$(convert -format "%w*%h\n" "$image" info:|bc)
# Calculate entropy
# See this paper www1.idc.ac.il/toky/imageProc-10/Lectures/04_histogram_10.ppt
convert "$image" -colorspace gray -depth 8 -format "%c" histogram:info:- | \
awk -F: -v px=$px '{p=$1/px;e+=-p*log(p)} END {print e}'
So, you would save the script above as entropy, then do the following once to make it executable:
chmod +x entropy
Then you can use it like this:
entropy image.jpg
It does seem to produce bigger numbers for true photos and lower numbers for computer graphics.
Another idea would be to look at the inter-channel correlation. Normally, on digital photos, the different wavelengths of light are quite strongly correlated with each other, so if the red component increases the green and the blue components tend to also increase, but if the red component decreases, both the green and the blue tend to also decrease. If you compare that to computer graphics, people tend to do their graphics with big bold primary colours, so a big red bar-graph or pie-chart graphic will not tend to be at all correlated between the channels. I took a digital photo of a landscape and resized it to be 1 pixel wide and 64 pixels high, and I am showing it using ImageMagick below - you will see that where red goes down so do green and blue...
convert DSC01447.JPG -resize 1x64! -depth 8 txt:
0,0: (168,199,235) #A8C7EB srgb(168,199,235)
0,1: (171,201,236) #ABC9EC srgb(171,201,236)
0,2: (174,202,236) #AECAEC srgb(174,202,236)
0,3: (176,204,236) #B0CCEC srgb(176,204,236)
0,4: (179,205,237) #B3CDED srgb(179,205,237)
0,5: (181,207,236) #B5CFEC srgb(181,207,236)
0,6: (183,208,236) #B7D0EC srgb(183,208,236)
0,7: (186,210,236) #BAD2EC srgb(186,210,236)
0,8: (188,211,235) #BCD3EB srgb(188,211,235)
0,9: (190,212,236) #BED4EC srgb(190,212,236)
0,10: (192,213,234) #C0D5EA srgb(192,213,234)
0,11: (192,211,227) #C0D3E3 srgb(192,211,227)
0,12: (191,208,221) #BFD0DD srgb(191,208,221)
0,13: (190,206,216) #BECED8 srgb(190,206,216)
0,14: (193,207,217) #C1CFD9 srgb(193,207,217)
0,15: (181,194,199) #B5C2C7 srgb(181,194,199)
0,16: (158,167,167) #9EA7A7 srgb(158,167,167)
0,17: (141,149,143) #8D958F srgb(141,149,143)
0,18: (108,111,98) #6C6F62 srgb(108,111,98)
0,19: (89,89,74) #59594A srgb(89,89,74)
0,20: (77,76,61) #4D4C3D srgb(77,76,61)
0,21: (67,64,49) #434031 srgb(67,64,49)
0,22: (57,56,43) #39382B srgb(57,56,43)
0,23: (40,40,34) #282822 srgb(40,40,34)
0,24: (39,38,35) #272623 srgb(39,38,35)
0,25: (38,37,37) #262525 srgb(38,37,37)
0,26: (40,39,38) #282726 srgb(40,39,38)
0,27: (78,78,57) #4E4E39 srgb(78,78,57)
0,28: (123,117,90) #7B755A srgb(123,117,90)
0,29: (170,156,125) #AA9C7D srgb(170,156,125)
0,30: (168,154,116) #A89A74 srgb(168,154,116)
0,31: (153,146,96) #999260 srgb(153,146,96)
0,32: (156,148,101) #9C9465 srgb(156,148,101)
0,33: (152,141,98) #988D62 srgb(152,141,98)
0,34: (151,139,99) #978B63 srgb(151,139,99)
0,35: (150,139,101) #968B65 srgb(150,139,101)
0,36: (146,135,98) #928762 srgb(146,135,98)
0,37: (145,136,97) #918861 srgb(145,136,97)
0,38: (143,133,94) #8F855E srgb(143,133,94)
0,39: (140,133,92) #8C855C srgb(140,133,92)
0,40: (137,133,92) #89855C srgb(137,133,92)
0,41: (136,133,91) #88855B srgb(136,133,91)
0,42: (131,124,81) #837C51 srgb(131,124,81)
0,43: (130,121,78) #82794E srgb(130,121,78)
0,44: (134,123,78) #867B4E srgb(134,123,78)
0,45: (135,127,78) #877F4E srgb(135,127,78)
0,46: (135,129,79) #87814F srgb(135,129,79)
0,47: (129,125,77) #817D4D srgb(129,125,77)
0,48: (106,105,65) #6A6941 srgb(106,105,65)
0,49: (97,99,60) #61633C srgb(97,99,60)
0,50: (120,121,69) #787945 srgb(120,121,69)
0,51: (111,111,63) #6F6F3F srgb(111,111,63)
0,52: (95,98,55) #5F6237 srgb(95,98,55)
0,53: (110,111,63) #6E6F3F srgb(110,111,63)
0,54: (102,105,60) #66693C srgb(102,105,60)
0,55: (118,120,66) #767842 srgb(118,120,66)
0,56: (124,124,68) #7C7C44 srgb(124,124,68)
0,57: (118,120,65) #767841 srgb(118,120,65)
0,58: (114,116,64) #727440 srgb(114,116,64)
0,59: (113,114,63) #71723F srgb(113,114,63)
0,60: (116,117,64) #747540 srgb(116,117,64)
0,61: (118,118,65) #767641 srgb(118,118,65)
0,62: (118,117,65) #767541 srgb(118,117,65)
0,63: (114,114,62) #72723E srgb(114,114,62)
Statistically, this is the covariance. I would tend to use the red and green channels of a photo to evaluate this, because in a Bayer grid there are two green sites for every single red and blue site, so the green channel is averaged across the two and therefore least susceptible to noise; the blue is the most susceptible to noise. So the code for measuring the covariance can be written like this:
#!/bin/bash
# Calculate the Red-Green covariance of the image supplied as a parameter
image=$1
convert "$image" -depth 8 txt: | awk '
/^#/ { next }                                # skip the header line of the txt: output
{
  split($2, a, ",")
  sub(/\(/, "", a[1]); R[++n] = a[1]
  G[n] = a[2]
  # sub(/\)/, "", a[3]); B[n] = a[3]
}
END {
  # Calculate the means of R and G (and optionally B)
  for (i = 1; i <= n; i++) {
    Rmean += R[i]
    Gmean += G[i]
    # Bmean += B[i]
  }
  Rmean /= n
  Gmean /= n
  # Bmean /= n
  # Calculate the Green-Red (and optionally Green-Blue) covariance
  for (i = 1; i <= n; i++) {
    GRcov += (G[i] - Gmean) * (R[i] - Rmean)
    # GBcov += (G[i] - Gmean) * (B[i] - Bmean)
  }
  GRcov /= n
  # GBcov /= n
  print "Green Red covariance: ", GRcov
  # print "Green Blue covariance: ", GBcov
}'
I did some testing and that also works quite well. However, graphics with big white or black backgrounds appear to be well correlated too, because red=green=blue on white, on black, and in all grey-toned areas, so you would need to be careful of them. That leads to another thought, though: photos almost never contain pure white or black (unless really poorly exposed), whereas graphics often have white backgrounds, so another test you could use would be to count the solid black and pure white pixels, like this:
convert photo.jpg -colorspace gray -depth 8 -format %c histogram:info:- | egrep "\(0\)|\(255\)"
2: ( 0, 0, 0) #000000 gray(0)
537: (255,255,255) #FFFFFF gray(255)
This one has 2 black and 537 pure white pixels.
I should imagine you probably have enough for a decent heuristic now!
Following on from my comment, you can use these ImageMagick commands:
# Get EXIF information
identify -format "%[EXIF*]" image.jpg
# Get number of colours
convert image.jpg -format "%k" info:
Other parameters may be suggested by other responders, and you can find most of that using:
identify -verbose image.jpg
Compute the entropy of the image. Artificial images usually have much lower entropy than photographs.

ImageMagick resize: really do nothing in the "Only Shrink Larger" case

The original image:
http://www.tiaoyue.com/img/_test/original.jpg
(2,457 bytes)
Try to get a thumbnail by ImageMagick:
convert \
http://www.tiaoyue.com/img/_test/original.jpg \
-thumbnail 200x200\> \
SecondaryCompression.jpg
Or in Windows:
convert ^
http://www.tiaoyue.com/img/_test/original.jpg ^
-thumbnail 200x200^> ^
SecondaryCompression.jpg
Get the file:
SecondaryCompression.jpg
(2,452 bytes)
Can I get the target file (SecondaryCompression.jpg) without secondary compression, i.e. just a copy of the original image (2,457 bytes)?
Reference:
http://www.imagemagick.org/Usage/resize/#shrink
The real problem with your 'convert' command is not that the file undergoes a 'secondary compression', as you called it.
The real problem is that some of the pixels get their color values changed very slightly (which in turn allows a better, or maybe even worse, compression result for the file as a whole).
So you should investigate how you can prevent the color changes first!
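(If your real goal is simply to avoid any recompression, one pragmatic workaround, just a sketch and assuming the 200x200 target from the question, is to check the dimensions yourself and copy the file untouched when no shrinking would occur:)
read w h <<< "$(identify -format "%w %h" original.jpg)"
if [ "$w" -le 200 ] && [ "$h" -le 200 ]; then
  cp original.jpg SecondaryCompression.jpg   # no resize needed, bytes stay identical
else
  convert original.jpg -thumbnail 200x200\> SecondaryCompression.jpg
fi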
To document + verify the color changes for each single pixel, run these commands:
convert http://www.tiaoyue.com/img/_test/original.jpg original.txt
convert SecondaryCompression.jpg SecondaryCompression.txt
sdiff -sbB SecondaryCompression.txt original.txt
Hint: the TXT output format of convert is a textual representation of the coordinate position of each pixel and its respective color values (these values are given in 3 different ways: decimal RGB (or CMYK) values, hex RGB (or CMYK) values, and human-readable color names where possible). If you see the format once, you'll understand it immediately.
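(A typical line looks like this, with illustrative values: coordinate, then decimal RGB, hex, and color name:)
0,0: (171,201,236) #ABC9EC srgb(171,201,236)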
One can establish that 1415 pixels out of a total of 7500 have changed color values. That's 18.86% of the pixels.
To create a visual representation for the pixel differences, run:
compare original.jpg SecondaryCompression.jpg delta1.jpg
compare original.jpg SecondaryCompression.jpg -compose src delta2.jpg
The first image (delta1.jpg, far left) paints those pixels in red which have different color values, using the original.jpg as a light-gray background image.
The second image (delta2.jpg, second from left) paints only pixels in red which have different colors, and paints identical color values as white pixels.
The third image (second from right) is your original JPEG. The fourth one (far right) is your 'unaltered' thumbnail (in reality with some subtle changes for some pixels).
I've no time right now to investigate the reason for the slight color changes (and can't give a reason off the top of my head), but I will maybe return to this topic later.

Why is my bicubic interpolation of discrete data looking ugly?

I have a 128x128 array of elevation data (elevations from -400m to 8000m are displayed using 9 colors) and I need to resize it to 512x512. I did it with bicubic interpolation, but the result looks weird. In the picture you can see the original, nearest-neighbor, and bicubic versions. Note: only the elevation data are interpolated, not the colors themselves (the gamut is preserved). Are the artifacts seen in the bicubic image the result of my bad interpolation code, or are they caused by interpolating discrete (9-step) data?
http://i.stack.imgur.com/Qx2cl.png
There must be something wrong with the bicubic code you're using. Here's my result with Python:
The black border around the outside is where the result was outside of the palette due to ringing.
Here's the program that produced the above:
from PIL import Image

im = Image.open(r'c:\temp\temp.png')

# convert the image to a grayscale with 8 values from 10 to 17
levels = ((0,0,255),(1,255,0),(255,255,0),(255,0,0),(255,175,175),(255,0,255),(1,255,255),(255,255,255))
img = Image.new('L', im.size)
iml = im.load()
imgl = img.load()
colormap = {}
for i, color in enumerate(levels):
    colormap[color] = 10 + i
width, height = im.size
for y in range(height):
    for x in range(width):
        imgl[x,y] = colormap[iml[x,y]]

# resize using Bicubic and restore the original palette
im4x = img.resize((4*width, 4*height), Image.BICUBIC)
palette = []
for i in range(256):
    if 10 <= i < 10+len(levels):
        palette.extend(levels[i-10])
    else:
        palette.extend((i, i, i))
im4x.putpalette(palette)
im4x.save(r'c:\temp\temp3.png')
Edit: Evidently Python's Bicubic isn't the best either. Here's what I was able to do by hand in Paint Shop Pro, using roughly the same procedure as above.
While bicubic interpolation can sometimes generate interpolated values outside the original range (can you verify whether this is happening to you?), it really seems like you may have a bug, but it is hard to say without looking at the code. As a general rule, the bicubic result should be smoother than the nearest-neighbor result.
Edit: I take that back; I see no interpolated values outside the original range in your images. Still, I think the strange part is the "jaggedness" you get when using bicubic; you may want to double-check that.
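(To see how cubic interpolation can produce values outside the original range: with the Catmull-Rom kernel, one common choice for bicubic and used here purely as an illustration, interpolating halfway between two samples of value 0 that sit just before a step up to 1 gives a negative result:
p(t) = 0.5 * (2*p1 + (p2 - p0)*t + (2*p0 - 5*p1 + 4*p2 - p3)*t^2 + (3*p1 - 3*p2 + p3 - p0)*t^3)
with (p0, p1, p2, p3) = (0, 0, 0, 1) and t = 0.5:
p(0.5) = 0.5 * (0 + 0 - 1/4 + 1/8) = -1/16
That undershoot is exactly the kind of ringing behind the out-of-palette border mentioned in the first answer.)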

Remove shapes from image with X number of pixels or less

If I have an image with, let's say, squares: is it possible to remove all shapes formed by 10 (non-white) pixels or fewer and keep all shapes formed by 11 pixels or more? I want to do it programmatically or with a command line.
Thanks in advance!
Possibly an algorithm called erosion may be useful. It works on boolean images, shrinking all areas of "true" by removing one layer of their surface pixels. Apply it a few times and small areas disappear, while bigger ones remain (though shrunken). De-shrink the survivors with the opposite algorithm, dilation (equivalent to applying erosion to the logical complement of the image). Find a way to define a boolean image by testing whether a pixel is inside an "object", however you define it, and find a way to apply the results to the original image to change the unwanted small objects to the background color. See the sketch below for how ImageMagick packages this up.
To be more specific would require seeing examples.
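(For what it's worth, ImageMagick exposes this erode-then-dilate sequence directly as a morphological "open" operation; a sketch, with the disk radius being something you would tune toward the 10-pixel threshold:)
convert input.png -morphology Open Disk:2 output.png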
Look up flood fill algorithms and alter them to count the pixels instead of filling. Then if the shape is small enough, fill it with white.
There are a couple of ways to approach this. What you are referring to is commonly called despeckle in document imaging applications. Document scanners often introduce a lot of dirt and noise into an image during scanning, so it must be removed to help improve OCR accuracy.
I assume you are processing B/W images here, or can convert your image to B/W; otherwise it becomes a lot more complex. Despeckling is done by analysing all the blobs on the page. One way to decide on blob size is to consider width, height, and number of pixels combined.
Leptonica.com is an open-source C-based library that has the blob analysis functions you require. With some simple checks and loops you can delete these smaller objects. Leptonica can also be compiled quite easily into a command-line program. There are many example programs, and that is the best way to learn Leptonica.
For testing, you may want to try ImageMagick. It has a command-line option for despeckle, but it takes no further parameters.
http://www.imagemagick.org/script/command-line-options.php#despeckle
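(Usage is as simple as it gets; a sketch, and you can repeat the option for a stronger effect:)
convert input.png -despeckle output.png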
The other option is to look for "despeckle" algorithms in Google.
ImageMagick, starting from version 6.8.9-10, includes a -connected-components option which can be used to do what you want; however, from the example provided on the official website, it is not immediately obvious how to actually obtain the original image minus the removed connected components.
I'm almost sure there is a simpler way, but I did it via a clunky script performing a series of steps:
First, I ran the command from the connected components example:
convert in.png \
-define connected-components:verbose=true \
-connected-components 8 out.png
This produces output in the following format:
Objects (id: bounding-box centroid area mean-color):
(...)
181: 9x9+1601+916 1605.2,920.2 44 gray(0)
185: 5x5+1266+923 1268.0,925.0 13 gray(0)
274: 5x5+2276+1661 2278.0,1663.0 13 gray(255)
Then, I used awk to filter only the lines containing an area (in pixels) of black components (mean-color gray(0) in my image) smaller than my threshold $min_cc_area. Note that connected-components has an option to filter components smaller than a given area, but I needed the opposite. The awk line is similar to the following:
{if ($4 < $min_cc_area && $5=="gray(0)") { print $2 }}
I then proceeded to create a command-line for ImageMagick where I drew white rectangles on top of these connected components. The -draw command expects coordinates in the form x1,y1 x2,y2, so I used awk again to compute the coordinates from the ones in the format [w]x[h]+x1+y1 given by -connected-components:
awk '{print "white fill rectangle " $3 "," $4 " " $3+$1-1 "," $4+$2-1 }'
Finally, I ran the created ImageMagick command-line to create a new image combining all the white rectangles on top of the original one.
In the end, I got the following script:
#!/bin/bash
# usage: $0 infile min_cc_area outfile
infile=$1
min_cc_area=$2
outfile=$3
awk_exp="{if (\$4 < $min_cc_area && \$5==\"gray(0)\") { print \$2 }}"
draw_rects=""
draw_rects+=$(convert $infile -define connected-components:verbose=true \
-connected-components 8 null: | \
awk "$awk_exp" | tr 'x+' ' ' | \
awk '{print " rectangle " $3 "," $4 " " $3+$1-1 "," $4+$2-1 }')
convert $infile -draw "fill white $draw_rects" $outfile
Note that this solution may erase black pixels near the removed CCs, if they intersect the bounding rectangle of the removed component.
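(Depending on your ImageMagick version there may be a much shorter route: newer releases of -connected-components also accept an area threshold, which merges components smaller than the given area into their surroundings. A sketch, untested, assuming a black-on-white input:)
convert in.png \
-define connected-components:area-threshold=10 \
-connected-components 8 out.png
(The operator paints each remaining component with its mean color, so for a black-and-white input the output should essentially be the image minus the small specks.)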
You want a connected components labeling algorithm. It will scan through the image and give every connected shape an id number, as well as assign every pixel an id number of what shape it belongs to.
After running a connected components filter, just count the pixels assigned to each object, find the objects that have less than 10 pixels, and replace the pixels in those objects with white.
If you can use OpenCV, this piece of code does what you want (i.e., despeckle). You can play with the parameters of Size(3,3) in the first line to get rid of bigger or smaller noisy artifacts.
#include <opencv2/opencv.hpp>
using namespace cv;
// morphological opening then closing removes small noisy artifacts
Mat element = getStructuringElement(MORPH_ELLIPSE, Size(3,3));
morphologyEx(image, image, MORPH_OPEN, element);
morphologyEx(image, image, MORPH_CLOSE, element);
You just want to figure out the area of each component, so an 8-direction tracking algorithm could help. I have an API that solves this problem, coded in C++. If you want, send me an email.
