Defining an image mask based on image values in G'MIC

I am using the inpainting command in GMIC, which takes in both an image and a mask which indicates which part of that image to inpaint. Values that are 255 on the mask are then filled in.
http://gmic.eu/reference.shtml
The input images I am using have huge black portions (the pixel values are 0 there). I want to define the mask to be exactly the pixels of the original image which are black.
Of course, I can preprocess all these masks in MATLAB, Python, etc., but this will take a long time as I am processing on the order of 1 million images. GMIC has a fast piping interface which does everything in memory, and a mathematical interpreter, so I should be able to do this all with the GMIC command line and save a lot of time.
The answer I need does this entirely in GMIC using its mathematical interpreter. Thanks in advance!

Something like this, probably:
$ gmic input.png --select_color 0,0,0,0 -inpaint[0] [1],.... -keep[0] -o output.png
(where you must set your inpaint parameters according to your needs).
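If you specifically want the mask built by the mathematical interpreter rather than select_color, something along these lines should also work. This is only a sketch using the same dash-style syntax as above: --norm appends the per-pixel channel norm as a new image [1], the fill expression turns every exactly-black pixel into 255 and everything else into 0, and the inpaint parameters are left elided just as in the command above.
$ gmic input.png --norm -fill[1] "i?0:255" -inpaint[0] [1],.... -keep[0] -o output.png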

Related

ImageMagick resized pictures are different using a single command or two commands

I can't understand why those two scripts seem to produce a different result, given that the second one is like the first one but separated into two commands.
First script:
convert lena_std.tif -compress None -resize 160x160 -compress None -resize 32x32 test1.bmp
Second script:
convert lena_std.tif -compress None -resize 160x160 test2.bmp
convert test2.bmp -compress None -resize 32x32 test3.bmp
I use the following command to check the difference between the results:
convert test1.bmp test3.bmp -metric AE -compare diff.bmp
I use Imagemagick on Ubuntu 22.04. My convert -version indicates: Version: ImageMagick 6.9.11-60 Q16 x86_64 2021-01-25.
Because when you scale you interpolate pixels.
Roughly, the code considers the pixel at (x,y) in the result and computes where it comes from in the source. This is usually not an exact pixel: more like an area when you scale down, or part of a pixel when you scale up. So to make up the color of the pixel at (x,y) some math is applied: if you scale down, some averaging of the source area, and if you scale up, something that depends on how close the source is to the edge of the pixel and how different the colors of neighboring pixels are.
This math can be very simple (the color of the closest pixel), simple (some linear average), a bit more complex (bi-cubic interpolation) or plain magic (sinc/Lanczos), with the more complex forms giving better results.
So, in one case, you obtain the result directly from the source, and in the other you obtain the final result from an approximation of what the image would look like at the intermediate size.
Another way to see it is that each interpolation has a spatial frequency response (like a filter in acoustics), and in one case you apply a single filter and in the other one you compose two filters.
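If you want to see how much the interpolation math alone matters, a quick experiment (a sketch, not the original poster's pipeline) is to resize the same source once with the crudest filter and once with a windowed-sinc one and measure the difference; -filter, -resize and the RMSE metric are all standard ImageMagick options:
# Same source, same target size, only the resampling filter differs.
convert lena_std.tif -filter point   -resize 32x32 nn.bmp       # nearest neighbour
convert lena_std.tif -filter lanczos -resize 32x32 lanczos.bmp  # windowed sinc
compare -metric RMSE nn.bmp lanczos.bmp null:                   # prints the difference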

Compare scanned image (label) with original

There is an original high quality label. After it's been printed we scan a sample and want to compare it with original to find errors in printed text for example. Original and scanned images are almost of the same size (but a bit different).
ImageMagick can do it great but not with the scanned image (I suppose it compares it bitwise, but the scanned image contains too much "noise").
Is there a utility that can do such a comparison? Or maybe an algorithm (implemented or easy to implement) - like the one that uses the Cauchy–Schwarz inequality in signal processing?
Adding sample pics.
Original:
Scanned:
Further Thoughts
As I explained in the comments, I think the registration of the original and scanned images is going to be important as your scans are not exactly horizontal nor the same size. To do a crude registration, you could find some points of high-contrast that are hopefully unique in the original image. So, say I wanted one on the top-left (called tl.jpg), one in the top-right (tr.jpg), one in the bottom-left (bl.jpg) and one in the bottom-right (br.jpg). I might choose these:
I can now find these in the original image and in the scanned image using a sub-image search, for example:
compare -metric RMSE -subimage-search original.jpg tl.jpg a.png b.png
1148.27 (0.0175214) # 168,103
That shows me where the sub-image has been found, and the second (greyish) image shows me a white peak where the image is actually located. It also tells me that the sub image is at coordinates [168,103] in the original image.
compare -metric RMSE -subimage-search scanned.jpg tl.jpg a.png b.png
7343.29 (0.112051) # 173,102
And now I know that same point is at coordinates [173,102] in the scanned image. So I need to transform [173,102] to [168,103].
I then need to do that for the other sub images:
compare -metric RMSE -subimage-search scanned.jpg br.jpg result.png
8058.29 (0.122962) # 577,592
OK, so we can get 4 points, one near each corner of the original image, and their corresponding locations in the scanned image. Then we need to do an affine transformation - which I may or may not do in the future. There are notes on how to do it here.
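For what it's worth, ImageMagick's -distort Affine can do that crude registration once you have the point pairs: you pass it space-separated "source destination" coordinate pairs and, given three or more pairs, it fits the affine transform for you. A sketch only - the 173,102 -> 168,103 pair comes from the searches above, while the other two pairs are hypothetical placeholders you would fill in from your own sub-image searches:
# Warp the scan so its corner features land on the original's coordinates.
# Pairs are "scanned_x,scanned_y original_x,original_y"; the last two pairs are placeholders.
convert scanned.jpg -virtual-pixel white \
  -distort Affine '173,102 168,103  577,592 570,595  180,595 175,598' \
  registered.jpg
You could then run the comparison on registered.jpg instead of scanned.jpg.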
Original Answer
It would help if you were able to supply some sample images to show what sort of problems you are expecting with the labels. However, let's assume you have these:
label.png
unhappy.png
unhappy2.png
I have only put a red border around them so you can see the edges on this white background.
If you use Fred Weinhaus's script similar from his superb website, you can now compute a normalised cross-correlation between the original image and the unhappy ones. So, taking the original label and the one with one track of white across it, they come out pretty similar (96%):
./similar label.png unhappy.png
Similarity Metric: 0.960718
If we now try the more unhappy one with two tracks across it, they are less similar (92%):
./similar label.png unhappy2.png
Similarity Metric: 0.921804
Ok, that seems to work. We now need to deal with the shifted and differently sized scan, so I will attempt to trim both images down to just the important stuff, blur them to lose any noise, and resize them to a standardised size for comparison, using a little script.
#!/bin/bash
image1=$1
image2=$2
fuzz="10%"
filtration="-median 5x5"
resize="-resize 500x300"
echo DEBUG: Preparing $image1 and $image2...
# Get cropbox from blurred image
cropbox=$(convert "$image1" -fuzz $fuzz $filtration -format %# info:)
# Now crop original unblurred image and resize to standard size
convert "$image1" -crop "$cropbox" $resize +repage im1.png
# Get cropbox from blurred image
cropbox=$(convert "$image2" -fuzz $fuzz $filtration -format %# info:)
# Now crop original unblurred image and resize to standard size
convert "$image2" -crop "$cropbox" $resize +repage im2.png
# Now compare using Fred's script
./similar im1.png im2.png
We can now compare the original label with a new image called unhappy-shifted.png, using the script above (saved as prepare):
./prepare label.png unhappy-shifted.png
DEBUG: Preparing label.png and unhappy-shifted.png...
Similarity Metric: 1
And we can see they compare the same despite being shifted. Obviously I cannot see your images, how noisy they are, what sort of background you have, how big they are, what colour they are and so on - so you may need to adjust the preparation where I have just done a median filter. Maybe you need a blur and/or a threshold. Maybe you need to go to greyscale.

Threshold to amplify black lines

Given an image (like the one given below) I need to convert it into a binary image (black and white pixels only). This sounds easy enough, and I have tried with two thresholding functions. The problem is I can't get perfect edges using either of these functions. Any help would be greatly appreciated.
The filters I have tried are the Euclidean distance in the RGB and HSV spaces.
Sample image:
Here it is after running an RGB threshold filter. (at 40%; it picks up more artefacts beyond this)
Here it is after running an HSV threshold filter. (at 30% the paths become barely visible but clearly unusable because of the noise)
The code I am using is pretty straightforward. Change the input image to the appropriate color space and check the Euclidean distance from the black color.
sqrt(R*R + G*G + B*B)
since I am comparing with black (0, 0, 0)
Your problem appears to be the variation in lighting over the scanned image which suggests that a locally adaptive thresholding method would give you better results.
The Sauvola method calculates the value of a binarized pixel based on the mean and standard deviation of pixels in a window of the original image. This means that if an area of the image is generally darker (or lighter) the threshold will be adjusted for that area and (likely) give you fewer dark splotches or washed-out lines in the binarized image.
http://www.mediateam.oulu.fi/publications/pdf/24.p
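For reference, the Sauvola threshold for the window centred on each pixel is usually written as
T = m * (1 + k * (s / R - 1))
where m and s are the window mean and standard deviation, R is the dynamic range of the standard deviation (128 for 8-bit images) and k is a tuning constant around 0.2-0.5; pixels darker than T become foreground (black) and the rest become background (white).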
I also found a method by Shafait et al. that implements the Sauvola method with greater time efficiency. The drawback is that you have to compute two integral images of the original, one at 8 bits per pixel and the other potentially at 64 bits per pixel, which might present a problem with memory constraints.
http://www.dfki.uni-kl.de/~shafait/papers/Shafait-efficient-binarization-SPIE08.pdf
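(The speed-up comes from the standard integral-image identity: if S is the 2-D cumulative sum of the image, the sum over any window x1..x2, y1..y2 is S(x2,y2) - S(x1-1,y2) - S(x2,y1-1) + S(x1-1,y1-1), so the window mean - and, with a second integral image of squared values, the variance - that Sauvola needs cost a constant number of lookups per pixel regardless of window size.)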
I haven't tried either of these methods, but they do look promising. I found Java implementations of both with a cursory Google search.
Running an adaptive threshold over the V channel in the HSV color space should produce brilliant results. Best results come with a window size larger than 11x11; don't forget to choose a negative value for the threshold offset.
Adaptive thresholding basically is:
if (pixel value + constant > average pixel value in the window around the pixel)
    Pixel_Binary = 1;
else
    Pixel_Binary = 0;
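If you want to try this without writing the loop yourself, ImageMagick's -lat operator ("local adaptive threshold") does essentially the comparison above. A sketch only, assuming the HSB brightness channel stands in for HSV's V, with the window size and (negative) offset left as values to tune:
# Extract the brightness channel, then threshold each pixel against the mean of a
# 15x15 window around it, offset by -5% (window and offset are assumptions to tune).
convert input.png -colorspace HSB -channel B -separate +channel -lat 15x15-5% binary.png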
Due to the noise and the illumination variation you may need adaptive local thresholding; thanks to Beaker for his answer too.
Therefore, I tried the following steps:
Convert it to grayscale.
Do mean or median local thresholding; I used 10 for the window size and 10 for the intercept constant and got this image (smaller values might also work):
Please refer to http://homepages.inf.ed.ac.uk/rbf/HIPR2/adpthrsh.htm if you need more information on these techniques.
To make sure the thresholding was working fine, I skeletonized it to see if there is a line break. This skeleton may be the one needed for further processing.
To get rid of the remaining noise you can just find the longest connected component in the skeletonized image.
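For the skeletonisation step, something along these lines should work with ImageMagick (a sketch; it assumes a white-on-black binary input, and the longest-component clean-up would still need a separate connected-components pass, e.g. ImageMagick's -connected-components operator):
# Thin the binary image repeatedly (-1 = iterate to convergence) down to its skeleton.
convert binary.png -morphology Thinning:-1 Skeleton skeleton.png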
Thank you.
You probably want to do this as a three-step operation.
use leveling, not just thresholding: take the input and scale the intensities (gamma-correct) with parameters that simply dull the midtones, without removing the darks or the lights (your RGB threshold is too strong, for instance; you lost some of your lines).
edge-detect the resulting image using a small kernel convolution (5x5 for binary images should be more than enough). Use a simple [1 2 3 2 1 ; 2 3 4 3 2 ; 3 4 5 4 3 ; 2 3 4 3 2 ; 1 2 3 2 1] kernel (normalised)
threshold the resulting image. You should now have a much better binary image.
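Put together, a rough ImageMagick rendition of those three steps could look like this. A sketch only: the level endpoints, gamma and final threshold are placeholder values to tune, and the kernel is the one quoted above, normalised via convolve:scale:
# 1) level the midtones, 2) convolve with the (normalised) 5x5 kernel, 3) threshold.
convert input.png -colorspace Gray \
  -level 5%,95%,1.3 \
  -define convolve:scale='!' \
  -morphology Convolve '5x5: 1,2,3,2,1 2,3,4,3,2 3,4,5,4,3 2,3,4,3,2 1,2,3,2,1' \
  -threshold 60% \
  binary.png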
You could try a black top-hat transform. This involves subtracting the image from the closing of the image. I used a structuring element window size of 11 and a constant threshold of 0.1 (25.5 on a 0-255 scale).
You should get something like:
Which you can then easily threshold:
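In ImageMagick terms the whole chain might look roughly like this (a sketch: BottomHat is the built-in black top-hat, i.e. closing minus image; Disk:5.5 approximates an 11-pixel window and 10% matches the 0.1 threshold, but both are values to tune):
# Black top-hat makes the dark paths bright, the threshold keeps them, and the
# negate gives black lines on a white background.
convert input.png -colorspace Gray \
  -morphology BottomHat Disk:5.5 \
  -threshold 10% -negate \
  lines.png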
Best of luck.

Convert to grayscale and reduce the size

I am trying to develop an OCR in VB6 and I have some problems with the BMP format. I have been investigating the OCR process and the first step is to convert the image to "black and white" with a threshold. The conversion process is easy to understand and I have done it. However, I'm trying to reduce the size of the resulting image because it uses fewer colors (each pixel only has 256 possible values in grayscale). In the original image I have 3 color channels (red, green and blue) but now I only need one (the grayscale value). At the moment I have achieved the conversion, but the resulting grayscale images have the same size as the original color images (I assign the same value to all three channels).
I have tried to modify the header of the BMP file but I haven't achieved anything, and now I don't understand how it works. For example, if I convert the image with Paint, the offset that is specified in the header changes its value. If the header is constant, why does the offset change its value?
The thing is that a grey-scale bitmap image is the same size as a color bitmap image because the data used to store the grey colors takes just as much space as the color data.
The only difference is that a grey pixel repeats the same value in all three channels, (160,160,160) for example, where a color pixel might be something like (123,200,60). The grey values are just a small subset of the RGB field.
You can trim down the size after converting to grey-scale by converting it from 24 bit to 16 bit or 8 bit, for example. Whether that option is already supplied to you depends on what you are using to do the conversion; otherwise you'll have to implement it yourself.
You can also try using something other than BMP images. PNG files are lossless too, and would even save space with the 24-bit version. Image processing libraries usually give you several options as output formats. Otherwise you can probably find a library that does this for you.
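If calling an external tool is an option, ImageMagick can write the smaller file directly. A sketch (the BMP3: prefix forces the plain Windows v3 BMP format, and the depth/type flags request a single 8-bit grey channel):
# Convert a 24-bit colour BMP to an 8-bit greyscale BMP.
convert input.bmp -colorspace Gray -depth 8 -type Grayscale BMP3:output.bmp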
You can write your own conversion in a "lockbits" method. It takes a while to understand how to lock/unlock bits correctly, but the effort is worth it, and once you have the code working you'll see how it can be applied to other scenarios. For example, using a lock/unlock bits technique you can access the pixel values from a bitmap, copy those pixel values into an array, manipulate the array, and then copy the modified array back into a bitmap. That's much faster than calling GetPixel() and SetPixel(). That's still not the fastest image manipulation code one can write, but it's relatively easy to implement and maintain.
It's been a while since I've written VB6 code, but Bob Powell's site often has good examples, and he has a page about lock bits:
https://web.archive.org/web/20121203144033/http://www.bobpowell.net/lockingbits.htm
In a pinch you could create a new Bitmap of the appropriate format and call SetPixel() for every pixel:
Every pixel (x,y) in your 24-bit color image will have a color value (r,g,b).
After conversion to a 24-bit gray image, each pixel (x,y) will have three equal values, one for each color channel; that can be expressed as (n,n,n) as Willem wrote in his reply. If all three colors R,G,B have the same value, then you can say that value is the "grayscale" value of that pixel. This is the same shade of gray that you will see in your final 8-bit bitmap.
Call SetPixel for each pixel (x,y) in a newly created 8-bit bitmap that is the same width and height as the original color image.

Photoshop's RGB levels with ImageMagick

I'm attempting to convert some effects created in Photoshop into code for use with PHP/ImageMagick. Right now I'm specifically interested in how to recreate Photoshop's RGB levels feature. I'm not really familiar with the Photoshop interface, but this is the info that I am given:
RGB Level Adjust
Input levels: Shadow 0, Midtone 0.92, Highlight 255
Output levels: Shadow 0, Highlight 255
What exactly are the input levels vs. the output levels? How would I translate this into ImageMagick? Below you can see what I have tried, but it does not correctly render the desired effect (converting Photoshop's 0-255 scale to 0-65535):
$im->levelImage(0, 0.92, 65535);
$im->levelImage(0, 1, 65535);
This was mostly a stab in the dark since the parameter names don't line up and for output levels the number of parameters don't even match. Basically I don't understand exactly what is going on when photoshop applies the adjustment. I think that's my biggest hurdle right now. Once I get that, I'll need to find corresponding effects in ImageMagick.
Can anyone shed some light on what's going on in Photoshop and how to replicate that with ImageMagick?
Shadows, midtones and highlights are colors that fall within a certain range of luminosity. For example, shadows are the lower range of the luminosity histogram, midtones are colors in the middle and highlights are the ones up high. However, you can't use a hard limit on these values, which is why you have to use curves that weight the histogram (a pixel may lie in multiple ranges at the same time).
To adjust shadows, midtones and highlights separately, you will need to create a weighted sum per pixel that uses the current shadow, midtone and highlight values to create a resultant value.
I don't think you can do this directly using ImageMagick API's - perhaps you could simply write it as a filter.
Hope this helps.
So I stumbled across this website: http://www.fmwconcepts.com/imagemagick/levels/index.php
Based on the information there, I was able to come up with the following php which seems pretty effective at simulating what Photoshop does with input and output and all that.
// First pass applies the Photoshop "input levels" (shadow, midtone/gamma, highlight);
// the second pass stretches the range back out to emulate the "output levels".
function levels($im, $inshadow, $midtone, $inhighlight, $outshadow, $outhighlight, $channel = Imagick::CHANNEL_ALL) {
    $im->levelImage($inshadow, $midtone, $inhighlight, $channel);
    $im->levelImage(-$outshadow, 1.0, 255 + (255 - $outhighlight), $channel);
}
This assumes that the parameters to levelImage for blackpoint and whitepoint are on a scale of 0-255. They might actually be 0-65535 on your system. If that's the case, it's easy enough to adjust. You can also check what value your setup uses with $im->getQuantumRange(). It will return an array with a string version and a long version of the quantum. From there it should be easy enough to normalize the values or just use the new range.
See the documentation: The first value is the black point (shadow) input value, the middle is a gamma (which I'm guessing is the same as Photoshop's midpoint), and the last is the white point (highlight) input value.
The output values are fixed at the quantum values of the image type, there's no need to specify them.
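For completeness, the equivalent adjustment straight from the command line (input levels 0 / 0.92 / 255, output levels untouched) would be roughly the following sketch; percent endpoints are used so the quantum depth of your build doesn't matter:
# Apply Photoshop-style input levels: black point 0%, white point 100%, gamma 0.92.
convert input.jpg -level 0%,100%,0.92 output.jpg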
