Detect skewed image with ImageMagick

I previously posted about how to auto-fix a skewed image; is there a way to simply detect that an image is skewed with ImageMagick? I.e., is there a command I could run on two images, one skewed and one not, whose output would tell me whether an image is skewed?
Thanks for any help,
Kevin

Correction to my comment above. There is a way to determine the skew angle in Imagemagick if you have regular lines of text.
Input:
convert img.jpg -deskew 60% -format "%[deskew:angle]" info:
2.18111
See https://imagemagick.org/script/escape.php
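If you just need a yes/no answer rather than the angle itself, you could wrap that command in a small shell test and compare the reported angle against a tolerance. A rough sketch, assuming a tolerance of 1 degree (tune it for your scans):
# Report "skewed" if the detected angle exceeds the tolerance
angle=$(convert img.jpg -deskew 60% -format "%[deskew:angle]" info:)
if awk -v a="$angle" 'BEGIN { exit (a < -1 || a > 1) ? 0 : 1 }'; then
    echo "skewed ($angle degrees)"
else
    echo "straight ($angle degrees)"
fi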

Related

imagemagick Crop Accuracy

I am using -crop with a geometry argument to crop an image, with success. However, it seems to round only to the nearest 0.5.
Is there a way to force imagemagick to use greater accuracy, such as up to 5 decimal places?
If not, are there any other tools that can help me achieve this via the command line?
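Not a full answer, but one workaround sometimes used for sub-pixel cropping is supersampling: enlarge the image, crop on the finer integer grid, then shrink back. A rough sketch with made-up file names and geometry, approximating a crop of 100.3x100.7 at offset +10.5+12.3 via a 10x enlargement; the two resizes do introduce a little resampling blur:
# Enlarge 10x, crop on the finer grid, shrink back to the original scale
convert input.png -resize 1000% -crop 1003x1007+105+123 +repage -resize 10% output.png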

Find "fraction of bright pixels" in image (thresholding?)

I have a large number of grayscale images that show bright "fibers" on a darker background. I am trying to quantify the "amount" of fibers. Since they overlap almost everywhere it will be impossible to count the number of fibers, so instead I want to resort to simply calculating how large the area fraction of the white fibers is compared to the full image (e.g. this one is 55% white, another one with fewer fibers is only 43% white, etc.). In other words, I want to quantify the density of the fibers in the image.
Example pictures:
High density: https://dl.dropboxusercontent.com/u/14309718/f1.jpg
Lower density: https://dl.dropboxusercontent.com/u/14309718/f2.jpg
I figured a simple (adaptive) threshold filter would do the job nicely by just converting the image to purely black/white and then counting the fraction of white pixels. However, my answer seems to depend almost completely and only on the threshold value that I choose. I did some quick experiments by taking a large number of different thresholds and found that in all pictures the fraction of white pixels is almost exactly a linear function of the threshold value. In other words - I can get any answer I want between roughly 10% and 90% depending on the threshold I choose.
This is obviously not a good approach because my results are extremely biased with how I choose the threshold and therefore completely useless. Furthermore I have about 100 of these images and I'm not looking forward to trying to choose the "correct" threshold for all of them manually.
How can I improve this method?
As the images are complex and the outlines of the fibers are fuzzy, there is little hope of getting an "exact" measurement.
What matters then is to achieve repeatability, i.e. ensure that the same fiber density is always assigned the same measurement, even in varying lighting conditions if possible, and different densities are assigned different measurements.
This rules out human intervention in adjusting a threshold.
My best advice is to rely on Otsu thresholding, which is very good at finding meaningful background and foreground intensities and is fairly illumination-independent.
Enhancing the contrast before Otsu should be avoided because binarization commutes with contrast enhancement (so there is no real benefit), while contrast enhancement can degrade the image by saturating it in places.
Just echoing @YvesDaoust's thoughts really - and providing some concrete examples...
You can generate histograms of your images using ImageMagick which is installed on most Linux distros and is available for OSX and Windows. I am just doing this at the command-line but it is powerful and easy to run some tests and see how Yves' suggestion works for you.
# Make histograms for both images
convert 1.jpg histogram:h1.png
convert 2.jpg histogram:h2.png
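If you would rather look at numbers than pictures, you can also dump the grey-level counts as text and check for the two peaks that way (same input files as above):
# Print the ten most populated grey levels
convert 1.jpg -colorspace Gray -depth 8 -format "%c" histogram:info:- | sort -rn | head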
Yes, they are fairly bimodal - so Otsu thresholding should find a threshold that maximises the between-class variance. Use the script otsuthresh from Fred Weinhaus' website here
./otsuthresh 1.jpg 1.gif
Thresholding Image At 44.7059%
./otsuthresh 2.jpg 2.gif
Thresholding Image At 42.7451%
Count percentage of white pixels in each image:
convert 1.gif -format "%[fx:int(mean*100)]" info:
50
convert 2.gif -format "%[fx:int(mean*100)]" info:
48
Not that brilliant a distinction! Mmmm... I tried adding in a median filter to reduce the noise, but that didn't help. Do you have your images available as PNG, to avoid the nasty JPEG artefacts?
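One small refinement worth trying: dropping the int() from the fx expression reports the white fraction with full precision, which makes small differences in fiber density easier to compare:
# White-pixel fraction with decimal places instead of a rounded integer
convert 1.gif -format "%[fx:mean*100]" info:
convert 2.gif -format "%[fx:mean*100]" info: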

Imagemagick, clear noise from QR-code image

I have a QR code scanned from a document. When I try to decode it with an online decoder such as http://zxing.org/w/decode.jspx or others, they cannot find the QR code, but when I decode it with the camera on my smartphone it reads the correct text. I think this is because of small noise in the image; how can I clean it up with ImageMagick?
There may be other QR codes as well.
You can try a median filter like this, but you would probably be better off extracting the image from the PDF the way Kurt suggested in your previous question in order to retain more quality:
convert qr.png -median 3 result.png
I don't have the Imagemagick command available, but what you usually do to remove noise is to
blur the image by 1 or 2 pixels
lighten it up a bit
add contrast again
Imagemagick should be able to do all this.
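For reference, that recipe translates into a single ImageMagick command along these lines; the blur sigma, level percentages and threshold are guesses you would need to tune on your scan:
# Blur slightly to merge the speckles, stretch the levels to restore contrast,
# then threshold back to pure black and white
convert qr.png -blur 0x1 -level 15%,85% -threshold 50% result.png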

Using Image Magick to measure "Cloudiness" of the Sky

I came across another thread about analysing images:
In a digital photo, how can I detect if a mountain is obscured by clouds?
but I couldn't work out how to go from that to what I would like to do, which seems to be somewhat similar.
Basically, I want to take an image of the sky (only 640 x 480) and measure how "blue" it is - or how grey/cloudy. I have plenty of comparison images I could use and am not sure whether to try and use convolution or just some type of histogram measurement.
Ideally, I'd like to come up with a percentage figure which approximates the "blueness" of the image.
Any thoughts/ideas or example commands/scripts would be wonderful.
Thanks for reading.
Andrew
I'm not an image processing specialist, but I would imagine that if you convert the image to the HSV color space, all of the pixels representing an unobstructed part of the sky will have a high saturation and a hue close to blue. I would ignore the value channel because brightness changes with the time of day. Just set your hue and saturation thresholds appropriately, count pixels, and see how many cloud vs sky pixels you get.
Not sure how well it will work but it's an idea.
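One quick way to try this from the command line is ImageMagick's -fx operator, which exposes per-pixel hue and saturation. The cut-offs below (hue roughly in the blue band, saturation above 25%) and the file name are only assumptions to be tuned against your comparison images:
# Mark likely blue-sky pixels as white, everything else black, then report the white fraction
convert sky.jpg \
    -fx "(hue > 0.55 && hue < 0.72 && saturation > 0.25) ? 1 : 0" \
    -format "%[fx:mean*100]" info: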
I had the same question last month after adding a camera module to my Raspberry Pi based weather station.
I tested this ImageMagick command:
convert file.jpg -colors 8 -format "%c" histogram:info:
to get the 8 most represented colors of this part of the sky in RGB format.
I figured out that you can evaluate a percentage from those 3 numbers, and a grey (cloudy) image is something like 33% 33% 33%. I didn't do it, but with an awk command or some Perl you might be able to do that.
But here (in Congo), a blue sky is not so far from a grey sky, so it's not the perfect method to compute a cloudiness factor. Instead, it would not be a bad idea to compute the variance to see whether you have clouds or a uniformly colored sky. Or maybe compute both and see what happens. If you do that several times per hour, you could track the evolution.
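If you want to try the variance idea, ImageMagick can report the spread of intensities directly; a flat grey overcast sky should give a smaller number than a sky with distinct clouds and blue patches (file name assumed):
# Standard deviation of the brightness, scaled to a 0-100 range
convert sky.jpg -colorspace Gray -format "%[fx:standard_deviation*100]" info: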
After reading EternityForest's answer, I figured out that you can use
-colorspace HSB
or
-colorspace HSL
in my previous command. If you put 1 instead of 8, you will get the most represented value.
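Put together, the dominant-color check in HSB would look something like this:
# Most represented color of the image, expressed in the HSB colorspace
convert sky.jpg -colorspace HSB -colors 1 -format "%c" histogram:info: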
Some discussions about this on imagemagick forum
I will stay connected to this thread because I'm really interested in solving this cloudiness question too!
Good luck,
Greg.

Making text more readable Imagemagick

I have this image here:
http://imgur.com/QFSimZX
When looking at it, a human can see that it says PINE (N) on the top line and PI on the second line. The problem I have is that when I use tesseract-ocr to read the text, it gives pretty bad output. I have a lot of images like this and need to automate the process, so doing it manually is not ideal. I have used ImageMagick to get it into its current state, but I would like to know if there is any way to make the image more readable, possibly by connecting the close areas of black. I know almost nothing about image manipulation, so I don't know where to begin searching. If anyone knows a method for making this more readable, I would greatly appreciate it.
This is a pretty tricky problem, and the solution that works best will depend sensitively on characteristics of the image - what scale is the type? how degraded is the image? The boundary between details that you want to keep and degradation that you want to fix is something that only the human operator can decide, so there is no automated one-size-fits-all solution for this problem, and you should expect to do some experimentation.
The basic technique is that you want to adjust the value of each pixel in the image to be similar to the pixels that surround it. Put in those terms, you might realise that this is just a blur operation. After you blur the image though, you are left with letters with fuzzy edges, so to get crisp letters again, that's a threshold operation - you set a threshold level of gray, and everything lighter than that shade of gray becomes white and everything darker than the threshold becomes black. The blur plus threshold combination gives you a wide range of effects that you can use to make text more (or less) legible. For the example image given, I had pretty good results with a blur radius of 5 and a threshold level of 70%.
convert QFSimZX.jpg -blur 5 -threshold 70% output.png
You can get more sophisticated than this if needed, by implementing a custom blur function with the -fx operator. Fx is powerful but somewhat complex, and you can read about it here: http://www.imagemagick.org/script/fx.php . I tried a quick fx expression that filled in a pixel based first on its above and below neighbors, then on its left and right neighbors. This technique really allows you to fine tune which pixels are considered in computing the blur:
convert QFSimZX.jpg -monochrome \
-fx 'p[0,-1]+p[0,1] >= 2 ? 1 : 0' \
-fx 'p[-1,0]+p[1,0] >= 2 ? 1 : 0' \
output.png

Resources