I am new to the ImageJ software. I want to extract the boundary of the lake I am analyzing. Below I outline the method described in an article. The author says:
" As an example we take the Noanamakki Reservoir whose image is shown in Fig. 1. We first extract the boundary of the lake one is analyzing. In order to achieve this, the water body is first colored black and then the binary image consisting only the water body is extracted (Fig. 2).Then the edge detection algorithm is used to obtain the boundary of the lake as depicted in Fig. 3"
I wonder how to achieve the above method using ImageJ.
Fig. 1: the original image. Fig. 2: the binary image of the water body. Fig. 3: the extracted boundary.
It may help to apply the magic wand tool to an image that results from a colour space transformation. I tried it with the 8-bit b*-channel after a Lab transformation. Setting the wand tool to "Legacy" and using a tolerance of about 30 works quite well. However, some lake regions may need retouching. Here are the binary images of "Lake Powell" (with the "Glen Canyon" dam at the bottom) that correspond to the images you've shown:
The original image is from the link posted in my comment.
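For reference, here is a minimal OpenCV/Python sketch of the same idea; the filename and the use of Otsu's threshold in place of the wand tolerance are my assumptions:

import cv2

# Assumed input file; replace with your own image of the lake.
img = cv2.imread("lake.png")

# Convert to Lab and keep the b* channel, where water separates well.
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
b_channel = lab[:, :, 2]

# Otsu's threshold stands in for the wand tool's tolerance of ~30.
_, binary = cv2.threshold(b_channel, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Edge detection on the binary image yields the lake boundary (Fig. 3).
edges = cv2.Canny(binary, 100, 200)

cv2.imwrite("lake_binary.png", binary)
cv2.imwrite("lake_boundary.png", edges)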
If you are looking to calculate the length and fractal dimension of the boundary between land and water, you could try the Marching Squares algorithm, which provides you with data about that boundary.
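As a sketch: scikit-image's find_contours implements marching squares, so something like the following could measure the boundary length (the filename is an assumption; the fractal dimension would still need a box-counting step on top):

import numpy as np
from skimage import io, measure

# Assumed input: a binary image of the water body (white = water).
binary = io.imread("lake_binary.png", as_gray=True)

# find_contours implements marching squares; 0.5 is the iso-level
# between the two classes.
contours = measure.find_contours(binary, 0.5)

# Approximate the boundary length of the longest contour by summing
# the lengths of its segments.
boundary = max(contours, key=len)
segment_lengths = np.hypot(*np.diff(boundary, axis=0).T)
print("boundary length (pixels):", segment_lengths.sum())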
I have an input image as follows and wish to segment the parts into regions. I also want the segmented parts to contain not just the pixels of the solid color but also the edge anti-aliasing between the edge of the region and the next region.
Does there exist any filter or method to segment the image in this way? The important part is that the resulting segmented part must contain the edge anti-aliasing between it and the neighboring regions. A correct solution is shown in yellow.
In these two images I have enlarged the pixels so that the edge anti-aliasing between region edges can be seen clearly.
An example output that I want for the yellow region is shown.
For a definition of "edge anti-aliasing" see https://markpospesel.wordpress.com/2012/03/30/efficient-edge-antialiasing/
I'm not sure what exactly you want. For example, would some pixels belong to two segments? If that is the case, then I'm relatively sure you have to do something on your own. Otherwise, the following might work:
Opening and Closing
Opening and closing are two morphological operations that smooth region borders.
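A minimal OpenCV/Python sketch (the filename and kernel size are assumptions you would tune):

import cv2
import numpy as np

# Assumed input: the region image from the question.
img = cv2.imread("regions.png", cv2.IMREAD_GRAYSCALE)
kernel = np.ones((5, 5), np.uint8)

# Opening (erode then dilate) removes small specks; closing (dilate
# then erode) fills small holes. Both smooth ragged region borders.
opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)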
Clustering
There are many clustering algorithms. They are what you want for non-semantic segmentation (for semantic segmentation, you might want to read my literature survey). One example is
P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient Graph-Based Image Segmentation."
I would simply give those algorithms a try and see if one directly works.
Other clustering algorithms (a k-means sketch follows the list):
K-means
DBSCAN
CLARANS
AGNES
DIANA
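As promised, a minimal k-means sketch in OpenCV/Python (the filename and k are assumptions). Note that k-means hard-assigns every pixel to exactly one cluster, so anti-aliased edge pixels land in whichever cluster is closer; if you need overlapping membership you would have to switch to a soft-assignment variant such as fuzzy c-means.

import cv2
import numpy as np

# Assumed input image and number of regions.
img = cv2.imread("regions.png")
k = 4

# Treat every pixel as a 3-vector in BGR space and cluster by color.
samples = img.reshape(-1, 3).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(samples, k, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

# Paint each pixel with its cluster center to visualize the segmentation.
segmented = centers.astype(np.uint8)[labels.flatten()].reshape(img.shape)
cv2.imwrite("segmented.png", segmented)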
I'm trying to implement a license plate detection algorithm, so far I have narrowed it down to a few interest regions:
My next step would be to classify each interest region and ignore the false regions. I was thinking maybe I could check each region for characters: if the region contains some characters, it is a plate; otherwise, it's a false region. How would I go about checking for characters?
Another approach I can think of is to use PCA to determine if a region contains a plate, but I have no idea how to do it in OpenCV.
Text detection is not an easy task at all; it may be harder than the whole plate detection pipeline you are building. I can suggest a simple heuristic approach (sketched in code after the list):
Find contours inside each region.
Find bounding rectangle around each contour.
Delete very small or very large rectangles.
Check whether these contours are arranged along a straight line. The region whose contours are arranged in a linear way is the plate.
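A rough OpenCV/Python sketch of these steps (the filename, the size limits, the minimum of four boxes, and the spread threshold are all assumptions to tune):

import cv2
import numpy as np

# Assumed input: one candidate region, cropped to a grayscale image.
region = cv2.imread("candidate_region.png", cv2.IMREAD_GRAYSCALE)
h_img = region.shape[0]

# Steps 1-2: find contours and their bounding rectangles (OpenCV 4 API).
_, thresh = cv2.threshold(region, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours]

# Step 3: drop rectangles far too small or too large to be characters.
boxes = [(x, y, w, h) for x, y, w, h in boxes if 0.3 * h_img < h < 0.9 * h_img]

# Step 4: character boxes on a plate line up, so the spread of their
# vertical centers should be small relative to the region height.
if len(boxes) >= 4:
    centers_y = np.array([y + h / 2 for _, y, _, h in boxes])
    is_plate = centers_y.std() < 0.1 * h_img
else:
    is_plate = False
print("plate?", is_plate)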
There is a technique called MSER (Maximally Stable Extremal Regions). It detects connected blobs of similar color (which individual letters usually are). You then classify these blobs to tell whether they are letters or not; with that classification step the technique is called CSER.
See the OpenCV documentation and Wikipedia.
Also there is a Quasi-Linear version of the algorithm, if you feel like implementing it yourself.
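For a quick look at what MSER finds, a minimal OpenCV/Python sketch (the filename is an assumption):

import cv2

# Assumed input: a grayscale crop of a candidate plate region.
gray = cv2.imread("candidate_region.png", cv2.IMREAD_GRAYSCALE)

# MSER detects stable connected blobs of similar intensity (typically letters).
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)

# Draw the bounding box of each blob for inspection.
vis = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
for x, y, w, h in bboxes:
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("mser_blobs.png", vis)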
So what I need to do is measure a foot's length from an image taken by an ordinary user. The image will contain a foot wearing a black sock, a coin (or another object of known size), and a white sheet of paper (e.g. A4) on which the other two objects are placed.
What I already have:
-I have already worked with OpenCV, but only on simple projects;
-I have started reading some articles about camera calibration (e.g. the book "Learning OpenCV"), but I still don't know if I need to go that far.
What I need now is some orientation, because I still don't understand whether I'm approaching this problem the right way. I have some questions: Will I really need to calibrate the camera to get two or three measurements of the foot? How can I find the points of interest that define the line to measure? Is each picture a completely different case, or are there techniques to follow?
P.S.: Sorry about my English, I really have to improve it :-/
First, some image acquisition things:
Can you count on the black sock and white background? The colors don't matter as much as the high contrast between the sock and background.
Can you standardize the viewing angle? Looking directly down at the foot will reduce perspective distortion.
Can you standardize the lighting of the scene? That will ease a lot of the processing discussed below.
Lastly, you'll get a better estimate if you zoom (or position the camera closer) so that the foot fills more of the image frame.
Analysis. (Note: this discussion is directed at your question of identifying the axes of the foot. Identifying and analyzing the coin would use a similar process, but some differences would arise.)
The next task is to isolate the region of interest (ROI). If your camera is looking down at the foot, then the ROI can be limited to the white rectangle. My answer to this Stack Overflow post is a good start to square/rectangle identification: What is the simplest *correct* method to detect rectangles in an image?
If the foot lies completely within the white rectangle, you can clip the image to the rectangle found in step #1. This will limit the image analysis to the region inside the white paper.
"Binarize" the image using a threshold function: http://opencv.willowgarage.com/documentation/cpp/miscellaneous_image_transformations.html#cv-threshold. If you choose the threshold parameters well, you should be able to reduce the image to a black region (sock pixels) and white regions (non-sock pixel).
Now the fun begins: you might try matching contours, but if this were my problem, I would use bounding boxes for a quick solution or moments for a more interesting (and possibly robust) solution.
Use cvFindContours to find the contours of the black (sock) region: http://opencv.willowgarage.com/documentation/structural_analysis_and_shape_descriptors.html#findcontours
Use cvApproxPoly to convert the contour to a polygonal shape: http://opencv.willowgarage.com/documentation/structural_analysis_and_shape_descriptors.html#approxpoly
For the simple solution, use cvMinAreaRect2 to find an arbitrarily oriented bounding box for the sock shape. The short axis of the box should correspond to the line in largura.jpg, and the long axis should correspond to the line in comprimento.jpg.
http://opencv.willowgarage.com/documentation/structural_analysis_and_shape_descriptors.html#minarearect2
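As a rough sketch of these steps in the newer Python interface (the filename is assumed; the image is assumed to be already clipped to the paper and binarized with the sock in white, inverted from step #2 if needed, since findContours traces white objects):

import cv2

binary = cv2.imread("foot_binary.png", cv2.IMREAD_GRAYSCALE)

# Find contours and keep the largest one, assumed to be the sock.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
sock = max(contours, key=cv2.contourArea)

# The minimum-area rotated bounding box: its long side approximates
# the foot length, its short side the foot width.
(cx, cy), (w, h), angle = cv2.minAreaRect(sock)
length_px, width_px = max(w, h), min(w, h)
print("length (px):", length_px, "width (px):", width_px)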
If you want more (possible) accuracy, you might try cvMoments to compute the moments of the shape. http://opencv.willowgarage.com/documentation/structural_analysis_and_shape_descriptors.html#moments
Use cvGetSpatialMoment to determine the axes of the foot. More information on the spatial moment may be found here: http://en.wikipedia.org/wiki/Image_moments#Examples_2 and here http://opencv.willowgarage.com/documentation/structural_analysis_and_shape_descriptors.html#getspatialmoment
With the axes known, you can then rotate the image so that the long axis is axis-aligned (i.e., vertical). Then you can simply count pixels horizontally and vertically to obtain the lengths of the lines. Note that there are several assumptions in this moment-oriented process; it's a fun solution, but it may not provide any more accuracy, especially since the accuracy of your size measurements depends largely on the camera-positioning issues discussed above.
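A corresponding moments sketch in the Python interface (same assumed input as the previous sketch):

import cv2
import math

binary = cv2.imread("foot_binary.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
sock = max(contours, key=cv2.contourArea)

m = cv2.moments(sock)

# Centroid from the raw spatial moments.
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

# Long-axis orientation from the second-order central moments
# (cf. the "Image moments" examples on Wikipedia linked above).
theta = 0.5 * math.atan2(2 * m["mu11"], m["mu20"] - m["mu02"])
print("centroid:", (cx, cy), "axis angle (rad):", theta)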
Lastly, I've provided links to the older C interface. You might take a look at the new C++ interface (I simply have not gotten around to migrating my code to 2.4).
Antonio Criminisi likely wrote the last word on this subject years ago. See his "Single View Metrology" paper, and his PhD thesis if you have time.
You don't have to calibrate the camera if you have a known-size object in your image. Well... at least if your camera doesn't distort too much and if you're not expecting high quality measurements.
A simple approach would be to detect the white (perspective-distorted) rectangle, map its corners to an undistorted rectangle (using e.g. cv::warpPerspective()), and use the known size of that rectangle to determine the size of other objects in the picture. But this only works for objects in the same plane as the paper, preferably not too far away from it.
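A minimal sketch of that idea in Python (the corner coordinates and filename are made-up placeholders; in practice the corners come from your rectangle detection):

import cv2
import numpy as np

# Assumed: the paper's four corners detected in the photo, ordered
# top-left, top-right, bottom-right, bottom-left (placeholder values).
corners = np.float32([[412, 300], [1210, 342], [1180, 1456], [395, 1410]])

# A4 paper is 210 x 297 mm; map it to a rectangle at 2 px per mm.
px_per_mm = 2.0
w, h = int(210 * px_per_mm), int(297 * px_per_mm)
target = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

img = cv2.imread("foot_photo.jpg")
M = cv2.getPerspectiveTransform(corners, target)
rectified = cv2.warpPerspective(img, M, (w, h))
# In the rectified image, 1 px = 0.5 mm for anything in the paper plane.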
I am not sure if you need to build this yourself, but if you just need to do it and not code it, you can use KLONK Image Measurement for this. There are free and paid versions.
This was bugging me over the weekend: What is a good way to solve those Where's Waldo? ['Wally' outside of North America] puzzles, using Mathematica (image-processing and other functionality)?
Here is what I have so far, a function which reduces the visual complexity a little bit by dimming some of the non-red colors:
whereIsWaldo[url_] := Module[{waldo, waldo2, waldoMask},
  waldo = Import[url];
  waldo2 = Image[
    ImageData[waldo] /. {
      {r_, g_, b_} /; Not[r > .7 && g < .3 && b < .3] :> {0, 0, 0},
      {r_, g_, b_} /; (r > .7 && g < .3 && b < .3) :> {1, 1, 1}}];
  waldoMask = Closing[waldo2, 4];
  ImageCompose[waldo, {waldoMask, .5}]
]
And an example of a URL where this 'works':
whereIsWaldo["http://www.findwaldo.com/fankit/graphics/IntlManOfLiterature/Scenes/DepartmentStore.jpg"]
(Waldo is by the cash register):
I've found Waldo!
How I've done it
First, I'm filtering out all colours that aren't red
waldo = Import["http://www.findwaldo.com/fankit/graphics/IntlManOfLiterature/Scenes/DepartmentStore.jpg"];
red = Fold[ImageSubtract, #[[1]], Rest[#]] &@ColorSeparate[waldo];
Next, I'm calculating the correlation of this image with a simple black and white pattern to find the red and white transitions in the shirt.
corr = ImageCorrelate[red,
  Image@Join[ConstantArray[1, {2, 4}], ConstantArray[0, {2, 4}]],
  NormalizedSquaredEuclideanDistance];
I use Binarize to pick out the pixels in the image with a sufficiently high correlation, and I draw a white circle around them to emphasize them using Dilation:
pos = Dilation[ColorNegate[Binarize[corr, .12]], DiskMatrix[30]];
I had to play around a little with the level. If the level is too high, too many false positives are picked out.
Finally, I'm combining this result with the original image to get the result above:
found = ImageMultiply[waldo, ImageAdd[ColorConvert[pos, "GrayLevel"], .5]]
My guess at a "bulletproof way to do this" (think CIA finding Waldo in any satellite image any time, not just a single image without competing elements, like striped shirts)... I would train a Boltzmann machine on many images of Waldo - all variations of him sitting, standing, occluded, etc.; shirt, hat, camera, and all the works. You don't need a large corpus of Waldos (maybe 3-5 will be enough), but the more the better.
This will assign clouds of probabilities to the various elements occurring in the correct arrangement. Then establish (via segmentation) what an average object size is, and fragment the source image into cells of objects that most resemble individual people (considering possible occlusions and pose changes); since Waldo pictures usually include a LOT of people at about the same scale, this should be a very easy task. Then feed these segments to the pre-trained Boltzmann machine: it will give you the probability of each one being Waldo. Take the one with the highest probability.
This is how OCR, ZIP-code readers, and strokeless handwriting recognition work today. Basically you know the answer is there, you know more or less what it should look like, and everything else may have common elements but is definitely "not it", so you don't bother with the "not it"s; you just look at the likelihood of "it" among all possible "it"s you've seen before (in ZIP codes, for example, you'd train a BM for just 1s, just 2s, just 3s, etc., then feed each digit to each machine and pick the one with the most confidence). This works a lot better than a single neural network learning the features of all numbers.
I agree with @GregoryKlopper that the right way to solve the general problem of finding Waldo (or any object of interest) in an arbitrary image would be to train a supervised machine-learning classifier. Using many positive and negative labeled examples, an algorithm such as a Support Vector Machine, Boosted Decision Stumps, or a Boltzmann Machine could likely be trained to achieve high accuracy on this problem. Mathematica even includes these algorithms in its Machine Learning Framework.
The two challenges with training a Waldo classifier would be:
Determining the right image feature transform. This is where @Heike's answer would be useful: a red filter and a striped-pattern detector (e.g., wavelet or DCT decomposition) would be a good way to turn raw pixels into a format that the classification algorithm could learn from. A block-based decomposition that assesses all subsections of the image would also be required ... but this is made easier by the fact that Waldo is a) always roughly the same size and b) always present exactly once in each image.
Obtaining enough training examples. SVMs work best with at least 100 examples of each class. Commercial applications of boosting (e.g., the face-focusing in digital cameras) are trained on millions of positive and negative examples.
A quick Google image search turns up some good data -- I'm going to have a go at collecting some training examples and coding this up right now!
However, even a machine learning approach (or the rule-based approach suggested by @iND) will struggle for an image like the Land of Waldos!
I don't know Mathematica . . . too bad. But I like the answer above, for the most part.
Still, there is a major flaw in relying on the stripes alone to glean the answer (I personally don't have a problem with one manual adjustment). There is an example (listed by Brett Champion, here) which shows that the shirt pattern is, at times, broken up. So then it becomes a more complex pattern.
I would try an approach of shape identification and colors, along with spatial relations. Much like face recognition, you could look for geometric patterns at certain ratios from each other. The caveat is that usually one or more of those shapes is occluded.
Get a white balance on the image, and a red balance from the image. I believe Waldo is always the same value/hue, but the image may be from a scan or a bad copy. Then always refer to an array of the colors that Waldo actually is: red, white, dark brown, blue, peach, {shoe color}.
There is a shirt pattern, and also the pants, glasses, hair, face, shoes and hat that define Waldo. Also, relative to other people in the image, Waldo is on the skinny side.
So, find random people to estimate the height of people in this pic. Measure the average height of a bunch of things at random points in the image (a simple outline will produce quite a few individual people). If a thing is not within some standard deviation of the others, it is ignored for now. Compare the average of the heights to the image's height. If the ratio is too great (e.g., 1:2, 1:4, or similarly close), try again. Run it ~10 times to make sure the samples are all pretty close together, excluding any average that is outside some standard deviation. Possible in Mathematica?
This is your Waldo size. Waldo is skinny, so you are looking for something with a 5:1 or 6:1 (or whatever) height:width ratio. However, this is not sufficient: if Waldo is partially hidden, the height could change. So you are looking for a block of red-white that is ~2:1. But there have to be more indicators:
1. Glasses. Waldo has glasses: search for two circles 0.5:1 above the red-white.
2. Blue pants. Any amount of blue at the same width within any distance between the end of the red-white and the distance to his feet. Note that he wears his shirt short, so the feet are not too close.
3. The hat. Red-white any distance up to twice the top of his head. Note that it must have dark hair below, and probably glasses.
4. Long sleeves. Red-white at some angle from the main red-white.
5. Dark hair.
6. Shoe color. I don't know the color.
Any of those could apply. These are also negative checks against similar people in the pic; e.g., #2 negates wearing a red-white apron (too close to the shoes) and #5 eliminates light-colored hair. Also, shape is only one indicator for each of these tests; color alone within the specified distance can give good results.
This will narrow down the areas to process.
Storing these results will produce a set of areas that should have Waldo in them. Exclude all other areas (e.g., for each area, select a circle twice as big as the average person size), and then run the process that @Heike laid out with removing all but red, and so on.
Any thoughts on how to code this?
Edit:
Thoughts on how to code this: exclude all areas but Waldo red, skeletonize the red areas, and prune them down to a single point. Do the same for Waldo hair brown, Waldo pants blue, and Waldo shoe color. For Waldo skin color, exclude, then find the outline.
Next, exclude non-red, dilate (a lot) all the red areas, then skeletonize and prune. This part will give a list of possible Waldo center points. This will be the marker to compare all other Waldo color sections to.
From here, using the skeletonized red areas (not the dilated ones), count the lines in each area. If there is the correct number (four, right?), this is certainly a possible area. If not, I guess just exclude it (as being a Waldo center . . . it may still be his hat).
Then check if there is a face shape above, a hair point above, pants point below, shoe points below, and so on.
No code yet -- still reading the docs.
I have a quick solution for finding Waldo using OpenCV.
I used the template matching function available in OpenCV to find Waldo.
To do this a template is needed. So I cropped Waldo from the original image and used it as a template.
Next I called the cv2.matchTemplate() function with the normalized correlation coefficient (cv2.TM_CCOEFF_NORMED) as the matching method. It returned a high probability at a single region, shown in white below (somewhere in the top-left region):
The position of the most probable region was found using the cv2.minMaxLoc() function, which I then used to draw a rectangle highlighting Waldo:
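For completeness, a minimal version of this pipeline (the filenames are assumptions):

import cv2

# Assumed filenames for the puzzle image and the cropped Waldo template.
img = cv2.imread("waldo_scene.jpg")
template = cv2.imread("waldo_template.jpg")
th, tw = template.shape[:2]

# Normalized correlation coefficient matching, as described above.
result = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)

# The maximum of the result map is the most probable Waldo location.
_, max_val, _, max_loc = cv2.minMaxLoc(result)
cv2.rectangle(img, max_loc, (max_loc[0] + tw, max_loc[1] + th), (0, 0, 255), 2)
cv2.imwrite("waldo_found.jpg", img)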