I'm trying to fill the contours with cvDrawContours after cvFindContours on a cvCanny edge image, but I haven't had any success with this.
How can I fill the whole contour?
Thanks
In C++, you request that the function fill the contours by passing the value CV_FILLED (which is equal to -1) as the thickness argument. This is probably the same for the Java API.
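For illustration, a minimal C++ sketch of the whole pipeline (the file name and Canny thresholds here are placeholders, not from the question):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);

    // Edge detection first, as in the question
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // thickness = cv::FILLED (equal to -1, the old CV_FILLED) fills each contour
    cv::Mat filled = cv::Mat::zeros(edges.size(), CV_8UC1);
    cv::drawContours(filled, contours, -1, cv::Scalar(255), cv::FILLED);

    cv::imwrite("filled.png", filled);
    return 0;
}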
The documentation for OpenCV's getRectSubPix() function:
C++: void getRectSubPix(InputArray image, Size patchSize, Point2f center, OutputArray patch, int patchType=-1)
contains the statement:
While the center of the rectangle must be inside the image, parts of the rectangle may be outside. In this case, the replication border mode (see borderInterpolate()) is used to extrapolate the pixel values outside of the image.
But I can't see a way to set the borderInterpolate mode in getRectSubPix. Many other OpenCV functions (boxFilter, copyMakeBorder, ...) allow you to pass in the borderInterpolate enum, but not getRectSubPix.
Is this just a documentation error?
The statement "replication border mode (see borderInterpolate() ) is used to extrapolate the pixel values", clearly states that it uses a predefined mode known as BORDER_REPLICATE to estimate the pixels outside the image boundary, You cannot use other Border methods like BORDER_REFLECT, BORDER_WRAP, BORDER_CONSTANT, etc.
I tried to apply the following code to the image in Octave:
sq = imread("Square BW.jpg");
figure(1), imshow(Square);
cont1 = edge(sq,"Sobel");
figure(2), imshow(cont1);
The image I get is:
And a similar image appears if I use the Prewitt option. Can anyone explain to me what is happening? The problem is that I can't visualize the process, only the result, so I can't understand why the code isn't working.
The problem seems to be how the threshold is computed in Octave. You can see how Octave does it by entering type edge at the Octave prompt, or by looking at the source online (I'm not copying the exact code here since it is GPL, although it is quite simple).
To get the border, you will need to set the threshold yourself by passing it as the third argument to edge (hopefully a future version of Octave's image package will fix this, but at the moment it is Matlab-incompatible, since Matlab's documentation of its default is unclear).
There's definitely a problem with the way the threshold is computed; however, I wasn't able to find the correct value to use for this picture. After many attempts I found this code, which seems to work perfectly:
sq = imread("Square BW.jpg");
maskSobel = fspecial("sobel");
mSobel = uint8(zeros(size(BW)));
for i = 0:3
mSobel += imfilter(sq, rot90(maskSobel, i));
end
figure(1), imshow(mSobel);
First we create the Sobel matrix/operator and a zero matrix the same size as the image Square BW. Then we rotate the Sobel matrix four times (by 90 degrees each), in order to filter the image in all directions (left-right, up-down, right-left and down-up), always adding the result to the mSobel matrix that was created.
Here's the final result:
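For readers using OpenCV rather than Octave, the same accumulate-four-rotations idea could be sketched in C++ like this (my addition, not part of the original answer; filtering into an 8-bit destination clips negative responses to zero, which mirrors the saturating uint8 arithmetic of the Octave code):

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat sq = cv::imread("Square BW.jpg", cv::IMREAD_GRAYSCALE);

    // The same 3x3 kernel that fspecial("sobel") returns in Octave
    cv::Mat kernel = (cv::Mat_<float>(3, 3) <<
         1,  2,  1,
         0,  0,  0,
        -1, -2, -1);

    cv::Mat acc = cv::Mat::zeros(sq.size(), CV_8UC1);
    for (int i = 0; i < 4; ++i) {
        cv::Mat response;
        cv::filter2D(sq, response, CV_8U, kernel);  // negative responses clip to 0
        cv::add(acc, response, acc);                // saturating 8-bit addition
        cv::rotate(kernel, kernel, cv::ROTATE_90_COUNTERCLOCKWISE);
    }
    cv::imwrite("edges.png", acc);
    return 0;
}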
I'm following this tutorial
The goal is to be able to spit out either:
a. the center of each labeled object
b. all pixels associated with each labeled object
so that I end up with an array of either 'a.' for each object or 'b.' for each object.
I'm really not sure how to go about this. Are there Matlab tools to help extract these sets of pixels or centers, per label?
Update
I did manage to circle 80% of what I wanted using regionprops; however, it doesn't capture the labels precisely, it just sets a circle around them while capturing the background as well. Is that really unavoidable? I'm just not sure how to access the set of pixels for each circled item.
r = regionprops(L, 'All');
imshow(imagergb);
areas = {r.Area};
Bboxes = {r.BoundingBox};
for k = 2:numel(r)
  if areas{k} > 50 && areas{k} < 1100
    rectangle('Position', Bboxes{k}, 'LineWidth', 1, 'EdgeColor', 'b', 'Curvature', [1 1]);
  end
end
So what I'm trying to do is, for example:
I thought it might just be
r = regionprops(L, 'PixelIdxList')
then
element1 = r(1).PixelIdxList
but couldn't figure out how to get the position of each pixel
I also tried
Z = bwlabel(L);
but imshow(Z==1) spits out all the labels, and imshow(Z==2) spits out all the labels plus the background. I couldn't test bwlabeln since I'm not exactly sure what to enter for the r and c arguments.
Using regionprops(L, 'PixelIdxList') is correct. It gives you lists of pixel indices for each label. You can then convert them to [x,y] coordinates using (for the first label, for example)
[y,x] = ind2sub(size(L), r(1).PixelIdxList)
You can get label centers by using regionprops(L, 'Centroid'). This already gives you [x,y] coordinates for each label. Note that these are subpixel coordinates, so you may need to round them if you want to use them as indices.
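For comparison, OpenCV's C++ API exposes the same two pieces of information through connectedComponentsWithStats; a minimal sketch (the file name is illustrative):

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    // Binary input image; label 0 is the background
    cv::Mat bw = cv::imread("blobs.png", cv::IMREAD_GRAYSCALE);

    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(bw, labels, stats, centroids);

    for (int k = 1; k < n; ++k) {
        // (a) subpixel [x, y] center of the k-th object
        double cx = centroids.at<double>(k, 0);
        double cy = centroids.at<double>(k, 1);

        // (b) all pixels belonging to the k-th object
        std::vector<cv::Point> pixels;
        cv::findNonZero(labels == k, pixels);

        std::printf("label %d: center (%.1f, %.1f), %d pixels\n",
                    k, cx, cy, (int)pixels.size());
    }
    return 0;
}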
I'm working on a project with EmguCV (the .NET version of OpenCV) and I'm using the probabilistic Hough transform to find lines.
At first I performed the Canny operator, and afterwards the Hough transformation.
Gray cannyThreshold = new Gray(50);
Gray cannyThresholdLinking = new Gray(300);
Image<Gray, Byte> cannyEdges = gray.Canny(cannyThreshold, cannyThresholdLinking);
LineSegment2D[] linesFound_temporary = cannyEdges.HoughLines
(
    cannyThreshold,          // threshold for the internal Canny pass
    cannyThresholdLinking,   // linking threshold for the internal Canny pass
    1,                       // rho resolution, in pixels
    Math.PI / 360.0,         // theta resolution, in radians
    gray.Width * 0.2,        // accumulator threshold (votes needed per line)
    gray.Width * 0.4,        // minimum line length
    gray.Width * 0.1         // maximum gap between collinear segments
)[0];                        // one result array per channel; take channel 0
Later I realised that the HoughLines method already integrates Canny edge detection.
Nevertheless, my line-detection results are better and more stable when I use the additional Canny pass than when I leave it out.
Can anyone explain to me why this happens? Or has anyone experienced the same?
I experienced the same while doing one of my projects. I think it depends on the parameters given to both functions. If the first Canny pass removes too much information and leaves no lines, the second function will perform poorly. If you do a first pass that removes much of the information but leaves very apparent lines, then the Hough line detector has little left to do. But I discovered that tweaking the parameters of the Hough line call alone could achieve almost the same result.
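To make that division of labour explicit, here is the two-step pipeline in plain OpenCV C++, which EmguCV wraps (a sketch with illustrative parameter values, not the original project's code):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);

    // First pass: run Canny yourself, so you control both thresholds
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 300);

    // Second pass: probabilistic Hough transform on the edge map
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines,
                    1,                // rho resolution, in pixels
                    CV_PI / 360.0,    // theta resolution, in radians
                    80,               // accumulator threshold
                    gray.cols * 0.4,  // minimum line length
                    gray.cols * 0.1); // maximum gap between segments
    return 0;
}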
Hope it helps!
I have a binary image and I want to perform closing on it with a line as the structuring element.
The OpenCV API has a function getStructuringElement that takes the following parameters:
Shape
Size
Anchor Point
I can pass CV_SHAPE_CUSTOM as the first parameter to create a new shape, but where do I pass the size and the values of my structuring element?
My line will be 10 pixels wide and 1 pixel high, basically {1,1,1,1,1,1,1,1,1,1}.
There is an old function cvCreateStructuringElementEx, but I don't want to use that as it involves a lot of datatype conversion.
Is this what you want?
Size = Size(10,1)
Anchor Point = Point(-1,-1)
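In OpenCV C++, that suggestion would look something like the following sketch (MORPH_RECT produces a rectangle of all ones, which for Size(10,1) is exactly the row of ten 1s from the question):

// A 1-row, 10-column structuring element of ones, anchored at its center
Mat line = getStructuringElement(MORPH_RECT, Size(10, 1), Point(-1, -1));
morphologyEx(img, img, MORPH_CLOSE, line);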
Got it. Thanks to the comment from Niko.
Create a matrix as
Mat line = Mat::ones(1, 10, CV_8UC1);
// now apply the morphology close operation
morphologyEx(img, img, MORPH_CLOSE, line, Point(-1, -1));
This solved my problem.