Point space:
Dilated:
I used dilation (a morphological operation), but this is not the result I expected. I want smoother regions, not squares. How can I define contours and fill them? Maybe I can use k-means-like algorithms to get regions from the points. Help please.
I'm trying to extract the geometries of the papers in the image below, but I'm having some trouble grabbing the contours. I don't know which thresholding algorithm to use (here I used a static threshold of 10, which is probably not ideal).
And as you can see, I can get the correct number of images, but I can't get the proper bounds using this method.
Simply applying Otsu just doesn't work; it doesn't capture the geometries.
I assume I need to apply some edge detection, but I'm not sure what to do once I apply Canny or some other edge detector.
I also tried Sobel in both directions (positive and negative in x and y), but I'm unsure how to extract the contours from there.
How do I grab these contours?
Below are some previews of the images from the process leading to the final convex hull results.
**Original Image** **Sharpened**
**Dilate, Sharpen, Erode, Sharpen** **Convex Hulls of Approximated Polygons (which don't fully capture the desired regions)**
Sorry in advance about the horrible formatting, I have no idea how to make images smaller or title them nicely in SOF
I would like to measure the horizontal lengths of multiple ROI. I tried Feret's diameter, but it only gives the longest distance between any two points along the selection boundary. I tried bounding rectangle, but I suppose the rectangles are tilted to obtain the minimum bounding rectangle.
Does anyone have another idea? Because clearly, the selection boundaries fit nicely to the ROI - so how could I extract that information, i.e. the xy-coordinates of the fits? Thanks in advance
PS: I did not write ROIs because 'Region of Interests' makes no sense
I want to get the density of the foreground. To be specific, first I need to get the region of the foreground, inside the blue curve. Then I can use the pixels inside that region to compute the density. Obviously it cannot be solved by threshold or contour methods. It is part of a Chinese character, so OCR may be useful, I don't know. Any advice? Thanks.
Now I have an idea: randomly select 100 dots or more, then compute the average of the pixels around these dots, say within a radius of 100 or so. I hope this would be an estimate of the density. Is there an algorithm to achieve this?
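In case it clarifies the idea, here is a rough sketch of that sampling scheme. The sample count and radius are the values from the question; the circular window and the binary-input assumption are mine:

```cpp
#include <opencv2/opencv.hpp>
#include <random>

// Rough sketch: pick random foreground pixels and, for each, measure the
// fraction of foreground pixels within a fixed radius. The averaged fraction
// is a crude estimate of the local foreground density.
double estimateDensity(const cv::Mat& binary /* CV_8U, foreground != 0 */,
                       int samples = 100, int radius = 100)
{
    std::vector<cv::Point> fg;
    cv::findNonZero(binary, fg);
    if (fg.empty()) return 0.0;

    std::mt19937 rng(42);
    std::uniform_int_distribution<size_t> pick(0, fg.size() - 1);

    double sum = 0.0;
    for (int i = 0; i < samples; ++i) {
        const cv::Point c = fg[pick(rng)];

        // Count foreground pixels inside a circular window around the sample.
        int inside = 0, total = 0;
        for (int dy = -radius; dy <= radius; ++dy) {
            for (int dx = -radius; dx <= radius; ++dx) {
                if (dx * dx + dy * dy > radius * radius) continue;
                const int x = c.x + dx, y = c.y + dy;
                if (x < 0 || y < 0 || x >= binary.cols || y >= binary.rows) continue;
                ++total;
                if (binary.at<uchar>(y, x)) ++inside;
            }
        }
        if (total > 0) sum += static_cast<double>(inside) / total;
    }
    return sum / samples;
}
```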
Original Image
Result expected
Dilation works really well for your application, as @Mark Setchell already pointed out in the comments.
First, use the dilate function to fill the gaps between your components. I used a square (35x35) kernel:
Next, use the threshold function to obtain a binary image:
Finally, use the findContours function to calculate the image contours and draw them using drawContours. The result will look very similar to your desired output:
You may have to change some parameters (mainly the dilation kernel size) depending on your input, but this should generally be the best approach to your problem.
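For reference, here is a minimal OpenCV (C++) sketch of these three steps. The 35x35 kernel comes from above; the file names and the threshold value of 127 are placeholders you will likely need to adapt:

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    // Load the point image as grayscale (file name is a placeholder).
    cv::Mat img = cv::imread("points.png", cv::IMREAD_GRAYSCALE);

    // 1. Dilate with a 35x35 square kernel so nearby points merge into blobs.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(35, 35));
    cv::Mat dilated;
    cv::dilate(img, dilated, kernel);

    // 2. Threshold to obtain a clean binary image (127 is an assumed cutoff).
    cv::Mat binary;
    cv::threshold(dilated, binary, 127, 255, cv::THRESH_BINARY);

    // 3. Find the outer contours of the merged regions and draw them filled.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    cv::Mat result = cv::Mat::zeros(img.size(), CV_8UC3);
    cv::drawContours(result, contours, -1, cv::Scalar(0, 255, 0), cv::FILLED);

    cv::imwrite("regions.png", result);
    return 0;
}
```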
I am trying to find a reliable method to calculate the corner points of a container. From these corner points, the idea is to calculate the center point of the container for the localization of the robot; that is, the calculated center point will be the destination the robot drives to in order to pick up the container. For this I am looking for suggestions on how to calculate the corner points, or, if it is possible, how to calculate the center point directly. Up to this point, the PCL library (C/C++) has been used for processing the 3D data.
The image below is the screenshot of the container.
Thanks in advance.
afterApplyingPassthrough
I did the following things:
I binarized the image (black pixels = 0, green pixels = 1),
inverted the image (black pixels = 1, green pixels = 0),
eroded the image with 3x3 kernel N-times and dilated it with same kernel M-times.
Left: N=2, M=1; Right: N=6, M=6
After that:
I computed contours of all non-zero areas and
removed the contour that surrounded entire image.
These are the contours that remained:
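For reference, a rough OpenCV (C++) sketch of the preprocessing above. The binarization threshold, the file names, and the "covers almost the whole image" cutoff are assumptions; N = M = 6 corresponds to the right-hand image:

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    // Placeholder input: the container view rendered as a BGR image.
    cv::Mat img = cv::imread("container_view.png");

    // Binarize: green-ish pixels -> 255, black pixels -> 0 (threshold is a guess).
    cv::Mat gray, binary;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, binary, 10, 255, cv::THRESH_BINARY);

    // Invert so the former black pixels become the foreground.
    cv::Mat inverted;
    cv::bitwise_not(binary, inverted);

    // Erode N times, then dilate M times with a 3x3 kernel (N = M = 6 here).
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::Mat cleaned;
    cv::erode(inverted, cleaned, kernel, cv::Point(-1, -1), 6);
    cv::dilate(cleaned, cleaned, kernel, cv::Point(-1, -1), 6);

    // Contours of all non-zero areas; drop any contour spanning (almost) the whole image.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(cleaned, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    const double imageArea = static_cast<double>(img.cols) * img.rows;
    std::vector<std::vector<cv::Point>> kept;
    for (const auto& c : contours)
        if (cv::contourArea(c) < 0.9 * imageArea)   // 0.9 is an arbitrary cutoff
            kept.push_back(c);

    cv::Mat vis = cv::Mat::zeros(img.size(), CV_8UC3);
    cv::drawContours(vis, kept, -1, cv::Scalar(0, 0, 255), 2);
    cv::imwrite("remaining_contours.png", vis);
    return 0;
}
```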
I do not know what a "typical" input image looks like in your case. Since I only have access to one sample image, I would rather not speculate about a "general solution" that will be suitable for you. But to solve this particular case, you could analyze every contour in the following way:
compute the rotated rectangle that fits best around your contour (you need something similar to minAreaRect from OpenCV)
compute the areas of the rectangle and of the contour interior
if the difference between the contour area and the area of the rotated bounding rectangle is small, the contour has an approximately rectangular shape
find the contour that is both rectangular and satisfies some other condition (for example: a typical area for the container). Assume that this one belongs to the container and compute its center (see the sketch below).
I am not claiming that this is a solution that will work well in real world scenarios. It is also not fast. You should view it as a "sketch" that shows how to extract some useful information.
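Purely as an illustration, here is one possible way to code the rectangularity check from the list above (the 0.85 ratio and the minimum-area cutoff are arbitrary placeholders):

```cpp
#include <opencv2/opencv.hpp>

// Returns true if the contour is approximately rectangular: the area of its
// minimum-area (rotated) bounding rectangle is close to the contour area.
// The 0.85 ratio and the area limit are assumptions to be tuned per setup.
bool looksLikeContainer(const std::vector<cv::Point>& contour, cv::Point2f& center)
{
    const double contourArea = cv::contourArea(contour);
    if (contourArea < 1000.0)               // reject tiny blobs (placeholder value)
        return false;

    const cv::RotatedRect box = cv::minAreaRect(contour);
    const double boxArea = static_cast<double>(box.size.width) * box.size.height;
    if (boxArea <= 0.0)
        return false;

    if (contourArea / boxArea < 0.85)       // not rectangular enough
        return false;

    center = box.center;                    // center of the fitted rectangle
    return true;
}
```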
I assume the wheels keep the cart at a known offset from the floor and that you can identify the floor. Filter out all points which are too close to the floor (this will remove the wheels and everything but the cart, which will help limit the data and simplify later steps).
If you can isolate the cart, you could apply a simple average point (centroid); alternatively, if that is not precise enough, you could try finding the bounding box of the isolated cart (min/max in the primary directions) and then take the centroid of that bounding box (this should be more accurate, but will still need a slight vertical offset due to the top handles).
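If it helps, here is a small PCL sketch of that idea. It assumes z is the "up" axis and that floor_z and wheel_offset are known for your setup; both names are placeholders:

```cpp
#include <limits>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/passthrough.h>
#include <pcl/common/common.h>      // pcl::getMinMax3D

// Sketch: keep only points that are clearly above the floor, then take the
// centroid of the axis-aligned bounding box of what remains.
pcl::PointXYZ cartCenterEstimate(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                                 float floor_z, float wheel_offset)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cart(new pcl::PointCloud<pcl::PointXYZ>);

    // Remove the floor and the wheels: keep everything above floor + offset.
    pcl::PassThrough<pcl::PointXYZ> pass;
    pass.setInputCloud(cloud);
    pass.setFilterFieldName("z");
    pass.setFilterLimits(floor_z + wheel_offset, std::numeric_limits<float>::max());
    pass.filter(*cart);

    // Centroid of the bounding box of the isolated cart.
    pcl::PointXYZ min_pt, max_pt;
    pcl::getMinMax3D(*cart, min_pt, max_pt);

    pcl::PointXYZ center;
    center.x = 0.5f * (min_pt.x + max_pt.x);
    center.y = 0.5f * (min_pt.y + max_pt.y);
    center.z = 0.5f * (min_pt.z + max_pt.z);  // may need a vertical offset for the handles
    return center;
}
```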
If you cannot isolate the cart, or the other methods are not working well, you could try using PCL sample consensus, specifically SACMODEL_LINE. This is a more involved strategy, but it gives very solid results: basically, run through and find the best line, subtract its members from the cloud, and repeat to find the next best line. After you have your four primary cart lines, use their parameters to find your centroid. This would also be robust against random items being in or on the cart, as well as carts of various sizes (assuming they always have linear, perpendicular walls).
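And a sketch of the SACMODEL_LINE strategy as a starting point. The distance threshold and the assumption of exactly four dominant lines are placeholders to tune:

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/common/centroid.h>
#include <pcl/common/io.h>          // pcl::copyPointCloud

// Sketch: repeatedly fit a line with RANSAC, collect its inliers, remove them,
// and fit the next one; then take the centroid of all wall-line inliers.
Eigen::Vector4f cartCenterFromLines(pcl::PointCloud<pcl::PointXYZ>::Ptr remaining)
{
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setModelType(pcl::SACMODEL_LINE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setOptimizeCoefficients(true);
    seg.setDistanceThreshold(0.01);   // 1 cm, placeholder
    seg.setMaxIterations(1000);

    pcl::PointCloud<pcl::PointXYZ>::Ptr wallPoints(new pcl::PointCloud<pcl::PointXYZ>);

    for (int i = 0; i < 4; ++i) {     // the 4 primary cart lines
        pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
        pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);

        seg.setInputCloud(remaining);
        seg.segment(*inliers, *coeffs);
        if (inliers->indices.empty())
            break;

        // Keep the inliers of this line for the centroid computation.
        pcl::PointCloud<pcl::PointXYZ> line;
        pcl::copyPointCloud(*remaining, inliers->indices, line);
        *wallPoints += line;

        // Subtract the line's members so the next iteration finds the next best line.
        pcl::ExtractIndices<pcl::PointXYZ> extract;
        extract.setInputCloud(remaining);
        extract.setIndices(inliers);
        extract.setNegative(true);
        pcl::PointCloud<pcl::PointXYZ>::Ptr rest(new pcl::PointCloud<pcl::PointXYZ>);
        extract.filter(*rest);
        remaining = rest;
    }

    Eigen::Vector4f centroid;
    pcl::compute3DCentroid(*wallPoints, centroid);
    return centroid;
}
```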
Actually, I want five external bounding boxes for the "white" pixels in the following binary image. The desired zones are highlighted in red.
To get the 5th bounding box I would dilate or blur the image. However, dilation will merge zone 3 with zones 1 and 2, so I would get a bounding box which covers almost the entire image. (If I don't dilate or blur it, then cv::findContours + cv::boundingRect produce a large number of small rectangles.)
In other words, I want only "big enough" bounding boxes.
It's just a sample pattern. Positions of the zones may vary. Is there a way to solve the problem in a general way?
Dilation is done at a per-pixel basis, without regard for the size of the component to which the pixel belongs.
If you want to apply dilation only to small blobs, then you need to remove big blobs before applying the dilation.
So, extract all contours with findContours, then store all contours that are 'big enough' in a list, and paint them black in your source image. Then dilate the modified source and extract the remaining contours.
Note that to get the correct size of the bounding box, what you probably want is morphological closing (dilation followed by the same amount of erosion) instead of dilation only.
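A sketch of that procedure in OpenCV (C++); the "big enough" area threshold and the closing kernel size are placeholders you would tune for your images:

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat binary = cv::imread("zones.png", cv::IMREAD_GRAYSCALE);

    // 1. Find all blobs; keep the bounding boxes of the "big enough" ones directly
    //    and paint those blobs black so they do not take part in the closing below.
    cv::Mat work = binary.clone();
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(work, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    const double minBigArea = 500.0;            // "big enough" cutoff, placeholder
    cv::Mat smallOnly = binary.clone();
    std::vector<cv::Rect> boxes;
    for (size_t i = 0; i < contours.size(); ++i) {
        if (cv::contourArea(contours[i]) >= minBigArea) {
            boxes.push_back(cv::boundingRect(contours[i]));
            cv::drawContours(smallOnly, contours, static_cast<int>(i),
                             cv::Scalar(0), cv::FILLED);
        }
    }

    // 2. Morphological closing (dilate then erode) merges the scattered small
    //    blobs into one region without growing its bounding box.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(25, 25));
    cv::Mat closed;
    cv::morphologyEx(smallOnly, closed, cv::MORPH_CLOSE, kernel);

    // 3. Bounding boxes of the merged small blobs.
    std::vector<std::vector<cv::Point>> merged;
    cv::findContours(closed, merged, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (const auto& c : merged)
        boxes.push_back(cv::boundingRect(c));

    // Visualize all resulting boxes on the original image.
    cv::Mat vis;
    cv::cvtColor(binary, vis, cv::COLOR_GRAY2BGR);
    for (const auto& r : boxes)
        cv::rectangle(vis, r, cv::Scalar(0, 0, 255), 2);
    cv::imwrite("boxes.png", vis);
    return 0;
}
```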