I have a grid of points, like a single-channel picture. I find the connected components using OpenCV with Python. I want to get polygons of the connected components, as shown below.
Is there any way to get something like Shapely polygons from the grid and the connected-component labels? Note that the polygons can have holes in them.
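For reference, a minimal sketch of one way this can be done, assuming OpenCV >= 4 and Shapely, where `labels` is the label image returned by cv2.connectedComponents: trace each component's mask with findContours in RETR_CCOMP mode, so that holes show up as child contours, and assemble a Polygon with interior rings.

    import cv2
    import numpy as np
    from shapely.geometry import Polygon

    def component_polygons(labels):
        """Yield (label, Polygon) for each connected component, holes included."""
        for label in range(1, labels.max() + 1):
            mask = (labels == label).astype(np.uint8)
            # RETR_CCOMP gives a two-level hierarchy: outer boundaries and holes
            contours, hierarchy = cv2.findContours(
                mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
            if hierarchy is None:
                continue
            hierarchy = hierarchy[0]  # rows: [next, prev, first_child, parent]
            for i, contour in enumerate(contours):
                if hierarchy[i][3] != -1 or len(contour) < 3:
                    continue  # holes are handled below as interior rings
                shell = contour[:, 0, :]
                # children of this outer contour in the hierarchy are its holes
                holes = [contours[j][:, 0, :] for j in range(len(contours))
                         if hierarchy[j][3] == i and len(contours[j]) >= 3]
                yield label, Polygon(shell, holes)

Note that contours trace pixel centers, so the resulting polygons are offset by half a pixel from the component boundary; that may or may not matter for your use case.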
Using Emgu CV I have extracted a set of closed polygons from the contours in an image of a road network. The polygons represent road outlines. The result is shown below, plotted over an OpenStreetMap map (the polygons in 'pixel' form from Emgu CV have been converted to latitude/longitude form to be plotted).
Set of polygons representing road outlines:
I would now like to compute the Voronoi diagram of this set of polygons, which will help me find the centerline of the road. But in Emgu CV I can only find a way to get the Voronoi diagram of a set of points. This is done by finding the Delaunay triangulation of the set of points (using the Subdiv2D class) and then computing the Voronoi facets with GetVoronoiFacets.
I have tried computing the Voronoi diagram of the points defined by all the polygons in the set (each polygon is a list of points), but this gives me an extremely complicated Voronoi diagram, as one might expect:
Voronoi diagram of set of points:
This image shows a smaller portion of the first picture (for clarity, since it is so convoluted). Indeed some of the lines in the diagram seem to represent the road centerline, but there are so many other lines, it will be tough to find a criterion to extract the "good" lines.
Another potential problem that I am facing is that, as you should be able to tell from the first picture, some polygons are in the interior of others, so we are not in the standard situation of a set of disjoint closed polygons. That is, sometimes the road is between the outer boundary of one polygon and the inner boundary of another.
I'm looking for suggestions as to how to compute the Voronoi graph of the set of polygons using Emgu CV (or OpenCV), hopefully overcoming this second problem I've outlined as well. I'm also open to other suggestions for how to achieve this without using Emgu CV.
If you already have polygons, you can try computing the Straight Skeleton.
I haven't tried it, but CGAL has an implementation. Note that this particular function's license is GPL.
A possible issue:
The current version of this CGAL package can only construct the straight skeleton in the interior of a simple polygon with holes; that is, it doesn't handle general polygonal figures in the plane.
There are probably workarounds for that. For example, you can enclose all the polygons in a bigger rectangle, so that the original polygons become holes of the new rectangle. This may not work well if the original polygons themselves have holes. To solve that, you could run the algorithm for each polygon with holes, then put all the polygons (with their holes removed) in a rectangle and run the algorithm again.
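As an illustration of that rectangle workaround, here is a minimal sketch in Python with Shapely (not CGAL; the straight-skeleton step itself is left to whatever implementation you use, and the input exteriors are assumed to be disjoint):

    from shapely.geometry import Polygon, box

    def enclose_as_holes(polygons, margin=10.0):
        """Return a rectangle whose holes are the input polygons' exteriors
        (any holes of the inputs are dropped, as suggested above)."""
        minx = min(p.bounds[0] for p in polygons) - margin
        miny = min(p.bounds[1] for p in polygons) - margin
        maxx = max(p.bounds[2] for p in polygons) + margin
        maxy = max(p.bounds[3] for p in polygons) + margin
        rect = box(minx, miny, maxx, maxy)
        return Polygon(rect.exterior.coords,
                       [p.exterior.coords for p in polygons])

The interior straight skeleton of the resulting polygon-with-holes then runs between the rectangle and the road outlines; roughly speaking, the skeleton branches lying between two road outlines are your centerline candidates.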
Can someone explain the difference between connected component labeling and image segmentation in image processing? I've read about these techniques and found that the outcome of both is almost the same.
Segmentation is a problem statement: How do you assign a finite set of labels to each pixel in an image, ideally so that the labels correspond to real-world objects you're looking for?
Connected component labeling is (or can be seen as) one very simple approach to solving that problem: Simply assign the same unique label to connected sets of pixels that share some binary characteristic (e.g. brightness above some fixed threshold).
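For instance, a minimal OpenCV sketch of exactly that approach (the file name and threshold value are hypothetical):

    import cv2

    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
    # binary characteristic: brightness above a fixed threshold
    _, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
    # assign the same unique label to each connected set of foreground pixels
    num_labels, labels = cv2.connectedComponents(binary)
    print(num_labels - 1, "components (label 0 is the background)")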
It is, however, by no means the only or the best approach: just google "graph cut segmentation" or "watershed segmentation" to find examples of segmentations that simply aren't possible with connected component labeling.
Connected components labeling scans an image and groups its pixels into components based on pixel connectivity. Connected components, in a 2D image, are clusters of pixels with the same value that are connected to each other through either 4-pixel or 8-pixel connectivity.
Image segmentation can use any applicable algorithm to extract the voxels of interest that satisfy the feature the user is looking for. This means that not all the segmented voxels need to be connected to each other in the connected-components sense. Sometimes the volume of interest (VOI) is obtained using connected components, depending on what the VOI is. But most of the time in image processing, for example when you are looking for a specific shape in your image, you cannot use connected components, since not all the voxels in the VOI are connected together.
My map (Google Maps SDK) displays a large number of location markers. To organize them into clusters I used distance-based clustering -- simple. Now the challenge is to define the right business logic to set the correct zoom level for displaying the marker clusters.
See the ENCLOSED picture.
I'd like the markers rendered on the map to be clustered into AS MANY groups AS will fit the bounds of the map. For example, if the Google Maps SDK is about to create a single cluster, break it up and create as many clusters as will tightly fit the viewable map area. Do not show one cluster, zoomed out.
In your answer, please be specific about which approach you would use and why. How would you set the right zoom level?
Grid-based Clustering
Distance-based Clustering
Viewport Marker Management
Fusion Tables
MarkerClusterer
MarkerManager
VISUAL REPRESENTATION: Show B but not A:
https://drive.google.com/file/d/0B70UaoIrLEeLTUxMZXBCTTRIY1U/edit?usp=sharing
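To make the grid-based option above concrete, here is a minimal plain-Python sketch, independent of the Google Maps SDK (all names are hypothetical): divide the current viewport into a fixed number of cells and merge the markers in each cell, so the cluster count tracks the visible area at any zoom level.

    from collections import defaultdict

    def grid_clusters(markers, bounds, cells=8):
        """markers: [(lat, lng)]; bounds: (south, west, north, east) of the viewport."""
        south, west, north, east = bounds
        cell_h = (north - south) / cells
        cell_w = (east - west) / cells
        buckets = defaultdict(list)
        for lat, lng in markers:
            if south <= lat <= north and west <= lng <= east:
                row = min(int((lat - south) / cell_h), cells - 1)
                col = min(int((lng - west) / cell_w), cells - 1)
                buckets[(row, col)].append((lat, lng))
        # each cluster sits at the centroid of its cell's markers
        return [(sum(p[0] for p in pts) / len(pts),
                 sum(p[1] for p in pts) / len(pts),
                 len(pts))
                for pts in buckets.values()]

Because the grid is derived from the viewport rather than from a fixed pixel distance, re-running this on every camera change gives the "as many clusters as fit the bounds" behaviour regardless of zoom.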
I'm trying to detect objects and text in a hand-drawn diagram.
My goal is to be able to "parse" something like this into an object structure for further processing.
My first aim is to detect text, lines and boxes (arrows etc. are not important, for now ;))
I can do dilation, erosion, Otsu thresholding, inversion etc. and easily get to something like this
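For reference, a minimal OpenCV sketch of those preprocessing steps (the file name and kernel size are hypothetical):

    import cv2
    import numpy as np

    img = cv2.imread("diagram.png", cv2.IMREAD_GRAYSCALE)
    # Otsu threshold with inversion: ink becomes white foreground
    _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    bw = cv2.dilate(cv2.erode(bw, kernel), kernel)  # erosion then dilation removes specks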
What I need some guidance for are the next steps.
I have several ideas:
Contour Analysis
OCR using UNIPEN
Edge detection
Contour Analysis
I've been reading about "Contour Analysis for Image Recognition in C#" on CodeProject, which could be a great way to recognize boxes etc., but my issue is that the boxes are connected and therefore do not form separate objects to match against a template.
Therefore I need some advice on whether this is a feasible way to go.
OCR using UNIPEN
I would like to use UNIPEN (see "Large pattern recognition system using multi neural networks" on CodeProject) to recognize handwritten letters and then "remove" them from the image, leaving only the boxes and lines.
Edge detection
Another way could be to detect all lines and corners and in that way infer the boxes and lines that the image consists of. In that case, ideas on how to straighten the lines and find the 90-degree corners would be helpful.
Generally, I think I just need some pointers on which strategy to apply, not code samples (though it would be great ;))
I will try to answer regarding the contour analysis and the lines between the boxes.
If you need to turn the interconnected boxes into separate objects, that can be achieved easily enough:
close the gaps in the box edges with morphological closing
perform connected components labeling and look for compact objects (e.g. objects whose area is close to the area of their bounding box)
You will get the insides of the boxes. These can be elliptical or rectangular or any shape you may find in common diagrams; the contour analysis can tell you which. A problem may arise with enclosed background areas (e.g. the space between the ABC links in your example diagram). You might eliminate these on the criterion that their bounding box overlaps with multiple other objects' bounding boxes.
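A minimal OpenCV sketch of those two steps, assuming a binary image with white strokes on a black background (the file name, kernel size and fill-ratio threshold are guesses to tune):

    import cv2
    import numpy as np

    bw = cv2.imread("diagram_bw.png", cv2.IMREAD_GRAYSCALE)  # hypothetical binary input
    kernel = np.ones((5, 5), np.uint8)
    closed = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)   # close gaps in the box edges

    # label the background regions: the enclosed ones are the box insides
    n, labels, stats, _ = cv2.connectedComponentsWithStats(255 - closed)
    img_h, img_w = closed.shape
    boxes = []
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        if x == 0 or y == 0 or x + w == img_w or y + h == img_h:
            continue  # touches the image border: outer background, not a box inside
        if area / float(w * h) > 0.8:  # compact: area close to bounding-box area
            boxes.append((x, y, w, h))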
Now find line segments with HoughLinesP. If a segment finishes or starts within a certain distance of the edge of one of the objects, you can assume it is connected to that object.
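Continuing the sketch, with `closed` and `boxes` from above (the distance tolerance and Hough parameters are guesses):

    import cv2
    import numpy as np

    def near_box(pt, box, tol=10):
        """True if pt lies within tol pixels of the box's bounding rectangle."""
        x, y, w, h = box
        px, py = pt
        return x - tol <= px <= x + w + tol and y - tol <= py <= y + h + tol

    lines = cv2.HoughLinesP(closed, 1, np.pi / 180, threshold=50,
                            minLineLength=30, maxLineGap=5)
    connections = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            ends = [b for b in boxes
                    if near_box((x1, y1), b) or near_box((x2, y2), b)]
            # segments lying on a single box's own edge hit only that one box
            if len(ends) == 2 and ends[0] != ends[1]:
                connections.append((ends[0], ends[1]))  # segment links two boxes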
As an added touch you could try to detect arrow ends on either side by checking the width profile of the line segments in a neighbourhood of their endpoints.
It is an interesting problem; I will try to remember it and give it to my students to sink their teeth into.
Is there a way to extract coordinates inside a polygon in Google Earth? For example, I have a project in which I need the coordinates for every 1 km^2 in an area, to use in MATLAB. How can this be done?
While you could use Google Earth to overlay polygons and a grid of coordinates and visually determine your answer, Google Earth does not provide a way to select and export features (grid points) within another feature (a polygon).
However, you could approach the problem by:
1. save the polygon feature to a KML file;
2. parse the KML file using an XML reader in your favorite language;
3. construct a grid of coordinates that you want to test;
4. use a library (such as JTS in Java, GEOS in C, or Shapely in Python) that implements a point-in-polygon algorithm. MATLAB also appears to provide a point-in-polygon function.