In ESRI's ArcGIS software, the Spatial Join tool has various "Match Options" for joining your data. I would like to know the fastest and most reliable Match Option for the following scenario:
Target features: Point Class
Join Features: Polygon class
-The points are within the polygons
-No polygons overlap
-All points are within a polygon
-Some polygons have many points
The ones I am considering (unless a different one is better) are:
Closest, Intersect, Within, and Have Their Centers In
Once I choose the fastest of those, will it be faster to include a search radius? Or is ArcGIS smart enough that, if you leave it blank, it will assume a search radius of 0?
I have a project to build a 3D model of the spinal roots in order to simulate their stimulation by an electrode. For the moment I've been handed two things: the extracted positions of the spinal roots (from the CT scans) and the segments selected out of those points (see both pictures below). The data I'm provided is in 3D, and all the segments are clearly distinct, although it does not look like it in the figures below since they are zoomed out.
Points and segments extracted from the spinal cord CT scans:
Selected segments out of the points:
I'm now trying to connect these segments so as to end up with the centrelines of all the spinal roots. The segments are not classified; they are simply drawn in different colors to distinguish them on the plot. The task is then to vertically connect the segments that appear to be part of the same root path.
I've been reviewing the literature on how I could tackle this issue. As I'm still quite new to the field, I don't have much intuition about what could work and what could not. I have two subtasks to solve here, connecting the lines and classifying the roots; and while connecting the segments after classification seems no big deal, classifying them seems considerably harder. So I'm not sure in which order to proceed.
Here are the few options I'm considering to deal with the task:
Use a Kalman filter to track the vertical lines through the selected segments and fill in the missing parts
Use a Hough transform to detect vertical lines, by expressing the spinal root segments in parametric space, seeing how they cluster, and checking whether anything can be inferred from there.
Apply some sort of SVM classification algorithm to the segments to classify them by root. I could characterize each segment by its orientation and position, classify segments based on similarities in the parameters I select, and then connect them. Or use the endpoint position of each segment and connect it to one of its nearest neighbours if their orientation/position match.
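The nearest-neighbour idea in that last option can be sketched quickly. This is a rough, hypothetical version, not a tested method: it assumes each segment is a pair of endpoints whose last coordinate increases along the connection direction, and the gap/angle thresholds are made-up numbers to tune against the real data.

```python
import numpy as np

def connect_segments(segments, max_gap=5.0, max_angle_deg=20.0):
    """Greedily chain segments: link the lower endpoint of one segment
    to the nearest upper endpoint of another when the gap is small and
    the orientations agree.  Each segment is a pair of points; the last
    coordinate is taken as the vertical axis (an assumption of this sketch)."""
    segs = [np.asarray(s, dtype=float) for s in segments]
    links = []
    for i, a in enumerate(segs):
        bottom = a[np.argmax(a[:, -1])]          # lower endpoint of segment i
        dir_a = a[1] - a[0]
        best, best_d = None, max_gap
        for j, b in enumerate(segs):
            if j == i:
                continue
            top = b[np.argmin(b[:, -1])]         # upper endpoint of segment j
            if top[-1] <= bottom[-1]:            # j must continue downward
                continue
            d = np.linalg.norm(top - bottom)
            dir_b = b[1] - b[0]
            cos = abs(np.dot(dir_a, dir_b)) / (np.linalg.norm(dir_a) * np.linalg.norm(dir_b))
            angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
            if d < best_d and angle < max_angle_deg:
                best, best_d = j, d
        if best is not None:
            links.append((i, best))
    return links
```

A greedy pass like this will fail where roots cross or gaps are large, which is where the Kalman or Hough ideas above would earn their keep.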
I'm open to any suggestions, any piece of advice, or any other ideas on how to deal with the current problem.
I am working with the Uber H3 library. Using the polyfill function, I have populated an area with H3 indexes at a specific resolution. But I don't need all the indexes. I want to identify and remove the indexes that fall on isolated areas like jungles, lakes, ponds, etc.
Any thoughts on how that can be achieved?
I thought that if I can map all the buildings in a city in their respective indexes, I can easily identify those indexes in which no buildings are mapped.
I’d maintain a HashMap with the H3 index as the key and a list of the coordinates that lie in that index as the value.
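That map-of-lists idea is a few lines in Python. In this sketch, `cell_id` is a hypothetical stand-in for the real indexing call (e.g. a lambda wrapping `h3.geo_to_h3(lat, lng, resolution)`), so the snippet runs without the H3 library installed:

```python
from collections import defaultdict

def bucket_by_cell(coords, cell_id):
    """Group (lat, lng) coordinates into a dict keyed by cell index.
    `cell_id(lat, lng)` is a placeholder for the real H3 indexing call."""
    buckets = defaultdict(list)
    for lat, lng in coords:
        buckets[cell_id(lat, lng)].append((lat, lng))
    return buckets
```

Any index missing from the resulting dict (or mapped to an empty list) would be one of the "no buildings here" cells you want to drop.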
In order to address this, you'll need some other dataset(s). Where to find this data depends largely on the city you're looking at, but a simple Google search for building footprint data should provide some options.
Once you have footprint data, there are several options depending on the resolution of the grid that you're using and your performance requirements.
You could polyfill each footprint and keep the resulting hexagons
For coarser data, just using geoToH3 to get the hexagon for each vertex in each building polygon would be faster
If the footprints are significantly smaller than your hex size, you could probably just take a single coordinate from each building.
Once you have the hexagons for each building, you can simply do a set intersection with your polygon hexes and your building hexes to get the "good" set. But it may be easier in many cases to remove bad hexagons rather than including good ones - in this case you'd need a dataset of non-building features, e.g. water and park features, and do the reverse: polyfill the undesired features, and subtract these hexagons from your set.
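Since H3 indexes are plain hashable values, both directions come down to one set operation each. A minimal sketch (the hex IDs below are placeholders, not real indexes from your area):

```python
# Placeholder IDs standing in for real H3 indexes.
area_hexes     = {"hexA", "hexB", "hexC"}   # polyfill of the whole area
building_hexes = {"hexA", "hexD"}           # hexes derived from building footprints

# Keep only hexes that contain buildings:
good = area_hexes & building_hexes

# Or the reverse: polyfill undesired features (water, parks) and subtract them:
bad  = {"hexC"}
kept = area_hexes - bad
```

Which direction is cheaper depends on whether building data or water/park data is smaller and easier to obtain for your city.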
Our core aim is:
to use Image Processing to read/scan an architectural Floor Plan Image (exported from a CAD software)
extract the various lines and curves, and group them into Structural Entities like walls, columns, beams etc. – ‘Wall_01’, ‘Beam_03’ and so on
extract the dimensions of each of these Entities based on the scale and the length of the lines in the Floor Plan Image (since AutoCAD lines are dimensionally accurate as per the specified Scale)
and associate each of these Structural Entities (and their dimensions) with a ‘Room’.
We have flexibility in that we can define the exact shapes of the different Structural Entities in the Floor Plan Image (rectangles for doors, rectangles with hatch lines for windows etc.) and export them into a set of images for each Structural Entity (e.g. one image for walls, one for columns, one for doors etc.).
For point ‘B’ above, our current approach based on OpenCV is as follows:
Export each Structural Entity into its own image
Use Canny and HoughLine Transform to identify lines within the image
Group these lines into individual Structural Elements (like ‘Wall_01’)
We have managed to detect/identify the line segments using Canny+HoughLine Transform with a reasonable amount of accuracy.
Original Floor Plan Image
Individual ‘Walls’ Image:
Line Segments identified using Canny+HoughLine:
(I don't have enough reputation to post images yet)
So the current question is - what is the best way to group these lines together into a logical Structural Entity like ‘Wall_01’?
Moreover, are there any specific OpenCV based techniques that can help us group the line segments into logical Entities? Are we approaching the problem correctly? Is there a better way to solve the problem?
Update:
Adding another image of valid wall input image.
You mention "exported from a CAD software". If the export format is PDF, it contains vector data for all graphic elements. You might be better off trying to extract and interpret that. Seems a bit cumbersome to go from a vector format to a pixel format which you then try to bring back to a numerical model.
If you have clearly defined constraints as to what your walls, doors, etc. will look like in your image, you should use exactly those. If you are generating the CAD exports yourself, modify the settings there so as to facilitate this.
For instance, the doors are all brown and are closed figures.
Same for grouping the walls. In the figures, it looks like you can group based on proximity (i.e., anything within X pixels of anything else in the group belongs to that group). Although, the walls to the right of the text 'C7' and below it may get grouped into one.
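That proximity rule maps naturally onto union-find. A pure-Python sketch, assuming each segment is a pair of endpoints and an arbitrary pixel threshold:

```python
def group_by_proximity(segments, max_dist=10.0):
    """Union-find grouping: segments whose endpoints come within
    `max_dist` pixels of each other end up in the same group.
    Each segment is ((x1, y1), (x2, y2)); returns lists of segment indices."""
    parent = list(range(len(segments)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def close(a, b):
        return any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= max_dist ** 2
                   for p in a for q in b)

    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            if close(segments[i], segments[j]):
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(segments)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

This is O(n²) in the number of segments, which is fine for a single floor plan; the 'C7' case in the answer above shows why a distance threshold alone can over-merge, so orientation checks may need to be added to `close`.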
If you do not have clear definitions, you may be looking at a generic image recognition problem, which means AI or Machine Learning. This would require a large variety of inputs to learn from, and may get very complex.
I'm trying to do a particle analysis of cross section of nerve cells. In essence, each nerve fiber has an outer and inner radius and I want to calculate the annular region. It's fairly simple to convert the image to a binary one, and then analyze particles, but it only gives the area of the outer region (inner region included). I want to somehow automate finding the outer region (marked by the outer edge of the black band) area less the inner region (marked off by the inner edges of the black band). Picture is related to what I'm talking about (the image is a sample and has already been converted to binary).
You should first invert the image, so that the white patterns are what you want to analyze. Then apply connected component labeling in order to separate the patterns, and analyze them one by one.
Unfortunately, I don't think that connected component labeling and separation exists in ImageJ, but you can take a look at the union-find algorithm; it's pretty simple. Here is some Java code.
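As a sketch of that suggestion (in Python rather than Java, since the linked code may not be available): union-find based 4-connected labeling on the inverted binary image. Each annular band then becomes one labeled component, and its pixel count is the annular area directly, with no need to subtract an inner region:

```python
def label_components(grid):
    """4-connected component labeling of a binary image via union-find.
    `grid` is a list of rows of 0/1 (1 = inverted band pixel); returns
    {root: pixel_count}, one entry per connected white pattern."""
    h, w = len(grid), len(grid[0])
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for y in range(h):
        for x in range(w):
            if not grid[y][x]:
                continue
            parent.setdefault((y, x), (y, x))
            if x > 0 and grid[y][x - 1]:    # merge with left neighbour
                union((y, x), (y, x - 1))
            if y > 0 and grid[y - 1][x]:    # merge with upper neighbour
                union((y, x), (y - 1, x))

    areas = {}
    for p in parent:
        areas[find(p)] = areas.get(find(p), 0) + 1
    return areas
```

Converting pixel counts to physical areas is then just a multiplication by the image's calibrated pixel size.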
I'm trying to detect objects and text in a hand-drawn diagram.
My goal is to be able to "parse" something like this into an object structure for further processing.
My first aim is to detect text, lines and boxes (arrows etc... are not important (for now ;))
I can do dilation, erosion, Otsu thresholding, inversion, etc. and easily get to something like this
What I need some guidance for are the next steps.
I have several ideas:
Contour Analysis
OCR using UNIPEN
Edge detection
Contour Analysis
I've been reading about "Contour Analysis for Image Recognition in C#" on CodeProject which could be a great way to recognize boxes etc. but my issue is that the boxes are connected and therefore do not form separate objects to match with a template.
Therefore I need some advice on whether this is a feasible way to go.
OCR using UNIPEN
I would like to use UNIPEN (see "Large pattern recognition system using multi neural networks" on CodeProject) to recognize handwritten letters and then "remove" them from the image leaving only the boxes and lines.
Edge detection
Another way could be to detect all lines and corners and in that way infer the boxes and lines that the image consists of. In that case, ideas on how to straighten the lines and find the 90-degree corners would be helpful.
Generally, I think I just need some pointers on which strategy to apply, not code samples (though it would be great ;))
I will try to answer about the contour analysis and the lines between them.
If you need to turn the interconnected boxes into separate objects, that can be achieved easily enough:
close the gaps in the box edges with morphological closing
perform connected components labeling and look for compact objects (e.g. objects whose area is close to the area of their bounding box)
You will get the insides of the boxes. These can be elliptical or rectangular or any shape you may find in common diagrams, the contour analysis can tell you which. A problem may arise for enclosed background areas (e.g. the space between the ABC links in your example diagram). You might eliminate these on the criterion that their bounding box overlaps with multiple other objects' bounding boxes.
Now find line segments with HoughLinesP. If a segment finishes or starts within a certain distance of the edge of one of the objects, you can assume it is connected to that object.
As an added touch you could try to detect arrow ends on either side by checking the width profile of the line segments in a neighbourhood of their endpoints.
It is an interesting problem; I will try to remember it and give it to my students to cut their teeth on.