I want to know which polygons intersect (contain, are contained by, or overlap) the current MKMapView viewport, out of millions of polygons in a GeoJSON file. A QuadTree handled this fine when I was working with millions of points, but that approach doesn't seem to work for polygons.
I'm wondering what the best strategy is. What algorithm and data structure are suitable for this?
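For reference, one common approach is to index each polygon by its bounding box, whether in a QuadTree or an R-tree, and test only the candidates whose boxes intersect the visible map rect; an exact polygon test then only runs on that small candidate set. Below is a minimal Swift sketch of just the bounding-box filter, assuming the polygons are already loaded as MKPolygon overlays and leaving the spatial index itself out:

import MapKit

// Sketch: reject most polygons cheaply by bounding box before any exact test.
// In practice the bounding boxes would live in a QuadTree or R-tree rather than an array.
func candidatePolygons(in polygons: [MKPolygon], visibleRect: MKMapRect) -> [MKPolygon] {
    // MKPolygon caches boundingMapRect, so this test is cheap; the survivors can
    // then be checked with a precise polygon/rect intersection if required.
    return polygons.filter { MKMapRectIntersectsRect($0.boundingMapRect, visibleRect) }
}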
Related
I am working with the Uber H3 library. Using the polyfill function, I have populated an area with H3 indexes at a specific resolution. But I don't need all the indexes: I want to identify and remove the indexes that fall on isolated areas like jungles, lakes, ponds, etc.
Any thoughts on how that can be achieved?
I thought that if I could map all the buildings in a city to their respective indexes, I could easily identify the indexes in which no buildings are mapped.
I'd maintain a HashMap with the H3 index as the key and a list of the coordinates that lie in that index as the value.
In order to address this, you'll need some other dataset(s). Where to find this data depends largely on the city you're looking at, but a simple Google search for footprint data should provide some options.
Once you have footprint data, there are several options depending on the resolution of the grid that you're using and your performance requirements.
You could polyfill each footprint and keep the resulting hexagons
For coarser data, just using geoToH3 to get the hexagon for each vertex in each building polygon would be faster
If the footprints are significantly smaller than your hex size, you could probably just take a single coordinate from each building.
Once you have the hexagons for each building, you can simply do a set intersection with your polygon hexes and your building hexes to get the "good" set. But it may be easier in many cases to remove bad hexagons rather than including good ones - in this case you'd need a dataset of non-building features, e.g. water and park features, and do the reverse: polyfill the undesired features, and subtract these hexagons from your set.
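To make the set logic concrete, here is a minimal Swift sketch; the H3 indexes are treated as plain strings, and areaHexes, buildingHexes and waterAndParkHexes stand in for sets you would have produced elsewhere with polyfill or geoToH3:

// Hypothetical sets of H3 indexes, produced elsewhere via polyfill / geoToH3.
let areaHexes: Set<String> = ["8928308280fffff", "8928308280bffff", "89283082807ffff"]
let buildingHexes: Set<String> = ["8928308280fffff"]
let waterAndParkHexes: Set<String> = ["8928308280bffff"]

// Keep only the hexes that contain at least one building...
let goodHexes = areaHexes.intersection(buildingHexes)

// ...or, going the other way, subtract the hexes covering undesired features.
let withoutBadHexes = areaHexes.subtracting(waterAndParkHexes)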
I'm trying to keep this question as specific as I can to avoid it being closed as too broad. My end goal is to render marine (as in nautical) maps. See the image below as a reference. I've researched some of the various Apple frameworks to see what suits this best. My data input is effectively an array of arrays, where each child array represents a cartographic feature (think an island or a boat dock). I started with Core Graphics as it has a very simple API; however, its performance is poor (it was taking >100 ms for a single layer of data, and I can expect 10-20 layers on average).
Which brings me to my question: would SpriteKit be an effective framework for handling this workload? My preference is to avoid learning Metal, but if fellow devs recommend this approach I will invest the time. SpriteKit seems able to handle this; I'll probably be working with a few thousand to a few hundred thousand points/vertices at a time. I don't need any complex animations, as the map is static in terms of display. Any input appreciated!
GeoJSONMap
Build maps from GeoJSON with MapKit or SpriteKit.
SpriteKit maps can be displayed offline and/or as planes in ARKit.
I loaded a city map resulting in 256 static SpriteKit nodes made from filled GeoJSON polygons; it gives me only 3.7 FPS on an iPhone XS. Perhaps some optimisation is possible, but I did not try.
I am building an app where I visualise a rather large dataset (~5 million polygons) evenly distributed over a geographic area.
Roughly 2000 polygons are displayed at once at the appropriate zoom level. When zoomed out, the data is simply hidden.
To speed up drawing of the polygons I've implemented an R*-tree that returns the polygons that overlap the area in question.
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context {
    // Convert the visible map rect to a coordinate region and ask the R*-tree for overlapping polygons.
    MKCoordinateRegion region = MKCoordinateRegionForMapRect(mapRect);
    NSArray *polygons = [[Polygons sharedPolygons] polygonsInRegion:region];
    for (Polygon *p in polygons) {
        // Draw polygon
    }
}
Once the polygons are loaded into memory, the actual sorting seems solvable by fetching and storing in the R-tree only the polygons that the user sees. The user is only interested in features close by or in specific regions.
I have tried SQLite, but it does not seem like the right choice in this case, considering that the dataset quickly becomes fairly large (>1 GB), and maybe SQLite isn't optimal for querying features within specific regions?
What are some clever ways I can store this dataset in the bundle?
Are there any specific technologies you suggest I try out for this?
You will not be able to load the entire 1 GB dataset into memory.
You should store the data in an R-tree in the database so that you can make region queries directly when you load the data.
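If the database is SQLite, note that it ships with an R*Tree module built for exactly this kind of bounding-box region query (the SQLite bundled with iOS is normally built with it). A rough Swift sketch, with made-up table and column names and error handling omitted:

import SQLite3

// Open (or create) the database.
var db: OpaquePointer?
sqlite3_open("polygons.sqlite", &db)

// SQLite's R*Tree module stores one bounding box per polygon id.
sqlite3_exec(db, "CREATE VIRTUAL TABLE IF NOT EXISTS polygon_index USING rtree(id, minLon, maxLon, minLat, maxLat);", nil, nil, nil)

// Fetch the ids of all polygons whose bounding box intersects the visible region.
let sql = "SELECT id FROM polygon_index WHERE maxLon >= ? AND minLon <= ? AND maxLat >= ? AND minLat <= ?;"
var stmt: OpaquePointer?
sqlite3_prepare_v2(db, sql, -1, &stmt, nil)
sqlite3_bind_double(stmt, 1, -122.52)  // visible region: min longitude
sqlite3_bind_double(stmt, 2, -122.35)  // max longitude
sqlite3_bind_double(stmt, 3, 37.70)    // min latitude
sqlite3_bind_double(stmt, 4, 37.83)    // max latitude
while sqlite3_step(stmt) == SQLITE_ROW {
    let polygonID = sqlite3_column_int64(stmt, 0)
    // Load only this polygon's geometry (from another table or a file) and draw it.
    print(polygonID)
}
sqlite3_finalize(stmt)
sqlite3_close(db)

That way only the geometries overlapping the visible region are ever pulled into memory.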
I'm trying to write a C++ program using the OpenCV library that will reconstruct 3D points from corresponding 2D markers placed on a human model.
But I have a question: how does the commercial mocap (motion capture) industry figure out which markers belong to which bone?
What I mean is: let's suppose there are three markers placed on the left upper arm. What method do they use to associate these three markers with the left upper arm from frame to frame?
After all, a marker could just as well belong to the right upper arm, or to any other bone such as the chest or femur.
So what process do they implement to differentiate between markers and assign each marker to the proper bone?
Do they use optical flow or SIFT to track markers, with the markers labelled per bone in frame 1? But even if the mocap industry uses such methods, aren't they very time consuming? I saw a video on YouTube where they associate and reconstruct markers in real time.
Could you tell me what procedure the commercial mocap industry follows to correspond points to the individual parts of the skeleton?
After all, you need to do this because you have to write the xRot, yRot and zRot (rotations about the x, y and z axes) of the bones into a .bvh file so that you can view the 2D motion in 3D.
So what's the secret?
For motion capture, or tracking objects with markers in general, the way to go is to keep track of the markers themselves between two frames and to keep track of the distances between the markers. A combination of this information is used to determine whether a marker is the same as one close to it in the previous frame.
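As a rough illustration of that frame-to-frame association, here is a greedy nearest-neighbour sketch (written in Swift purely to show the logic, and much simpler than what commercial systems actually do; the function and parameter names are invented):

import simd

// Greedy nearest-neighbour association: each labelled marker from the previous
// frame claims the closest unclaimed detection in the current frame, provided it
// is within a plausible per-frame travel distance.
func associate(previous: [String: SIMD3<Double>],
               current: [SIMD3<Double>],
               maxTravel: Double) -> [String: SIMD3<Double>] {
    var claimed = Set<Int>()
    var result: [String: SIMD3<Double>] = [:]
    for (label, lastPosition) in previous {
        var bestIndex: Int? = nil
        var bestDistance = maxTravel
        for (i, candidate) in current.enumerated() where !claimed.contains(i) {
            let d = simd_distance(lastPosition, candidate)
            if d < bestDistance {
                bestDistance = d
                bestIndex = i
            }
        }
        if let i = bestIndex {
            claimed.insert(i)
            result[label] = current[i]  // the marker keeps the bone label it was given in frame 1
        }
        // Markers with no match are treated as occluded for this frame.
    }
    return result
}

On top of that, the distances between markers on the same rigid segment stay (nearly) constant from frame to frame, which is what disambiguates, say, the left upper arm from the right.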
These systems also often use multiple cameras and calibration objects whose marker positions are known, so the correlation between the cameras can be determined. The algorithms that do this detection are highly advanced in commercial mocap solutions.
I'd like to build an app using the new GLKit framework, and I'm in need of some design advice. I'd like to create an app that will present up to a couple thousand "bricks" (objects with very simple geometry). Most will have identical texture, but up to a couple hundred will have unique texture. I'd like the bricks to appear every few seconds, move into place and then stay put (in world coords). I'd like to simulate a camera whose position and orientation are controlled by user gestures.
The advice I need is about how to organize the code. I'd like my model to be a collection of bricks that have a lot more than graphical data associated with them:
Does it make sense to associate a view-like object with each brick to handle geometry, texture, etc.?
Should every brick have its own vertex buffer?
Should each have its own GLKBaseEffect?
I'm looking for help organizing what object should do what during setup, then rendering.
I hope I can stay close to the typical MVC pattern, with my GLKViewController observing model state changes, controlling eye coordinates based on gestures, and so on.
Would be much obliged if you could give some advice or steer me toward a good example. Thanks in advance!
With respect to the models, I think an approach analogous to the relationship between UIImage and UIImageView is appropriate. So every type of brick has a single vertex buffer, GLKBaseEffect, texture and whatever else. Each brick may then appear multiple times, just as multiple UIImageViews may use the same UIImage. In terms of keeping multiple reference frames, it's actually a really good idea to build a hierarchy essentially equivalent to UIView, with each node containing some transform relative to its parent and one kind of node being able to display a model.
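A minimal Swift sketch of that sharing arrangement (the BrickType/Brick names and the textureName parameter are just for illustration):

import GLKit

// Shared, immutable per-type resources: one effect, one texture, one vertex buffer
// (analogous to UIImage).
final class BrickType {
    let effect = GLKBaseEffect()
    var vertexBuffer: GLuint = 0
    init(textureName: GLuint) {
        effect.texture2d0.name = textureName
        effect.texture2d0.enabled = GLboolean(GL_TRUE)
    }
}

// Lightweight per-instance state: which type to draw and where (analogous to UIImageView).
final class Brick {
    let type: BrickType
    var modelMatrix = GLKMatrix4Identity
    init(type: BrickType) { self.type = type }
}

Each effect, texture and vertex buffer then exists once per brick type, while the per-instance cost is just a transform.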
From the GLKit documentation, I think the best way to keep the sort of camera you want (and indeed the object locations) is to store it directly as a GLKMatrix4 or a GLKQuaternion: rather than deriving the matrix or quaternion (plus location) from some other description of the camera, the matrix or quaternion itself is the storage for the camera.
Both of those types have functions for applying rotations, and GLKMatrix4 can also handle translations directly. So you can map the relevant gestures straight onto those functions.
The only slightly non-obvious thing I can think of when dealing with the camera in that way is that you want to send the inverse to OpenGL rather than the thing itself. Supposing you use a matrix, the reasoning is that if you wanted to draw an object at that location you'd load the matrix directly then draw the object. When you draw an object at the same location as the camera you want it to end up being drawn at the origin. So the matrix you have to load for the camera is the inverse of the matrix you'd load to draw at that location because you want the two multiplied together to be the identity matrix.
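Concretely, something along these lines (a sketch; the gesture handlers and names are invented):

import GLKit

// The camera is stored directly as a transform in world space.
var cameraMatrix = GLKMatrix4Identity

// Map gestures straight onto the stored matrix, e.g. a pan orbits and a pinch dollies.
func handlePan(angleRadians: Float) {
    cameraMatrix = GLKMatrix4RotateY(cameraMatrix, angleRadians)
}
func handlePinch(dolly: Float) {
    cameraMatrix = GLKMatrix4Translate(cameraMatrix, 0, 0, dolly)
}

// OpenGL gets the inverse: an object drawn at the camera's own transform must land
// at the origin, so cameraMatrix multiplied by the view matrix is the identity.
func viewMatrix() -> GLKMatrix4 {
    var invertible = false
    return GLKMatrix4Invert(cameraMatrix, &invertible)
}

Per object you would then set the effect's transform.modelviewMatrix to viewMatrix() multiplied by that object's own model matrix.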
I'm not sure how complicated the models for your bricks are but you could hit a performance bottleneck if they're simple and all moving completely independently. The general rule when dealing with OpenGL is that the more geometry you can submit at once, the faster everything goes. So, for example, an entirely static world like that in most games is much easier to draw efficiently than one where everything can move independently. If you're drawing six-sided cubes and moving them all independently then you may see worse performance than you might expect.
If you have any bricks that move in concert then it is more efficient to draw them as a single piece of geometry. If you have any bricks that definitely aren't visible then don't even try to draw them. As of iOS 5, GL_EXT_occlusion_query_boolean is available, which is a way to pass some geometry to OpenGL and ask if any of it is visible. You can use that in realtime scenes by building a hierarchical structure describing your data (which you'll already have if you've directly followed the UIView analogy), calculating or storing some bounding geometry for each view and doing the draw only if the occlusion query suggests that at least some of the bounding geometry would be visible. By following that sort of logic you can often discard large swathes of your geometry long before submitting it.
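A rough sketch of how that query might look, assuming the EXT entry points from <OpenGLES/ES2/glext.h> are available; drawBoundingGeometry and drawRealGeometry are hypothetical closures, and the query id is assumed to have been created with glGenQueriesEXT:

import OpenGLES

// Ask the GPU whether any pixels of some cheap bounding geometry would be visible,
// and only submit the real geometry if so.
func drawIfVisible(query: GLuint,
                   drawBoundingGeometry: () -> Void,
                   drawRealGeometry: () -> Void) {
    // Disable colour and depth writes so the bounding geometry itself never shows up.
    glColorMask(GLboolean(GL_FALSE), GLboolean(GL_FALSE), GLboolean(GL_FALSE), GLboolean(GL_FALSE))
    glDepthMask(GLboolean(GL_FALSE))

    glBeginQueryEXT(GLenum(GL_ANY_SAMPLES_PASSED_EXT), query)
    drawBoundingGeometry()
    glEndQueryEXT(GLenum(GL_ANY_SAMPLES_PASSED_EXT))

    glColorMask(GLboolean(GL_TRUE), GLboolean(GL_TRUE), GLboolean(GL_TRUE), GLboolean(GL_TRUE))
    glDepthMask(GLboolean(GL_TRUE))

    // Note: reading the result back immediately stalls the pipeline; real code would
    // typically act on the previous frame's result instead.
    var anySamplesPassed: GLuint = 0
    glGetQueryObjectuivEXT(query, GLenum(GL_QUERY_RESULT_EXT), &anySamplesPassed)
    if anySamplesPassed != 0 {
        drawRealGeometry()
    }
}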