My map (Google Maps SDK) displays a large number of location markers. To organize them into clusters I used distance-based clustering, which was simple. Now the challenge is to define the right business logic to set the correct zoom level for displaying the marker clusters.
See the ENCLOSED picture.
I'd like the markers rendered on the map to be clustered into as many groups as will fit the bounds of the map. For example, if the Google Maps SDK is about to create a single cluster, break it up and create as many clusters as will tightly fit the viewable map area. Do not show one cluster, zoomed out.
In your answer, please be specific about which approach you would use and why. How would you set the right zoom level?
Grid-based Clustering
Distance-based Clustering
Viewport Marker Management
Fusion Tables
MarkerClusterer
MarkerManager
VISUAL REPRESENTATION: Show B but not A:
https://drive.google.com/file/d/0B70UaoIrLEeLTUxMZXBCTTRIY1U/edit?usp=sharing
Related
I am working with the Uber H3 library. Using the polyfill function, I have populated an area with H3 indexes at a specific resolution. But I don't need all the indexes; I want to identify and remove those indexes that fall over isolated areas like jungles, lakes, ponds, etc.
Any thoughts on how that can be achieved?
I thought that if I could map all the buildings in a city to their respective indexes, I could easily identify the indexes in which no buildings fall.
I'd maintain a hashmap with the H3 index as the key and a list of coordinates that lie in that index as the value.
In order to address this, you'll need some other dataset(s). Where to find this data depends largely on the city you're looking at, but a simple Google search for footprint data should provide some options.
Once you have footprint data, there are several options depending on the resolution of the grid that you're using and your performance requirements.
You could polyfill each footprint and keep the resulting hexagons
For coarser data, just using geoToH3 to get the hexagon for each vertex in each building polygon would be faster
If the footprints are significantly smaller than your hex size, you could probably just take a single coordinate from each building.
Once you have the hexagons for each building, you can simply do a set intersection between your polyfill hexes and your building hexes to get the "good" set. In many cases, though, it may be easier to remove bad hexagons rather than include good ones; in that case you'd need a dataset of non-building features (e.g. water and park features) and do the reverse: polyfill the undesired features and subtract those hexagons from your set.
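A rough sketch of the intersection approach (again assuming the h3-js v3 API, polyfill and geoToH3, and taking a single representative coordinate per building):

```typescript
import { polyfill, geoToH3 } from "h3-js";

const RESOLUTION = 9; // example resolution

// cityRing: outer ring of the area you polyfilled, as [lat, lng] pairs
// buildings: one representative [lat, lng] per building (e.g. a footprint centroid)
function cellsWithBuildings(
  cityRing: number[][],
  buildings: [number, number][]
): string[] {
  const areaCells = new Set(polyfill(cityRing, RESOLUTION));
  const buildingCells = new Set(
    buildings.map(([lat, lng]) => geoToH3(lat, lng, RESOLUTION))
  );
  // set intersection: keep only the area cells that contain at least one building
  return [...areaCells].filter((cell) => buildingCells.has(cell));
}
```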
Background: I need to make Google Ads campaigns that target a polygonal region. I receive this region, which is actually the area from which people can quickly reach, by some means of transportation (e.g. car), the facility we are running a campaign for. Since Google Ads has dropped support for targeting a polygonal area, we can really only target a group of circles.
So I thought about approximating the polygon with circles (which would be, in my understanding, "incircles", or inscribed circles, of this polygon).
As a quick, imperfect example: the algorithm would cover as much of the polygon (the green area) as possible, possibly with a parameter to control the maximum number of circles used to do so.
I only found information about approximating a polygon that is already close to a circle with a single circle; I need multiple circles, though.
Is there any theoretical algorithm or implementation of it, or any package, or am I mistaken about this idea for solving my Google Ads geo-targeting challenge?
The only solution I found (which took me two days to fully implement) is to use turfjs to create a bounding box around my polygon, then create a grid of squares (turf.squareGrid), then turn it into a matrix filled with either 0 or information about each square. A recursive function then finds the biggest square in this matrix, then the next one, and so on, until every cell has been absorbed into an equal-sized or bigger square. Finally I draw circles around these bigger squares, with radius equal to sqrt(2) (~1.41) * (sideOfABigSquare / 2).
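For reference, a simplified sketch of the grid-and-circles idea with turf.js (it skips the merge-into-bigger-squares step and just circumscribes a circle around every grid cell whose center falls inside the polygon; cellSideKm is an assumed parameter):

```typescript
import * as turf from "@turf/turf";
import type { Feature, Polygon } from "geojson";

function coverPolygonWithCircles(
  polygon: Feature<Polygon>,
  cellSideKm: number
): Feature<Polygon>[] {
  // bounding box around the polygon, then a square grid over it
  const bbox = turf.bbox(polygon);
  const grid = turf.squareGrid(bbox, cellSideKm, { units: "kilometers" });

  const circles: Feature<Polygon>[] = [];
  for (const cell of grid.features) {
    // keep only cells whose center lies inside the target polygon
    const center = turf.centerOfMass(cell);
    if (!turf.booleanPointInPolygon(center, polygon)) continue;

    // the circumscribed circle of a square with side s has radius s * sqrt(2) / 2
    const radiusKm = (cellSideKm * Math.SQRT2) / 2;
    circles.push(
      turf.circle(center.geometry.coordinates, radiusKm, { units: "kilometers" })
    );
  }
  return circles;
}
```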
I am looking for an example using Apache Ignite where we have a bunch of geo points (for example, locations in a city) and we query for the points close to a certain point, within a given radius.
I have only found an example of a polygon search at:
https://dzone.com/articles/geospatial-queries-with-apachereg-ignite
Thank you, kindly
Luis Oscar
Ignite uses the Java Topology Suite (JTS) to support spatial queries. As far as I know, circular geometry is not supported by JTS.
But you can still approximate a circle with a polygon. If you need exactly the points that fall within a circle, you can query for the points that lie in a circumscribed regular polygon and then filter out the points whose distance is greater than the specified radius. The filtering can be performed in code outside the query.
As said above, you could use a bounding rectangle or square, i.e. the smallest rectangle containing your circle. Then loop through the results and use the isWithinDistance(...) method from your original point to filter out the points that are inside the rectangle but not inside your circle.
However, since Ignite uses JTS, which works on a planar representation, my tests suggest you can't make accurate searches with a spherical (or ellipsoidal) datum like WGS84, and many results will be wrong. Most likely the polygon search is not accurate and isWithinDistance won't filter correctly, unless you are searching for something less than about 10 meters from the original point.
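As a conceptual illustration of that two-step radius query (coarse polygon hit test, then an exact distance filter), here is a sketch using turf.js rather than Ignite/JTS; in Ignite the first step would be the spatial SQL query and the second step would run in application code:

```typescript
import * as turf from "@turf/turf";

function pointsWithinRadius(
  points: [number, number][], // [lng, lat] pairs
  center: [number, number],   // [lng, lat]
  radiusKm: number
): [number, number][] {
  // Coarse filter: a polygon that fully contains the circle. turf.circle puts its
  // vertices on the circle, so pad the radius slightly to make it circumscribing.
  const outerPolygon = turf.circle(center, radiusKm * 1.05, {
    steps: 16,
    units: "kilometers",
  });

  return points.filter((p) => {
    if (!turf.booleanPointInPolygon(turf.point(p), outerPolygon)) return false;
    // Exact filter: true (great-circle) distance from the center
    return (
      turf.distance(turf.point(center), turf.point(p), { units: "kilometers" }) <=
      radiusKm
    );
  });
}
```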
What are the ways to identify a particular object in a room, and the position of the user, for indoor navigation with AR? I understand that we can use beacons and markers to identify an object or the location of a user in a room.
Without using them, what are the alternatives for finding the user's location and identifying an object for an AR experience? I am exploring AR for indoor navigation with iOS devices (currently focusing on ARKit). If we use Core Location for user positioning, the accuracy is low. In a small shop, if we use Core Location or any map-related services, we will get user/product mis-positioning, leading to a poor experience for users. Are there other ways/solutions to solve this?
The obvious alternative way to detect objects visually in a scene would be to use the CoreML framework with ARKit. A basic app is already available on GitHub:
CoreML-in-ARKit
You can also obtain the worldPosition of those objects relative to a starting origin and plot the x, z coordinates (an indoor map) based on each labelled SCNNode's position. It's not going to be that accurate... but it's a basic object identification and positioning system.
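This is not ARKit code, but a language-neutral sketch (written here in TypeScript) of that bookkeeping: each detection carries a classifier label and a world position, and dropping the y component gives a top-down (x, z) floor map:

```typescript
interface Detection {
  label: string;                                  // e.g. the CoreML classifier output
  position: { x: number; y: number; z: number };  // world position relative to the session origin
}

// label -> list of (x, z) positions for a top-down indoor map
function toFloorMap(detections: Detection[]): Map<string, { x: number; z: number }[]> {
  const floorMap = new Map<string, { x: number; z: number }[]>();
  for (const d of detections) {
    const entry = floorMap.get(d.label) ?? [];
    entry.push({ x: d.position.x, z: d.position.z });
    floorMap.set(d.label, entry);
  }
  return floorMap;
}
```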
Edit:
One limitation of using an out-of-the-box CoreML image classifier like Inceptionv3.mlmodel is that it only detects the dominant generic object in the scene, from a set of generic categories such as trees, animals, food, vehicles, people, and more.
You mention doing object recognition (image classification) inside a retail shop. This will need a custom image classifier that can, for example, discriminate between different iPhone models (iPhone 7, iPhone 8, or iPhone X) rather than merely determining that it's a smartphone.
To create your own object recognizer (image classifier) for ARKit, follow this tutorial written by Hunter Ward:
https://medium.com/@hunter.ley.ward/create-your-own-object-recognizer-ml-on-ios-7f8c09b461a1
Code is available on GitHub:
https://github.com/hanleyweng/Gesture-Recognition-101-CoreML-ARKit
Note: if you need to create a custom classifier for hundreds of items in a retail shop, Ward recommends around 60 images per class, which for 100 classes would total around 60 x 100 = 6,000 images. To generate the Core ML model, Ward uses a Microsoft Cognitive Services offering called "Custom Vision", which currently has a limit of 1,000 images. So if you need more than 1,000 images, you will have to find another way to create the model.
I'm trying to write a C++ program using the OpenCV library that will reconstruct 3D points from the corresponding 2D markers placed on a human model.
But I have a question: how does the commercial mocap (motion capture) industry figure out which markers belong to which bone structure?
What I mean by my last question is: let's suppose there are three markers placed on the left upper arm. What method do they use to associate these three markers with the left upper arm from frame to frame?
Because a marker could just as well belong to the right upper arm, or to any other bone such as the chest, femur, etc.
So what process do they implement to differentiate between markers and assign the right marker to the proper bone structure?
Do they use optical flow or SIFT to track markers, with the markers labelled with the proper bones in frame 1? But even if the mocap industry uses these methods, aren't they very time consuming? I saw a video on YouTube where they associate and reconstruct markers in real time.
Could you kindly tell me what procedure the commercial mocap industry follows to correspond points to individual parts of the skeleton structure?
After all, you need to do this because you have to write the xRot, yRot, and zRot (rotations about the x, y, and z axes) of the bones into a .bvh file so that you can view the 2D motion in 3D.
So what's the secret?
For motion capture, or tracking objects with markers in general, the way to go is to track the markers themselves between two frames and to track the distances between the markers. A combination of this information is used to determine whether a marker is the same as one that was close by in the previous frame.
These systems also often use multiple cameras and calibration objects on which the marker positions are known, so the correlation between the cameras can be determined. The detection algorithms in these commercial mocap solutions are highly advanced.
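A very simplified sketch of that frame-to-frame association idea (not what commercial systems actually ship): match each detection in the current frame to the closest labelled marker from the previous frame, provided it hasn't moved further than a plausibility threshold:

```typescript
interface Marker { label: string; x: number; y: number; z: number; }

function associateMarkers(
  previous: Marker[],                              // labelled markers from frame t-1
  current: { x: number; y: number; z: number }[],  // unlabelled detections in frame t
  maxJump: number                                  // max distance a marker may move in one frame
): Marker[] {
  const labelled: Marker[] = [];
  const used = new Set<number>();

  for (const p of current) {
    let best = -1;
    let bestDist = maxJump;
    previous.forEach((q, i) => {
      if (used.has(i)) return;
      const d = Math.hypot(p.x - q.x, p.y - q.y, p.z - q.z);
      if (d < bestDist) { bestDist = d; best = i; }
    });
    if (best >= 0) {
      used.add(best);
      labelled.push({ label: previous[best].label, ...p });
    }
    // unmatched detections would need re-identification, e.g. from the skeleton model
  }
  return labelled;
}
```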