Google Fusion Tables Polygons to Points

I have a Google Fusion Table that contains some very small polygons spread over a very large area. I'd like to create an event that switches from polygons to points when the user zooms to a certain level. Currently, the points are only generated at the outermost zoom level (the entire world). In this example the polygons turn to points when you zoom out by just one level, and I'd like to do something similar. Any advice would be greatly appreciated.

Rendering polygons as points is not a selectable feature; it happens automatically when there are too many features (or when a feature cannot be rendered properly).
What you can do: create another geometry column where you store the desired points (e.g. the center of each polygon); you'll then be able to choose which column is used to render the geometry.
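If you need to generate those center points yourself before uploading, a simple vertex average is usually good enough for placing a marker. Below is a minimal Swift sketch under that assumption; the LatLng type and the KML output line are illustrative, not part of any Fusion Tables API.

```swift
import Foundation

// A polygon vertex as latitude/longitude (illustrative type).
struct LatLng {
    let lat: Double
    let lng: Double
}

// Simple vertex-average "center" of a polygon ring. Good enough for
// placing a marker; not a true area-weighted centroid.
func centerPoint(of ring: [LatLng]) -> LatLng? {
    guard !ring.isEmpty else { return nil }
    let sumLat = ring.reduce(0.0) { $0 + $1.lat }
    let sumLng = ring.reduce(0.0) { $0 + $1.lng }
    return LatLng(lat: sumLat / Double(ring.count),
                  lng: sumLng / Double(ring.count))
}

// Usage: compute one point per row, then store it in the extra geometry
// column so the map can render points instead of polygons.
let parcel = [LatLng(lat: 40.0, lng: -105.0),
              LatLng(lat: 40.0, lng: -104.9),
              LatLng(lat: 40.1, lng: -104.9)]
if let center = centerPoint(of: parcel) {
    print("<Point><coordinates>\(center.lng),\(center.lat)</coordinates></Point>")
}
```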

Related

Getting the current visible entities in RealityKit

Currently, RealityKit doesn't have any method that provides the currently visible entities. In SceneKit we do have a method for that particular functionality—nodesInsideFrustum(of:).
Our internal solution is to create a big fake bounding box in front of the camera. We then check for intersections between that "frustum" bounding box and each entity's bounding box. That is, of course, a bit cumbersome and inaccurate, so I wonder if someone has a better solution they're willing to share.
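For reference, a trimmed-down sketch of that workaround might look like the following: build a world-space box in front of the camera and run a manual AABB overlap test against each entity's visual bounds. The box dimensions and the cameraAnchor name are assumptions, not RealityKit requirements.

```swift
import RealityKit

// Manual axis-aligned box overlap test, so we don't rely on any
// particular BoundingBox convenience method.
func overlaps(_ a: BoundingBox, _ b: BoundingBox) -> Bool {
    a.min.x <= b.max.x && a.max.x >= b.min.x &&
    a.min.y <= b.max.y && a.max.y >= b.min.y &&
    a.min.z <= b.max.z && a.max.z >= b.min.z
}

// cameraAnchor is assumed to be an AnchorEntity(.camera) already in the scene.
func roughlyVisible(_ entities: [Entity], cameraAnchor: AnchorEntity) -> [Entity] {
    // World-space camera position and forward vector (RealityKit cameras
    // look down their local -Z axis).
    let cam = cameraAnchor.transformMatrix(relativeTo: nil)
    let position = SIMD3<Float>(cam.columns.3.x, cam.columns.3.y, cam.columns.3.z)
    let forward = -SIMD3<Float>(cam.columns.2.x, cam.columns.2.y, cam.columns.2.z)

    // An arbitrary 20 m axis-aligned box centered 10 m in front of the
    // camera, standing in for the real view frustum.
    let center = position + forward * 10
    let half = SIMD3<Float>(repeating: 10)
    let frustumBox = BoundingBox(min: center - half, max: center + half)

    // Keep entities whose world-space visual bounds overlap the box.
    return entities.filter { overlaps($0.visualBounds(relativeTo: nil), frustumBox) }
}
```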
You could combine two ARView methods:
ARView.project(position) to get the 2D point in screen space
ARView.bounds.contains(point) to know if it's visible on screen
But that alone isn't enough; you also have to check whether the object is behind you:
Entity.position(relativeTo: cameraAnchor) (with cameraAnchor being an AnchorEntity(.camera)) gives you the local position
the sign of localPosition.z tells you whether it's in front of or behind the camera
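Putting those pieces together, here is a minimal sketch (assuming an arView and a cameraAnchor = AnchorEntity(.camera) that has been added to the scene; note it only tests the entity's origin, not its full bounds):

```swift
import RealityKit
import UIKit

// Returns true if the entity is in front of the camera and its origin
// projects onto the visible screen area.
func isVisible(_ entity: Entity, in arView: ARView, cameraAnchor: AnchorEntity) -> Bool {
    // 1. Reject anything behind the camera: in camera-local space,
    //    RealityKit cameras look down -Z, so positive z means "behind".
    let localPosition = entity.position(relativeTo: cameraAnchor)
    guard localPosition.z < 0 else { return false }

    // 2. Project the entity's world position into screen space and test
    //    the resulting point against the view's bounds.
    guard let screenPoint = arView.project(entity.position(relativeTo: nil)) else {
        return false
    }
    return arView.bounds.contains(screenPoint)
}
```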

Isometric Depth Sorting With SpriteKit

I am making a relatively simple isometric map using SpriteKit. I've tried both using the editor and creating the map through code, and each time there seems to be some "weighting" between the various tiles, even though they should overlap gracefully given that I'm just setting the styling of a tile.
Here is an example using the tiles from https://kenney.nl. The green is just a standard grass patch, and the road is exactly the same size as it.
When I create this map in the Xcode UI, or when I iterate through in code and paint the tiles, this continues to occur.
However, if I flip the tiles around and paint everything with roads, with grass in the middle, it then seems to sort whichever tile there are "more of", as in this example:
If I make more of one tile group than another, it seems to overpower the other.
So my question is: how can I keep them from exhibiting this behavior? I've tried putting different tile maps together, nesting them inside each other, etc., but at the end of the day I can't get different tiles to exist on the same "plane". I've tried with code, the UI, etc. I'd like to use SKTileMapNode if possible, so I can use its downstream features instead of doing all of the math myself, as in the approach in this article (http://bigspritegames.com/isometric-tile-based-game-part-1/).

What are the ways to create a custom shape touch detection?

Hi, I'm making a baseball app, and I want users to input the strike zone on a grid like this:
How can I build the visual part so that I won't have too much trouble implementing a tap gesture recognizer? It would also be great if I could make this resizable so it looks good on many different devices.
With the touch gesture, I need to recognize the position in two ways:
In which section the touch was detected.
What are the approximate coordinates inside this section.
This data will be saved in the cloud and can later be used to show the dot on this grid on other devices.
Is there a way to detect touches in non-rectangular shapes? Maybe with Bezier paths?
Do you have a suggestion for how to draw this on screen without using the whole grid as one image? I'd rather divide it into an outer grid and an inner grid somehow, and then create all the pieces in each of these grids: 8 pieces in the outer grid and 9 in the inner grid.
You can start with UIBezierPath and its containsPoint: method (contains(_:) in Swift). Note that this method "does not take into account the line width used to stroke the path"; the documentation says this of CGPath, and it is also true for UIBezierPath.
Refer further to this article.
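As a rough sketch of that idea (the GridSection type, the view class, and the normalization scheme are all assumptions, not a known-good implementation), you can keep one UIBezierPath per zone and hit-test taps with contains(_:):

```swift
import UIKit

// One tappable zone of the strike-zone grid (illustrative type).
struct GridSection {
    let id: Int
    let path: UIBezierPath   // built in the view's coordinate space
}

final class StrikeZoneView: UIView {
    var sections: [GridSection] = []   // rebuild in layoutSubviews so it resizes

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }

        // Find which section was tapped. contains(_:) tests the fill area
        // only; it ignores the stroke width, as noted above.
        guard let section = sections.first(where: { $0.path.contains(point) }) else { return }

        // Approximate coordinates inside the section, normalized to 0...1
        // so the same value can be redrawn at any size on another device.
        let box = section.path.bounds
        let normalized = CGPoint(x: (point.x - box.minX) / box.width,
                                 y: (point.y - box.minY) / box.height)
        print("Section \(section.id), position \(normalized)")
    }
}
```

Storing the section id plus the normalized point gives you exactly the two pieces of data you listed, in a device-independent form.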

Advice for library with GeoSpatial Mapping that allows users to place moving objects on a 2D map

I'm looking for a library/framework/toolkit that will allow me to render a 2D map from real GeoSpatial data and draw objects on the 2D map.
Requirements:
Map tiling (when I zoom into the map, I want a more detailed image)
Pan (ability to use the mouse to move around the map)
Read various geospatial imagery (satellite, street, etc.)
Ability to draw objects onto the map (based on lat/longs) and have them move. For example, I want to be able to put an image of a bird on the map and have it move and rotate correctly.
Primitive shapes. It would be nice if it had built in ability to draw lines, circles, etc.
Complex drawing. For example, I want to draw a compass and have it show the current heading of the bird.
Mouse input. I want to be able to right-click on the map and have a context menu appear. I want to be able to click and hold a shape I've drawn on the map and drag it easily.
What I have looked at:
OpenSceneGraph with osgEarth. It's great and fulfills my requirements, but it's really slow, and I had to do a lot of weird things to get things to work (especially dragging objects on the map).
Cesium: looks promising, but somewhat slow, and I need it to work as a desktop application. I've seen online that some have managed to use Cesium inside Qt's WebKit, but I'm not sure I would want to take that risk.
EDIT:
I really want to stay away from a web-based framework if possible.
http://imgur.com/52DaJtQ
Here is a primitive picture of what I want to achieve. The aircraft icon should move, and the degree circle should move along with it. I want to be able to drag the green waypoints and have the lines redraw as I move a waypoint. The red sensor footprint should adjust to what the aircraft can see.
Google Maps, OpenStreetMap, Bing Maps.
I use OpenSceneGraph/osgEarth extensively and am not dissatisfied with its performance.
What kind of weird things did you need to do?
If you want, you can contact me privately to troubleshoot your situation. My website is AlphaPixel.com, and there's a contact form there.

How can I implement the custom drawing search tool used in the Realtor iPad app?

The Realtor iPad app has done a very good job of implementing a custom drawing tool on top of MapKit that they use to query an area for homes. I am familiar with MapKit and its associated classes, but I don't know how I could do some custom drawing with my finger and have it translate into a geospatial query. How can this be done?
I'm not sure how far along you've made it with this, but your basic algorithm should look like this:
Draw a polygon on top of your map, then translate the coordinates of that polygon to map coordinates. To do that, you would probably need to listen for gestures on a view other than the MKMapView instance. With my limited knowledge of MapKit's touch event handling, you might have to overlay a separate transparent view on the map while the user is drawing, so touch events won't go through to the map (if that makes sense): a finger is normally used to pinch, zoom, and pan, and you won't want that behavior while drawing. In that view, you'll draw the shape tracing the user's finger, then translate the points drawn into map points.
The docs indicate that you can translate screen points to map points using the convertPoint:toCoordinateFromView: method on MKMapView.
Check this link for information on that: Trouble converting MapKit user coordinates to screen coordinates
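A compact sketch of that overlay idea (the class and property names are mine, not from the Realtor app): a transparent view captures the drag while drawing mode is on, and each touch point is converted to a geographic coordinate.

```swift
import UIKit
import MapKit

// Transparent view laid over the MKMapView while drawing mode is active,
// so touches don't fall through and pan/zoom the map.
final class DrawOverlayView: UIView {
    weak var mapView: MKMapView?
    private(set) var coordinates: [CLLocationCoordinate2D] = []

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, let mapView = mapView else { return }
        let point = touch.location(in: self)
        // Screen point -> geographic coordinate, via the MKMapView method
        // mentioned above (convertPoint:toCoordinateFromView:).
        coordinates.append(mapView.convert(point, toCoordinateFrom: self))
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Hand `coordinates` off to whatever builds the polygon and the query.
    }
}
```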
This post provides a link that might help you with drawing the polygon:
To draw polygon on google map with MapKit framework
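Once you have the captured coordinates, MapKit's own overlay types can display the polygon; a minimal sketch:

```swift
import MapKit

// Build the overlay from the captured coordinates and add it to the map.
func showPolygon(_ coordinates: [CLLocationCoordinate2D], on mapView: MKMapView) {
    var coords = coordinates
    let polygon = MKPolygon(coordinates: &coords, count: coords.count)
    mapView.addOverlay(polygon)
}

// In your MKMapViewDelegate conformance: supply a renderer for the polygon.
func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
    guard let polygon = overlay as? MKPolygon else { return MKOverlayRenderer(overlay: overlay) }
    let renderer = MKPolygonRenderer(polygon: polygon)
    renderer.fillColor = UIColor.systemBlue.withAlphaComponent(0.2)
    renderer.strokeColor = .systemBlue
    renderer.lineWidth = 2
    return renderer
}
```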
After you've drawn your polygon, you'll want to "spatially" query your data. You could do that in several ways: locally on the device, or through a web service. If your data is local to the device, you'll have to do the cartographic math there. You'll also need to ensure that your point data (the X,Y's) is in the same projection and coordinate space as your polygon's information. Polygon intersection math is relatively straightforward to do when your projections and coordinate systems line up.
Here's a link that can help you with the math.
https://math.stackexchange.com/questions/237/how-do-you-determine-if-a-point-sits-inside-a-polygon
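The standard even-odd ray-casting test from that link ports directly; here is a minimal sketch working in plain x/y values (so, as noted above, make sure your points and polygon share one projection and coordinate space first):

```swift
import CoreGraphics

// Even-odd ray casting: count how many polygon edges a horizontal ray
// from the test point crosses; an odd count means the point is inside.
func contains(polygon: [CGPoint], point: CGPoint) -> Bool {
    var inside = false
    var j = polygon.count - 1
    for i in 0..<polygon.count {
        let a = polygon[i], b = polygon[j]
        if (a.y > point.y) != (b.y > point.y),
           point.x < (b.x - a.x) * (point.y - a.y) / (b.y - a.y) + a.x {
            inside.toggle()
        }
        j = i
    }
    return inside
}

// Usage: filter records to those whose projected x/y fall inside the polygon.
let drawn = [CGPoint(x: 0, y: 0), CGPoint(x: 10, y: 0),
             CGPoint(x: 10, y: 10), CGPoint(x: 0, y: 10)]
print(contains(polygon: drawn, point: CGPoint(x: 5, y: 5)))  // true
```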
Alternatively, you could set up a web service that takes your polygon data, performs the same cartographic math on a server, and returns the results to the device. Either way, the same math needs to be performed: you take the polygon data and determine which records in your data intersect with that polygon.
This is pretty high-level, I know, but it should be all you need to do.
Another consideration is whether your data is spatially enabled, with SpatiaLite compiled for SQLite on your device or SQL Server Spatial on your server. You should then be able to query the data using that polygon. You would have to format the query properly, though.
Lastly, I would encourage you to look into the ESRI SDK for iOS. ESRI provides drawing and sketching tools out of the box. It's not too difficult to use, but one downside is that you would have to learn a new API:
http://resources.arcgis.com/en/communities/runtime-ios-sdk/
