What is the snap interaction in OpenLayers 3? - openlayers-3

Technically, what happens with the snap interaction in OpenLayers?
Reference: http://openlayers.org/en/v3.8.2/examples/snap.html

From the API documentation of the Snap interaction:
Handles snapping of vector features while modifying or drawing them.
The features can come from an ol.source.Vector or an ol.Collection. Any
interaction object that allows the user to interact with the features
using the mouse can benefit from the snapping, as long as it is added
before.
The snap interaction modifies map browser event coordinate and pixel
properties to force the snap to occur in any interaction that uses them.
See: http://openlayers.org/en/v3.8.2/examples/snap.html
In other words, the Snap interaction listens to the browser events that occur on the map and modifies the event's coordinate and pixel so that they "snap" to the closest vertex or segment of a nearby feature. Any other interactions that were added to the map before the snap interaction will then use the updated coordinates, because the "top-most, i.e. last" interactions are handled first.
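For reference, a minimal sketch of that ordering, loosely based on the linked v3.8.2 example (the variable names and the 'map' target id are illustrative assumptions, not the example's exact code):

var source = new ol.source.Vector();
var vector = new ol.layer.Vector({source: source});

var map = new ol.Map({
  target: 'map',
  layers: [new ol.layer.Tile({source: new ol.source.OSM()}), vector],
  view: new ol.View({center: [0, 0], zoom: 2})
});

// The drawing interaction is added first...
var draw = new ol.interaction.Draw({source: source, type: 'Polygon'});
map.addInteraction(draw);

// ...and the snap interaction is added last, so it is handled first and can
// adjust the event coordinate before Draw sees it.
var snap = new ol.interaction.Snap({source: source});
map.addInteraction(snap);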

In OpenLayers, the snap interaction helps you land on a vertex or edge of a polygon more precisely. When the mouse comes near a vertex or edge, snapping moves your pointer onto it; without snapping, you have to position it exactly yourself.
You can try it here. Comment out the last line, map.addInteraction(snap);, then draw a polygon and move the pointer around it, and you will see the difference.
"Snap" is also a common term in mapping systems in general; you can read more about it here.

Related

Konva object snapping with transformer jitters

I'm trying to make an editor using Konva.js.
In the editor I show a smaller draw area which becomes the final image. For this I'm using a group with a clipFunc. This gives a better UX since the transform controls of the transformer can be used "outside" of the canvas (visible part for the user) and allow the user to zoom and drag the draw area around (imagine a frame in Figma).
I want to implement object snapping based on this: https://konvajs.org/docs/sandbox/Objects_Snapping.html (just edges and center for now). However, I want it to work when multiple elements are selected in my Transformer.
The strategy I'm using is basically calculating the snapping based on the .back element created by the transformer. When I know how much to snap I apply it to every node within the transformer.
However, when doing this, the items start jittering as the cursor moves close to the snapping lines.
My previous implementation had the draw area fill the entire Stage, which I did manage to get working with the same strategy (no jitter).
I don't really know what the issue is and I hope some of you guys can point me in the right direction.
I created a sandbox to illustrate my issue: https://codesandbox.io/s/konva-transformer-snapping-1vwjc2?file=/src/index.ts
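For context, here is a rough sketch (not the asker's code) of the strategy described above: compute one snap offset for the whole selection and apply it to every node in the transformer. The asker derives the selection box from the transformer's .back shape; the sketch below derives an equivalent box from the nodes' client rects. The guide positions, threshold, and function names are illustrative assumptions:

const SNAP_THRESHOLD = 5; // screen pixels, illustrative

function snapSelection(tr, guides) {
  // Union bounding box of all nodes in the transformer, in stage coordinates.
  const nodes = tr.nodes();
  if (!nodes.length) return;
  const boxes = nodes.map((n) => n.getClientRect({ relativeTo: n.getStage() }));
  const box = boxes.reduce(
    (acc, b) => ({
      x: Math.min(acc.x, b.x),
      y: Math.min(acc.y, b.y),
      right: Math.max(acc.right, b.x + b.width),
      bottom: Math.max(acc.bottom, b.y + b.height),
    }),
    { x: Infinity, y: Infinity, right: -Infinity, bottom: -Infinity }
  );

  // Smallest horizontal correction to any vertical guide (edges and center).
  let dx = 0;
  for (const gx of guides.vertical) {
    for (const edge of [box.x, (box.x + box.right) / 2, box.right]) {
      const diff = gx - edge;
      if (Math.abs(diff) < SNAP_THRESHOLD && (dx === 0 || Math.abs(diff) < Math.abs(dx))) {
        dx = diff;
      }
    }
  }

  // Same for the vertical correction against horizontal guides.
  let dy = 0;
  for (const gy of guides.horizontal) {
    for (const edge of [box.y, (box.y + box.bottom) / 2, box.bottom]) {
      const diff = gy - edge;
      if (Math.abs(diff) < SNAP_THRESHOLD && (dy === 0 || Math.abs(diff) < Math.abs(dy))) {
        dy = diff;
      }
    }
  }

  // Apply one correction to every node so the selection moves as a unit.
  nodes.forEach((n) => n.move({ x: dx, y: dy }));
}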

Only allow snapping to the starting or ending points of a LineString in OpenLayers 3

I am working on a web application with OpenLayers 3 and have a question about snapping. I only want to snap to the starting point or ending point of a LineString (not to locations between two points), and only allow modification on existing nodes (disable creating a node between two nodes). Glad to see any help with it.
Create a point vector layer and populate it with the start and end points of the lines you want to be used for snapping.
Then initialise your snap interaction and pass it the source of this vector layer, as sketched below.
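A minimal sketch of that idea, assuming an existing lineSource holding the LineString features (the variable names are illustrative):

var snapSource = new ol.source.Vector();

lineSource.getFeatures().forEach(function (feature) {
  var coords = feature.getGeometry().getCoordinates();
  // Keep only the first and last vertex of each LineString.
  snapSource.addFeature(new ol.Feature(new ol.geom.Point(coords[0])));
  snapSource.addFeature(new ol.Feature(new ol.geom.Point(coords[coords.length - 1])));
});

// Snap against the endpoint source only; edge: false disables snapping to
// locations between vertices, so the pointer only sticks to the endpoints.
map.addInteraction(new ol.interaction.Snap({
  source: snapSource,
  edge: false
}));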

Hit-testing a UIGestureRecogniser in 3d space

I'm reasonably new to iOS's SceneKit and have come across a dilemma with regards to user-interaction in a 3d scene:
I have a set of SCNNode cubes in an SCNView, and would like to be able to pin-point where a user touches the mesh of a given cube, as a 3d coordinate (so as to later manipulate the scene according to touch vectors). At present, I've been using a UIGestureRecognizer in order to achieve basic hit-testing, but this seems to be limited to returning 2d-points.
This isn't a problem when hit-testing a whole node itself, as this can be achieved by passing the gesture recognizer's touch location to the SCNView's hitTest method. However, does anybody have any suggestions as to how to precisely locate where a touch landed on a node, in terms of 3d coordinates (i.e. an SCNVector3)?
Thanks!
You are on the right track with calling hitTest:options: on the SCNView. As you have probably seen, it returns an array of SCNHitTestResult objects.
The hit test result can tell you many things about the hit, one of them being what node was hit. What you are looking for is either the localCoordinates or the worldCoordinates.
The local coordinate is relative to the node that was hit. Since you are asking "how to precisely locate where a touch landed on a node" this is probably the one you are looking for.
The world coordinate is relative to the root node.

Advice for library with GeoSpatial Mapping that allows users to place moving objects on a 2D map

I'm looking for a library/framework/toolkit that will allow me to render a 2D map from real GeoSpatial data and draw objects on the 2D map.
Requirements:
Map Tiling (when I zoom into the map, i want a more detailed image)
Pan (ability to use the mouse to move around the map)
Read various Geospatial images (satellite, street, etc)
Ability to draw objects onto the map (based on lat/longs) and have them move. For example, I want to be able to put an image of a bird on the map and have it move and rotate correctly.
Primitive shapes. It would be nice if it had built in ability to draw lines, circles, etc.
Complex drawing. For example, I want to draw a compass and have it show the current heading of the bird.
Mouse input. I want to be able to right-click on the map and have a context menu appear. I want to be able to click and hold a shape I drew on the map and drag it easily.
What I have looked at:
OpenSceneGraph with osgEarth. It's great, and fulfills my reqs, but is really slow and I had to do a lot of weird things to get things to work (especially with dragging objects on the map).
Cesium: looks promising, but somewhat slow, and I need it to work as a desktop application. I've seen online that some have managed to use Cesium inside Qt's Webkit, but I'm not sure I would want to take that risk.
EDIT:
I really want to stay away from a web-based framework if possible.
http://imgur.com/52DaJtQ
Here is a primitive picture of what I want to achieve. The aircraft icon should move and the degree circle along with it. I want to be able to drag the green waypoints and have the lines redraw as I move a waypoint. The red sensor footprint should adjust to what the aircraft can see.
Google Maps, Open Street Map, Bing Maps.
I use OpenSceneGraph/osgEarth extensively and am not dissatisfied with its performance.
What kind of weird things did you need to do?
If you want, you can contact me privately to troubleshoot your situation. My website is AlphaPixel.com and there's a contact form there.

How can I implement the custom drawing search tool used in the Realtor iPad app?

The Realtor iPad app has done a very good job of implementing a custom drawing tool on top of MapKit that they use to query an area for homes. I am familiar with MapKit and its associated classes, but I am unaware of how I could do some custom drawing with my finger and have it translate to a geospatial query. How can I do it?
I'm not sure how far along you've made it with this, but your basic algorithm should look like this:
Draw a polygon on top of your map, then translate the coordinates of that polygon to "map" coordinates. In order to do that, you would probably need to listen for gestures on a view other than the MKMapView instance. With my limited knowledge of MapKit's touch event handling, you might have to overlay a separate transparent view on the map when you want to draw, so touch events won't go through to the map view (if that makes any sense). You normally use your finger to pinch, zoom, and pan, and you won't want that functionality while you're trying to draw. In that view, you'll draw the shape tracing the user's finger, then translate the points drawn into map points.
The docs indicate that you can translate screen points to map points using the convertPoint:toCoordinateFromView: method on MKMapView.
Check this link for information on that: Trouble converting MapKit user coordinates to screen coordinates
This post provides a link that might help you with drawing the polygon:
To draw polygon on google map with MapKit framework
After you've drawn your polygon you'll want to "spatially" query your data. You could do that in several ways; locally on the device or through a web service are two options. If your data is local to the device, you'll have to do the cartographic math on the device. You'll also need to ensure that your point data (the X, Y's) is in the same projection and coordinate space as your polygon's information. Polygon intersection math is relatively straightforward to do when your projections and coordinate systems line up.
Here's a link that can help you with the math.
https://math.stackexchange.com/questions/237/how-do-you-determine-if-a-point-sits-inside-a-polygon
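For the local case, the test linked above boils down to a point-in-polygon check. Here is a minimal ray-casting sketch; the math is language-agnostic and is shown in JavaScript only for brevity, assuming the point and the polygon vertices are in the same projection:

function pointInPolygon(point, polygon) {
  // polygon is an array of {x, y} vertices; point is {x, y}.
  var inside = false;
  for (var i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    var xi = polygon[i].x, yi = polygon[i].y;
    var xj = polygon[j].x, yj = polygon[j].y;
    // Does a ray cast to the right from the point cross edge (j, i)?
    var crosses = (yi > point.y) !== (yj > point.y) &&
      point.x < ((xj - xi) * (point.y - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}

Running this test for each record's coordinate against the drawn polygon gives you the records that fall inside it.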
Alternatively you could set up some web service that takes your polygon data and performs the same cartographic math on a server and returns the results to the device. Either way the same math needs to be performed. You'll take that polygon data and determine which records in your data intersect with that polygon.
This is pretty high-level, I know, but it should be all you need to do.
Another consideration is whether your data is spatially enabled, with SpatiaLite compiled for SQLite on your device or SQL Server Spatial on your server. You should be able to query the data using that polygon data; you would have to format the query properly, though.
Lastly, I would encourage you to look into the ESRI SDK for iOS. ESRI provides drawing and sketching tools out of the box. It's not too difficult to use, but one downside is that you would have to learn a new API:
http://resources.arcgis.com/en/communities/runtime-ios-sdk/
