I am trying to map some habitat sites in Devon. I created coordinate points while I was in the field (e.g. 50.530384, -4.230573). When I map these in QGIS, using the OS standard map, the points are either completely off the map or about 10 miles north of where they are meant to be.
When I look at the coordinates QGIS shows along the bottom, they are more like 6539061, -470779, which is quite different from my coordinates. However, using those coordinates instead still doesn't fix the issue.
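For what it's worth, those status-bar numbers look like the same point expressed in metres in Web Mercator (EPSG:3857) rather than in degrees; a minimal sketch of that forward projection (spherical formula, so the values are approximate):

    import Foundation

    // Forward spherical Web Mercator (EPSG:3857): degrees in, metres out.
    func webMercator(lat: Double, lon: Double) -> (x: Double, y: Double) {
        let r = 6378137.0  // WGS 84 semi-major axis
        let x = r * lon * .pi / 180
        let y = r * log(tan(.pi / 4 + lat * .pi / 360))
        return (x, y)
    }

    // webMercator(lat: 50.530384, lon: -4.230573) ≈ (-470946, 6538614),
    // the same magnitudes as the status-bar readout.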
I have checked the CRS multiple times and tried changing it. It defaults to WGS 84 / Pseudo-Mercator (EPSG:3857); normally I would set it to WGS 84 (EPSG:4326), but when I set it to this, my map disappeared entirely.
Any other fixes? I think my coordinates are right and it is an issue with the map, but I simply cannot figure it out.
I'm working on an iOS AR application using ARKit + Core Location. The points displayed on the map using coordinates move from place to place as I move, but I need them to stay in the same place.
Here is an example of what I mean:
https://drive.google.com/file/d/1DQkTJFc9aChtGrgPJSziZVMgJYXyH9Da/view?usp=sharing
Could you help me handle this issue? How can I keep the points fixed in place using coordinates? Any ideas?
Thanks.
It looks like you are attaching objects to planes. As you move, ARKit extends the existing planes, so if you put a point at, say, the center of a plane, that center keeps being updated. You would need to recalculate the point's coordinates and re-place the objects correctly.
The alternative is not to add objects to planes (or position them relative to planes). If you need to "put" an object on a plane, the best way is to wait until the plane is detected reliably (its orientation will no longer change significantly as you move), select the point on the plane where you want your object, convert that point's coordinates to world coordinates (so that if the plane later changes size, your coordinate is unaffected), and finally add the object to the root node (or to another node that is not related to the plane), as sketched below.
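A minimal sketch of that flow with ARKit + SceneKit — sceneView is assumed to be your ARSCNView, and the sphere marker is just a placeholder:

    import ARKit
    import SceneKit

    // Called with a tapped screen point that lies over a detected plane.
    func placeMarker(at screenPoint: CGPoint, in sceneView: ARSCNView) {
        // Hit-test against detected planes and take the nearest result.
        guard let hit = sceneView
            .hitTest(screenPoint, types: .existingPlaneUsingExtent)
            .first
        else { return }

        let marker = SCNNode(geometry: SCNSphere(radius: 0.02)) // placeholder

        // hit.worldTransform is already in world coordinates, so the node
        // will not drift when ARKit later extends or re-centers the plane.
        marker.simdTransform = hit.worldTransform

        // Attach to the root node, not to the plane's node.
        sceneView.scene.rootNode.addChildNode(marker)
    }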
In GeoGebra, you can easily construct scenes with the GUI and the tools available in the Graphics view. I have two functions and have created some objects around them using those tools (their intersection point, a circle tangent to both, etc.). The whole construction depends on 5 parameters, which I defined as sliders for testing.
Now I want to know the coordinates of the intersection point. It is defined as Intersect[l, h], which doesn't help me. I can also read off its numeric coordinates, (0.8, 3.98), but I want to calculate them symbolically in terms of the parameters; I'd expect something like (3a, 7 + b - 2a). I know GeoGebra can do this, because it must do it internally to draw the whole image, but I don't know how to access this information.
If you want to get the current position of a point P, you can use the x and y commands, e.g. x(P) and y(P). These update whenever the position of P changes, so you don't have to recalculate where the point should be by hand.
I have a Google Fusion Table that contains some very small polygons spread over a very large area. I'd like to handle a zoom event that switches from polygons to points when the user zooms out past a certain level. Currently, the points are only rendered at the outermost zoom level (the entire world). In this example the polygons turn into points when you zoom out by just one level, and I'd like to do something similar. Any advice would be greatly appreciated.
Rendering polygons as points is not a selectable feature; it usually happens when there are too many features (or when a feature cannot be rendered properly).
What you can do: create a second geometry column in which you store the desired points (e.g. the center of each polygon); you can then choose which column is used to render the geometry.
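One cheap way to fill that column is the bounding-box midpoint of each polygon's outer ring. The arithmetic is the same in any language; a sketch with illustrative types:

    struct LatLng { var lat: Double; var lng: Double }

    // Bounding-box midpoint of a polygon ring: a cheap representative
    // point for a marker (note it can fall outside concave polygons).
    func boundingBoxCenter(of ring: [LatLng]) -> LatLng? {
        guard let first = ring.first else { return nil }
        var minLat = first.lat, maxLat = first.lat
        var minLng = first.lng, maxLng = first.lng
        for p in ring.dropFirst() {
            minLat = min(minLat, p.lat); maxLat = max(maxLat, p.lat)
            minLng = min(minLng, p.lng); maxLng = max(maxLng, p.lng)
        }
        return LatLng(lat: (minLat + maxLat) / 2,
                      lng: (minLng + maxLng) / 2)
    }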
I'm displaying several users' locations on a map simultaneously, as circles of different colors.
I can do this using annotations: when a user's location updates, I use UIView's animateWithDuration: to move the annotation to the new location.
However, there is a requirement that the size of each circle reflect the accuracy of the location, i.e. a very accurate fix is drawn as a 10-meter circle, a rough fix as a 500-meter circle, etc.
There are two problems with using annotations for this. The first is how to transform meters into a CGRect of the correct size to draw on the map. The second is that the annotations need to be resized whenever the user zooms the map.
So I was looking at using an overlay instead, since overlays already have a radius and automatic resizing during zooming built in, which handles both problems.
However, it looks like overlays are meant to be static, and their coordinate property is read-only.
Is there some way I can make the overlays move as the user's location moves (other than completely removing and re-adding them)?
[This is for iOS 7 only]
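For completeness, this is the remove-and-re-add workaround I'm trying to avoid — note MKCircle takes its radius in metres, which at least solves the meters-to-screen problem (UserDot is just an illustrative wrapper):

    import MapKit

    final class UserDot {
        private(set) var circle: MKCircle

        init(center: CLLocationCoordinate2D, accuracy: CLLocationDistance) {
            circle = MKCircle(center: center, radius: accuracy) // metres
        }

        // MKCircle's coordinate is read-only, so the overlay is swapped
        // out for a fresh one on every location update.
        func update(center: CLLocationCoordinate2D,
                    accuracy: CLLocationDistance,
                    on mapView: MKMapView) {
            mapView.removeOverlay(circle)
            circle = MKCircle(center: center, radius: accuracy)
            mapView.addOverlay(circle)
        }
    }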
Can anyone please help me calculate the center of rotation and position of an X3D object?
I've noticed that the aopt tool by InstantReality adds something like:
<Viewpoint DEF='AOPT_CAM' centerOfRotation='x y z' position='x y z'/>
The result is nice: the object is properly zoomed and centered, and the center of rotation is somehow perfectly "inside" the object (its x, y, z center).
I must avoid using aopt. How can I obtain the same result (i.e. via JavaScript)? Perhaps by looping through the XML Coordinate points and doing some calculations?
I'm using X3DOM to render the object.
Many thanks.
"AOPT_CAM" is the name of the Viewpoint. The centerOfRotation and position values are automatically computed by the Browser (InstantReality in your case).
In order to compute these values by yourself you need to know your object size (BoundingBox) and do some math to compute where the Viewpoint should be located ('position' attribute) in your local coordinate system. You also need to know the object displacement in the coordinate system. If not specified this should be (0,0,0)
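A sketch of that math, assuming the default X3D field of view (pi/4) and a camera looking down -Z; the same loop ports directly to JavaScript over the Coordinate points:

    import Foundation

    struct Vec3 { var x, y, z: Double }

    // Bounding box of the mesh points -> centerOfRotation and a camera
    // position that frames the whole object (a rough framing heuristic).
    func frameObject(points: [Vec3],
                     fieldOfView: Double = .pi / 4)
        -> (centerOfRotation: Vec3, position: Vec3)? {
        guard let first = points.first else { return nil }
        var lo = first, hi = first
        for p in points {
            lo = Vec3(x: min(lo.x, p.x), y: min(lo.y, p.y), z: min(lo.z, p.z))
            hi = Vec3(x: max(hi.x, p.x), y: max(hi.y, p.y), z: max(hi.z, p.z))
        }
        // Bounding-box center: a natural centerOfRotation.
        let center = Vec3(x: (lo.x + hi.x) / 2,
                          y: (lo.y + hi.y) / 2,
                          z: (lo.z + hi.z) / 2)

        // Distance at which the largest extent fits the view, plus half
        // the depth so the camera sits outside the box.
        let extent = max(hi.x - lo.x, hi.y - lo.y)
        let dist = (extent / 2) / tan(fieldOfView / 2) + (hi.z - lo.z) / 2

        // The default X3D Viewpoint orientation looks down -Z, so place
        // the camera on the +Z side of the center.
        let position = Vec3(x: center.x, y: center.y, z: center.z + dist)
        return (center, position)
    }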