GPS ground coverage - geolocation

Here is my idea to track my sprayer coverage on the farm with an Android app:
Use get??Location to provide GPS coordinates.
Use the coordinates to plug into a polyline with the Maps API v2.
Set the polyline width according to the boom width (the conversion will require a pixel-to-distance conversion at different zoom levels).
How would I display ground coverage with a polyline if the footage on the map changes with zoom level? Correct me if I'm wrong, but a polyline's width is defined in pixels. My idea would require the user to input the width of the sprayer in feet, and the program would then have to calculate the polyline width from the zoom/pixel ratio.
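The zoom/pixel ratio follows from the Web Mercator tile scheme the Maps API uses. A minimal sketch of that conversion, assuming the standard 256-pixel tile pyramid (the function names here are illustrative, not part of the Maps API):

    import math

    EARTH_CIRCUMFERENCE_M = 2 * math.pi * 6378137  # WGS84 equatorial circumference

    def meters_per_pixel(latitude_deg, zoom):
        # Ground resolution of a 256-px Web Mercator tile pyramid.
        return (EARTH_CIRCUMFERENCE_M * math.cos(math.radians(latitude_deg))
                / (256 * 2 ** zoom))

    def boom_width_pixels(boom_width_ft, latitude_deg, zoom):
        # Convert a sprayer boom width in feet to a line width in pixels.
        return boom_width_ft * 0.3048 / meters_per_pixel(latitude_deg, zoom)

At zoom 18 near latitude 40 the ground resolution is roughly 0.46 m/pixel, so a 60 ft boom would be about 40 pixels wide; the width has to be recomputed on every zoom change.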

You should not draw a polyline, because your spray path forms a closed polygon.
So you should draw a polygon with line width = 0 (or the minimum line width) and fill the polygon.
For such precision farming, better GPS devices with centimeter accuracy (using RTK) are usually used.
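A minimal sketch of that polygon approach, assuming the track has already been projected into a metric coordinate system (e.g. UTM) and ignoring self-intersections on sharp turns: offset each track point perpendicular to the direction of travel by half the boom width, then join the two offset lines into one closed ring that can be filled.

    import math

    def coverage_polygon(track_xy, boom_width_m):
        # track_xy: GPS track as (x, y) points in a metric projection (e.g. UTM).
        # Returns a closed ring approximating the sprayed strip.
        half = boom_width_m / 2.0
        left, right = [], []
        for (x0, y0), (x1, y1) in zip(track_xy, track_xy[1:]):
            heading = math.atan2(y1 - y0, x1 - x0)
            nx, ny = -math.sin(heading), math.cos(heading)  # unit normal, left of travel
            left.append((x0 + nx * half, y0 + ny * half))
            right.append((x0 - nx * half, y0 - ny * half))
        left.append((x1 + nx * half, y1 + ny * half))       # offset the final point too
        right.append((x1 - nx * half, y1 - ny * half))
        return left + right[::-1] + left[:1]                # closed outer ring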

Related

How can I convert GPS coordinate to pixel on the screen in OpenCV?

I'm writing an application in C++ which gets the camera pose using fiducial markers. It also takes a real-world lat/lon coordinate as input, and as output it streams a video with an X marker showing the location of that coordinate on the screen.
When I move my head, the X stays in the same place spatially (because I know how to move it on the screen based on the camera pose, or even hide it when I look away).
My only problem is converting the real-life coordinate to a coordinate on the screen.
I know my own GPS coordinate and the target GPS coordinate.
I also have the screen size (height / width).
How can I translate all of this to an x,y pixel on the screen in OpenCV?
In my view, your question isn't very clear. OpenCV is an image processing library; it doesn't solve this conversion for you. You need a solution built from your own algorithms, so here are some ideas and an experiment to explain a few things.
You can simulate showing your real-life position on screen in any programming language. Imagine you want to develop measurement software that measures a house plan image on screen by drawing lines along the edges of the walls, where some wall lengths are already known (from an image like the one below).
If you want to measure the wall of the WC at the bottom, you must know how many pixels correspond to how many feet. So first, draw a line from the start to the end of a known length and record its width in pixels. For example, if 12'4" corresponds to 9 pixels, you can then calculate the length of the bottom WC wall with a basic proportion.
I know this is not exactly what you need, but I hope this answer gives you some ideas.
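The question as asked (placing a GPS target on screen given a camera pose) is usually handled by converting the lat/lon offset to a local metric offset, rotating it into the camera frame, and applying a pinhole projection. A rough Python sketch under those assumptions; R_cam and K are presumed to come from your fiducial-marker pose and camera calibration, and the equirectangular approximation only holds over short distances:

    import numpy as np

    def gps_to_screen(own_lat, own_lon, tgt_lat, tgt_lon, R_cam, K):
        # R_cam: 3x3 world-to-camera rotation from the marker-based pose.
        # K: 3x3 camera intrinsics from calibration. Both are assumed inputs.
        earth_r = 6378137.0
        # lat/lon difference -> local east/north offset in meters
        # (equirectangular approximation, valid over short distances)
        d_north = np.radians(tgt_lat - own_lat) * earth_r
        d_east = np.radians(tgt_lon - own_lon) * earth_r * np.cos(np.radians(own_lat))
        p_world = np.array([d_east, 0.0, d_north])  # east, up, north; same altitude
        p_cam = R_cam @ p_world                     # rotate into the camera frame
        if p_cam[2] <= 0:
            return None                             # behind the camera: hide the X
        uv = K @ (p_cam / p_cam[2])                 # pinhole projection
        return uv[0], uv[1]

Anything returned outside the screen rectangle is off-camera, so the X should be hidden, which matches the behavior described in the question.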

Recreate the 3D outlines of a City street in iOS SceneKit with OSM XML data

What is the best strategy to recreate part of a street in iOS SceneKit using .osm XML data?
Please assume part of a street is offered in the OSM XML data and contains the necessary geopoints with latitude and longitude denoting the Nodes that describe the paths/footprints of 6 buildings (i.e. ground floor plans that line the side of a street).
Specifically, what's the best strategy to convert the latitude and longitude Nodes in order to locate these building footprints/polygons on the ground plane in a scene within SceneKit iOS (i.e. running through position 0,0,0)? Thank you.
Very roughly and briefly, based on my own experience with 3D map rendering:
1. Transform the XML data from lat/long to appropriate coordinates for a 2D map (that is, project it to a plane using a map projection, then apply a 2D affine transform to get it into screen pixel coordinates). Create a 2D map that's wider and taller than the actual screen, because of what's going to happen in step 2.
2. Using a 3D coordinate system with your map vertical (i.e., set all the Z coordinates to zero), rotate the map so that it reclines at an appropriate shallow angle, as if you're in an aeroplane looking down on it; the angle might be 30 degrees from horizontal. To rotate the map you'll need to create a 3D rotation matrix. The axis of rotation will be the X axis: that is, the horizontal line that is the bottom border of your 2D map. The rotation is exactly the same as what happens when you rotate your laptop screen away from you.
3. Supply the new 3D coordinates to your rendering system. I haven't used SceneKit, but I had a quick look at the documentation and you can use any coordinate system you like, so you will be able to use one that is convenient for the process I have just described: something that uses units the size of a screen pixel at the viewing plane, with Y going upwards, X going right, and Z going away from the viewer.
One final caveat: if you want to add extrusions giving a rough approximation of the 3D building shapes (such data is available in OSM for some areas), note that my scheme requires the tops of buildings, and indeed anything above ground level, to have negative Z coordinates.
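A minimal Python sketch of steps 1 and 2, using a spherical Web Mercator projection as one possible choice and omitting the affine scale/translate into screen pixels:

    import math

    def project(lat_deg, lon_deg):
        # Step 1: project lat/long to plane meters (spherical Web Mercator
        # here; any map projection would do).
        R = 6378137.0
        x = R * math.radians(lon_deg)
        y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
        return x, y

    def recline(x, y, angle_deg=30.0):
        # Step 2: rotate the vertical map (z = 0) about the X axis so it
        # reclines to angle_deg above horizontal, top edge tilting away.
        a = math.radians(90.0 - angle_deg)
        return x, y * math.cos(a), y * math.sin(a)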
Pretty simple. First, convert your CLLocationCoordinate2D to an MKMapPoint, which is essentially the same as a CGPoint. Second, scale the MKMapPoint down by some arbitrary number so it fits how you want it in your scene graph, let's say by 200. Since SceneKit's coordinate system is centered at (0,0,0), you'll need to make sure your location is correct. Then just create your SCNVector3s with the x/y of the MKMapPoint, and you will be locked to coordinates.
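A sketch of that scaling idea (plain Python standing in for the MKMapPoint x/y values; the origin point and the divisor of 200 are arbitrary choices):

    def map_point_to_scene(x, y, origin_x, origin_y, scale=200.0):
        # Recenter the projected map point on the scene origin, then scale it
        # down so the geometry fits comfortably in the scene graph.
        return (x - origin_x) / scale, (y - origin_y) / scale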

Coordinate length to pixel length mapping in ImageCanvas

I want to draw circles in an image canvas. I'm able to get pixel values from coordinate values by calling map.coordinateToPixel.
For radius, how can I map a coordinate distance to pixel length?
For instance, if my radius is 60 arc minutes, it goes from 50 degrees to 51 degrees. In a vector layer, the underlying framework manages the translation to pixels depending on the zoom level. However, for an ImageCanvas, I need to specify that myself. Is there a method to do that? I know I might have to dig into the code, but I was wondering if there's an inherent solution somebody already knows of.
An alternate option I've considered:
1. Get the coordinate at pixel (0,0).
2. Get the coordinate at (radiusLongitude, 0).
3. Take the difference in longitude between #2 and #1 and use that as my radius.
Maybe this example can help you: http://acanimal.github.io/thebookofopenlayers3/chapter03_04_imagecanvas.html. It draws a set of random pie charts (but without taking pixel ratio into account).
Note that the canvasFunction you use receives five parameters that can help you determine the pixel size: function(extent, resolution, pixelRatio, size, projection).
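The key relationship is that resolution is map units per pixel, so the conversion is a single division. A sketch (written in Python for brevity; inside a canvasFunction it would be JavaScript):

    def coord_length_to_pixels(coord_length, resolution, pixel_ratio=1.0):
        # resolution: map units per CSS pixel at the current zoom
        # (the second canvasFunction argument); pixel_ratio scales the
        # result to device pixels on high-DPI screens.
        return coord_length / resolution * pixel_ratio

For the 60-arc-minute example on a map whose units are degrees, coord_length would be 1.0; in a metric projection the angular radius would first have to be converted to projected units.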

RGeo Projected Buffer Polygon too small

I have a rails app that is using rgeo 0.3.19 with proj4 support connecting to a PostGIS 1.5 database with the rgeo-activerecord 0.4.5 gem.
My app has a model called Region which contains a geographic point, a radius, and a polygon shape. When a new region is about to be saved, it uses the region's geofactory's buffer function to create a polygon from the radius and the geographic point.
Here is the geofactory being used for the Region model:
GEOFACTORY = RGeo::Geographic.projected_factory(:buffer_resolution => 8, :projection_proj4 => '+proj=merc +a=6378137 +b=6378137 +lat_ts=0.0 +lon_0=0.0 +x_0=0.0 +y_0=0 +k=1.0 +units=m +nadgrids=#null +wktext +no_defs', :projection_srid => 3857)
The projection_srid I am using is 3857, the spherical Mercator projection used by Apple and Google Maps.
The problem is that the buffer being created is not the same size as the one I am drawing in either Apple Maps or Google Maps. For example, I can use the built-in MapKit function MKCircle:
[MKCircle circleWithCenterCoordinate:self.coordinate radius:50];
The circle will draw and overlay like this.
But if I take the coordinates that were created by the buffer function, which make up the polygon shape in the database, and plot them on Google Maps, I get this.
As you can see, the polygon that was created using the same projection system is smaller than it should be. This problem grows exponentially with the size of the defined radius. I have also tried the simple_mercator factory as defined in RGeo, which yielded the same results.
Hopefully somebody has some insight into why buffering a projected longitude/latitude point creates an incorrectly sized polygon.
What you're observing here is Mercator distortion. A distance of "50" in a mercator projection doesn't correspond to 50 meters on the real planet surface, unless you're at the Equator.
The circle drawn by your iOS map is correct: that's the 50 meter radius. What I suspect you did to create your second image was to project the point into a Mercator projection (according to the Proj4 you provided). Then you proceeded to create a buffer with radius 50 in the projected coordinate system. However, 50 Mercator units at latitude 40.61 corresponds to only about 37.96 meters in surface of the earth distance. So when you project that polygon back into latitude and longitude and plot it, that's what you see: a 38-meter circle.
One way to visualize this is to look at the full world map on Google Maps. Draw a circle of radius 50 pixels at the equator. And then draw another circle of radius 50 pixels over Greenland. On the map (in Mercator coordinates), those circles are the same size. But, if you know your Mercator projection, you know it distorts Greenland because Greenland is far away from the Equator, so your circle over Greenland is actually much smaller in real life than your circle above the equator. At 40 degrees latitude, the distortion isn't as severe, but it still is there.
If you want to correct this, it's pretty easy. The size distortion caused by the Mercator projection is proportional to the secant of the latitude. That is, 50 mercator units on the equator equal 50 meters, but 50 mercator units at latitude x (in radians) correspond to 50 / sec(x) meters. So if you want a radius of 50 meters, multiply 50 by sec(latitude) and use that number as the radius in mercator coordinates. In RGeo-speak:
# RGeo geographic factories take x = longitude, y = latitude
p_lonlat = GEOFACTORY.point(-75.38220214843749, 40.610355377197266)
p_proj = p_lonlat.projection  # project into Mercator (EPSG:3857)
# scale the 50 m radius by sec(latitude) to counter the Mercator distortion
buf_proj = p_proj.buffer(50.0 * (1 / Math.cos(p_lonlat.y / 180.0 * Math::PI)))
buf_lonlat = GEOFACTORY.unproject(buf_proj)  # back to lon/lat
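As a quick sanity check of the arithmetic, using the numbers from this example (a Python sketch; values rounded):

    import math

    lat = 40.610355377197266
    radius_m = 50.0
    # radius to pass to the Mercator buffer = meters * sec(latitude)
    mercator_radius = radius_m / math.cos(math.radians(lat))  # ~65.9 units
    # the reverse: a raw 50-unit Mercator buffer covers only ~37.96 m of ground
    ground_m = radius_m * math.cos(math.radians(lat))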

How to calculate coordinates of center of image from an aerial camera whose FOV, attitude and position are given

I have a problem that involves a UAV flying with a camera mounted below it. The following information is provided:
GPS Location of the UAV in Lat/Long
GPS Height of the UAV in meters
Attitude of the UAV i.e. roll, pitch, and yaw in degrees
Field of View (FOV) of the camera in degrees
Elevation of the camera w.r.t UAV in degrees
Azimuth of camera w.r.t UAV in degrees
I have some images taken by that camera during a flight, and my task is to compute the locations (in Lat/Long) of the 4 corner points and the center point of each image so that the image can be placed on the map at the proper location.
I found a document while searching the internet that can be downloaded at the following link:
http://www.siaa.asn.au/get/2411853249.pdf
My maths background is very weak so I am not able to translate the document into a working solution.
Can somebody provide me with a solution to my problem in the form of a simple algorithm, or preferably in the form of code in some programming language?
Thanks.
As I see it, this is not really an image-processing problem, because you need to determine the coordinates of the center of the image (for that you do not even need the FOV). You have to find the intersection of the camera's principal ray with the earth's surface (if I've understood your task correctly). This is nothing more than basic matrix math.
See wiki:Transformation.
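A rough Python sketch of that ray/ground intersection, assuming a flat earth around the UAV, a north-east-down body frame, and camera elevation measured downward from the horizon (these conventions are assumptions; the linked document defines its own):

    import numpy as np

    def image_center_latlon(lat, lon, height_m, roll, pitch, yaw, cam_elev, cam_az):
        # All angles in degrees. Returns (lat, lon) where the camera's principal
        # ray hits flat ground, or None if the ray points at or above the horizon.
        r, p, y = np.radians([roll, pitch, yaw])
        e, a = np.radians([cam_elev, cam_az])
        # principal ray in the body frame (x forward, y right, z down)
        d_body = np.array([np.cos(e) * np.cos(a), np.cos(e) * np.sin(a), np.sin(e)])
        # body-to-NED rotation: roll about x, then pitch about y, then yaw about z
        Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
        Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
        Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
        d_ned = Rz @ Ry @ Rx @ d_body
        if d_ned[2] <= 0:
            return None
        t = height_m / d_ned[2]                 # scale factor to reach the ground
        north, east = t * d_ned[0], t * d_ned[1]
        earth_r = 6378137.0                     # small-offset lat/lon conversion
        return (lat + np.degrees(north / earth_r),
                lon + np.degrees(east / (earth_r * np.cos(np.radians(lat)))))

The four corner points can be computed the same way by replacing the principal ray with rays through the image corners, which is where the FOV comes in.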
