Google Earth geodata? - geolocation

Question: Is it possible to use/retrieve geodata from Google Earth?
What I want to do is take a small area, get terrain information (coordinates, height, elevation), and simulate how the selected area would flood given a specified amount of rain over a specified number of hours.

Free terrain data for specific areas can be retrieved from NASA's SRTM (Shuttle Radar Topography Mission), which mapped terrain by radar from the Space Shuttle. I think Google Earth uses this same data.
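If you go the SRTM route, the raw .hgt tiles are easy to read directly. Below is a minimal Python sketch, assuming a 3-arc-second (SRTM3) tile that you have already downloaded and whose filename matches the tile's south-west corner:

import math
import struct

def srtm3_elevation(hgt_path, lat, lon):
    """Return the elevation in metres at (lat, lon) from a raw SRTM3 .hgt tile.

    Assumes hgt_path is the tile whose south-west corner is
    (floor(lat), floor(lon)), e.g. N34W119.hgt covers lat 34..35, lon -119..-118.
    """
    samples = 1201                                      # SRTM3 tiles are 1201 x 1201
    frac_lat = lat - math.floor(lat)                    # 0 at the south edge
    frac_lon = lon - math.floor(lon)                    # 0 at the west edge
    row = int(round((1.0 - frac_lat) * (samples - 1)))  # row 0 is the north edge
    col = int(round(frac_lon * (samples - 1)))
    with open(hgt_path, "rb") as f:
        f.seek(2 * (row * samples + col))               # two bytes per sample
        (height,) = struct.unpack(">h", f.read(2))      # big-endian signed 16-bit
    return None if height == -32768 else height         # -32768 marks a data void

For the flooding simulation, you could then load a whole grid of these samples and iteratively fill every cell whose elevation is below the rising water level.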

Related

How to evaluate the remoteness of a location given its coordinates?

I need to be able to evaluate how remote a location is given its geographical coordinates. I rate remoteness based on a few key metrics; so far, I am only able to calculate a subset of the required metrics:
The cellular reception at the given coordinate. More specifically, the density of cell towers around the coordinate. This can be found using opencellid.org.
Elevation. This can be found using Google's Elevation API.
How can one find these remaining metrics for remoteness?
The type of natural feature the coordinate is in. (eg. Lake, River, Glacier, Ocean, Island, Mountain)
Distance to the nearest road. (Google's Snap to Roads API and Nearest Roads API only work if the coordinate is within 50m of a road; that will not work here, as some coordinates are hundreds of km from the nearest road.)
About land type
Your first question has already been answered here, though only for land/water.
My approach would be the following:
Using the Static Maps API, you get the image at your coordinate, read the pixel at the center of the image (your coordinates), and use a hashmap/dictionary that maps each possible color to its land type. This would be very quick to implement, but you can find other ideas by reading the first link provided.
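A rough Python sketch of that idea, hitting the Static Maps endpoint; note the colour-to-land-type table below is hypothetical - you would sample the real palette colours from actual tiles first:

import io
import requests
from PIL import Image

# Hypothetical mapping from map palette colours to land types; the exact
# RGB values must be sampled from real tiles before this is usable.
COLOR_TO_LAND_TYPE = {
    (170, 218, 255): "water",
    (234, 234, 234): "urban",
    (196, 222, 172): "park/forest",
}

def land_type_at(lat, lon, api_key):
    """Fetch a small static map centred on (lat, lon) and classify the centre
    pixel by colour. Sketch only - labels should be styled off in a real
    request so text doesn't cover the centre pixel."""
    url = "https://maps.googleapis.com/maps/api/staticmap"
    params = {
        "center": f"{lat},{lon}",
        "zoom": 15,
        "size": "65x65",
        "maptype": "roadmap",
        "key": api_key,
    }
    img = Image.open(io.BytesIO(requests.get(url, params=params).content)).convert("RGB")
    center = img.getpixel((img.width // 2, img.height // 2))
    return COLOR_TO_LAND_TYPE.get(center, "unknown")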
For strength of cellular signal
As for your second question, you can use Google's API to detect the closest cell tower objects, using the locationAreaCode that you can obtain from the coordinates:
An example cell tower object is below.
{
  "cellTowers": [
    {
      "cellId": 170402199,
      "locationAreaCode": 35632,
      "mobileCountryCode": 310,
      "mobileNetworkCode": 410,
      "age": 0,
      "signalStrength": -60,
      "timingAdvance": 15
    }
  ]
}
What is the purpose, I wonder? You could take a sampling of coordinates around the fix; if they are mostly on a hill or in water, that is definitive. It seems people know how to figure out this kind of thing with Google APIs.
Would this be good enough?
Get lat/lon and range for free from a source like this: https://my.opencellid.org/dashboard/login?ref=opencellid. Use a formula to determine the distance between the GPS locations, like this one: https://nathanrooy.github.io/posts/2016-09-07/haversine-with-python/. Then make your own determination of strength based on "range" and terrain. Perhaps create a DB table of, say, 500 zip codes, each labeled with a terrain type rating; if the rating is 10, or whatever the worst terrain is, drop the strength by an amount that makes sense.
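For reference, the haversine step is only a few lines of Python. The strength heuristic below is an assumption along the lines this answer suggests, not a calibrated model:

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    r = 6371.0                                  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def estimated_strength(distance_km, tower_range_km, terrain_rating):
    """Crude 0..1 strength estimate: fades linearly with distance and is
    penalised by a 1 (flat) .. 10 (worst) terrain rating. The weighting
    here is made up; calibrate it against real measurements."""
    if distance_km >= tower_range_km:
        return 0.0
    base = 1.0 - distance_km / tower_range_km
    return base * (1.0 - 0.05 * (terrain_rating - 1))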

How to get nearby city or state name of a geopoint in water in ios?

I am developing a location-based application in which I need to get the name of a nearby location for any geopoint selected by the user. I'm using the Google Places API, which is working fine for me.
The only problem is that the service returns null for geopoints in water. Is there any way to retrieve nearby locations for a geopoint in water or the ocean?
AFAIK the API has no way to do that.
So, you've got two options, in order of the effort it takes:
When the user taps water, just show a dialog saying "Please select a point on land". Next to no effort, and it will slightly annoy the user.
Try to find the closest land geopoint yourself and run the API request on it (instead of the original point). Below are some ideas on that.
A good approach can be based on this answer: basically, you can get a KML file with land polygons. For performance reasons, you can simplify the polygons to the extent that makes sense for your zoom levels. Now, if your point is not inside any of those polygons, it's sea. You can then iterate over all polygon edges and pick the one closest to your point, pick the point on that edge closest to yours, and take one little epsilon-sized step towards the inside of the polygon to get a land point you can run a geocode request on. Also, the original author suggests you can use the haversine formula to determine the nearest land point; I'm not really familiar with how that would apply here.
The downside is, you have to deal with KML, iterate over a lot of polygons, and simplify them (losing precision in the process, in addition to possible differences between the marineregions.org data and the Google Places data).
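If you parse the KML land polygons into, say, shapely geometries, the closest-land step sketched above might look like this (planar approximation, so only sensible near the coast, not across an ocean):

import math
from shapely.geometry import Point
from shapely.ops import nearest_points

def nearest_land_point(lon, lat, land_polygons, epsilon=1e-4):
    """Given a sea point and a list of shapely land Polygons (parsed from
    the KML with e.g. fastkml), return a point just inside the nearest
    polygon to run the geocode request on."""
    p = Point(lon, lat)
    for poly in land_polygons:
        if p.within(poly):
            return p                                   # already on land
    nearest_poly = min(land_polygons, key=lambda poly: poly.exterior.distance(p))
    boundary_pt, _ = nearest_points(nearest_poly.exterior, p)
    inward = nearest_poly.representative_point()       # guaranteed interior point
    dx, dy = inward.x - boundary_pt.x, inward.y - boundary_pt.y
    norm = math.hypot(dx, dy) or 1.0
    # Step epsilon from the boundary towards the interior, onto dry land.
    return Point(boundary_pt.x + epsilon * dx / norm,
                 boundary_pt.y + epsilon * dy / norm)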
Another cool trick you could try is running a Sobel filter (edge detection) on the visible map fragment to determine where the coastline is (although you will get some false positives), then tracing it (as in raster -> vector) to get points and edges from which to calculate the closest land position, in a manner similar to the former approach.
For Sobel edge detection, consider the GPUImage lib -- it has the filter implemented, and it will probably run crazy fast since the lib does all its calculations on the GPU.
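For prototyping the filter outside iOS, a scipy equivalent is only a few lines (gray here is assumed to be a 2-D numpy array of the map fragment):

import numpy as np
from scipy import ndimage

def coastline_mask(gray, threshold=80.0):
    """Sobel gradient magnitude over a grayscale map fragment; strong
    gradients along the land/water colour boundary mark the coastline.
    Expect false positives on roads and labels, as noted above."""
    gx = ndimage.sobel(gray.astype(float), axis=1)     # horizontal gradient
    gy = ndimage.sobel(gray.astype(float), axis=0)     # vertical gradient
    return np.hypot(gx, gy) > threshold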
UPD: Turns out there's also a service called Koordinates that has coastline data available; check the answer here.

Blackberry cache reverse geocode address info with proximity

Most people are limited to about 5 or 6 locations on a daily basis (work, home, school, store, etc.). I want to speed up address display by caching a few of these most-visited locations. I've been able to get the address info using both Google Maps GPS/JSON and Locator.reverseGeocode. What would be the best way to cache this information and check proximity quickly? I found this GPS distance calculation example and have it working. Is there a faster way to check for proximity?
Please see this similar question first: Optimization of a distance calculation function
There are several things we can change in distance calculations to improve performance:
Measure device speed and decrease or increase the period of the proximity test accordingly
Trigonometric calculations take most of the time, but they can be made much faster. First make a rough distance calculation using the lookup-table method; then, if the distance is less than the proximity limit plus an uncertainty margin, use the CORDIC method for a more precise calculation (see the sketch after this list).
Use constants for Math.PI/180.0 and 180.0/Math.PI
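Here is the sketch mentioned above - Python for brevity, though on a BlackBerry this would be Java ME. It shows the coarse lookup-table pass that gates the expensive precise check:

import math

DEG_TO_RAD = math.pi / 180.0          # precomputed constant, as suggested above
EARTH_R_M = 6371000.0                 # mean Earth radius in metres
COS_TABLE = [math.cos(d * DEG_TO_RAD) for d in range(91)]   # 1-degree cosine lookup

def rough_distance_m(lat1, lon1, lat2, lon2):
    """Coarse equirectangular distance using the lookup table instead of a
    live trig call - cheap enough to run on every GPS fix."""
    cos_lat = COS_TABLE[min(90, abs(int(round(lat1))))]
    dx = (lon2 - lon1) * DEG_TO_RAD * cos_lat * EARTH_R_M
    dy = (lat2 - lat1) * DEG_TO_RAD * EARTH_R_M
    return math.hypot(dx, dy)

def candidates_near(lat, lon, cached, limit_m=150.0, slack_m=50.0):
    """First-stage filter: anything inside limit + uncertainty slack goes on
    to the precise (haversine or CORDIC) second stage; the rest is skipped.
    cached is a list of (lat, lon, address) tuples."""
    return [c for c in cached
            if rough_distance_m(lat, lon, c[0], c[1]) < limit_m + slack_m]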
Several links that may be helpful:
Very useful explanations of CORDIC, especially doc from Parallax for dummies
Fast transcendent / trigonometric functions for Java
Cordic.java at Trac by Thomas B. Preusser
Cordic.java at seng440 proj
Sin/Cos look-up table source at processing.org by toxi

Game Terrain Database Model

I am developing a game for the web. The map of this game will be a minimum of 2000km by 2000km. I want to be able to encode elevation and terrain type at some level of granularity - 100m X 100m for example.
For a 2000km by 2000km map, storing this information in 100m x 100m buckets would mean 20000 by 20000 elements, or a total of 400,000,000 records in a database.
Is there some other way of storing this type of information?
MORE INFORMATION
The map itself will not ever be displayed in its entirety. Units will be moved on the map in a turn-based fashion, and the players will get feedback on where they are located and what the local area looks like. Terrain will dictate speed and whether movement is possible at all.
I guess I am trying to say that the map will be used for the game and not necessarily for graphical or display purposes.
It depends on how you want to generate your terrain.
For example, you could procedurally generate it all (using interpolation of a low-resolution terrain/height map - stored as two "bitmaps" - with the random interpolation seeded from the x,y coords to ensure that the terrain doesn't morph between visits), and use minimal storage.
If you wanted areas of terrain that were completely defined, you could store these separately and use them where appropriate, randomly generating the rest.
If you want completely defined terrain, then you're going to need to look into some kind of compression/streaming technique that pulls in only the terrain you are currently interested in.
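A sketch of the seeded-interpolation idea: hashing the coarse grid indices makes the corner heights deterministic, so the same terrain is regenerated on every visit (the grid size and hash choice here are arbitrary):

import hashlib
import math

def corner_height(ix, iy, seed=42):
    """Deterministic pseudo-random height in 0..1 at a coarse grid corner:
    hashing the (x, y) indices with the seed guarantees the terrain never
    morphs between visits."""
    digest = hashlib.md5(f"{seed}:{ix}:{iy}".encode()).digest()
    return int.from_bytes(digest[:2], "big") / 65535.0

def height_at(x_m, y_m, cell_m=1000.0):
    """Bilinear interpolation between the four surrounding coarse corners,
    giving a smooth elevation for any 100 m bucket with zero storage."""
    fx, fy = x_m / cell_m, y_m / cell_m
    ix, iy = math.floor(fx), math.floor(fy)
    tx, ty = fx - ix, fy - iy
    h00, h10 = corner_height(ix, iy), corner_height(ix + 1, iy)
    h01, h11 = corner_height(ix, iy + 1), corner_height(ix + 1, iy + 1)
    return (h00 * (1 - tx) + h10 * tx) * (1 - ty) + (h01 * (1 - tx) + h11 * tx) * ty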
I would treat it differently, by separating terrain type and elevation.
Terrain type, I assume, does not change as rapidly as elevation - there are probably sectors of the same terrain type that stretch much farther than the lowest level of granularity. I would map those sectors into database records or some kind of hash table, depending on performance, memory, and other requirements.
Elevation, I would assume, is semi-continuous, as it changes gradually for the most part. I would try to map the values onto sets of continuous functions (different sets between parts that are not continuous, as in a sudden change in elevation). For any set of coordinates for which the terrain has the same elevation, or can be described by a simple function, you just need to define the range the function covers. This should greatly reduce the amount of information you need to record to describe the elevation at each point of the terrain.
So basically I would break the map down into sectors composed of (x,y) ranges, once for terrain type and once for terrain elevation, and build a hash table for each that can return the appropriate value as needed.
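As a sketch, the sector table could be as simple as this (the ranges, terrain names, and elevation functions are made-up placeholders):

# Hypothetical sector table: each entry covers an (x, y) range in metres and
# stores one terrain type plus a simple elevation function over that range.
SECTORS = [
    ((0, 50_000, 0, 80_000), "plains", lambda x, y: 120.0),
    ((50_000, 90_000, 0, 80_000), "foothills",
     lambda x, y: 120.0 + 0.004 * (x - 50_000)),    # gentle linear rise eastwards
]

def lookup(x, y):
    """Linear scan for clarity; a real implementation would hash coarse
    (x, y) buckets to their sector for O(1) lookups."""
    for (x0, x1, y0, y1), terrain, elev in SECTORS:
        if x0 <= x < x1 and y0 <= y < y1:
            return terrain, elev(x, y)
    return None, None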
If you want the kind of granularity that you are looking for, then there is no obvious way of doing it.
You could try a two-dimensional wavelet transform, but that's pretty complex; something like a Fourier transform would do quite nicely. Plus, you probably wouldn't store the terrain one-record-per-piece-of-land; it makes more sense to have some sort of database field that can store an encoded matrix.
I think the usual solution is to break your domain up into "tiles" of manageable sizes. You'll have to add a little bit of logic to load the appropriate tiles at any given time, but not too bad.
You shouldn't need to access all that info at once -- even if each 100m x 100m bucket occupied a single pixel on the screen, no screen I know of could show 20k x 20k pixels at once.
Also, I wouldn't use a database -- look into height mapping: effectively using a grayscale image whose pixel values represent heights.
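A height-map lookup then reduces to indexing pixels. A minimal sketch with PIL, assuming a 16-bit grayscale source image:

from PIL import Image

class HeightMap:
    """Grayscale-image height map: pixel (col, row) holds the elevation of
    one 100m x 100m bucket, so the image can be cut into tiles and streamed."""

    def __init__(self, path, min_elev=0.0, max_elev=4000.0):
        self.img = Image.open(path).convert("I")    # 32-bit integer pixels
        self.min_elev, self.max_elev = min_elev, max_elev

    def elevation(self, x_m, y_m):
        """Map metres to a pixel, then scale the raw value to metres of height."""
        px = self.img.getpixel((int(x_m // 100), int(y_m // 100)))
        return self.min_elev + (px / 65535.0) * (self.max_elev - self.min_elev)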
Good luck!
That will be an awful lot of information no matter which way you look at it; 400,000,000 grid cells will take their toll.
I see two ways of getting around this. Firstly, since it is a web-based game, you might be able to get a server with a decently sized HDD and store the 400M records just as you normally would - or, more likely, create some storage mechanism of your own for efficiency. Then you would only have to devise a way to access the data efficiently, which can be done by exploiting the fact that you will hardly ever need all of it at once. ;)
The other way would be some kind of compression. You have to be careful with this, though: most out-of-the-box compression algorithms won't let you decompress an arbitrary location in the stream. Perhaps your terrain data has some patterns in it you can use? I doubt it will be completely random; more likely, I'd predict large areas with the same data. Perhaps those can be encoded as such.

Is there a formula to convert from Thomas Bros Map page & grid to a latitude/longitude?

I'm working on a project that contains Thomas Brothers Map page and grid numbers. Is there a way to programmatically convert from this map page to a latitude & longitude?
An example would be the intersection of the US-101 & I-405 freeways.
ThomasBrothers: 561-3G (page-grid)
Not that I know of, but I don't have a lot of experience with Thomas Bros maps. Are you talking about the printed version of the maps, or is there a link somewhere to an online map?
If you just need a few lat/longs, then you can look up the locations that correspond to the grid and get the lats and longs manually at many websites, including http://itouchmap.com/latlong.html
If you provide a link to a Thomas bros map that you are using, I might be able to help further.
By looking at the link above, you can determine that the US 101 / I-405 intersection has a latitude of 34.16073390017978 and a longitude of -118.46952438354492.
Your best source would be the map publisher. If they choose to help, someone there can tell you exactly what you need to know. If they won't help you, it's unlikely that they've released the information to anyone else.
If that's the case, you could do some work by hand to correlate one point from the map grid to your target coordinate system. Effectively, you could reverse engineer a mapping "datum" for each page. You'd also have to know what map projection was used to render the maps, so that you can calculate the transform from the map coordinates to the geographic coordinates as you move away from your "origin". Finally, you'll need to establish the orientation of the map, since different notions of "north" exist.
It sounds like the Thomas maps use a new grid for every page, rather than bleeding the grid continuously from page to page. If that's the case, you'll have to correlate one point on each map. For example, find a spot where a map grid intersection coincides with a notable road intersection. Then you can find the coordinates of the road intersection using a map with latitude and longitude (a topographic map, TerraServer, etc.). Doing this with two points on the same vertical grid line should help you establish the north used on the map as well.
The short answer is that each of the nine regions has a grid derived from a Lambert conformal conic projection with custom parameters, so you cannot write a conversion program without the parameters.
I've also got Thomas Bros. pages that I would like to convert to lat/long for lookup against the Google Maps API. They also provide something called TBXY ... not sure what this is -- perhaps some notation for GPS/lat/long?
<Area>"El Cajon"</Area>
<ThomasBrothers>"1297 5E"</ThomasBrothers>
<TBXY>"6481390:1827008"</TBXY>
Thomas Brothers Maps invested a lot in developing the GIS behind their digital mapping system. Though the first "digitally produced" map was Sacramento County (1990), development began back in 1986. I expect that their map projection equations are a well-guarded trade secret, which Rand McNally now owns. I don't know those equations, but I would also like to know them.
There are 9 projections covering the 48 states. If you know the equations for Los Angeles, they are valid across California & Nevada. Oregon & Washington have their own projection. Arizona, New Mexico, Colorado, and Utah share another projection.
I do know this...
As many know, the page grid is an exact 1/2-mile square: 2640 feet by 2640 feet. The coordinate measurement unit is 1 foot.
To determine the Thomas Brothers XY Coordinate, get one or more of the Thomas Guide CD- ROM maps, which were recently discontinued. The last ones produced for certain California counties were the 2008 edition. Last editions for Seattle, Portland, Las Vegas, and Phoenix/Tucson were the 2007 edition. Each is still available on the Rand McNally website for $20.
When you geocode a group of addresses, you'll see an output file with the TGXY coordinates and lat/lon for the addresses you specified, plus the page # and grid each point falls in. Once that file is open, you can click on the map to add additional geocoded points, which will also give you both sets of coordinates. The output file is saved as an Access database ".mdb" file.
If you know a lot about map projections or solid geometry, the set of corresponding TGXY and lat/lon coordinates will provide you with good data for testing.
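For instance, with a handful of corresponding pairs you could fit a local affine approximation with numpy - not the true inverse Lambert projection, but over a single county the projection is smooth enough that this gets close:

import numpy as np

def fit_affine(tbxy_pairs, lonlat_pairs):
    """Least-squares affine fit TBXY -> (lon, lat) from known corresponding
    pairs; returns a 3x2 coefficient matrix applied as [x, y, 1] @ coeffs."""
    A = np.column_stack([np.asarray(tbxy_pairs, float), np.ones(len(tbxy_pairs))])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(lonlat_pairs, float), rcond=None)
    return coeffs

# Usage, with calibration points read off a Thomas Guide CD-ROM export:
# coeffs = fit_affine(known_tbxy, known_lonlat)
# lon, lat = np.array([x, y, 1.0]) @ coeffs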
As you mentioned San Diego Page 1297, I'll provide its bordering coordinates.
West x=3062760
East x=3086520
North y=0985040
South y=0966560
This is not in the range of the "TBXY" you found on Google. Maybe it's the same projection with a relocated origin.
