Is it possible to calculate average passerby density using only programming and a maps API (Google Maps, Yandex Maps, etc.)?

I have the coordinates of restaurants and need to measure the average human density around each of them. I couldn't find anything about this on the web.
I want to be able to input a coordinate, run a script for a day, and see how many people passed by. If there is a faster way, that would be even better.

Related

Is there a way to visualize Google Keyword Planner historical search frequencies on a map (as some form of geodata)?

I am rather new to the field and am now faced with the following challenge:
I would like to use historical and current Google search frequency data from the Keyword Planner (e.g. how often people in the UK, but also in different UK regions, search for mountain bikes) and then visualize this data on a global map (as a means of visualizing market potential). Ideally this would be automated through an API, but it could also work through manual CSV import.
Additionally, I am thinking of visualizing the number and location of sports retailers by region (or city), as well as other indicators for which I can get geolocation data (income levels, population density, ...).
Any ideas on how best to approach this from a technical and tooling side?
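One possible starting point for the manual-CSV route, as a minimal sketch: assume a hypothetical file search_volume.csv with region, lat, lon, and volume columns (the file and column names are made up here), exported from the Keyword Planner and geocoded separately; the pandas and folium packages can then render the volumes as scaled circles on an interactive map.

```python
import pandas as pd
import folium

# Hypothetical CSV exported from Keyword Planner and geocoded separately:
# region,lat,lon,volume
df = pd.read_csv("search_volume.csv")

# Center the map on the mean coordinate of the data.
m = folium.Map(location=[df["lat"].mean(), df["lon"].mean()], zoom_start=6)

for _, row in df.iterrows():
    folium.CircleMarker(
        location=[row["lat"], row["lon"]],
        radius=max(3, row["volume"] ** 0.5 / 10),  # scale circle size by search volume
        popup=f"{row['region']}: {row['volume']}",
        fill=True,
    ).add_to(m)

m.save("search_volume_map.html")  # open in a browser to explore
```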

Real Time Camera/Image Based Large-Scale Slam Algorithm

I want to use an already implemented SLAM algorithm to map my college campus.
I have found some algorithms on OpenSLAM.org and some independent ones such as LSD-SLAM and Hector SLAM, which show some promise, but they have limitations: some require LIDAR, others don't extend to large datasets, etc.
SLAM has been an active topic for many years, and some groups have even mapped entire towns. Can someone point me to such an efficient algorithm?
My requirements are:
It must use an RGB camera (or several).
Preferably, it should produce a (somewhat) dense map of the area.
It should be able to map a large area (I have seen some algorithms that can only map something the size of a desk or a room, and they usually lose tracking when the camera motion is jerky (observed in LSD-SLAM) or track so few landmarks that they are only useful for study purposes).
Preferably a ROS implementation.

Identify mostly visited places when coordinates (latitude and longitude) are given

I'm working on a project where the locations visited by people are captured as latitude and longitude, and I need to analyze all of these coordinates to identify the most visited places.
So far I have retrieved all the coordinates visited by people, sent the data to a database, and written it to a text file. I then tried to cluster the data after reading it back from the text files. Since I'm totally new to machine learning, I'm finding it hard to figure out what exactly to do with the data.
Can anyone help me figure out a correct approach to identifying the most visited places by analyzing the coordinates I have?
As stated, there is quite a bit of missing information in this question, but I will have a go from what I understand.
I can think of two scenarios, and the approach to solving each is not something I would really consider machine learning. The first scenario is that you have already attributed the lat/long to a definitive location, e.g. “visitors of Buckingham Palace”, which would have a set lat/long coordinate associated with it. You’d then be able to generate a list of (Monument_lat, Monument_lon, weight) tuples, where weight is the number of visitors attributed to that location. Then it is simply a case of sorting that list by weight, as has been suggested. I’m not clear on why you don’t think this is the most efficient way (list sorting is trivial and fast).
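A minimal sketch of that first scenario in Python, with made-up counts:

```python
# (lat, lon, weight) tuples, where weight = number of visitors (made-up data)
visits = [
    (51.5014, -0.1419, 1200),  # e.g. Buckingham Palace
    (51.5081, -0.0759,  950),  # e.g. Tower of London
    (51.5194, -0.1270,  430),  # e.g. British Museum
]

# Sort descending by weight; O(n log n) and fast even for millions of rows.
most_visited = sorted(visits, key=lambda v: v[2], reverse=True)
print(most_visited[0])  # the single most visited place
```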
The other scenario involves raw lat/long data from a phone, where you might have extremely similar lat/long pairs that are not exactly the same, and you want to group them into single locations. You could divide the region of interest into small rectangular zones, storing the lat/long of each zone's corners. You then run a ray-casting algorithm to solve the point-in-polygon problem, attributing each raw lat/long pair to a zone, and use the centre coordinate of each zone as the location the "weight" applies to.
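For reference, the even-odd ray-casting test itself is only a few lines; a sketch with a hypothetical rectangular zone given as (lat, lon) corners:

```python
def point_in_polygon(lat, lon, corners):
    """Even-odd ray casting: cast a ray from the point toward increasing
    longitude and count edge crossings; an odd count means inside."""
    inside = False
    n = len(corners)
    for i in range(n):
        lat1, lon1 = corners[i]
        lat2, lon2 = corners[(i + 1) % n]
        # Does this edge straddle the point's latitude?
        if (lat1 > lat) != (lat2 > lat):
            # Longitude at which the edge crosses that latitude.
            cross = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < cross:
                inside = not inside
    return inside

# Hypothetical zone around a monument, corners listed in order.
zone = [(51.50, -0.15), (51.50, -0.13), (51.52, -0.13), (51.52, -0.15)]
print(point_in_polygon(51.51, -0.14, zone))  # True: this fix falls in the zone
```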
I don’t know what language you are using, but there are open-source ray-casting implementations for Python. Depending on the scope of your problem, there are slight alterations you might want to make. First, if you are defining locations by monument name and you don’t have too many, you could use Google Maps to pick out your own lat/long zone corners and store them as a list. If you’re not interested in classifying by monument name, simply divide the whole area into even rectangles. If you wanted, say, 10-metre precision across an entire country, you would need layers of different-sized zones to keep the computational effort down: divide the country into 10x10 km squares and ray cast at that scale as a rough sorting stage, then ray cast again at the 10x10 m scale within each 10x10 km zone.
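If the zones are plain axis-aligned rectangles, the point-in-polygon test can be skipped entirely and the coarse/fine layering reduces to integer binning; a sketch with made-up cell sizes (roughly 0.1° ≈ 10 km of latitude and 0.0001° ≈ 10 m, ignoring longitude distortion):

```python
from collections import Counter

def cell(lat, lon, size_deg):
    """Map a coordinate to the integer indices of the grid cell it falls in."""
    return (int(lat // size_deg), int(lon // size_deg))

# Made-up raw phone fixes; the first two land in the same ~10 m cell.
points = [(51.50142, -0.14192), (51.50145, -0.14195), (51.5081, -0.0759)]

# Coarse pass (~10 km cells) for rough sorting, fine pass (~10 m cells) within.
coarse = Counter(cell(lat, lon, 0.1) for lat, lon in points)
fine = Counter(cell(lat, lon, 0.0001) for lat, lon in points)

print(coarse.most_common(1))  # busiest coarse zone
print(fine.most_common(1))    # busiest ~10 m cell: a "most visited place"
```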

How do I generate a random 2D map for an iOS application?

I would like to be able to generate random terrain and store it in a file, but I'm not sure what file format to use or how to generate the terrain. I'm not really sure where to start and would appreciate any advice. I'd rather not use any third-party programs, because I'd like to understand the process fully. Any ideas?
Since I am not sure what the OP means by "2D map" (it could be a geographical map with roads and such, a map of tiles for a game as in Andrey's answer, or a 2D elevation map such as a heightfield used to generate terrain for 3D applications), I will focus on elevation maps, which are, IMHO, harder than tile-based maps for 2D games and easier than geographical maps.
For elevation maps, there are several options:
Generate a set of random values and low-pass filter them. If you use an FFT to do the low-pass filtering, you'll obtain a tileable heightfield (see the sketch after this list).
Use Perlin noise.
Build on Perlin noise and fractional Brownian motion: several variations are described in "Texturing and Modeling: A Procedural Approach" (Perlin and Musgrave), for example hetero-terrain, ridged multifractals, and warped ridged multifractals.
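A minimal NumPy sketch of the first option: white noise, low-pass filtered in the frequency domain. The size and cutoff values below are arbitrary.

```python
import numpy as np

def fft_heightfield(size=256, cutoff=8, seed=0):
    """Tileable heightfield: white noise low-pass filtered via the FFT."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((size, size))

    spectrum = np.fft.fft2(noise)
    # Per-axis frequencies of each FFT bin (fftfreq handles the wrap-around).
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    radius = np.sqrt(fx**2 + fy**2)

    # Ideal low-pass: zero out everything above the cutoff frequency.
    spectrum[radius > cutoff / size] = 0.0
    heights = np.real(np.fft.ifft2(spectrum))

    # Normalize to [0, 1]; because the FFT is periodic, the result tiles seamlessly.
    return (heights - heights.min()) / (heights.max() - heights.min())

hf = fft_heightfield()
print(hf.shape, hf.min(), hf.max())  # (256, 256) 0.0 1.0
```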
You should take a look at this tutorial. It covers in detail and in plain language how to make a map.
Use Cocos2D; it comes with CocosBuilder. I think it will help you build terrain.

Factual API vs Google Places API in terms of Distance Matrix (distance and time)

I need reasonable accuracy in my app, but Google Places seems to be poorly accurate when filtering by category, so I'm considering migrating to the Factual API. Have any of you used it? What do you think about its accuracy?
On the other hand, I NEED to know the distance to a place and the estimated travel time. I'm getting this info from the Google Distance Matrix API, but I don't know whether Factual has this functionality or not.
Thanks in advance.
I used Factual's API for one app and the results were worse than Google Places', at least for the supermarket/grocery category.
If the Factual API allows you to display the data on a Google Map, you can use the Factual data with the Distance Matrix.
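A sketch of that combination, assuming Factual has already returned a place's coordinates (the coordinates and key below are placeholders); the endpoint and parameters are those of Google's public Distance Matrix web service:

```python
import requests

def distance_and_time(origin, destination, api_key):
    """Query the Google Distance Matrix API for distance and travel time."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/distancematrix/json",
        params={
            "origins": f"{origin[0]},{origin[1]}",
            "destinations": f"{destination[0]},{destination[1]}",
            "key": api_key,
        },
    )
    element = resp.json()["rows"][0]["elements"][0]
    return element["distance"]["text"], element["duration"]["text"]

# Hypothetical coordinates, e.g. from a Factual place search result:
user = (40.4168, -3.7038)
place = (40.4200, -3.7100)
print(distance_and_time(user, place, "YOUR_KEY"))
```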
Factual provides distance in query results (in meters from the search center). It has a much better category tree system: Factual allows "IncludeAny(category ids)", whereas Google only has single-level types and does not allow searching for multiple types at once. What I do is use Factual for the initial search and Google Places for detail on a particular place. Google Places has photos, reviews (3), and openNow (boolean).
The quality of the data is slightly better in Google. (Both need work.)
