Factual API vs Google Places API in terms of Distance Matrix (distance and time) - iOS

I need reasonable accuracy in my app, but Google Places seems to be quite inaccurate when filtering by category, so I'm considering migrating to the Factual API. Has anyone here used it? What do you think about its accuracy?
On the other hand, I NEED to know the distance to a place and the estimated travel time. I'm getting this info with the Google Distance Matrix API, but I don't know whether Factual has this functionality or not.
Thanks in advance.

I used Factual's API for one app and the results were worse than Google Places', at least for the supermarket/grocery category.

If the Factual API allows you to display the data on a Google Map, you can use the Factual data with the Distance Matrix.
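For what it's worth, the Distance Matrix part can be sketched like this. This is a minimal example, assuming Python with the `requests` package and a valid Google API key; the coordinates are placeholders for your user's location and a place returned by a search.

```python
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"   # placeholder

def distance_and_duration(origin, destination):
    """Query the Google Distance Matrix web service for driving distance and travel time."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/distancematrix/json",
        params={
            "origins": f"{origin[0]},{origin[1]}",
            "destinations": f"{destination[0]},{destination[1]}",
            "mode": "driving",
            "key": API_KEY,
        },
    )
    element = resp.json()["rows"][0]["elements"][0]
    return element["distance"]["text"], element["duration"]["text"]

# e.g. from the user's location to a place found in the initial search
print(distance_and_duration((40.4168, -3.7038), (40.4531, -3.6883)))
```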

Factual provides distance in query results (in meters from the search center). It has a much better category tree system: Factual allows "IncludeAny(category ids)", whereas Google only has single-level types and does not allow searching on multiple types. What I do is use Factual for the initial search and Google Places for detail on a particular place. Google Places has photos, reviews (3) and openNow (boolean).
The quality of the data is slightly better in Google. (Both need work.)
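To make the "Factual for search, Google Places for detail" split concrete, here is a hedged sketch of the second step (again assuming Python with `requests` and a valid API key; the place_id is a placeholder you would obtain from a prior search or matching step):

```python
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"   # placeholder

def place_detail(place_id):
    """Fetch details (photos, reviews, opening hours) for a single place."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/details/json",
        params={"place_id": place_id, "key": API_KEY},
    )
    return resp.json()["result"]

detail = place_detail("PLACE_ID_FROM_SEARCH")   # placeholder id
print(detail.get("opening_hours", {}).get("open_now"))
```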

Related

Is there a way to visualize Google keyword planner historical search frequencies on a map (as some form of Geodata)?

I am rather new to the field and am now faced with the following challenge:
I would like to use historical and current Google search frequency data from the Keyword Planner (e.g. how often people in the UK, but also in different UK regions, search for mountain bikes) and then visualize this data on a global map as a means of visualizing market potential. Ideally this would be automated through an API, but it could also be done through manual CSV import.
Additionally, I am thinking of visualizing the number and location of sports retailers by region (or city), and also other indicators for which I can get geolocation data (income levels, population density, ...).
Any ideas on how best to approach this from a technical and tooling side?
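As one possible starting point for the manual-CSV route, here is a minimal sketch. Assumptions: Python, the `folium` package, and a hypothetical exported CSV with columns `region`, `lat`, `lon`, `searches`; it simply puts one circle per region on an interactive map, sized by search volume.

```python
import csv
import folium

# Map centred roughly on the UK
m = folium.Map(location=[54.5, -3.0], zoom_start=6)

with open("keyword_searches.csv", newline="") as f:   # hypothetical Keyword Planner export
    for row in csv.DictReader(f):
        folium.CircleMarker(
            location=[float(row["lat"]), float(row["lon"])],
            radius=max(3, float(row["searches"]) / 1000),  # scale marker size by volume
            popup=f'{row["region"]}: {row["searches"]} searches/month',
        ).add_to(m)

m.save("search_volume_map.html")
```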

Is it possible to calculate average passerby density only using programming and maps API (Google Maps, Yandex Maps etc.)?

I have the coordinates of restaurants and have to measure the average human density around each of them. I couldn't find anything about this on the web.
I want to be able to input a coordinate, run a script for a day and see how many people passed by. If there is a faster way, that would be even better.

Recommender system result curation

I want to ask if there's some sort of curation algorithm that arranges/sends results from a recommender system to a user.
For example, how does Twitter recommend feeds to users? Is there some algorithm that does that, or does Twitter just sort by the highest number of interactions with each tweet (also based on the time it was posted)?
No, there is nothing like that.
A recommendation system is typically built so that it ranks items using content-based filtering or collaborative filtering, according to the viewing stats of the user.
One approach is to calculate the correlation between the user's viewing stats and the content on Twitter, and then recommend the closest matches.
Cosine similarity (or cosine distance) can also be used to measure how close the user's viewing stats are to a piece of Twitter content and recommend accordingly.
You should also explore other recommendation approaches based on algorithms such as Pearson correlation, weighted averages, etc.
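For example, the cosine-similarity idea can be sketched in a few lines. This is an illustrative toy, not Twitter's actual pipeline: the user profile and tweet vectors below are made-up term-count vectors.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical feature vectors, e.g. counts of topics the user views / a tweet covers
user_profile = np.array([5, 0, 2, 1])          # what this user tends to view
tweets = {
    "tweet_a": np.array([4, 0, 1, 0]),
    "tweet_b": np.array([0, 3, 0, 2]),
}

# Rank candidate tweets by similarity to the user's viewing profile
ranking = sorted(tweets, key=lambda t: cosine_similarity(user_profile, tweets[t]), reverse=True)
print(ranking)   # tweet_a ranks first for this toy profile
```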

Does the Google Prediction API work for image recognition?

I read the official documentation for the API, but I wanted to make sure it is possible for it to perform object recognition in images. More specifically, my idea is to provide a lot of images of parking lots together with the number of parking spots currently available. I want to get a model that predicts how many spots are available given an image of the parking lot.
Does anybody have previous experience with using the API for a similar goal?
No, I don't think the Google Prediction API will work for image recognition, because the Prediction API only understands numeric and string data.
For image recognition, the Google Vision API is a better fit. I think it can't recognize specific humans or persons, but it can recognize places like the Eiffel Tower and so on.
It can even read strings of text written in an image.
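A label-detection call with the Cloud Vision client library looks roughly like this. It's a sketch assuming the `google-cloud-vision` Python package (version 2+) and application-default credentials are set up; it won't count parking spots by itself, it only returns labels and confidence scores.

```python
from google.cloud import vision

def label_image(path):
    """Return (description, score) label pairs for a local image file."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [(label.description, label.score) for label in response.label_annotations]

print(label_image("parking_lot.jpg"))   # e.g. [('Parking lot', 0.97), ('Vehicle', 0.93), ...]
```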

Identify the most visited places when coordinates (latitude and longitude) are given

I'm working on a project where the locations visited by people are captured as latitude and longitude, and all these coordinates are analyzed to identify the most visited places.
So far I have retrieved all the coordinates visited by the people, sent that data to a database and written it to a text file. I then tried to cluster the data by reading it back from the text files. Since I'm totally new to machine learning, I'm finding it hard to figure out what exactly to do with the data.
So can anyone please help me figure out a correct approach to identify the most visited places by analyzing the coordinates that I have?
As stated, there is quite a bit of missing information for this question but I will have a go from what I understand.
I can think of two scenarios, and the approach to solving each is not something I would really consider as machine learning. The first scenario is that you have already attributed the lat/long to a definitive location e.g. “visitors of Buckingham Palace”, which would have a set lat/long coordinate associated with it. You’d then be able to generate a list of (Monument_lat, Monument_lon, weight) where weight is the number of visitors attributed to that location. Then it would simply be a case of sorting that list by weight, as has been suggested. I’m not clear on why you don’t think this is the most efficient way (list sorting is trivial and fast).
The other scenario involves raw lat/long data from a phone where you might have extremely similar lat/long pairs, but not exactly the same. You want to group these to single locations. You could divide the region of interest into small rectangular zones where you store the lat/long data for each of the corners of the zones. You then run a ray-casting algorithm to solve the point-in-polygon problem, thereby attributing the raw lat/long data to a zone, and you find the centre coordinate of each zone to apply the "weight".
I don’t know what language you are using, but there is an open-source ray casting algorithm for Python. Depending on the scope of your problem, there could be slight alterations that you might want to make. Firstly, if you are defining the location by a monument name and you don’t have too many, you could go on Google Maps and define your own lat/long corners of zones, to store as a list. If you’re not interested in classifying in a monument-name fashion, you simply divide the whole area into even rectangles. If you wanted, say, 10 metre precision across an entire country then you need to have layers of different sized zones to minimise the computational effort. You might divide the country into 10x10km squares and do a ray cast on that scale to give a rough sorting stage, before doing another ray cast on a 10x10m scale within the 10x10km zone.
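A minimal sketch of that ray-casting and counting step, in plain Python. The zone corners and the `visits` list are illustrative made-up values standing in for your own zone definitions and raw data.

```python
from collections import Counter

def point_in_polygon(x, y, polygon):
    """Even-odd ray casting: is point (x, y) inside the polygon given as (x, y) vertices?"""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        if (yi > y) != (yj > y):                      # edge straddles the ray's height
            x_cross = xi + (xj - xi) * (y - yi) / (yj - yi)
            if x < x_cross:                           # crossing lies to the right of the point
                inside = not inside
        j = i
    return inside

# Zones defined by (lon, lat) corners -- purely illustrative coordinates
zones = {
    "zone_a": [(-0.145, 51.501), (-0.139, 51.501), (-0.139, 51.503), (-0.145, 51.503)],
    "zone_b": [(-0.130, 51.507), (-0.124, 51.507), (-0.124, 51.509), (-0.130, 51.509)],
}

visits = [(-0.141, 51.502), (-0.127, 51.508), (-0.142, 51.502)]   # raw (lon, lat) points

# Attribute each raw point to a zone and count visits per zone
counts = Counter()
for lon, lat in visits:
    for name, corners in zones.items():
        if point_in_polygon(lon, lat, corners):
            counts[name] += 1
            break

print(counts.most_common())   # most visited zones first
```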
