In the Maps app for iOS, if you tap on a point of interest, it displays relevant information about that POI. For example, if you tap on the train station icon, it displays the name of the station.
Is it possible to replicate this behaviour using MapKit? The POI icons are there on the map, but of course tapping on them does nothing. I've tried reverse geocoding based on where the user tapped, but that just gives me a street address, with no data about the POI.
Another strategy I've tried is to use the Google Places API to load POIs within the visible map region and then set up annotations for each of those POIs. The problem is that the API has a limit of 20 results, so it doesn't work well in dense areas.
Anyone have any ideas on how I could achieve this?
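One possible (and imperfect) workaround is to run an MKLocalSearch around the tapped coordinate and pick the nearest result. This is only a sketch: the "train station" query and the 200 m search region are assumptions, not a general POI lookup.

```swift
import MapKit

// Sketch: when the user taps the map, search for POIs near the tapped
// coordinate and report the nearest match.
func lookUpPOI(near coordinate: CLLocationCoordinate2D) {
    let request = MKLocalSearch.Request()
    request.naturalLanguageQuery = "train station"   // example query, an assumption
    request.region = MKCoordinateRegion(center: coordinate,
                                        latitudinalMeters: 200,
                                        longitudinalMeters: 200)

    MKLocalSearch(request: request).start { response, error in
        guard error == nil, let items = response?.mapItems else { return }
        let tapLocation = CLLocation(latitude: coordinate.latitude,
                                     longitude: coordinate.longitude)
        // Pick the result closest to the tap.
        let nearest = items.min { a, b in
            CLLocation(latitude: a.placemark.coordinate.latitude,
                       longitude: a.placemark.coordinate.longitude).distance(from: tapLocation) <
            CLLocation(latitude: b.placemark.coordinate.latitude,
                       longitude: b.placemark.coordinate.longitude).distance(from: tapLocation)
        }
        print(nearest?.name ?? "No POI found near the tap")
    }
}
```

It only finds the categories you query for explicitly, so it is not equivalent to the built-in Maps behaviour.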
I am developing an app that shows a map with annotations. There are thousands of annotations, and the number is growing. Each annotation pin has data associated with it, which I get from a web service. Should I download all the data for all the pins upfront? That is obviously bad. But how should I do it efficiently, so that there is no lag when the user zooms the map in and out?
Thanks in advance!
Show pins according to the currently displayed region and zoom level, instead of showing all pins at once.
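A minimal sketch of that idea with MapKit, reloading annotations whenever the visible region changes; fetchPins(in:) is a hypothetical web-service call, not an existing API:

```swift
import UIKit
import MapKit

class MapViewController: UIViewController, MKMapViewDelegate {
    @IBOutlet var mapView: MKMapView!

    // Called whenever the user finishes panning or zooming.
    func mapView(_ mapView: MKMapView, regionDidChangeAnimated animated: Bool) {
        // fetchPins(in:) is a hypothetical web-service call that returns
        // only the annotations inside the visible region.
        fetchPins(in: mapView.region) { annotations in
            DispatchQueue.main.async {
                mapView.removeAnnotations(mapView.annotations)
                mapView.addAnnotations(annotations)
            }
        }
    }

    func fetchPins(in region: MKCoordinateRegion,
                   completion: @escaping ([MKPointAnnotation]) -> Void) {
        // Placeholder: pass the region's center and span (or its corner
        // coordinates) to the server and build annotations from the response.
        completion([])
    }
}
```

Removing and re-adding everything on each region change is the simplest version; a real app would diff the annotation sets or cluster pins to avoid churn.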
I am building an iOS map with local search functionality similar to Yelp.
The user will see a local map generated with the Google Maps SDK for iOS, and the local points of interest returned by the search will be displayed on the map as custom markers. Simultaneously, the user can also see these same local search results as a list on another screen.
My issue is as follows: to optimize loading time, I would like the app to load only the search results within the area displayed by the map, given the coordinates of the center of the map and the zoom level. This would be more efficient than loading, say, all search results within a 30 km radius, most of which may not be visible on the map. Is there a simple way to find the exact coordinates of the area displayed by the map on the user's phone (bottom-left corner, top-right corner)?
I have seen other posts online explaining how the coordinates of the area can be calculated using the zoom level, latitude, and screen resolution. However, I am wondering if there is a simpler way commonly used by other apps that display local points of interest.
Thank you!
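For reference, the Google Maps SDK for iOS can report the visible corners directly from the map's projection, so no manual zoom/resolution math is needed; a minimal sketch, assuming mapView is an existing GMSMapView:

```swift
import GoogleMaps

// Read the coordinates of the area currently displayed by a GMSMapView.
func visibleBounds(of mapView: GMSMapView) -> GMSCoordinateBounds {
    let region = mapView.projection.visibleRegion()
    // Wrapping the region in GMSCoordinateBounds gives the south-west
    // (bottom-left) and north-east (top-right) corners directly.
    let bounds = GMSCoordinateBounds(region: region)
    print("SW: \(bounds.southWest), NE: \(bounds.northEast)")
    return bounds
}
```

Those two corner coordinates can then be sent to the search backend so only results inside the viewport are returned.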
I am writing an iPhone application that uses the Google Maps SDK to display a city. I need to add multiple markers on the map to identify certain locations.
I could loop through and add each marker when the map loads, but I don't believe this is an efficient technique (it seems unnecessarily resource-heavy!).
Or is there some "Lazy loading" technique I could use to pull in the markers that are currently in view?
For that, you can add markers region by region: add markers only for the currently displayed map region (you still need to run a loop, but only over those markers).
That way you avoid adding markers to the map that are not currently in view.
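A rough sketch of that idea with the Google Maps SDK, re-filtering markers when the camera stops moving; allPlaces and the flat coordinate model are assumptions about the app's data:

```swift
import CoreLocation
import GoogleMaps

class MarkerLoader: NSObject, GMSMapViewDelegate {
    // Assumed data model: the full list of place coordinates.
    var allPlaces: [CLLocationCoordinate2D] = []
    private var shownMarkers: [GMSMarker] = []

    // Called when the camera stops moving; add markers only for the
    // coordinates that fall inside the visible bounds.
    func mapView(_ mapView: GMSMapView, idleAt position: GMSCameraPosition) {
        let bounds = GMSCoordinateBounds(region: mapView.projection.visibleRegion())

        shownMarkers.forEach { $0.map = nil }   // drop the previously shown markers
        shownMarkers = allPlaces
            .filter { bounds.contains($0) }
            .map { coordinate in
                let marker = GMSMarker(position: coordinate)
                marker.map = mapView
                return marker
            }
    }
}
```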
I'm working on an app that lets a user select locations on a map. The entire map is subdivided into irregular regions (administrative boundaries), and when a user touches a point on the map, I need to be able to figure out which region the point belongs to. Just to clarify: there is no finite set of points for the user to choose from; they just tap anywhere on the map.
What is the best way to achieve this? I have been looking at the MKPolygon class but cannot really figure out if this is the way to go. If it is, would I be using the intersectsMapRect: method of the MKOverlay protocol to check for a match? Are there any good tutorials on this kind of map operation?
A good approach here might be the MapBox iOS SDK and its RMInteractiveSource, which is designed for this. Check out this sample app, which shows interactive regions.
This is done with a space-optimized, offline-capable key-value store of sorts that keys pixels at varying zoom levels to arbitrary content values (region name, data, imagery, etc.).
In MapKit proper, you'll need some sort of spatial analysis (maybe Spatialite?) to determine intersections between points touched and irregularly-shaped regions.
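If the region boundaries are available as MKPolygon overlays, one MapKit-only check is to convert the tapped coordinate into a polygon renderer's coordinate space and hit-test the path; a minimal sketch (this tests the actual irregular shape, not just its bounding rectangle):

```swift
import MapKit

// Return the first region polygon that contains the tapped coordinate.
func region(containing coordinate: CLLocationCoordinate2D,
            in polygons: [MKPolygon]) -> MKPolygon? {
    let mapPoint = MKMapPoint(coordinate)
    return polygons.first { polygon in
        let renderer = MKPolygonRenderer(polygon: polygon)
        // Convert the map point into the renderer's drawing space and
        // test it against the polygon's path.
        let rendererPoint = renderer.point(for: mapPoint)
        return renderer.path?.contains(rendererPoint) ?? false
    }
}
```

A linear scan over all polygons is fine for a handful of regions; a spatial index (as in SpatiaLite) only becomes necessary with many boundaries.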
I'm building an iPad app that will present a screen-by-screen walkthrough of directions sourced from Google's Directions API. I'd like to track the user's progress through physical space using CoreLocation and update the screens to follow the user, similar to most directions applications.
My initial idea is something along these lines:
For each step in the directions, grab the corresponding polyline
When CoreLocation updates, check whether the lat/long pair is within some delta of some point on the polyline (i.e., iterate over all the points on the polyline).
If the location is within that delta of the polyline, stay on the same screen.
If not on the polyline, check whether the user is within the same delta of some subset of the polyline for the next step (say 10 points) and, if so, advance to the next screen.
If not on the next polyline, alert the user that they've left the route.
This seems inefficient and not particularly accurate... Are there better ways to do this?
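A minimal sketch of that proximity check with CoreLocation, still point-based rather than segment-based; the step polylines are assumed to be already decoded into coordinate arrays, and the 30 m delta is just an example value:

```swift
import CoreLocation

// Distance in metres from a location to the nearest point of a decoded polyline.
func distance(from location: CLLocation,
              toNearestPointOf polyline: [CLLocationCoordinate2D]) -> CLLocationDistance {
    polyline
        .map { CLLocation(latitude: $0.latitude, longitude: $0.longitude)
                 .distance(from: location) }
        .min() ?? .greatestFiniteMagnitude
}

// Sketch of the step logic described above; currentStep and nextStep are
// assumed to be the decoded polylines for the current and following step.
func action(for location: CLLocation,
            currentStep: [CLLocationCoordinate2D],
            nextStep: [CLLocationCoordinate2D],
            delta: CLLocationDistance = 30) -> String {
    if distance(from: location, toNearestPointOf: currentStep) <= delta {
        return "stay on current screen"
    } else if distance(from: location, toNearestPointOf: Array(nextStep.prefix(10))) <= delta {
        return "advance to next screen"
    } else {
        return "alert: user appears to have left the route"
    }
}
```

Measuring the distance to the nearest line segment (rather than the nearest vertex) would be more accurate when the polyline points are far apart.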