I have an app I am developing for iOS, and the app does the following: load and set annotations, start Core Location, and zoom to the user's location.
There are a lot of annotations on the map. Loading them from data doesn't take long, but actually rendering them to the map takes a while, so the user interface stalls for a bit and only then gets the Core Location fix and zooms to it.
While this is functional, it is less than an ideal user experience. I could invert the order and do the Core Location zoom first and then add the annotations, but this would pause the UI as well, since annotations are added on the UI thread; not to mention that Core Location can take a little time to get its first fix too.
So the question I guess I am asking is: what is the best way to handle this? Is there some way I am unaware of to render the annotations to the map without tying up the UI? I could show some sort of splash screen over the map while this is going on, but that seems like a cop-out, and I personally hate splash screens.
Maybe the best way to do this is to show a busy/working spinner over the map until it's completed?
What is generally considered best practice?
You could use OCMapView to cluster all the annotations. As already mentioned, the map itself can handle a bunch of annotations, but performance goes down with the number of drawn views, and MKAnnotationViews don't make a difference there.
OCMapView clusters nearby annotations and displays each cluster merged into a single annotation view. Try it out; it's free.
https://github.com/yinkou/OCMapView
I wound up simply doing the same thing I did with the Android version. While the iPhone can handle a larger number of annotations far better than Android, the lazy-loading approach is certainly a better overall experience on both platforms.
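For anyone curious, here is a minimal sketch of the kind of batched loading I mean, in Swift. The function name and batch size are purely illustrative, not from any library; the point is just that each batch is queued as a separate main-queue block so touch and Core Location events can be serviced in between:

```swift
import MapKit

// Illustrative only: add annotations to the map in small batches so the
// main thread stays responsive between batches.
func addAnnotationsInBatches(_ annotations: [MKAnnotation],
                             to mapView: MKMapView,
                             batchSize: Int = 100) {
    guard !annotations.isEmpty else { return }
    let batch = Array(annotations.prefix(batchSize))
    let remainder = Array(annotations.dropFirst(batchSize))
    mapView.addAnnotations(batch)
    // Queue the next batch as a separate run-loop pass rather than
    // blocking the UI thread for the whole data set at once.
    DispatchQueue.main.async {
        addAnnotationsInBatches(remainder, to: mapView, batchSize: batchSize)
    }
}
```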
Thanks for the help guys.
I think your best solution would be to simply not add "a lot" of annotations to the map. MKMapView has a lot going on and it doesn't take a ton of annotations to bog it down. There are a number of creative ways that you can go about reducing the number of annotations.
If you have a lot of annotations that are grouped tightly together, consider aggregating them into a single annotation that then splits apart into separate annotations once the user reaches a sufficiently close zoom level.
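As a rough illustration of that aggregation, here is a Swift sketch; `ClusterAnnotation` and the grid-bucket approach are just one hypothetical way to do it, not any specific library's API. You would re-run `clustered(_:cellSize:)` with a smaller cell size as the user zooms in, so clusters split apart:

```swift
import MapKit

// Hypothetical cluster annotation: one pin standing in for many.
final class ClusterAnnotation: NSObject, MKAnnotation {
    let coordinate: CLLocationCoordinate2D
    let members: [MKAnnotation]
    init(members: [MKAnnotation]) {
        self.members = members
        // Place the cluster at the average position of its members.
        let lat = members.map { $0.coordinate.latitude }.reduce(0, +) / Double(members.count)
        let lon = members.map { $0.coordinate.longitude }.reduce(0, +) / Double(members.count)
        self.coordinate = CLLocationCoordinate2D(latitude: lat, longitude: lon)
    }
}

// Bucket annotations into a coarse grid in map-point space; one annotation
// (or cluster) per occupied cell.
func clustered(_ annotations: [MKAnnotation], cellSize: Double) -> [MKAnnotation] {
    var buckets: [String: [MKAnnotation]] = [:]
    for annotation in annotations {
        let point = MKMapPoint(annotation.coordinate)
        let key = "\(Int(point.x / cellSize)):\(Int(point.y / cellSize))"
        buckets[key, default: []].append(annotation)
    }
    return buckets.values.map { cell in
        cell.count == 1 ? cell[0] : ClusterAnnotation(members: cell)
    }
}
```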
Another consideration would be to default the user to a tighter zoom level and only add annotations that are currently on screen at that zoom level and position.
Do either of those options sound viable or get you thinking about another creative way to help your situation?
I'm looking for a way to draw "Lines" above a UIWebView.
I have a UIWebView that displays a PDF file. The user should be able to add "Lines" and "Sketches" (simple one-color lines, etc.). This could surely be done with a UIView on top of the UIWebView, but I'm running into two logical problems.
First, can the UIView where the drawing happens be transparent apart from the lines, so you can see the PDF through it?
Second, how do I handle zooming in the PDF? If the user zooms the web view, the UIView has to zoom along with it, so the drawing stays at the same spot and zoom level.
Is there any other way to display a PDF and add drawings/annotations to it? Currently I'm using a QLPreviewController, where I see no way to add any kind of annotation.
Is there any best practice for this?
PSPDFKit handles this (and many other hard PDF problems) very well. Using a web view for this kind of problem is likely to hit many little corner cases. Any commercial product with non-trivial PDF needs should definitely start there. For open-source projects I don't have a great answer beyond "yeah, PDFs are pretty tough; good luck."
That said, here are some starting points that may help you.
You can turn off zooming with webView.scalesPageToFit = false
You can get the current zoom scale using webView.scrollView.zoomScale
I believe you can KVO observe zoomScale to track it while it changes, but you may only get the target value (which will cause you to lag).
You can disable zooming (scalesPageToFit) and then re-implement it yourself with a UIPinchGestureRecognizer and scrollView.setZoomScale(_:animated:). That way you could track the zoom changes better. You could also try to handle the animation yourself with a CABasicAnimation so that you could keep it in sync.
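To make the first points concrete, here is a rough Swift sketch of a transparent line overlay kept in sync with the scroll view's zoom via KVO. Caveats: UIKit doesn't officially guarantee that zoomScale is KVO-compliant (it works in practice, per above), a real version would also track contentOffset and the transform's anchor point, and all names here are illustrative:

```swift
import UIKit

// A transparent overlay that just strokes the user's lines; the PDF in the
// web view underneath stays visible because the background is clear.
final class LineOverlayView: UIView {
    var lines: [[CGPoint]] = [] { didSet { setNeedsDisplay() } }

    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .clear   // everything but the strokes is see-through
        isOpaque = false
    }
    required init?(coder: NSCoder) { fatalError("not used in this sketch") }

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setStrokeColor(UIColor.red.cgColor)
        ctx.setLineWidth(2)
        for line in lines where line.count > 1 {
            ctx.addLines(between: line)
            ctx.strokePath()
        }
    }
}

// Keep the overlay in sync with the web view's zoom by observing the
// scroll view's zoomScale (the KVO approach mentioned above).
final class ZoomSync: NSObject {
    private var observation: NSKeyValueObservation?
    init(scrollView: UIScrollView, overlay: LineOverlayView) {
        super.init()
        observation = scrollView.observe(\.zoomScale, options: [.new]) { _, change in
            guard let scale = change.newValue else { return }
            // Scale the overlay so the drawing stays glued to the page.
            overlay.transform = CGAffineTransform(scaleX: scale, y: scale)
        }
    }
}
```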
My experience with scroll views, web views, and PDF is that there are a lot of little funny interactions that will surprise you. Getting something that "kind of" works isn't that hard, but getting it really clean, smooth, and beautiful can be a nightmare. That's why I typically recommend PSPDFKit to clients. You'll generally spend much less on the license than on the custom development.
We've integrated Apple's native map in our iOS application, but we're facing performance issues when interacting with the map. Would we get better performance by replacing Apple's native map with Google Maps, or is there some other way?
I don't think any map is going to like having 2,000 annotations added to it. Personally, I think you need to sit down and work out a way of greatly reducing that number. Perhaps only show the visible annotations, then add and remove them as the user scrolls around; or, if they zoom out, group 200 into 1.
It will be a job but worth it.
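A rough Swift sketch of the add/remove-while-scrolling idea; the names are illustrative, and for 2,000 items you'd want identity sets rather than the linear scans shown here:

```swift
import MapKit

final class MapViewController: UIViewController, MKMapViewDelegate {
    @IBOutlet var mapView: MKMapView!
    var allAnnotations: [MKAnnotation] = []   // the full 2,000-item data set

    // Called whenever the user pans or zooms; keep only what's on screen.
    func mapView(_ mapView: MKMapView, regionDidChangeAnimated animated: Bool) {
        let rect = mapView.visibleMapRect
        let wanted = allAnnotations.filter { rect.contains(MKMapPoint($0.coordinate)) }
        let current = mapView.annotations.filter { !($0 is MKUserLocation) }

        // Remove what scrolled off-screen, add what scrolled on.
        let toRemove = current.filter { c in !wanted.contains { $0 === c } }
        let toAdd = wanted.filter { w in !current.contains { $0 === w } }
        mapView.removeAnnotations(toRemove)
        mapView.addAnnotations(toAdd)
    }
}
```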
I want to make a sliding-up panel like the Foursquare app.
What I want to achieve:
The UITableView goes all the way up to UINavigationBar.
It drags along with my finger's position.
My app also has a GMSMapView below (Google Maps API, similar to Foursquare). I don't want the map to respond to my gestures on the UITableView; I want it to stay still.
It works in both iOS 6 and 7, and on iPhone 4 and 5.
Does anybody have a framework, a GitHub link, etc., that can help me accomplish this?
Thank you.
I have been working on something very similar over the past few days. This answer is actually quite good, but you will suffer a bit in terms of performance. After more tweaking, I used parts of this library. You don't need to use everything, but keep the following in mind when choosing a library:
Libs that base the movement of the map on the map.centerCoordinate are less performant than libs that base the movement on the map's frame.
You can also read a bit from this Twitter exchange I had.
My theory about what Foursquare actually did is that in the beginning they use a screenshot of the map, so they are not really showing an MKMapView but a UIImageView. Once you touch it and the animation starts, they swap one for the other and start using a real map. I will be using the Reveal app plus this to find out exactly what they are doing.
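For reference, here is a bare-bones Swift sketch of the frame-based approach (the more performant one, per the note above). `SlidingPanelController` is hypothetical; the plain view standing in for the map could be a GMSMapView, and since the pan recognizer is attached only to the panel, the map never sees the gesture:

```swift
import UIKit

// Hypothetical sliding panel: the table's container is dragged by moving
// its frame, while the map behind it never moves.
final class SlidingPanelController: UIViewController {
    let mapView = UIView()     // stands in for GMSMapView / MKMapView
    let panelView = UIView()   // would contain the UITableView

    override func viewDidLoad() {
        super.viewDidLoad()
        mapView.frame = view.bounds
        view.addSubview(mapView)

        // Panel starts docked over the bottom third of the screen.
        panelView.frame = CGRect(x: 0, y: view.bounds.height * 2 / 3,
                                 width: view.bounds.width,
                                 height: view.bounds.height)
        view.addSubview(panelView)

        let pan = UIPanGestureRecognizer(target: self, action: #selector(didPan(_:)))
        panelView.addGestureRecognizer(pan)
    }

    @objc private func didPan(_ gesture: UIPanGestureRecognizer) {
        let translation = gesture.translation(in: view)
        // Move the panel's frame with the finger; clamp between the top
        // bar and a small grab area at the bottom of the screen.
        var y = panelView.frame.origin.y + translation.y
        y = max(view.safeAreaInsets.top, min(view.bounds.height - 44, y))
        panelView.frame.origin.y = y
        gesture.setTranslation(.zero, in: view)
    }
}
```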
Maybe I'm just searching for the wrong term, but I've been able to find very little information on this subject, and I think it could be a problem for my app.
A while back, there was an article on the accuracy of the touch screens on iOS devices, and it seemed quite poor compared to other phones. Here is a link to a posting about it:
http://forums.macrumors.com/showthread.php?t=1660713
Anyway, many of the commenters referred to "perspective compensation" as a cause of the inaccuracy. Basically, they are saying that iOS intentionally registers touches above the actual point of contact to compensate for the user's typical viewing angle, or the angle of their finger, or something like that. I found some credibility in that claim myself by doing as one of the commenters suggested and trying to use my iPhone upside down: it was indeed difficult to touch things in some cases, and I have also noticed this problem in one of the apps I'm developing.
So, in case you want to skip all that rambling above, here is why it's a problem for me:
I am developing an app that is intended to be used by two people at the same time. The iPhone or iPad is placed on a surface between two people who are sitting across from one another, and they are instructed to quickly and accurately touch items on their respective halves of the screen, competitively. What the article's comments made me suspect might happen, and what I have also found in practice, is that the person using the phone upside down has trouble touching buttons and dots on the first try. I've also tested slowly with a stylus and found that the touchable area of a button does indeed extend below the button, or above the button for the person using the phone upside down, hence the discrepancy and disadvantage for that person.
So finally, if you want to skip that also, here is my question: Can "perspective compensation"(if that's what it's called) be disabled programmatically, and can it be done for specific views of an app? Have any of you noticed this and dealt with it in an app of yours?
While I have found that "perspective compensation" does seem to be occurring, I have not found any official documentation of it, and therefore have no idea how or if it can be disabled. When I search for "perspective compensation," the only results I find are links to the same article and comments.
I can't help but expect that this may have been asked before or is solvable with a simple checkbox, but perhaps for lack of the correct term to use, I have been unable to find any leads.
Thanks in advance for any of your solutions or suggestions!
This can't be done with the current SDK. All we have access to is the touch location, which is at a single point. Other search terms you might try are "digitizer" or "raw touch data", but there is definitely no check box or simple option.
To implement this, you will have to compensate for the touch location yourself. You'll need to play around with a compensating offset value for the upside-down buttons. Hit testing on views is probably the best place to do this, then your buttons can just respond to events as normal.
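Here is a hypothetical Swift sketch of that kind of hit-test compensation. `FlippedPlayerButton` and the offset value are purely illustrative and would need tuning on a real device:

```swift
import UIKit

// Hypothetical compensation: a button for the upside-down player whose
// touchable area is shifted to counteract the OS's built-in offset.
final class FlippedPlayerButton: UIButton {
    /// How far (in points) to shift the hit area; tune by experiment.
    var hitOffset: CGFloat = 6

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        // Shift the tested point downward, so a touch that registers just
        // above the button in screen coordinates (i.e., "below" it from the
        // upside-down player's perspective) still counts as a hit.
        let compensated = CGPoint(x: point.x, y: point.y + hitOffset)
        return bounds.contains(compensated)
    }
}
```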
I'm working on a timeline app that needs to have a GarageBand-type interface. I'm not a new developer; I have a background in CGI and have been a Mac dev for over 20 years, but I'm a little stuck on what kind of objects to make to represent the objects in the timeline. Are they UIViews? Drawn with QuartzCore? I Googled the heck out of the concept and looked at some books and came up empty. Any ideas on how to make these objects? I'd rather ask than start in one direction and realize down the road that there was a better way. Thanks.
Given that they allow user interaction, they're probably implemented as custom UIView subclasses. Since views are layer-backed, and since the timeline isn't flying all over the place and doing crazy complex animations, there's not really a good reason to have the UI be built directly from layers.
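For example, here is a minimal Swift sketch of one timeline element as a custom view. Everything here is illustrative, not GarageBand's actual implementation; the point is that a layer-backed view gives you drawing and touch handling in one place:

```swift
import UIKit

// One "clip" in the timeline: a plain layer-backed UIView subclass that
// draws itself and responds to touches directly.
final class TimelineClipView: UIView {
    var title = "Region"

    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .clear
        isOpaque = false
    }
    required init?(coder: NSCoder) { fatalError("not used in this sketch") }

    override func draw(_ rect: CGRect) {
        // Rounded clip body plus a label, all in ordinary drawing code.
        let body = UIBezierPath(roundedRect: bounds.insetBy(dx: 1, dy: 1),
                                cornerRadius: 6)
        UIColor.systemGreen.setFill()
        body.fill()
        (title as NSString).draw(at: CGPoint(x: 8, y: 4),
                                 withAttributes: [.font: UIFont.systemFont(ofSize: 12)])
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Drag-to-move/trim logic would go here; getting events like this
        // for free is the main argument for views over raw CALayers.
    }
}
```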