Fast Application Switching Is Slow Application Switching in Mango - windows-phone-7.1

I have an issue I'm hoping someone can help with. I have an application that, for all intents and purposes, is working great. It's basically a picture viewer type of application - for something very specific. It's essentially about 500 pictures.
I have all pictures set as Content and I load/unload one at a time. For the 500 pictures, I have a class that is used as data about each picture - things like "place taken", "index", "short description", etc. I never need to insert or delete from this list, but I may need to make changes to each individual item, like "user viewed this picture on..." (a date) or "favorite = true" (a boolean where the user marks a picture as a favorite).
When I deploy the app, this "picture metadata" is in an XML file. It is deserialized and saved to IsoStorage upon first run. A copy of it is maintained in memory, and that is what is used to run my whole app. I have 3 different pages that all use that data, which is exposed as a static property in app.xaml.cs. Upon Deactivated/Closing the data is serialized back to XML; upon relaunching it is deserialized. Everything works fine and fast - everywhere. Including tombstoning.
The problem is resuming from deactivation when the app is not tombstoned. It can take up to 10-15 seconds, and it is definitely returning with e.IsApplicationInstancePreserved == true in Application_Activated (i.e. it's not tombstoned).
Launching brand new, it takes about 3-4 seconds for the app to start. Returning from tombstoning it also takes about 3 seconds.
What I'm not understanding is why returning with e.IsApplicationInstancePreserved == true is taking so long (and it won't allow me to pass certification). I've tested and found that with about 10 items in the List it's incredibly snappy for FAS. With about 50 items in the List it's not immediate. With 100 items, it's the first time you can see "Resuming..." (yep, that word coming from FAS, not tombstoning). Where I have it, at 500 items in the List, it is painfully slow to watch FAS, which is really SAS.
It's interesting that in the emulator, FAS works perfectly fine, even with 1000 objects in memory. It's on an actual device (a Samsung Focus) that it is incredibly slow, in both debug and release mode.
Now I know the easy answer may be something like "why maintain a class with a list of 500 objects all the time?", but my whole architecture and user experience is based on having data about the pictures available on all three pages all the time. LINQ is used heavily to report data everywhere.
Any thoughts or guidance on this situation?

Great to hear you found the problem.
"Sounds like a serialization/deserialization issue to me. Are you storing the list in a State 'variable', where the framework will serialize it when leaving the app and deserialize it when you return?"

Remember, your PC + Emulator is MUCH faster and more powerful than your phone's 1GHz single core ARM processor with 500MB RAM!
When your app is activated with e.IsApplicationInstancePreserved == true, then you shouldn't need to restore any state and you should be able to pick up where the user left off.
If you're saying that when your list catalogs 500 items, the performance is horribly slow when resuming the app, I wonder whether you're also keeping the 500 images in memory and your phone is having to swap all that data into your app's memory space. This will be further compounded if your app has multiple pages each containing copies of your 500 images!
I strongly encourage you to profile your app to measure its memory footprint and to see where the big memory, storage, IO and perf issues are.

Related

any downsides to writing the same file 1000's of times in iOS?

I'm considering overwriting the same small file 1,000's - 100,000's of times in an iOS app. Are there any downsides to this, given that flash memory is rated for 1000's of writes (but not, say, 100,000's)?
Will the system file cache save me if I stick to standard FileHandle operations? (without me having to implement my own such cache)
This has been addressed before: Reading/Writing to/from iPhone's Documents folder performance
Any new insights?
Update in response to some comments below: in general I agree with you that sometimes examining the choice of solution is more critical than helping with the proposed solution itself.
However, for this case, I feel the question is legit. Basically, it applies to any program where there is a small amount of very volatile data that needs to be persisted often: say, a position in a game, or a stock tick, or some counter, or the last key pressed, or something like that. It needs to be reliably read after process restart, so the app can pick up where it left off, hence the question:
Can I use the iOS file system for that? I know I can't write 10,000's of times to actual flash memory - that would burn it out. But will file system operations solve this for me, through some form of caching? Or do I need to do that myself, 'by hand'?
I sort of assume 'yes' (file system will solve) - otherwise other apps that do this (there must be some) would be burning out phones all the time! But: hard to know for sure...
Update again: asked this question on apple forums:
https://forums.developer.apple.com/thread/116740
Still no clear answer. Some answers are: just cache it yourself to avoid any such potential problems (and there can be problems: a file write can fail, and increasing the frequency increases the probability of failure in weird ways). Another is: iOS logs so much stuff that there's no way I can write more frequently than that, and that's fine, so no worries... I guess I'll leave this question open for now.

Memory constantly increasing in Rails app

I recently launched a new Ruby on Rails application that worked well in development mode. After the launch, the memory being used has been constantly increasing:
UPDATED: By the time this screen dump (the one below) from New Relic was taken, I had scheduled a web dyno restart every hour (for one of the two web dynos). Thus it does not reach the 500MB crash level, and the graph actually shows a bit of a sawtooth pattern. The problem is not at all resolved by this though, only some of the symptoms. As you can see, the morning is not so busy but the afternoon is busier. I made an upload at 11.30 for a small detail; it could not have affected the problem even though it appears that way in the stats.
It is also worth noting that it is the MIN memory that keeps increasing even though the graph shows AVG memory. Even when the graph seems to go down temporarily, the MIN memory stays the same or increases. The MIN memory never decreases!
The app would (without dyno restarts) increase in memory until it reached the maximum level at Heroku and the app crashes with execution expired-types of errors.
I am not a great programmer but I have made a few apps before without having this type of problem.
Troubleshooting performed
A. I thought the problem would lie within the before_filter in the application_controller (Will variables in application controller cause a memory leak in Rails?) but that wasn't the problem.
B. I installed oink but it does not give any results (at all). It creates an oink.log but does not give any results when I run "heroku run oink -m log/oink.log", no matter what threshold.
C. I tried bleak_house but it was deprecated and could not be installed
D. I have googled and read most articles in the topic but I am none the wiser.
E. I would love to test memprof but I can't install it (I have Ruby 1.9.x and don't really know how to downgrade to 1.8.x)
My questions:
Q1. What I really would love to know is the name(s) of the variable(s) that are increasing for each request, or at least which controller is using the most memory.
Q2. Will a controller with code like the below increase memory usage?
related_feed_categories = []
@gift.tags.each do |tag|
  tag.category_connections.each do |cc|
    related_feed_categories << cc.category_from_feed
  end
end
(sorry, SO won't re-format the code to be easily readable for some reason).
Do I need to "kill off" related_feed_categories with "related_feed_categories = nil" afterwards or does the Garbage Collector handle that?
Q3. What would be my major things to look for? Right now I can't narrow it down AT ALL. I don't know which part of the code to look deeper into, and I don't really know what to look for.
Q4. In case I really cannot solve the problem: are there any online consulting services where I can send my code and have them find the problem?
Thanks!
UPDATED. After receiving comments, it sounds like it could have to do with sessions. This is the part of the code that I guess could be bad:
# Create sessions for last generation
friend_data_arr = [@generator.age, @generator.price_low, @generator.price_high]
friend_positive_tags_arr = []
friend_negative_tags_arr = []
friend_positive_tags_arr << @positive_tags
friend_negative_tags_arr << @negative_tags
session["last_generator"] = [friend_data_arr, friend_positive_tags_arr, friend_negative_tags_arr]
# Clean variables
friend_data_arr = nil
friend_positive_tags_arr = nil
friend_negative_tags_arr = nil
It is used in the generator#show action. When some gifts have been generated by my gift-generating engine, I save the input in the session (in case the user wants to use that info at a later stage). I never kill or expire these sessions, so I wonder whether this could cause the memory increase.
Updated again: I removed this piece of code, but the memory still increases, so I guess this part is not it. Could similar code be causing the error, though?
It's unlikely that related_feed_categories provokes this.
Are you using a lot of files?
How long do you keep session data? It looks like you have an e-commerce site; are you keeping objects in sessions?
Basically, I think it is files, or sessions, or an increase in temporary data that is only flushed when the server crashes (memcache?).
In the middle of the night, I guess you have fewer customers. Can you post the same memory chart for peak hours?
It may be related to this problem: Memory grows indefinitely in an empty Rails app
UPDATE:
Rails doesn't store all the data on the client side. I don't remember the default store, but unless you choose the cookie store, Rails only sends data like the session_id to the client.
There are a few guidelines about sessions; ActiveRecord::SessionStore seems to be the best choice for performance. You shouldn't keep large objects or secret data in sessions. More on sessions here: http://guides.rubyonrails.org/security.html#what-are-sessions
In part 2.9 there is an explanation of how to destroy sessions that have been unused for a certain time.
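Something along these lines (a minimal sketch, assuming the ActiveRecord session store with its default sessions table; the task name sessions:cleanup is made up here, and the model class name is as in Rails 3.x):

# lib/tasks/sessions.rake -- hypothetical task, run from cron or the Heroku scheduler
namespace :sessions do
  desc "Delete sessions that have not been touched for a while"
  task :cleanup => :environment do
    # With the ActiveRecord session store, each session is a row in the sessions table;
    # updated_at tells you when it was last touched.
    ActiveRecord::SessionStore::Session.where("updated_at < ?", 2.days.ago).delete_all
  end
end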
Instead of storing objects in the session, I suggest you store the URL that produces the search results. You could even store it in the database, giving your customers the option of saving a few searches and/or loading the last one used by default.
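As a rough sketch of that idea (request.fullpath and session[] are standard Rails; the key name :last_generator_url is just made up for the example):

# In generators_controller#show, remember only the URL (a short string)
# instead of the arrays of tags and prices themselves:
session[:last_generator_url] = request.fullpath

# Later, e.g. behind a "repeat my last search" link:
redirect_to(session[:last_generator_url] || root_path)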
But at this stage we are still not totally sure that sessions are the culprit. To be sure, you could stress test your application on a test server with expiring sessions: basically, you create a large number of sessions, and maybe 20 minutes later Rails has to remove them. If you see a difference in memory consumption, it will narrow things down.
First case: memory drops significantly when the sessions expire, so you know it is session related.
Second case: memory increases at a faster rate but does not drop when the sessions expire, so you know it is user related but not session related.
Third case: nothing changes (memory increases as usual), so you know it does not depend on the number of users. But I don't know what could cause this.
When I said stress tests, I meant a significant number of sessions, not really a stress test. The number of sessions you need depends on your average number of users. If you had 50 users before your app crashed, 20-30 sessions may be significant. If you add them by hand, configure a longer expiry time. We are just looking for differences in memory consumption.
Update 2:
So this is most likely a memory leak. Use ObjectSpace: it has a count_objects method, which will give you counts of all the objects currently in use. It should narrow things down. Use it once memory has already increased a lot.
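For example (a minimal sketch; ObjectSpace.count_objects is built into Ruby 1.9, the snapshot/diff around it is just one way to use it, and SuspectCode.run is a placeholder for whatever you want to exercise):

before = ObjectSpace.count_objects

10.times { SuspectCode.run }    # placeholder for the code you suspect

GC.start                        # collect garbage first, so only survivors are counted
after = ObjectSpace.count_objects

after.each do |type, count|
  delta = count - (before[type] || 0)
  puts "#{type}: +#{delta}" if delta > 0
end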
Otherwise, there is bleak_house, a gem able to find memory leaks. Ruby tools for memory leaks are not as efficient as Java ones, but it's worth a try.
GitHub: https://github.com/evan/bleak_house
Update 3:
This may be an explanation. It is not really a memory leak, but it does grow memory:
http://www.tricksonrails.com/2010/06/avoid-memory-leaks-in-ruby-rails-code-and-protect-against-denial-of-service/
In short, symbols are kept in memory until you restart Ruby. So if symbols are created with random names, memory will grow until your app crashes. This doesn't happen with strings; they are GCed.
A bit old, but still valid for Ruby 1.9.x. Try this: Symbol.all_symbols.size
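A quick way to see the difference (a small sketch; exact numbers vary, but in Ruby 1.9.x dynamically created symbols are never collected):

require 'securerandom'

puts Symbol.all_symbols.size
10_000.times { SecureRandom.hex(8) }           # throwaway strings: garbage collected
puts Symbol.all_symbols.size                   # essentially unchanged

10_000.times { SecureRandom.hex(8).to_sym }    # dynamically created symbols: stay until restart
puts Symbol.all_symbols.size                   # roughly 10,000 larger now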
Update 4:
So your symbols are probably the memory leak. Now we still have to find where it occurs. Use Symbol.all_symbols; it gives you the list. You could store that list somewhere and diff it against a new snapshot to see what was added.
It may be i18n, or it may be something else that generates symbols implicitly the way i18n does. Either way, something is probably generating symbols with random data in the name, and those symbols are never used again.
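A minimal sketch of that diff (Symbol.all_symbols is standard Ruby; taking the snapshot in a console session and replaying a few requests in between is just one workable approach):

before = Symbol.all_symbols

# ... exercise the app here: hit a few pages, or call the suspect code directly ...

new_symbols = Symbol.all_symbols - before
puts "#{new_symbols.size} new symbols"
puts new_symbols.last(50)    # a sample of what was just interned; look for random-looking names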
Assuming category_from_feed returns a string (or perhaps a symbol), an increase on the order of 300MB is quite unlikely. You can roughly verify this by profiling the following:
related_feed_categories = []
4_000_000.times { related_feed_categories << "Loooooooooooooong string" }
This snippet would shoot the memory usage up by about 110MB.
I'd look at DB connections or methods that read a file and then don't close it. I can see that it's related to feeds which probably means you might be using XML. That can be a starting point too.
Posting this as answer because this looks bad in comments :/

Best way to store and retrieve thousands of places on iOS

My situation:
I have an app that needs to store 10,000 - 30,000 locations in some sort of storage, which are then displayed on an MKMapView as individual pins. I also have a server that needs to be able to add to the database by pushing out changes.
By grouping pins I've eliminated all issues with the MKMapView; my biggest focus now is on speed, storage, and being able to add to and replace the stored contents. Currently I have a text file of about 1,000 JSON-formatted locations, which are just read as an array and sent to my custom map view (no issues there). My only issue is how I could update that text file (rather than downloading massive amounts of data) and store almost 30,000 locations.
Is this even feasible? It seems my current setup could scale pretty much perfectly, it's just this updating system that is causing me a headache.
Your current setup won't scale forever because you have to load the entire file into memory in one chunk. Eventually it will get too large to manage and will eat up too much memory. Since it can't purge that memory when the system is low on memory, the system will shut your app down, i.e. it won't be able to stay in the background but will have to relaunch each time the user switches back to it.
To update, you will have to load in the entire file, parse the JSON, figure out how to update the resulting data structure, then write it all to file. One error anywhere in the process could corrupt the entire file.
You really need to look at using Core Data or even SQL. Core Data has a learning curve, but once you master it, it makes implementing designs like yours trivial. You also get automatic scaling and efficient memory management.

Best way to load distant data

I'm currently developing an iPhone app and need your opinion.
First, I am developing it for a football (soccer) club. It contains many tabs (at least these):
News (Where I am displaying last news posted, obviously)
Shop (Where the user can buy stadium seats, and maybe various goodies)
I don't know exactly yet (but it will be related to Facebook/Twitter or something like that)
For each of these tabs, I need to download XML data (using initWithContentsOfURL). Right. But that's where my problem is. Should I:
Load every needed XML page at application start-up and display a nice loading screen?
Load each XML page at the exact moment the user needs it in the application?
In the first case, I get a slower application startup but faster navigation between tabs afterwards.
In the second case, my application starts relatively faster (it still needs to load the News XML, since that's the welcome tab), but switching between tabs won't be as fluid as in the first case (only the first time each tab is opened, of course).
Any advice?
Take a look at ASIHTTPRequest, which provides some pre-built caching mechanisms that may be appropriate and generally makes interacting with web services easier.
Load only what you need when you need it.
Furthermore, I wouldn't use initWithContentsOfURL. It's a synchronous call, and it will lock up your app. Instead, use an NSURLConnection to get the data asynchronously.

possible to have two working sets? 1) data 2) code

With regard to operating system concepts: can a process have two working sets, one that represents data and another that represents code?
A "Working Set" is a term associated with Virtual Memory Manangement in Operating systems, however it is an abstract idea.
A working set is just the concept that there is a set of virtual memory pages that the application is currently working with and that there are other pages it isn't working with. Any page that is being currently used by the application is by definition part of the 'Working Set', so its impossible to have two.
Operating systems often do distinguish between code and data in a process using various page permissions and memory protection but this is a different concept than a "Working set".
This depends on the OS.
But on common OSes like Windows, there is no real difference between data and code, so no, it can't split up its working set into data and code.
As you know, the working set is the set of pages that a process needs to have in primary store to avoid thrashing. If some of these are code, and others data, it doesn't matter - the point is that the process needs regular access to these pages.
If you want to subdivide the working set into code and data and possibly other categorizations, to try to model what pages make up the working set, that's fine, but the working set as a whole is still all the pages needed, regardless of how these pages are classified.
EDIT: Blocking on I/O - does this affect the working set?
Remember that the working set is a model of the pages used over a given time period. When the length of time the process is blocked is short compared to the time period being modelled, then it changes little - the wait is insignificant and the working set over the time period being considered is unaffected.
But when the I/O wait is long compared to the modelled period, then it changes a lot. During the period the process is blocked, its working set is empty. An OS could theoretically swap out all the process's pages on the basis of this.
The working set model attempts to predict what pages the process will need based on its past behaviour. In this case, if the process is still blocked at time t+1, then the model of an empty working set is correct, but as soon as the process is unblocked, its working set will be non-empty - the prediction by the model still says no pages are needed, so the predictive power of the model breaks down. But this is to be expected - you can't really predict the future. Normally. And the working set is expected to change over time.
This question is from the book "operating system concepts". The answer they are looking for (found elsewhere on the web) is:
Yes, in fact many processors provide two TLBs for this very reason. As an example, the code being accessed by a process may retain the same working set for a long period of time. However, the data the code accesses may change, thus reflecting a change in the working set for data accesses.
Which seems reasonable but is completely at odds with some of the other answers...
