What's the recommended OEM partition size for an Android Things product? - android-things

While creating another Android Things product with the Android Things Console, I've been wondering again which OEM partition size to choose. The info for this setting is "Must be between 32 and 512 MB", which is quite a range to consider.
My APK file in Android Studio is only around 100 KB and would easily fit into the minimum size of 32 MB. I searched the net and Stack Overflow for a while but couldn't find any solid information. I figure the APK gets much bigger if I add a lot of resources, images and so on to my app, or am I missing something?
Therefore I'd like to know: are there any recommendations for the OEM partition size? Are there any drawbacks to always selecting 256 MB or 512 MB just to be on the safe side?
Thanks in advance.

The partition size you select should be large enough to account for growth in your APK over the lifecycle of your product, because it cannot be changed once you start deploying over-the-air (OTA) updates.
The storage space allocated for your apps is essentially shared with user data, so allocating more space than you need for apps means you'll have less space for the files/databases associated with those apps. So find the right balance that works for your device.

Related

Why would Xcode show MUCH more memory use than Instruments for a SceneKit app?

I'm trying to debug why our SceneKit-based app is using so much memory but Xcode and Instruments / Allocations seem to have very different values for the amount of memory being used. When I look in Xcode I see something like 600 MB but when I transfer the same running session over to Instruments / Allocations, I see a very different number for persistent bytes, like 150 MB.
Which one is correct? Why the difference? Are they measuring different things?
(Regardless of whether I transfer an Xcode debug session or start fresh in Instruments, it doesn't seem to make much difference.)
The reason that I care is that iOS is killing the app for excessive memory use (according to Xcode) but I can't seem to find the problem via Instruments.
I've tried turning off all GPU and Metal debug options but they don't seem to make a difference.
Which one is correct?
My intuition is: Instruments. It uses DTrace to (sorry) instrument your code and watch actual allocations and deallocations as they happen, at the expense of performance. The Xcode debug navigator memory graph is more of an outside view, designed to give a very general sense of what's happening. That is exactly why the latter offers you a way to switch to the former - because that (Instruments) is where you're going to get real measurements.
(However, let's keep in mind that Instruments may fail to include in the total you're seeing some virtual memory backing stores for graphics. There are plenty of WWDC videos discussing this topic in more detail.)
I know this answer is quite late, but for the sake of future developers with the same problem, I would advise you to check the images in your assets folder. If any of your images have dimensions larger than 1000 x 1000, you should scale them down. With that example, the image comprises 1,000,000 pixels. Given how images are loaded into memory (4 bytes per pixel), this means 4 MB of memory is used to load the image. Unbeknownst to me, I had an image of roughly 3600 x 4000 in my assets folder. Doing the maths, that was over 50 MB of memory usage!
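To make the arithmetic concrete, here is a minimal Swift sketch of the same estimate; the helper name is made up for illustration, and real decoded sizes can vary (wide-colour formats use more than 4 bytes per pixel):

    import UIKit

    // Rough estimate of the memory a decoded image occupies:
    // width * height * 4 bytes (RGBA, 8 bits per channel).
    func estimatedDecodedByteCount(of image: UIImage) -> Int {
        guard let cgImage = image.cgImage else { return 0 }
        return cgImage.width * cgImage.height * 4
    }

    // A 1000 x 1000 asset decodes to roughly 4 MB; a 3600 x 4000 asset
    // decodes to roughly 57 MB, regardless of its compressed file size.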

Why does iOS terminate background apps instead of handling lack of RAM differently?

All over this document, Apple mentions that iOS terminates apps under certain conditions, and the most popular reason seems to be freeing up some RAM. That causes issues for apps that do not implement state restoration: some of the content a user was working on and stepped away from for a moment can easily be erased. There's even a 16-page thread on the Apple forums where users complain about this.
Does anyone know why iOS actually terminates apps instead of moving the memory they occupy onto disk (swap)?
Does termination actually provide a considerable performance improvement compared to other means?
What you are describing is paging, or more accurately, page swapping. The iOS version of BSD Unix does not perform paging, for lots of reasons. Here are a few educated guesses:
It's too power-hungry for a mobile device.
Flash memory can't handle the churn involved in paging. Flash memory has a limited number of lifetime write cycles per storage location, and paging would chew through the life of the flash chip.
As the other poster pointed out, swapping to disk would use up available disk space, which is also limited. Not a problem when you have a 500 GB drive, but it is a big problem on a device with only 16 GB of storage and 1 GB of RAM.
You're not going to get an answer for this question here. Apple doesn't explain the inner workings of iOS, and anything else is going to be guesswork.
Here's my guesswork:
iOS is a heavily resource-constrained environment. Memory is limited, but so is disk space - a 16 GB iPhone has 1 GB of RAM, so "swapping to disk" isn't really something that can be freely applied. When do you stop? How do you know this isn't already being done, with only a limited amount of swap in place?
The primary goal of iOS has always been to prioritise the responsiveness of the foreground app. Anything other than warning, then closing, background apps would probably impact this too much. If there are 15 apps in the background, imagine the processor load of nicely swapping out the memory of each process.
Because RAM that has been swapped out to disk would be much slower. It's better to kill the program than to have it run poorly. I think that answers both questions.
Thanks everybody for the responses. I had to do some research to answer this question myself, though, since I was looking for a better understanding of what led to the "app termination" decision. I know there are some smart people working at Apple, but it always helps me to understand the reason something is built "this way" rather than just accepting it.
It boils down to these two questions:
Why does iOS terminate apps instead of freeing memory by paging out (swapping)?
Does termination provide a considerable performance win?
To understand that, I dug a bit into the history of the iPhone. There was a video accessible on iTunes, but unfortunately the link no longer works. Anyway, the video introduced the very first version of multitasking on the iPhone 3G (or was it the 3GS? I'm not sure which device first supported multitasking).
Nowadays iPhone hardware is quite advanced; it's actually more advanced than some desktops from 7-10 years ago, which had already incorporated swapping long before. But the first iPhone releases were not that advanced in terms of hardware: the iPhone 3G had a 620 MHz ARM CPU and 128 MB of RAM, and the first-generation iPod touch had a 400 MHz ARM CPU. And multitasking was supposed to run on all the devices of that time.
iOS has always prioritized the smoothness of its animations; looking at that hardware, it would have been challenging to keep the device snappy and responsive while also swapping out the memory of background applications, so it seems very logical and fair to terminate apps. A year or two later Apple provided APIs to facilitate implementing state restoration.
But current iPhones and iPads do have enough power to avoid terminating apps and simply write their memory to disk without any drop in animation or foreground app performance. Why not add that on the latest devices? I assume this is common in the software industry: new features are often prioritized higher than improvements to existing workflows. Apple has been releasing MobileMe, support for Retina displays, Auto Layout, iCloud - so I can understand that nice improvements to already existing features have been sacrificed.
The issue with apps that don't provide state restoration is easily solved by providing state restoration.
Just killing apps when the system runs out of memory is a huge performance gain. Consider that the system usually runs out of memory when you launch another app, and anything done instead of killing old apps would have to be done before launching the new app; that's about the most performance-critical point in time.
And for at least five years you have been told that when your app goes to the background, you should store just enough state to come back to that state if your app is restarted.
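As a minimal Swift sketch of that advice, assuming a hypothetical draftText property and plain UserDefaults rather than the full state restoration API, the idea is simply to persist the little that is needed when entering the background and to read it back on launch:

    import UIKit

    class AppDelegate: UIResponder, UIApplicationDelegate {
        var draftText = ""   // hypothetical in-progress user content

        func applicationDidEnterBackground(_ application: UIApplication) {
            // Persist only what is needed to recreate the current screen
            // if the process is terminated while in the background.
            UserDefaults.standard.set(draftText, forKey: "draftText")
        }

        func application(_ application: UIApplication,
                         didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
            // Restore the saved state on a fresh launch after termination.
            draftText = UserDefaults.standard.string(forKey: "draftText") ?? ""
            return true
        }
    }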

Core Data storage limit, cache limit, RAM limit on iPhone

I am making a somewhat complex app in which every detail is important. I have some questions:
1. How much storage do we have if we plan to save big files in Core Data or the cache?
2. What's the RAM limit on the iPhone? I'm actually searching for a table with detailed info on iOS devices, because I need to handle memory warnings and prevent app crashes.
3. Is it better to save images in the cache or in Core Data, assuming you have a lot of images, approx. 200-250?
Thanks
1) I am not aware of any storage limit. Obviously, you will never get 64 GB or more, since no device is larger ;-). My wife's Facebook app consumes >5 GB at the moment... I suppose they did something wrong. The only important point is to fail gracefully (show a dialog, clean up some space, ...) if the storage is full.
2) The RAM limit varies depending on the iPhone model and the currently running applications. Also, there are some iPods with less memory on the market. 30 MB should be pretty safe. The total physical memory of the device can be retrieved as described here, while the available RAM can be derived from that question (see the sketch after this list).
3) Maybe this is a good starting point. I would always write image data to the file system and just store the file name inside the database, as suggested here.
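For point 2, since the links above no longer resolve, here is a minimal Swift sketch of reading the device's total physical memory; note that no public API exposes a hard per-app limit, so the ~30 MB budget above remains a rule of thumb:

    import Foundation

    // Total physical RAM of the device, in bytes (UInt64).
    let totalRAM = ProcessInfo.processInfo.physicalMemory
    print("Physical memory: \(Double(totalRAM) / 1_048_576) MB")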
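For point 3, a minimal sketch of the "file on disk, name in the database" approach; the helper name is made up, and a real app would also handle errors and cache eviction:

    import Foundation

    // Write the image bytes to the Documents directory and return the
    // generated file name, which is what gets stored in the database.
    func saveImageData(_ data: Data) throws -> String {
        let fileName = UUID().uuidString + ".jpg"
        let documents = try FileManager.default.url(for: .documentDirectory,
                                                    in: .userDomainMask,
                                                    appropriateFor: nil,
                                                    create: true)
        try data.write(to: documents.appendingPathComponent(fileName))
        return fileName   // store this string in Core Data, not the image blob
    }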

Windows to embedded port: data and code memory size

I am in the process of porting a Windows 7 library to an embedded platform. In order to do so, my employer asks me for the amount of memory (and CPU, but let us concentrate on memory for now) that my system will need once ported, so he can size the board to my needs.
I had a look on the internet and there doesn't seem to be much information about this, hence my questions:
1. In order to get a rough idea of the memory footprint of the code in flash memory (code only, without memory for data), I read on the internet that I should sum the sizes of all the DLLs I use. It seems that each compiler and platform gives a different size for the code footprint, but overall the size of the code (without data) is often very close. Can you confirm?
2. In order to deal with the memory required by the data only (heap + stack, but no code), I had a look at Task Manager (and Process Explorer). It seems the overall amount of data I use is given by the 'peak working set'. I have a few questions about it though:
2.a. Does the 'working set' include the heap + stack memory, or does it correspond to the heap only?
2.b. Does the 'working set' include the size of the code as well (as I am on Windows 7, the code is also stored in RAM and not in flash as on embedded systems), or does it only correspond to the data?
2.c. It seems the 'peak working set' reflects the maximum amount of physical memory that was actually in RAM from the time the program started, but it does not reflect the size the program could take afterwards (if I happened to allocate memory at runtime - which would be bad ;) - the peak value would keep increasing). Can you confirm?
2.d. Hence, can you also confirm that if I do not allocate memory at runtime, the 'peak working set' should roughly be the maximum amount of RAM my embedded system will need, give or take some difference due to the differences in system technology?
Thanks,
Antoine.
Unless you intend to run your application on Windows Embedded, looking at the code and data usage on Windows is not going to be much of an indicator of anything useful!
1) DLLs are libraries - not all the code within them will be utilised by your code. Most embedded systems are statically linked, and the linker will link only the modules that are actually referenced by your code. So taking the sum of the DLL dependencies is likely to lead to a gross overestimation of the memory requirement.
2) Windows memory management is profligate with memory use, because it can be and because doing so generally improves the performance of typical desktop systems. For example, a thread stack in Windows is typically on the order of 2 MB - you may seldom use that much, but Windows gives it to you in any case because it can, and doing so errs on the side of safety. A thread stack in an embedded system will typically range from a few tens of bytes to a few tens of kilobytes - it depends on your application.
Windows Task Manager shows what Windows allocates to your process, which may not relate to what your process needs. Also, your application is using Windows services - all the memory used for kernel and device services will not show up as part of your process, but your embedded system may still need those.
If you do use your Windows prototype code to assess the embedded system requirements, then your best place to start is by getting the linker to generate a map file, which will give a detailed description of memory usage in terms of statically allocated data and code size.
Code size depends not only on the performance of the compiler, but also on the efficiency of the instruction set. Some architectures achieve higher code density than others. Windows application code size is never a good indicator of embedded code size because its execution environment is likely to be so much different. For example, a pre-emptive multitasking RTOS kernel on a 32-bit ARM can be implemented in less than 10 KB of code, a file system in perhaps another 10, a network stack in anything from 10 to 30 KB, and USB in another 10. As you can see, this is a different world from desktop code.
Data memory usage is perhaps more easily determined, but you do that through analysis of your application rather than by observing what Windows does. There is the data your application instantiates directly, and then there is data instantiated by the libraries and device drivers you might call - in Windows the latter is likely to be relatively large and out of your control. Typical embedded-system libraries for things such as network stacks, USB, file systems etc. are far smaller and far more deterministic in both performance and size.
Your better bet is to describe your application in terms of its general purpose, performance requirements, real-time constraints, and its hardware requirements (display, networking, I/O, mass storage etc.), and then look at comparable solutions or at the libraries you will need to implement your solution; most embedded systems are "bare board" and do not have the services you find in Windows unless you write them or use third-party solutions - Windows is seldom a comparable solution to an embedded system.
If it is just a library rather than an application, then build it for a likely target using a Windows-hosted GCC cross-compiler and see how big it ends up. You don't need hardware for that, or even to spend any money.

Windows Phone diagnostics logging

Is it possible to get memory and CPU usage in a Windows Phone (SDK 7.1) application?
What would be the best way to retrieve that data from the application for further analysis?
Try the Windows Phone Performance Analysis tool; it allows you to profile CPU usage and memory usage, and to record garbage collection.
If you're looking to analyze performance as well, take a look at the frame rate counters (these are enabled by default when you are debugging your app).
You'll want to keep your FPS high and your screen fill rate low (anything above 3 or so is cause for deep concern, since it means your screen is being redrawn 3x for every frame; not the best thing for performance).
Jeff Wilcox has a good summary of this too, which is worth reading.
