I'm trying to optimize battery usage when networking. If I hold all my HTTP requests in an array, for example, and then send them all at once (emptying the array in a for loop), will the antenna turn on once to perform the 10 requests, or will it turn on and off n times? (I'm using NSURLRequest.)
Is there a way to batch-send requests at once, or is this already effectively "batch" sending?
The documentation says nothing about how the device's hardware handles multiple NSURLRequests. The handling may well differ from one model or OS version to another (e.g. iPhone 4 vs. iPhone 5).
You will have to research it on your own with Instruments, using Energy Diagnostics. This is rather simple, though; here is a short plan for how to do it:
Connect the device to your development system.
Launch Xcode or Instruments.
On the device, choose Settings > Developer and turn on power logging.
Disconnect the device and perform the desired tests.
Reconnect the device.
In Instruments, open the Energy Diagnostics template.
Choose File > Import Energy Diagnostics from Device.
Moreover, have a look at "Analyzing CPU Usage in Your App".
The energy optimisation performed by the OS is not publicly documented.
The exact handling depends on the interface: some interfaces have very low set-up/tear-down costs (e.g. Bluetooth LE), while others are quite cheap to run but take time to set up and tear down (e.g. 2G).
Generally you have to take an approach that gives the OS the best options possible, and then let it do what it can.
We can say a few things. It is unlikely that the connection is being powered up and down for individual packets, so the connection will be powered up when there is data to send and kept up as long as you're trying to receive. The OS may be able to run at lower power when it is just waiting, as it doesn't need to ACK packets, but it won't be able to power off completely.
Bottom line: if you send your requests sequentially, I believe the OS is unlikely to cycle power in between requests; if you send them in parallel, it almost certainly won't.
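To make the "empty the array in one go" idea concrete, here is a minimal Swift sketch using URLSession (the question predates URLSession, but the same pattern applies with NSURLConnection); the URLs are placeholders:

import Foundation

// Fire all queued requests back to back so that, whatever the OS decides,
// the radio only needs to stay powered for a single burst of activity.
let queuedRequests: [URLRequest] = [
    URLRequest(url: URL(string: "https://example.com/a")!),
    URLRequest(url: URL(string: "https://example.com/b")!),
]

let session = URLSession(configuration: .default)
let group = DispatchGroup()

for request in queuedRequests {
    group.enter()
    session.dataTask(with: request) { data, response, error in
        // Handle each response as needed.
        group.leave()
    }.resume()
}

group.notify(queue: .main) {
    // All queued transfers have completed in one window of network activity.
}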
Is this a worthwhile optimisation? That depends on how much of it you're doing.
Possibly of interest: background downloads, whereby the OS schedules your fetch for when it knows it is going to do some other network activity anyway.
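For that route, a hedged sketch of a discretionary background session in Swift (the identifier and URL are placeholders, and a real app would supply a session delegate to receive the result):

import Foundation

// A "discretionary" background session lets the system pick an
// energy-friendly moment for the transfer, e.g. piggybacking on other
// network activity or waiting for Wi-Fi and power.
let config = URLSessionConfiguration.background(withIdentifier: "com.example.batched-downloads")
config.isDiscretionary = true
config.sessionSendsLaunchEvents = true  // relaunch the app in the background when the transfer finishes

let backgroundSession = URLSession(configuration: config)
if let url = URL(string: "https://example.com/resource") {
    backgroundSession.downloadTask(with: url).resume()
}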
I would like to synchronize (within .05 seconds) an event in the future on more than one iOS device, whether or not those devices have network access at that future time.
To do this, I believe I'll need to sync to a common clock across these devices, and the best way would be to get a precise time from an NTP server (or servers). I've looked at iOS-NTP, but the description says that it determines time with 1-second accuracy, and that doesn't quite do it for me.
One alternative is to get the (accurate?) timestamp from a GPS clock, but I'm not sure that the devices will always have the ability to get a GPS signal.
Any other suggestions?
iOS doesn't give you low-level access to the GPS hardware, so you can't use that method.
If the devices are close to each other, you can establish a peer-to-peer wireless connection using GameKit. That should give you around 35 ms latency over Bluetooth, and the latency should be consistent, so you could measure it a few times to get the clocks in sync.
If they're not close together, then I would set up a server and use that as the "real" time. With several calls you should be able to measure the latency between the device and the server, then calculate the clock offset with reasonable accuracy.
If possible use UDP instead of TCP, that way network congestion will drop packets instead of delaying them.
You'll need to keep on re-calculating the offset though, since iOS devices change their clock frequently (I think it's part of the 3G/LTE network protocol? Jumping from one cell tower to another might update the clock).
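A hedged sketch of the offset calculation described above, in Swift (fetchServerTime is a placeholder for whatever call returns the server's clock, not a real API):

import Foundation

// Estimate how far the local clock is from the server's clock using one
// round trip; repeat several times and keep the sample with the smallest
// round trip for the best estimate.
func estimateClockOffset(fetchServerTime: () -> TimeInterval) -> TimeInterval {
    let t0 = Date().timeIntervalSince1970   // local time when the request leaves
    let serverTime = fetchServerTime()      // the server's clock reading
    let t1 = Date().timeIntervalSince1970   // local time when the reply arrives
    let roundTrip = t1 - t0
    // Assume the reply took half the round trip.
    return serverTime - (t0 + roundTrip / 2)
}

// The shared event can then be scheduled at (serverEventTime - offset) in local time.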
I would like to know if it's still necessary to throttle bandwidth when transferring multiple large files (PDFs) over the cellular network.
There is no information about this in the guidelines, but some old threads point out that this used to be necessary.
http://forums.macrumors.com/archive/index.php/t-1130677.html
iPhone app rejected for "transferring excessive volumes of data"
Thanks in advance.
I believe this only applies to streaming data, such as MP3 streams.
If you have a single large file that you need to download, throttling the bandwidth is an especially bad idea. The opposite would be recommended: download as fast as possible in order to save battery and increase the reliability of the connection (the shorter the duration, the lower the probability that it gets interrupted).
You should, however, check whether the connection uses the cellular network, notify the user, and ask whether the app should defer the download until Wi-Fi is available, possibly performing it automatically in the background without launching the app. This approach is feasible with NSURLSession.
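As a rough illustration of that deferral, here is a hedged Swift sketch of a background session that refuses cellular, so the transfer simply waits until Wi-Fi is available (identifier and URL are placeholders; a real app would set a delegate to receive the downloaded file):

import Foundation

// Background session restricted to Wi-Fi: the download is handed to the
// system and will not run over the cellular network.
let config = URLSessionConfiguration.background(withIdentifier: "com.example.pdf-transfers")
config.allowsCellularAccess = false

let session = URLSession(configuration: config)
if let url = URL(string: "https://example.com/manual.pdf") {
    session.downloadTask(with: url).resume()
}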
We have a critical need to lower the latency of our UDP listener on iOS.
We're implementing an alternative to RTP-MIDI that runs on iOS but relies on a simple UDP server to receive MIDI data. The problem we're having is that RTP-MIDI is able to receive and process messages around 20 ms faster than our simple UDP server on iOS.
We wrote 3 different code bases in order to eliminate the possibility that something else in the code was causing the unacceptable delays. In the end we concluded that there is a lag between the time when the iPad actually receives a packet and when that packet is presented to our application for reading.
We measured this with a scope. We put a pulse on one of the probes from the sending device every time it sent a Note-On command, and attached another probe to the audio output of the iPad. We triggered on the pulse and measured the amount of time it took to hear the audio. The result was a reliable average of 45 ms, with a minimum of 38 and a maximum around 53 in rare situations.
We did the exact same test with RTP-MIDI (a far more verbose protocol) and it was 20 ms faster. The best hunch I have is that, being part of CoreMIDI, RTP-MIDI could be getting higher priority than our app, but simply acknowledging this doesn't help us; we really need to figure out how to fix it. We want our app to be just as fast as, if not faster than, RTP-MIDI, and I think this should be theoretically possible since our protocol will not be as messy. We've declared RTP-MIDI to be unacceptable for our application due to the poor design of its journal system.
The 3 code bases that were tested were:
An Objective-C implementation derived from the PGMidi example, which would forward data received over UDP verbatim via virtual MIDI ports to GarageBand etc.
An Objective-C code base written by an experienced audio-engine developer, with a built-in low-latency sine-wave generator for output.
A Unity3D application with a Mono-based UDP listener and built-in sound-font synthesizer plugins.
All 3 implementations showed identical measurements on the scope test.
Any insights on how we can get our messages faster would be greatly appreciated.
NEWER INFORMATION in the search for answers:
I was digging around for answers and found this question, which seems to suggest that iOS might respond more quickly if the communication were TCP instead of UDP. This would take some effort to test on our part because our embedded system supports only UDP, not TCP. I am curious whether I could hold open a TCP connection for the sole purpose of keeping the Wi-Fi responsive. Crazy idea? I dunno. Has anyone tried this? I need this to be as real-time as possible.
Answering my own question here:
In order to keep the UDP latency down, it turns out all I had to do was make sure the Wi-Fi doesn't go silent for more than about 150 ms. The exact timing requirements are unknown to me at this time; the initial tests I was running used packets 500 ms apart, and that was too long. When I increased the packet rate to one every 150 ms, the UDP latency was on par with RTP-MIDI, giving us a total lag time of around 18 ms on average (vs. 45 ms) using the same measurement technique I described in the original question.
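For anyone wanting to reproduce this, here is a hedged Swift sketch of such a keep-alive using the Network framework (which post-dates the original work; plain BSD sockets do the same job). The host and port are placeholders:

import Foundation
import Network

// Send a tiny UDP datagram every 150 ms so the Wi-Fi radio never drops
// into its power-saving (high-latency) state.
let connection = NWConnection(host: "192.168.1.50", port: 9000, using: .udp)
connection.start(queue: .global())

let keepAlive = Timer.scheduledTimer(withTimeInterval: 0.15, repeats: true) { _ in
    connection.send(content: Data([0x00]), completion: .contentProcessed { _ in })
}
// Invalidate keepAlive (and cancel the connection) once low latency is no longer needed.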
I have a BeagleBone running Ubuntu. We want to continuously sample from 3 on-board ADCs at 100 kS/s, and on every window of samples we will run a cross-correlation DSP algorithm. Once we find a correlation value above a threshold, we will send the value to a PC.
My concern is the process scheduling in Ubuntu. If our process gets swapped out and an ADC sample becomes available during this time, the process will miss the sample. We need to ensure that our process captures every sample and saves it in memory.
With this in mind, is there a way to trigger interrupts on the BeagleBone so that if an ADC sample is ready, the sample will be saved in our program's memory even if the program does not have the processor at the time?
Thanks!
You might be able to trigger the EDMA or use the PRUSS. It's probably best to ask on beagleboard@googlegroups.com. There isn't a DSP per se on the BeagleBone.
This is not exactly an answer to your question, but hopefully it explains how the process works. Since you didn't mention what hardware you are using for A/D conversion, maybe this is the best that can be done:
With audio hardware, which faces the same problem, the solution comes from the hardware and the drivers working together: whenever the hardware has filled enough of a buffer, it signals the driver (via an interrupt or some similar mechanism). In some cases it's also possible for the driver to poll the hardware, but that's a less efficient solution, and I'm not sure anyone does it that way anymore (maybe on cheaper hardware?). From there, the driver may call right into the end-user process, or it may simply mark the relevant end-user process as "runnable". Either way, control needs to be transferred to the end-user process.
For that to happen, the end-user process must be running at a higher priority than anything else occupying the CPUs at that moment. To guarantee that your process is always first in the queue, run it at a high priority; with the appropriate permissions, you can even use very high (real-time) priorities.
The time it takes for the top-priority process to go from runnable to running is sometimes called the "latency" of the OS, though I am sure there's a more specific technical term. The latency of Linux is on the order of 1 ms, but since it's not a "hard" real-time OS, this is not a guarantee. If that is too long for your chunks of data, you may have to buffer some of them in your driver.
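To make that last point concrete, a rough sizing sketch (written in Swift for consistency with the other examples here; the 5 ms worst-case latency is an assumption, not a measured value):

import Foundation

// How many samples the hardware/driver buffer must hold to ride out a
// scheduling hiccup: three channels at 100 kS/s each.
let sampleRatePerChannel = 100_000.0   // samples per second
let channels = 3.0
let worstCaseLatency = 0.005           // assume 5 ms of scheduling latency on non-RT Linux
let minBufferSamples = sampleRatePerChannel * channels * worstCaseLatency
print(minBufferSamples)                // ~1500 samples of headroom needed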
What coding tricks, compilation flags, and software-architecture considerations can be applied to keep power consumption low in an AIR for iOS application (or to reduce power consumption in an existing application that burns too much battery)?
One of the biggest things you can do is adjust the frame rate based on the app state.
You can do this by adding handlers inside your App.mxml:
<s:ViewNavigatorApplication xmlns:fx="http://ns.adobe.com/mxml/2009"
                            xmlns:s="library://ns.adobe.com/flex/spark"
                            activate="onActivate()" deactivate="onDeactivate()" />
Inside your onActivate and onDeactivate handlers:
// onActivate: the app is in the foreground, run at the full frame rate
FlexGlobals.topLevelApplication.frameRate = 24;

// onDeactivate: the app is in the background, throttle the frame rate to save power
FlexGlobals.topLevelApplication.frameRate = 2;
You can also play around with this number depending on what your app is currently doing. If you're just displaying text, try lowering your fps. This should give you the most bang for your buck in power savings.
Generally, high power consumption can be the result of:
intensive network usage
no sleep mode for the display while the app is idle
unneeded location-services usage
continuously high CPU usage
Regarding (Flex/Flash) AIR, I would suggest the following:
First, use the Flex profiler plus the task manager to monitor CPU and memory usage, and try to reduce both as much as possible. Once these are low on Windows/Mac, they should (theoretically) go lower on mobile devices as well.
The next step would be to use a network monitor and reduce the number and size of the network (web-service) calls. Try to identify unneeded network activity and eliminate it.
Try to detect any idle state of the app (possible in Flex, not sure about Flash) and put the whole app into an idle mode (if you have, say, a fireworks animation running, just call stop()).
Also, I am not sure about this one, but using Stage3D (available for mobile since AIR 3.2) for complex animations will certainly reduce CPU usage in favour of the GPU. Since hardware acceleration is used, execution time may drop, and therefore power consumption may be lower.
If I am wrong about something, please comment/downvote (as you like), but this is my personal impression.
Update 1
As prompted in the comments, there is not a 100% correlation between CPU usage on a desktop and on a mobile device, but theoretically, at a low level, we should see at least the same CPU-usage trend.
My tips:
As a first step, profile your app with the profiler in Flash Builder.
If you have a Mac, profile your app with Instruments from Xcode.
And important:
The behavior of Simulator runs, IPA-Interpreter packages, and IPA-Test builds differs:
Simulator - pro forma optimizations only
IPA-Interpreter - gives a feeling for the performance
IPA-Test - "real" performance behavior
Finally, test the App Store build; it is the fastest package mode in terms of performance.
Additionally, we saw that all of these modes can vary.