I'm working on a calculation-intensive app that happens to listen to sensor data (acceleration, but also angular velocity). After a couple of filters, these vectors are integrated to track displacement.
I have noticed that the timestamps associated with CMDeviceMotion and CMGyroData arrive late, because my CMMotionManager's handlers aren't fired at 100 Hz as specified by its accelerometerUpdateInterval and gyroUpdateInterval. The rate starts around 60 Hz and fluctuates up and down. This significantly affects the integrations.
The same code in a stand-alone app does 100 Hz like a charm.
So it looks like computation peaks from other modules of the big app make the sensor updates lag. This surprises me, since the sensor manager runs on a thread of its own and I understood from the docs that sensor events were triggered by the hardware.
My question is: when the timestamp is unreliable as described, can the data still be used? Can it be extrapolated using another clock?
And I'm confused about why big, asynchronous computation on other threads can lag the accelerometer updates.
Thanks,
Antho
Bad timestamps are just as bad as inaccurate data since they have the same effect on the integration.
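One mitigation, assuming the timestamps themselves are trustworthy even when delivery is late: integrate with the delta of CMDeviceMotion.timestamp rather than the nominal update interval. A minimal Swift sketch (the 1-D `velocity` variable is illustrative; real code would integrate a 3-D vector):

```swift
import CoreMotion

// Minimal sketch: integrate with the sample's own timestamp delta rather than
// an assumed fixed 10 ms step, so late handler delivery doesn't skew the result.
let manager = CMMotionManager()
var lastTimestamp: TimeInterval?
var velocity = 0.0

manager.deviceMotionUpdateInterval = 1.0 / 100.0
manager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let motion = motion else { return }
    defer { lastTimestamp = motion.timestamp }
    guard let last = lastTimestamp else { return }

    // motion.timestamp marks when the sample was taken (seconds since boot),
    // not when this closure runs, so dt stays meaningful under load.
    let dt = motion.timestamp - last
    velocity += motion.userAcceleration.x * 9.81 * dt  // userAcceleration is in g
}
```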
About 50 Hz is enough to track orientation. I'm wondering how you track displacement, though: double integration of noisy accelerometer data drifts so quickly that it is practically impossible with current sensors.
Looking for a real-time clock for an IoT project. I need millisecond resolution for my app protocol, and losing it is critical. So I wonder: is there an autonomous real-time clock (with a battery) that will lose less than 10 ms per month and work for a year?
The drift parameters you're asking for here -- 10 ms / 30 days -- imply <4 ppb accuracy. This will be a very difficult target to hit. A typical quartz timing crystal of the type used by most RTCs will drift by 50 - 100+ ppm (50,000 - 100,000 ppb) just based on temperature fluctuations.
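For reference, a quick sanity check of that figure (plain arithmetic, shown here in Swift):

```swift
// 10 ms of allowed drift over ~30 days, expressed as a fractional frequency error:
let budgetSeconds = 0.010               // 10 ms
let monthSeconds  = 30.0 * 24 * 3600    // 2,592,000 s
let fractional    = budgetSeconds / monthSeconds
print(fractional * 1e9)                 // ≈ 3.9 ppb
```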
Most of the higher-quality timing options (TCXO, OCXO, etc.) will not be usable within your power budget -- a typical OCXO may require as much as 1 W of (continuous) power to run its heater. About the only viable option I can think of would be a GPS timing receiver, which can synchronize a low-quality local oscillator to GPS time, which is highly accurate.
Ultimately, though, I suspect your best option will be to modify your protocol to loosen or remove these timing requirements.
Sync it with a precise clock source, GPS for example. You can also use a tiny atomic clock (https://physicsworld.com/a/atomic-clock-is-smallest-on-the-market/) or, in Europe, a DCF77 receiver.
I am fetching the user's location using CLLocationManager and calling a web service whenever the location updates in the background, but it causes the iPhone to heat up and the battery to drain. Does anyone have a solution for this?
Getting your position drains power; you can do a few things to mitigate that:
use significant location changes (good if you do not need precise fixes at regular intervals)
limit the accuracy (this can let you avoid using the GPS, which is a real battery drainer)
I do not understand the heat: yes, GPS makes the device hotter, but I've never experienced a restart due to heat.
Are you sure you are not also running into expensive computational tasks? You can check this with the profiler in recent versions of Xcode.
You can also set the distance filter; this keeps acquiring the position (so it will not reduce the battery drain) but calls the delegate callback only when the distance threshold is reached.
iOS 6 also introduced the concept of deferring location updates in the background, which is probably the best solution, and it also helps manage the network traffic leaving your device.
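A minimal Swift sketch of those suggestions (delegate wiring and authorization requests omitted; the 100 m values are illustrative):

```swift
import CoreLocation

let manager = CLLocationManager()

// Cheapest option: cell-tower based significant-change monitoring.
manager.startMonitoringSignificantLocationChanges()

// Or, if you need continuous updates, cap their cost:
manager.desiredAccuracy = kCLLocationAccuracyHundredMeters  // may avoid full GPS
manager.distanceFilter = 100  // delegate fires only after moving ~100 m
manager.startUpdatingLocation()
```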
In fact, you only have the choice between low location accuracy (kilometer range) and high (3-6 m).
In the first case the GPS chip is disabled, in the second it is enabled.
If it is enabled and you need locations that precise, there is nothing you can do.
GPS needs power, and the battery lasts for a bit more than 8 hours of full-precision location updates (measured on my iPhone 4).
Warming up is not a problem as such; in fact, I cannot remember my phone ever warming up because of GPS (I will check that soon). It certainly never warms up so much that it restarts.
So your case is a bit strange; it could also be a defect of your device.
The warming up could also be caused by communicating with the server very often.
You can check that yourself: just download a decent GPS application and let it record a track.
If it gets hot too, your device might have a problem. (Or you are living in an extremely hot environment and the sun shines strongly on your phone.)
Also test with your network code disabled.
I have an app where I need accurate location updates every K minutes -- even while in the background. Significant location-change updates are not sufficient for my needs, hence I need to use CLLocationManager's startUpdatingLocation method and keep it running forever.
I want to use as little power as possible while still getting my periodic location updates. It seems that the two options for saving power are (temporarily) setting the desiredAccuracy property of the CLLocationManager to the least-accurate setting (e.g. kCLLocationAccuracyThreeKilometers), or deferring location updates via the allowDeferredLocationUpdates* methods. However, these two techniques are mutually incompatible, since deferred updates require the highest accuracy setting.
Does anyone know which approach saves more power, or whether there is another way to minimize power usage while still getting periodic updates (even in the background)?
You should be doing both: deferring updates and reducing desiredAccuracy.
And every K minutes, check the current CLLocation value; if its accuracy is acceptable, use it. If not, drop the desiredAccuracy to 30 m (or kCLLocationAccuracyBest, or whatever maximum error is acceptable) for up to 30 seconds. This will turn on the GPS chip for 30 seconds; if you get a location with acceptable accuracy, use it and immediately put the desiredAccuracy back to 3000 m (kCLLocationAccuracyThreeKilometers) until the next K-minute period starts. If you don't get acceptable accuracy during those 30 seconds, too bad: use the best CLLocation you got during that 30-second window, go back to 3000 m accuracy, and try again in K minutes.
Be sure to read up on how to configure deferred updates. They are not easy to get working, but using them will let you wake the CPU only once during your 30-second GPS window instead of 30 times, saving a lot of battery there too.
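A hypothetical Swift sketch of that duty cycle; the class name, the `tick()`/`use()` helpers, the 30 m threshold, and the 30 s window are all illustrative, not from any Apple API:

```swift
import CoreLocation

final class DutyCycledLocator: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private let acceptableAccuracy: CLLocationAccuracy = 30
    private var best: CLLocation?
    private var windowActive = false

    override init() {
        super.init()
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyThreeKilometers  // low power
        manager.startUpdatingLocation()  // keep running forever, as in the question
    }

    // Call this every K minutes from a timer.
    func tick() {
        if let loc = manager.location, loc.horizontalAccuracy <= acceptableAccuracy {
            use(loc)  // the cached fix is already good enough
            return
        }
        best = nil
        windowActive = true
        manager.desiredAccuracy = kCLLocationAccuracyBest  // turns the GPS on
        DispatchQueue.main.asyncAfter(deadline: .now() + 30) { [weak self] in
            guard let self = self, self.windowActive else { return }
            self.windowActive = false
            self.manager.desiredAccuracy = kCLLocationAccuracyThreeKilometers  // GPS off
            if let loc = self.best { self.use(loc) }  // best effort after 30 s
        }
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        guard windowActive, let loc = locations.last else { return }
        if best == nil || loc.horizontalAccuracy < best!.horizontalAccuracy {
            best = loc
        }
        if loc.horizontalAccuracy <= acceptableAccuracy {
            windowActive = false
            manager.desiredAccuracy = kCLLocationAccuracyThreeKilometers  // back to low power
            use(loc)
        }
    }

    private func use(_ location: CLLocation) {
        print(location)  // hand the fix to the rest of the app
    }
}
```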
Deferred updates require iPhone 5 or later and iOS 6 or later. You can use deferredLocationUpdatesAvailable to determine if a device supports it.
Deferred updates use vastly less power, but they require hardware support, so they aren't always available. They work by caching the location data in hardware and then passing it to your app all at once (the power saving comes from not waking the app frequently). They also offer time-based configuration.
Monitoring for significant changes (startMonitoringSignificantLocationChanges) again uses less power by not using the GPS (using cell towers instead), so again it requires specific hardware support.
Simply setting the desired accuracy to low doesn't necessarily use either of the above features, so you should check the device capability at runtime and use whichever features are available. AFAIK there are no published statistics on which of the hardware-supported options uses less power.
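The runtime capability checks mentioned above look roughly like this (APIs as they existed around iOS 6; the deferred-update API was deprecated in later iOS versions, so treat this as a period sketch, with the 500 m / 60 s values as illustrations):

```swift
import CoreLocation

let manager = CLLocationManager()
manager.desiredAccuracy = kCLLocationAccuracyBest  // deferral requires best accuracy
manager.startUpdatingLocation()

if CLLocationManager.deferredLocationUpdatesAvailable() {
    // Batch fixes in hardware; deliver after 500 m traveled or 60 s, whichever first.
    manager.allowDeferredLocationUpdates(untilTraveled: 500, timeout: 60)
}

if CLLocationManager.significantLocationChangeMonitoringAvailable() {
    manager.startMonitoringSignificantLocationChanges()  // cell-tower based
}
```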
I'd like to load a small audio clip like a beep into memory and schedule playback after x seconds with very low jitter. My application ideally needs less than ±1 ms, but ±5 ms could still be useful. The time is synchronized to a remote application without a microphone. My question is what kind of jitter I can expect from the audio APIs, and whether they are all equal in this regard.
I'm not familiar with the audio APIs, but from the latency discussions I've seen the figure 5.8 ms for RemoteIO audio units. Does this mean ±3 ms would be the best precision possible?
You would need to schedule this process as real-time to have any guarantee of low delay; otherwise you can see jitter on the order of seconds, because the operating system may decide to run some background job.
Once you have it running as real-time, you might achieve lower delay.
Please check with Apple whether you can make a process real-time (via scheduling options). You might need extra permissions and kernel-level support in your app to do it properly, so that you can get a guaranteed 1 ms delay for an audio app.
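One way to sidestep most timer jitter, assuming an AVFoundation route is acceptable: preload the clip and schedule it on the audio device's own clock with AVAudioPlayer's play(atTime:) rather than firing play() from a run-loop or GCD timer. A sketch (the "beep.wav" asset and 2 s delay are illustrative):

```swift
import AVFoundation

// Preload a short clip and schedule it against the audio hardware clock.
let url = Bundle.main.url(forResource: "beep", withExtension: "wav")!  // hypothetical asset
let player = try! AVAudioPlayer(contentsOf: url)  // error handling omitted in this sketch
player.prepareToPlay()  // pre-buffers so the start isn't delayed by I/O

// deviceCurrentTime is the audio output device's clock; play(atTime:) schedules
// playback relative to it instead of relying on run-loop timing.
let delay: TimeInterval = 2.0
player.play(atTime: player.deviceCurrentTime + delay)
```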
I'm looking to do some high-precision Core Motion reading (>= 100 Hz if possible) and motion analysis on the iPhone 4+ which will run continuously for the duration of the main part of the app. It's imperative that the motion response and the signals that the analysis code sends out are as free from lag as possible.
My original plan was to launch a dedicated NSThread based on the code in the metronome project as referenced here: Accurate timing in iOS, along with a protocol for motion analysers to link in and use the thread. I'm wondering whether GCD or NSOperation queues might be better?
My impression after copious reading is that they are designed to handle a quantity of discrete, one-off operations, rather than a small number of operations performed over and over on a regular interval, and that using them every millisecond or so might inadvertently create a lot of thread creation/destruction overhead. Does anyone have any experience here?
I'm also wondering about the performance implications of an endless while loop in a thread (such as in the code in the above link). Does anyone know more about how things work under the hood with threads? I know that the iPhone 4 (and earlier) have single-core processors and use some sort of intelligent multitasking (pre-emptive?) which switches threads based on various timing and I/O demands to create the effect of parallelism...
If you have a thread with a simple while loop running endlessly but only doing additional work every millisecond or so, does the processor's scheduling algorithm consider the endless loop a high demand on resources, hogging them from other threads, or is it smart enough to allocate resources more heavily toward other threads in the downtime between bursts of work?
Thanks in advance for the help and expertise...
IMO the bottleneck is rather the sensors themselves. The actual update frequency is most often not equal to what you have specified. See "update frequency set for deviceMotionUpdateInterval it's the actual frequency?" and "Actual frequency of device motion updates lower than expected, but scales up with setting".
Some time ago I made a couple of measurements using Core Motion and the raw sensor data as well. I needed a high update rate too, because I was doing Simpson integration and thus wanted to minimise errors. It turned out that the real frequency is always lower and that there is a limit at about 80 Hz. That was on an iPhone 4 running iOS 4. But as long as you don't need this for scientific purposes, in most cases 60-70 Hz should fit your needs anyway.
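If you want to verify the actual rate on your own device, here is a small Swift sketch along the lines of those measurements (the 100-sample reporting window is arbitrary):

```swift
import CoreMotion

let manager = CMMotionManager()
manager.deviceMotionUpdateInterval = 1.0 / 100.0  // request 100 Hz

var count = 0
var start: TimeInterval?
manager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let motion = motion else { return }
    if start == nil { start = motion.timestamp }
    count += 1
    if count % 100 == 0 {
        // count - 1 intervals have elapsed since the first sample.
        let hz = Double(count - 1) / (motion.timestamp - start!)
        print("actual rate ≈ \(hz) Hz")  // typically lands below the requested 100 Hz
    }
}
```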