iOS - start receiving gyroscope updates

I use CMMotionManager to access the gyroscope data on iOS. I see that there are two methods:
startGyroUpdates
startGyroUpdatesToQueue:withHandler:
to start receiving the gyro updates. How do these two methods differ? In which situations should each of them be called? Is there any advantage of one over the other?
Any help is appreciated.

A queue is used to guarantee that all events are processed, even when the update interval you set (gyroUpdateInterval here, or deviceMotionUpdateInterval for device motion) produces events at a faster rate than you can process in real time. If you don't mind missing events, it doesn't matter which of the two you use; just discard the ones you can't handle.
The relevant Apple doc is the Core Motion section of the Event Handling Guide:
For each of the data-motion types described above, the CMMotionManager class offers two approaches for obtaining motion data, a push approach and a pull approach:
Push. An application requests an update interval and implements a block (of a specific type) for handling the motion data; it then starts updates for that type of motion data, passing into Core Motion an operation queue as well as the block. Core Motion delivers each update to the block, which executes as a task in the operation queue.
Pull. An application starts updates of a type of motion data and periodically samples the most recent measurement of motion data.
The pull approach is the recommended approach for most applications, especially games; it is generally more efficient and requires less code. The push approach is appropriate for data-collection applications and similar applications that cannot miss a sample measurement.
It's not in your question, but I wonder whether you want the raw x, y, z rotation rates or the more useful pitch, roll, yaw. For the latter, use startDeviceMotionUpdatesToQueue:withHandler: instead of startGyroUpdatesToQueue:withHandler:.
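In Swift terms, a rough sketch of the two styles might look like the following (the 60 Hz interval and the gameLoopTick() function are placeholders, and a real app would pick only one of the two approaches on a single shared CMMotionManager):

```swift
import CoreMotion

let motionManager = CMMotionManager()
motionManager.gyroUpdateInterval = 1.0 / 60.0   // request ~60 Hz

// Pull: start updates, then sample the latest value whenever it suits you
// (e.g. once per frame). Samples you never read are simply dropped.
motionManager.startGyroUpdates()
func gameLoopTick() {
    if let data = motionManager.gyroData {
        print("rotation rate:", data.rotationRate.x, data.rotationRate.y, data.rotationRate.z)
    }
}

// Push: every sample is delivered to your handler on the given operation queue,
// so nothing is missed as long as the handler keeps up.
let queue = OperationQueue()
motionManager.startGyroUpdates(to: queue) { data, error in
    guard let sample = data else { return }
    print("got sample:", sample.rotationRate.x, sample.rotationRate.y, sample.rotationRate.z)
}
```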

Edit: See Tommy's comment on this answer. My assumption of the delegate pattern was wrong.
I'm not particularly familiar with CMMotionManager, but from the naming here's my guess:
startGyroUpdates
Delivers gyroscope updates by invoking delegate methods on the main thread.
startGyroUpdatesToQueue:withHandler:
Delivers gyroscope updates by invoking the handler block on the given queue.
The first would be the pre-block style using delegates, and the second would be the blockified version based on GCD.

Related

Do Dart streams come with extra overhead?

I have a general efficiency question about Dart streams.
I have a project that makes some use of them, but it has been proposed that we convert nearly everything (functions and data) to Dart streams in order to achieve a fully reactive architecture.
I don't know how streams really work under the hood, so I don't really know if this kind of design comes with any kind of memory or computational overhead.
Thanks for your attention to this question.
There is an overhead. It's not necessarily big, but it's there.
Streams have a well-defined asynchronous behavior, and it's documented how they react to listeners being added, paused or cancelled, even if that happens while an event is being delivered (because, most often, that is when it happens).
Streams are asynchronous, which means there is a delay between adding an event to the stream (through a StreamController), and that event being received by the listener. That delay makes it necessary to store (buffer) the event, schedule a microtask, and then unbuffer the event and deliver it in that later microtask. Scheduling a microtask costs. There might be zones involved, which can cost extra.
On top of that, the stream needs to be able to react to pause and cancel requests in a timely manner, which means that each event delivery is also flanked by extra checks of whether the listener has paused or cancelled. It's not a lot of overhead, but it's there.
For single-subscription streams, that's about it.
For broadcast streams, which can have multiple listeners, there can be a little extra overhead to handle new listeners being added while delivering the event. Again, not a lot, but it's there. The state-space for a stream is actually quite complicated.
(You can create a "synchronous StreamController" which delivers events "immediately", but most of the time you shouldn't. Those are not for avoiding asynchrony; they are for avoiding adding extra asynchronous delays when propagating already synchronous events, and should be used very carefully to avoid breaking code that assumes it won't get events in the middle of something else. A properly implemented reactive framework will use such controllers in its implementation, but that will not get rid of the inherent delay of delivering the original asynchronous event.)
Now, performance is not absolute. Using streams everywhere might make your life easier, and if the performance is good enough for your application (it's not dominating the actual computations), then the increased development speed and maintainability might pay for itself. You should measure (and have repeatable benchmarks to measure) before making a decision about an implementation strategy based on performance alone.

In Dataflow with PubsubIO is there any possibility of late data in a global window?

I am about to start developing programs with Google Cloud Pub/Sub, and just wanted to confirm this once.
From the Beam documentation, data loss can only occur if data is declared late by Pub/Sub. Is it safe to assume that the data will always be delivered without any message drops (late data) when using a global window?
From the concepts of watermark and lateness, I have come to the conclusion that these metrics are critical when custom windowing is applied over the data being received with event-based triggers.
When you're working with streaming data, choosing a global window basically means that you are going to completely ignore event time. Instead, you will be taking snapshots of your data in processing time (that is, as it arrives) using triggers. Therefore, you can no longer define data as "late" (nor "early" or "on time", for that matter).
You should choose this approach if you are not interested in the time at which these events actually happened but, instead, you just want to group them according to the order in which they were observed. I would suggest that you go through this great article on streaming data processing, especially the part under When/Where: Processing-time windows which includes some nice visuals comparing different windowing strategies.

Semaphores in NSOperationQueues

I'm taking my first swing at a Swift/NSOperationQueue-based design, and I'm trying to figure out how to maintain data integrity across queues.
I'm early in the design process, but the architecture is probably going to involve one queue (call it sensorQ) handling a stream of sensor measurements from a variety of sensors that will feed a fusion model. Sensor data will come in at a variety of rates, some quite fast (accelerometer data, for example), but some will require extended computation that could take, say, a second or more.
What I'm trying to figure out is how to capture the current state into the UI. The UI must be handled by the main queue (call it mainQ) but will reflect the current state of the fusion engine.
I don't want to hammer the UI thread with every update that happens on the sensor queue because they could be happening quite frequently, so an NSOperationQueue.mainQueue.addOperationWithBlock() call passing state back to the UI doesn't seem feasible. By the same token, I don't want to send queries to the sensor queue because if it's processing a long calculation I'll block waiting for it.
I'm thinking of setting up an NSTimer that might copy the state into the UI every tenth of a second or so.
To do that, I need to make sure that the state isn't being updated on the sensor queue at the same time I'm copying it out to the UI queue. Seems like a job for a semaphore, but I'm not turning up much mention of semaphores in relation to NSOperationQueues.
I am finding references to dispatch_semaphore_t objects in Grand Central Dispatch.
So my question is basically: what's the recommended way of handling these situations? I see repeated admonitions to work at the highest level of abstraction (NSOperationQueue) unless you need the optimization of a lower level such as GCD. Is this a case where I need that optimization? Can dispatch_semaphore_t work with NSOperationQueue? Is there an NSOperationQueue-based approach here that I'm overlooking?
How much data are you sending to the UI? A few numbers? A complex graph?
If you are processing all your sensor data on an NSOperationQueue (let's call it sensorQ), why not make the queue serial? Then when your timer fires, you can post an "Update UI" task to sensorQ. When your update task arrives on the sensorQ, you know no other sensor is modifying the state. You can bundle up your data and post to the main (UI) queue.
A better answer could be provided if we knew:
1. Do your sensors have a minimum and maximum data rate?
2. How many sensors are contributing to your fusion model?
3. How are you synchronizing access from the sensors to your fusion model?
4. How much data and in what format is the "update" to the UI?
My hunch is, semaphores are not required.
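If you do go the serial-queue route, a minimal Swift sketch might look like this. SensorSample, FusionModel and updateUI(with:) are placeholders for whatever your fusion model and UI actually look like, and the 0.1 s interval is the tenth-of-a-second refresh mentioned in the question:

```swift
import Foundation

struct SensorSample { let x, y, z: Double }

final class FusionModel {
    private var latest = SensorSample(x: 0, y: 0, z: 0)
    func integrate(_ sample: SensorSample) { latest = sample /* real fusion here */ }
    func snapshot() -> SensorSample { latest }
}

func updateUI(with snapshot: SensorSample) { /* update labels, graph, ... */ }

let sensorQ = OperationQueue()
sensorQ.maxConcurrentOperationCount = 1          // serial: one operation at a time
let fusionModel = FusionModel()                  // only ever touched on sensorQ

// Sensor callbacks enqueue their work on sensorQ.
func sensorDidProduce(_ sample: SensorSample) {
    sensorQ.addOperation { fusionModel.integrate(sample) }
}

// A main-thread timer asks sensorQ for a snapshot roughly ten times per second.
let timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
    sensorQ.addOperation {
        let snapshot = fusionModel.snapshot()    // safe: nothing else runs on sensorQ now
        OperationQueue.main.addOperation {
            updateUI(with: snapshot)             // back on the main queue for the UI
        }
    }
}
```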
One method that might work here is to decouple the sensor data queue from your UI activities via a ring buffer. This effectively eliminates the need for semaphores.
The idea is that the sensor data processing component pushes data into the ring buffer and the UI component pulls the data from the ring buffer. The sensor data thread writes at the rate determined by your sensor/processing and the UI thread reads at whatever refresh rate is appropriate for your application.
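A very small Swift sketch of the idea follows; the names are placeholders, and a production version would need atomic indices or a lock such as os_unfair_lock, which is omitted here for clarity:

```swift
// Fixed-capacity ring buffer: the sensor side calls push(_:), the UI timer
// calls latest(). Oldest values are silently overwritten once the buffer is full.
final class RingBuffer<Element> {
    private var storage: [Element?]
    private var writeIndex = 0

    init(capacity: Int) {
        storage = Array(repeating: nil, count: capacity)
    }

    // Producer side (sensor queue/thread).
    func push(_ value: Element) {
        storage[writeIndex % storage.count] = value
        writeIndex += 1
    }

    // Consumer side (UI timer): the most recently written value, if any.
    func latest() -> Element? {
        guard writeIndex > 0 else { return nil }
        return storage[(writeIndex - 1) % storage.count]
    }
}

let samples = RingBuffer<(Double, Double, Double)>(capacity: 1024)
samples.push((0.1, 0.2, 0.3))            // sensor thread writes
print(samples.latest() ?? "empty")       // UI timer reads
```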

CoreData - (Performance) Considerations for frequent data

Background
We have an app that receives sensor data at 100 Hz. Each sensor reading contains three floats. Occasionally (at most once per second) some other metadata may be received that needs to be saved as well. The UI displays the latest 1000 sensor values in a graph. There are no undo requirements - all received data must be saved to file. Each session lasts for at least 10 min, but may (in rare circumstances, and mostly by mistake) be up to an hour.
Current approach
Model: SensorData has a many-to-one relationship with Session. MetaData has a many-to-one relationship with Session.
CoreData: Set up a UIManagedDocument to handle Core Data. One MOC on the main thread with a child MOC on a private queue. The child MOC creates the objects and adds them to the object graph. Every 100th reading, save the child MOC. Once the session ends, save the main MOC to the PSC.
Edit: The problem I have with the current approach is that saving in the child MOC lags behind, which means not all data has been processed when the session ends, and processing time increases with run time.
Questions
Is it feasible to use Core Data as a storage mechanism at ~100 Hz, or should I look at some alternative (like saving to a CSV file)?
What considerations must I take to ensure proper/optimal performance?
I have had performance issues with saves taking a long time and blocking the UI. How can I avoid this? I.e., what saving policy should I use?
Drawbacks and advantages of current approach?
I think Core Data can do this.
You could use Marcus Zarra's approach of three contexts to make sure the actual save also happens in the background.
RootContext (background) saves to persistent store ---> is parent of
MainContext (main thread) to update the UI ---> is parent of one or more
WorkerContext (background) to create new data from sensor
You could then actually save more frequently in the background to the persistent store directly without impacting UI responsiveness. This should also improve memory usage. Saving the worker context will push the changes to the UI which can be updated accordingly.
For performance, make sure you batch your saves - with three floats I would estimate every 1,000 to 5,000 records or so (you need to experiment to find the optimal value).
Turn off the undo manager. (context.undoManager = nil)
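A minimal Swift sketch of that three-context stack, assuming an existing NSPersistentStoreCoordinator and an entity named SensorData with x/y/z attributes (both assumptions from the question), with batched saves and the undo managers turned off:

```swift
import CoreData

final class SensorStore {
    let rootContext: NSManagedObjectContext      // background, writes to the store
    let mainContext: NSManagedObjectContext      // main thread, drives the UI
    let workerContext: NSManagedObjectContext    // background, receives sensor data
    private var pendingSinceSave = 0

    init(coordinator: NSPersistentStoreCoordinator) {
        rootContext = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
        rootContext.persistentStoreCoordinator = coordinator
        rootContext.undoManager = nil

        mainContext = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
        mainContext.parent = rootContext
        mainContext.undoManager = nil

        workerContext = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
        workerContext.parent = mainContext
        workerContext.undoManager = nil
    }

    func record(x: Float, y: Float, z: Float) {
        workerContext.perform {
            let reading = NSEntityDescription.insertNewObject(forEntityName: "SensorData",
                                                              into: self.workerContext)
            reading.setValue(x, forKey: "x")     // attribute names are assumptions
            reading.setValue(y, forKey: "y")
            reading.setValue(z, forKey: "z")
            self.pendingSinceSave += 1
            if self.pendingSinceSave >= 1000 {   // batch size: tune experimentally
                self.pendingSinceSave = 0
                try? self.workerContext.save()   // pushes changes up to mainContext (UI)
                self.mainContext.perform {
                    try? self.mainContext.save() // pushes up to rootContext
                    self.rootContext.perform { try? self.rootContext.save() } // writes to disk
                }
            }
        }
    }
}
```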
Another consideration would be to maybe think hard about what you want to show in the UI and perhaps calculate values to display on the fly and send that to the UI, rather than have the UI rely on the entire session's data set to update itself.
I have come up against exactly this issue, in an elaboration of this project.
My task is to record live sensor data from (for example) Core Motion and Core Location at rates up to 100 Hz while simultaneously running a smoothly animating interface which can involve any of Core Graphics, Core Animation, OpenGL and live video. There are ~20-40 separate data items to track, mostly doubles but one or two strings, and they do not all arrive at the same sync rate.
Any hold-up during saves, however slight, will have an immediate hit on the interface.
I was interested to compare using Core Data against writing directly to a SQL database (using sqlite3). My personal experience so far (this is a work in progress) is that the SQL approach is much better suited to this type of problem than Core Data. In fact it's not really what Core Data was optimised for (which is rather to manage complex document object models with undo, persistence and efficient faulting). The Core Data model almost assumes that persistent saves will be prohibitively slow (for example, saving to iCloud), and much of its engineering is designed to offer solutions to that problem.
I have tried various Core Data patterns (backgrounding, parent/child contexts, sync, async, batching saves ...) and invariably I find a noticeable stutter whenever a persistent save actually occurs.
The SQL approach, on the other hand, is simple to understand, efficient and completely free of noticeable glitches.
It may well be that I have not arrived at the optimal Core Data pattern for this problem (and I will be digging deeper into this, as it is an interesting edge case). However, I would definitely suggest a look at the direct-to-SQL approach if that makes sense for you in your broader app context.
In slightly different data-streaming use-cases (for example, a 250-500Hz signal delivered over bluetooth) I have opted for the kind of signal-processing tricks used by audio interfaces - ring buffers, queues and callbacks can become very useful as your data rate goes up. At some point the data rate will get too high for a database-writing process to keep up: then - as you suggest - saving directly to file will be more efficient. You can always read the data back out of files at some later point and populate the database (or core data) when sampling is not taking place.
Matt Gallagher made a nice comparison of Core Data and databases.
It's a fairly old piece, but the patterns haven't changed, so it is still relevant. There's also a useful little (and similarly aged) discussion here on the benefits of flat-file over database writing with high-frequency streams.

How to guarantee a process starts at an exact time in iOS

We are playing a metronome audio file at time intervals (bpm), while simultaneously recording an audio file. However, currently the two threads do not start at exactly the same time; there is a slight time difference, which, for music, is not acceptable.
What strategies can we use to guarantee that the two processes start at the exact same time (or under a few milliseconds)?
Thanks!
I can think of three ways to get this done (but obviously I never tested them).
Each of your threads should do all the initialization they can up front, then wait for an "event". A few timing events I can think of:
use a Notification - both threads can listen for some "start" notification. That should be fairly quick.
have both threads do key-value observing - so they both listen for changes to some property on a known object, like the app delegate (or a singleton), or any object they both know (a delegate?)
have each call a delegate when its initialization is done. When both are "ready", the delegate can send each a message, one after the other (on the main thread), to "start".
You could also experiment with NSLock and friends - not sure what kind of latency you would get there. Key-Value Observing is pretty fast and lightweight, and works on any thread.
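A small Swift sketch of the "start" notification idea from the list above; the notification name and the two components are placeholders. Each component does all of its expensive setup first and only then registers for the signal; posting with queue: nil runs both handlers synchronously on the posting (main) thread, one right after the other:

```swift
import Foundation

let startSignal = Notification.Name("StartPlaybackAndRecording")

final class MetronomePlayer {
    private var token: NSObjectProtocol?
    init() {
        // ...load the metronome file, prepare the player, etc...
        token = NotificationCenter.default.addObserver(forName: startSignal,
                                                       object: nil, queue: nil) { _ in
            // start playback immediately
        }
    }
}

final class Recorder {
    private var token: NSObjectProtocol?
    init() {
        // ...configure the audio session, create the output file, etc...
        token = NotificationCenter.default.addObserver(forName: startSignal,
                                                       object: nil, queue: nil) { _ in
            // start recording immediately
        }
    }
}

let player = MetronomePlayer()
let recorder = Recorder()
// Fire once both components have finished their initialization:
NotificationCenter.default.post(name: startSignal, object: nil)
```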
The most accurate and reliable way of achieving this is to implement audio recording and metronome playback in Core Audio render/input handlers rather than using higher-level APIs and relying on synchronising two threads. None of the mechanisms in David H's answer provides any guarantees about thread execution by the kernel, although they'll probably all work most of the time on a lightly loaded system.
The callbacks are called on a real-time thread managed by Core Audio, and synchronously with the hardware audio clock - which is probably asynchronous with the kernel's timers.
You will need to load the metronome sample into memory and convert to the output format on initialisation - probably using one of the AudioToolbox APIs. The audio render callback simply copies this to the output buffer at the appropriate time.
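As one possible sketch of that shape (not the only way to do it): the example below uses AVAudioSourceNode, whose render block Core Audio calls on its real-time render thread, rather than a raw Remote I/O unit, and assumes clickFrames has already been loaded and converted elsewhere. The beat position is counted in frames against the hardware clock, so playback and the input tap share the same timebase:

```swift
import AVFoundation

final class AudioClockMetronome {
    private let engine = AVAudioEngine()
    private let sampleRate: Double = 44_100
    private var clickFrames: [Float] = []        // pre-converted click sample (loaded elsewhere)
    private var framesPerBeat = 1
    private var frameCounter = 0

    func start(bpm: Double) throws {
        framesPerBeat = max(1, Int(60.0 / bpm * sampleRate))
        let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)!

        // Render callback: copy the click into the output buffer at each beat
        // boundary, counting frames against the hardware audio clock.
        let source = AVAudioSourceNode(format: format) { _, _, frameCount, audioBufferList in
            let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
            let out = buffers[0].mData!.assumingMemoryBound(to: Float.self)
            for frame in 0..<Int(frameCount) {
                let positionInBeat = (self.frameCounter + frame) % self.framesPerBeat
                out[frame] = positionInBeat < self.clickFrames.count ? self.clickFrames[positionInBeat] : 0
            }
            self.frameCounter += Int(frameCount)
            return noErr
        }
        engine.attach(source)
        engine.connect(source, to: engine.mainMixerNode, format: format)

        // Recording taps the input node, which runs against the same hardware clock.
        engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: nil) { buffer, when in
            // append `buffer` to an AVAudioFile; `when` is an audio-clock timestamp
        }
        try engine.start()
    }
}
```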
