Using ReactiveCocoa to track UI updates with a remote object

I'm making an iOS app which lets you remotely control music in an app playing on your desktop.
One of the hardest problems is being able to update the position of the "tracker" (which shows the time position and duration of the currently playing song) correctly. There are several sources of input here:
At launch, the remote sends a network request to get the initial position and duration of the currently playing song.
When the user adjusts the position of the tracker using the remote, it sends a network request to the music app to change the position of the song.
If the user uses the app on the desktop to change the position of the tracker, the app sends a network request to the remote with the new position of the tracker.
If the song is currently playing, the position of the tracker is updated every 0.5 seconds or so.
At the moment, the tracker is a UISlider which is backed by a "Player" model. Whenever the user changes the position on the slider, it updates the model and sends a network request, like so:
In NowPlayingViewController.m
[[slider rac_signalForControlEvents:UIControlEventTouchUpInside] subscribeNext:^(UISlider *x) {
    [playerModel seekToPosition:x.value];
}];

[RACObserve(playerModel, position) subscribeNext:^(id x) {
    slider.value = playerModel.position;
}];
In PlayerModel.m:
@property (nonatomic) NSTimeInterval position;

- (void)seekToPosition:(NSTimeInterval)position
{
    self.position = position;
    [self.client newRequestWithMethod:@"seekTo" params:@[positionArg] callback:NULL];
}
- (void)receivedPlayerUpdate:(NSDictionary *)json
{
    self.position = [[json objectForKey:@"position"] doubleValue];
}
The problem is when a user "fiddles" with the slider and queues up a number of network requests, which all come back at different times. The user may have moved the slider again by the time a response is received, so applying that response snaps the slider back to a previous value.
My question: How do I use ReactiveCocoa correctly in this example, ensuring that updates from the network are dealt with, but only if the user hasn't moved the slider since?

In your GitHub thread about this you say that you want to consider the remote's updates as canonical. That's good, because (as Josh Abernathy suggested there), RAC or not, you need to pick one of the two sources to take priority (or you need timestamps, but then you need a reference clock...).
Given your code and disregarding RAC, the solution is just setting a flag in seekToPosition: and unsetting it using a timer. Check the flag in receivedPlayerUpdate:, ignoring the update if it's set.
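For illustration, here's a minimal sketch of that flag-and-timer approach inside PlayerModel, reusing the flag name from the filter example further down; the one-second settle delay and the userStoppedFiddling helper are assumptions, not anything from your code:
@property (nonatomic) BOOL userFiddlingWithSlider;

- (void)seekToPosition:(NSTimeInterval)position
{
    self.position = position;
    self.userFiddlingWithSlider = YES;

    // Restart the "settle" timer each time the user seeks (1 s is an arbitrary choice).
    [NSObject cancelPreviousPerformRequestsWithTarget:self
                                             selector:@selector(userStoppedFiddling)
                                               object:nil];
    [self performSelector:@selector(userStoppedFiddling) withObject:nil afterDelay:1.0];

    [self.client newRequestWithMethod:@"seekTo" params:@[@(position)] callback:NULL];
}

- (void)userStoppedFiddling
{
    self.userFiddlingWithSlider = NO;
}

- (void)receivedPlayerUpdate:(NSDictionary *)json
{
    // Ignore positions coming back from the network while the user is still seeking.
    if (self.userFiddlingWithSlider) return;
    self.position = [[json objectForKey:@"position"] doubleValue];
}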
By the way, you should use the RAC() macro to bind your slider's value, rather than the subscribeNext: that you've got:
RAC(slider, value) = RACObserve(playerModel, position);
You can definitely construct a signal chain to do what you want, though. You've got four signals you need to combine.
For the last item, the periodic update, you can use interval:onScheduler::
[[RACSignal interval:kPositionFetchSeconds
         onScheduler:[RACScheduler scheduler]] map:^(id _) {
    return /* Request position over network */;
}];
The map: just ignores the date that the interval:... signal produces, and fetches the position. Since your requests and messages from the desktop have equal priority, merge: those together:
[RACSignal merge:@[desktopPositionSignal, timedRequestSignal]];
You decided that you don't want either of those signals going through if the user has touched the slider, though. This can be accomplished in one of two ways. Using the flag I suggested, you could filter: that merged signal:
[mergedSignal filter:^BOOL (id _) { return !userFiddlingWithSlider; }];
Better than that -- avoiding extra state -- would be to build an operation out of a combination of throttle: and sample: that passes values from one signal only after another signal has been quiet for a certain interval:
[mergedSignal sample:
[sliderSignal throttle:kUserFiddlingWithSliderInterval]];
(And you might, of course, want to throttle/sample the interval:onScheduler: signal in the same way -- before the merge -- in order to avoid unnecessary network requests.)
You can put this all together in PlayerModel, binding it to position. You'll just need to give the PlayerModel the slider's rac_signalForControlEvents:, and then merge in the slider value. Since you're using the same signal multiple places in one chain, I believe that you want to "multicast" it.
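For example, multicasting the slider signal might look something like this (just a sketch; publish plus autoconnect is one of several ways to share a single underlying subscription in RAC 2.x):
// Share one subscription to the slider's control events so the sampling step
// and the final merge both observe the same values.
RACSignal *sliderSignal =
    [[[slider rac_signalForControlEvents:UIControlEventTouchUpInside]
        publish]
        autoconnect];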
Finally, use startWith: to get your first item above, the initial position from the desktop app, into the stream.
RAC(self, position) =
    [[RACSignal merge:@[sampledSignal,
                        [sliderSignal map:^id(UISlider *slider) {
                            return @(slider.value);
                        }]]]
        startWith:/* Request position over network */];
The decision to break each signal out into its own variable or string them all together Lisp-style I'll leave to you.
Incidentally, I've found it helpful to actually draw out the signal chains when working on problems like this. I made a quick diagram for your scenario. It helps with thinking of the signals as entities in their own right, as opposed to worrying about the values that they carry.

Related

Firebase A/B test not counting users when activation event is used on iOS

We're using the current version of the Firebase iOS framework (5.9.0) and we're seeing a strange problem when trying to run A/B test experiments that have an activation event.
Since we want to run experiments on first launch, we have a custom splash screen on app start that we display while the remote config is being fetched. After the fetch completes, we immediately activate the fetched config and then check to see if we received info about experiment participation to reconfigure the next UI appropriately. There are additional checks done before we determine that the current instance, in fact, should be part of the test, thus the activation event. Basically, the code looks like:
<code that shows splash>
…
[[FIRRemoteConfig remoteConfig] fetchWithExpirationDuration:7 completionHandler:^(FIRRemoteConfigFetchStatus status, NSError * _Nullable error) {
    [[FIRRemoteConfig remoteConfig] activateFetched];
    if (<checks that see if we received info about being selected to participate in the experiment and if local conditions are met for experiment participation>) {
        [FIRAnalytics logEventWithName:@"RegistrationEntryExperimentActivation" parameters:nil];
        <dismiss splash screen and show next UI screen based on experiment variation received in remote config>
    } else {
        <dismiss splash screen and show next UI screen>
    }
}];
The approach above (which is completely straightforward, IMO) does not work correctly. After spending time with the debugger and Firebase logging enabled, I can see in the log that there is a race-condition problem occurring. Basically, the Firebase activateFetched() call does not set up a "conditional user property experiment ID" synchronously inside the activateFetched call, but instead sets it up some short time afterward. Because of this, firing the activation event immediately after activateFetched does not trigger this conditional user property, and subsequent experiment funnel/goal events are not properly marked as part of an experiment (the experiment is not even activated in the first place).
If we change the code to delay the sending of the activation event by some arbitrary delay:
<code that shows splash>
…
[[FIRRemoteConfig remoteConfig] fetchWithExpirationDuration:7 completionHandler:^(FIRRemoteConfigFetchStatus status, NSError * _Nullable error) {
    [[FIRRemoteConfig remoteConfig] activateFetched];
    if (<checks that see if we received info about being selected to participate in the experiment and if local conditions are met for experiment participation>) {
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            [FIRAnalytics logEventWithName:@"RegistrationEntryExperimentActivation" parameters:nil];
            <dismiss splash screen and show next UI screen based on experiment variation received in remote config>
        });
    } else {
        <dismiss splash screen and show next UI screen>
    }
}];
the conditional user property for the experiment gets correctly set up beforehand and triggered by the event (causing experiment activation, with subsequent events correctly marked as part of the experiment).
Now, this code obviously is quite ugly and prone to race conditions. The delay of 0.5 seconds is conservatively set to hopefully be enough on all iOS devices, but ¯\_(ツ)_/¯. I've read the available documentation multiple times and tried looking at all available API methods, with no success in figuring out what the correct point for starting to send events should be. If the activateFetched method uses an asynchronous process to reconfigure internal objects, one would expect a callback that tells the caller when everything is done reconfiguring and ready for further use by the application. It seems the framework engineers didn't anticipate a use case where someone needs to send the activation event immediately after remote config activation…
Has anyone else experienced this problem? Are we missing something in the API? Is there a smarter way of letting activateFetched finish its thing?
Hope some Firebase engineers can chime in with their wisdom as well :)
Thanks

Creating lighting within Minecraft from the server?

I'm creating a Bukkit plugin that requires lighting to be added, and I want to accomplish this server-side only, so that users don't need special client-side plugins to see the lighting. Could this be done? If I'm not mistaken, lighting has been handled server-side before. I would also like this lighting to be colored and the light sources to be invisible (lighting from fixed coordinates is acceptable since the map will be set).
My fear: can it be done?
You could do this using:
p.sendBlockChange(Location, Material, Byte);
Location is the location of the block.
Material is the material that you want the player to see.
Byte is the data value: for block 43:8 you would use 8; if there is none, just use 0.
So, you could do this to send the block update to all players:
Location[] invisibleBlocks; // all invisible locations
for (Player p : Bukkit.getOnlinePlayers()) { // get all online players
    for (Location l : invisibleBlocks) { // get all invisible blocks
        p.sendBlockChange(l, Material.AIR, (byte) 0); // send a block change of AIR to the player
    }
}
The only problem is that block changes get reset when a player unloads/loads the chunk that the change is in. So, to fix this, you could schedule a repeating task that re-sends the changes:
Location[] invisibleBlocks; // set this to the locations of all of the blocks you want to make invisible
plugin.getServer().getScheduler().scheduleSyncRepeatingTask(plugin, new Runnable() {
    public void run() {
        for (Player p : Bukkit.getOnlinePlayers()) { // get all online players
            for (Location l : invisibleBlocks) { // get all invisible blocks
                p.sendBlockChange(l, Material.AIR, (byte) 0); // send a block change of AIR to the player
            }
        }
    }
}, 0L, 100L); // run every 100 ticks = 5 seconds (20 ticks per second)
Then all you need to do is put glowstone at the invisibleBlocks locations, and it will appear as air, but (should) still emit light.
One problem with this is that if a player tries to walk into the block, they will walk in half way, then get teleported back out. This is because the client thinks there isn't a block there, yet the server knows that there is, and when the player walks into the block, the server teleports them back out, making a jerky kind of motion.
If you put this somewhere where players can't walk into it, you should be good!

Are these two Observable Operations Equivalent?

I'm not sure why, but for some reason when using the observable created via concat I always get all of the values pushed from my list (works as intended), whereas with the plain subscribe it seems that some values never make it to subscribers (only under certain conditions).
These are the two cases that I am using. Could anyone explain why, in certain cases, not all values are received when subscribing to the second version? Are they not equivalent? The intent here is to rewind the stream. What are some reasons that could explain why Case 2 fails while Case 1 does not?
Replay here is just a list of the ongoing stream.
Case 1.
let observable =
    Observable.Create(fun (o:IObserver<'a>) ->
        let next b =
            for v in replay do
                o.OnNext(v.Head)
            o.OnNext(b)
            o.OnCompleted()
        someOtherObs.Subscribe(next, o.OnError, o.OnCompleted))

let toReturn = observable.Concat(someOtherObs).Publish().RefCount()
Case 2.
let toReturn =
    Observable.Create(fun (o:IObserver<'a>) ->
        for v in replay do
            o.OnNext(v.Head)
        someOtherObs.Subscribe(o)
    ).Publish().RefCount()
Caveat! I don't use F# regularly enough to be 100% comfortable with the syntax, but I think I see what's going on.
That said, both of these cases look odd to me and it greatly depends on how someOtherObs is implemented, and where (in terms of threads) things are running.
Case 1 Analysis
You apply concat to a source stream which appears to work like this:
It subscribes to someOtherObs, and in response to the first event (a) it pushes the elements of replay to the observer.
Then it sends event (a) to the observer.
Then it completes. At this point the stream is finished and no further events are sent.
In the event that someOtherObs is empty or just has a single error, this will be propagated to the observer instead.
Now, when this stream completes, someOtherObs is concatenated onto it. What happens next is a little unpredictable - if someOtherObs is cold, then the first event would be sent a second time; if someOtherObs is hot, then the first event is not resent, but there's a potential race condition around which of the remaining events goes next, depending on how someOtherObs is implemented. You could easily miss events if it's hot.
Case 2 Analysis
You replay all the replay events, and then send all the events of someOtherObs - but again there's a race condition if someOtherObs is hot because you only subscribe after pushing replay, and so might miss some events.
Comments
In either case, it seems messy to me.
This looks like an attempt to do a merge of a state of the world (sotw) and a live stream. In this case, you need to subscribe to the live stream first, and cache any events while you acquire and push the sotw events. Once the sotw is pushed, you push the cached events - being careful to de-dupe events that may have been read in the sotw - until you are caught up with live, at which point you can just pass live events through.
You can often get away with naive implementations that flush the live cache in an OnNext handler of the live stream subscription, effectively blocking the source while you flush - but you run the risk of applying too much back pressure to the live source if you have a large history and/or a fast moving live stream.
Some considerations for you to think on that will hopefully set you on the right path.
For reference, here is an extremely naïve and simplistic C# implementation I knocked up that compiles in LINQPad with rx-main nuget package. Production ready implementations I have done in the past can get quite complex:
void Main()
{
    // asynchronously produce a list from 1 to 10
    Func<Task<List<int>>> sotw =
        () => Task.Run(() => Enumerable.Range(1, 10).ToList());

    // a live stream of 5 to 14
    var live = Observable.Range(5, 10);

    // outputs 1 to 14
    live.MergeSotwWithLive(sotw).Subscribe(Console.WriteLine);
}

// Define other methods and classes here
public static class ObservableExtensions
{
    public static IObservable<TSource> MergeSotwWithLive<TSource>(
        this IObservable<TSource> live,
        Func<Task<List<TSource>>> sotwFactory)
    {
        return Observable.Create<TSource>(async o =>
        {
            // Naïve indefinite caching, no error checking anywhere
            var liveReplay = new ReplaySubject<TSource>();
            live.Subscribe(liveReplay);

            // No error checking, no timeout, no cancellation support
            var sotw = await sotwFactory();
            foreach (var evt in sotw)
            {
                o.OnNext(evt);
            }

            // note naive disposal
            // and extremely naive de-duping (it really needs to compare
            // on some unique id)
            // we are only supporting disposal once the sotw is sent
            return liveReplay.Where(evt => !sotw.Any(s => s.Equals(evt)))
                             .Subscribe(o);
        });
    }
}

Set an initial focal distance on iOS

I'm working on an iOS-app where one of the features is scanning QR-codes. For this I'm using the excellent library, ZBar. The scanning works fine and is generally really quick. However when you use smaller QR-codes it takes a bit longer to scan, mostly due to the fact that the autofocus needs some time to adjust. I was experimenting and noticed that the focus could be locked using the following code:
AVCaptureDevice *cameraDevice = readerView.device;
if ([cameraDevice lockForConfiguration:nil]) {
[cameraDevice setFocusMode:AVCaptureFocusModeLocked];
[cameraDevice unlockForConfiguration];
}
When this code is used after a successful scan, the coming scans are really quick. That made me wonder, could I somehow lock the focus before even scanning one code? The app will only scan rather small QR-codes so there will never be a need for focusing on something far away. Sure, I could implement something like tap to focus, but preferably I would like to avoid that extra step.
Is there a way to achieve this? Or are there maybe another way of speeding things up when dealing with smaller QR-codes?
// Alexander
In iOS 7 this is now possible!
Apple has added the property autoFocusRangeRestriction to the AVCaptureDevice class. This property is of the enum AVCaptureAutoFocusRangeRestriction which has three different values:
AVCaptureAutoFocusRangeRestrictionNone - Default, no restrictions
AVCaptureAutoFocusRangeRestrictionNear - The subject that matters is close to the camera
AVCaptureAutoFocusRangeRestrictionFar - The subject that matters is far from the camera
To check if this is available, we should first check that the property autoFocusRangeRestrictionSupported is true. And since it's only supported in iOS 7 and onwards, we should also use respondsToSelector: so we don't get an exception on earlier iOS versions.
So the resulting code should look something like this:
AVCaptureDevice *cameraDevice = zbarReaderView.device;
if ([cameraDevice respondsToSelector:@selector(isAutoFocusRangeRestrictionSupported)] && cameraDevice.autoFocusRangeRestrictionSupported) {
    // If we are on an iOS version that supports autoFocusRangeRestriction and the device supports it,
    // set the focus range to "near"
    if ([cameraDevice lockForConfiguration:nil]) {
        cameraDevice.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
        [cameraDevice unlockForConfiguration];
    }
}
This seems to somewhat speed up the scanning of small QR-codes according to my initial tests :)
Update - iOS 8
With iOS 8, Apple has given us lots of new camera APIs to play with. One of these new methods is this one:
- (void)setFocusModeLockedWithLensPosition:(float)lensPosition completionHandler:(void (^)(CMTime syncTime))handler
This method locks focus by moving the lens to a position between 0.0 and 1.0. I played around with the method, locking the lens at close values. However, in general it caused more problems than it solved. You had to keep the QR-codes/barcodes at a very specific distance, which could cause issues when you had codes of different sizes.
But. I think I have found a pretty good alternative to locking focus altogether. When the user presses the scan button, I lock the lens to a close distance, and when it's done I switch the camera back to auto focus. This gives us the benefits of keeping auto focus on, but forces the camera to begin at a close distance where a QR-code/barcode is likely to be found. This in combination with:
cameraDevice.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
And:
cameraDevice.focusPointOfInterest = CGPointMake(0.5,0.5);
Results in a pretty snappy scanner.
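If it helps, here's a rough sketch of that sequence, reading "when it's done" as the lens move completing; the 0.05 lens position is an assumed starting value that would need tuning on real devices:
// Sketch only: nudge the lens to a near position when the user starts a scan,
// then hand control back to continuous autofocus once the lens move completes.
AVCaptureDevice *cameraDevice = zbarReaderView.device;
if ([cameraDevice lockForConfiguration:nil]) {
    if ([cameraDevice respondsToSelector:@selector(setFocusModeLockedWithLensPosition:completionHandler:)]) {
        [cameraDevice setFocusModeLockedWithLensPosition:0.05
                                        completionHandler:^(CMTime syncTime) {
            // The lens is now near; let autofocus take over from here.
            if ([cameraDevice lockForConfiguration:nil]) {
                if ([cameraDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
                    cameraDevice.focusMode = AVCaptureFocusModeContinuousAutoFocus;
                }
                [cameraDevice unlockForConfiguration];
            }
        }];
    }
    if (cameraDevice.autoFocusRangeRestrictionSupported) {
        cameraDevice.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
    }
    if (cameraDevice.focusPointOfInterestSupported) {
        cameraDevice.focusPointOfInterest = CGPointMake(0.5, 0.5);
    }
    [cameraDevice unlockForConfiguration];
}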
I also built a custom scanner with the APIs introduced in iOS 7, instead of using ZBar. Mostly because the ZBar libs are quite outdated: just as I had to recompile them for ARMv7s when the iPhone 5 came out, I now have to recompile them again for ARM64.
// Alexander
iOS 8 recently added this configuration! It is almost like they read Stack Overflow.
/*!
 @method setFocusModeLockedWithLensPosition:completionHandler:
 @abstract
    Sets focusMode to AVCaptureFocusModeLocked and locks lensPosition at an explicit value.

 @param lensPosition
    The lens position, as described in the documentation for the lensPosition property. A value of AVCaptureLensPositionCurrent can be used
    to indicate that the caller does not wish to specify a value for lensPosition.

 @param handler
    A block to be called when lensPosition has been set to the value specified and focusMode is set to AVCaptureFocusModeLocked. If
    setFocusModeLockedWithLensPosition:completionHandler: is called multiple times, the completion handlers will be called in FIFO order.
    The block receives a timestamp which matches that of the first buffer to which all settings have been applied. Note that the timestamp
    is synchronized to the device clock, and thus must be converted to the master clock prior to comparison with the timestamps of buffers
    delivered via an AVCaptureVideoDataOutput. The client may pass nil for the handler parameter if knowledge of the operation's completion
    is not required.

 @discussion
    This is the only way of setting lensPosition.
    This method throws an NSRangeException if lensPosition is set to an unsupported level.
    This method throws an NSGenericException if called without first obtaining exclusive access to the receiver using lockForConfiguration:.
*/
- (void)setFocusModeLockedWithLensPosition:(float)lensPosition completionHandler:(void (^)(CMTime syncTime))handler NS_AVAILABLE_IOS(8_0);
EDIT: this is a method of AVCaptureDevice

How to Throttle CoreMIDI in Objective-C

My CoreMIDI connection on iOS is apparently fast enough to handle ANYTHING that hits it... if I'm just doing some simple object creation and NSLog. In the UI, I don't have time to handle everything that comes in. The UI would blow up, or just finish processing too late.
However, I need to do real processing and UI display in response to CoreMIDI inputs. What I'd like is to process the latest messages every, say, 1ms or 2ms. I've been doing this with a collection that gets emptied by a timer-fired method every 1ms (processFromServerAsync). One problem is that some messages might fall through the cracks, I think, if I grab and substitute:
NSDictionary *queueCopy = [self.queue copy];
// here the dictionary could get messages not in the queue copy!
self.queue = [NSMutableDictionary dictionary];
I realize that I could handle this by synchronizing with a lock, which is easy to screw up:
- (NSMutableDictionary *)messageQueue {
    @synchronized(self) {
        if (!messageQueue_)
            self.messageQueue = [NSMutableDictionary dictionary];
        return messageQueue_;
    }
}

- (NSDictionary *)clearMessageQueueAndReturnCopy {
    @synchronized(self) {
        if (!messageQueue_)
            return [NSDictionary dictionary];
        NSDictionary *retVal = [messageQueue_ copy];
        self.messageQueue = [NSMutableDictionary dictionary];
        return retVal;
    }
}
However, I'm not convinced that I'm even handling this in the correct way. How is throttling typically done (even outside of Objective-C)? I surely cannot process all of those messages in the UI, or anywhere else in the program.
There are some well-established patterns for throttling streams of incoming data. This comes up a lot in finance, where you might have a data feed throwing 100K messages/sec at a system.
You employ a sliding window mechanism to discard redundant messages while ensuring that the client has the latest possible copy of the data. You set your window up over some time period (a few milliseconds), then set up a queue for each data stream (meaning a particular CC, MIDI note, etc.).
You start a global timer when the first message comes in and send that message to the client immediately. If anything else comes in during the window you push it to its queue. The queue has just one entry - the latest value - so you overwrite the queued value with each subsequent update. When the timer ticks (the window is over) you send the latest message out to the client. Then, you send the next message out as soon as it comes in, start a new window, and repeat.
This gives a reasonable balance between swamping the client and avoiding aliasing of update intervals to the timer window. Aliasing is less of an issue with 1-2 ms intervals, so a cruder rigid-timer approach might work for you.
The critical thing is ensuring that you have separate windows for each data stream. You can't risk overwriting or ignoring, say, a note-off because a control change came in. One timer, one single-entry queue per MIDI message number.
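As a concrete illustration, here's a very rough Objective-C sketch of that per-stream window. The class, property, and constant names are all invented, and it assumes every call happens on the main queue, so no locking is shown:
#import <Foundation/Foundation.h>

// Invented constant: a 2 ms window, matching the intervals discussed above.
static const NSTimeInterval kWindowSeconds = 0.002;

@interface MIDIValueThrottle : NSObject
// Called with the message number and the value to deliver to the client/UI.
@property (nonatomic, copy) void (^output)(NSNumber *messageNumber, id value);
- (void)receiveValue:(id)value forKey:(NSNumber *)messageNumber;
@end

@implementation MIDIValueThrottle {
    NSMutableDictionary *_pending;  // latest value per message number (the one-entry "queue")
    NSMutableSet *_openWindows;     // message numbers whose window timer is currently running
}

- (instancetype)init {
    if ((self = [super init])) {
        _pending = [NSMutableDictionary dictionary];
        _openWindows = [NSMutableSet set];
    }
    return self;
}

- (void)receiveValue:(id)value forKey:(NSNumber *)messageNumber {
    if (![_openWindows containsObject:messageNumber]) {
        // First message for this stream: forward it immediately and open a window.
        if (self.output) self.output(messageNumber, value);
        [_openWindows addObject:messageNumber];
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW,
                                     (int64_t)(kWindowSeconds * NSEC_PER_SEC)),
                       dispatch_get_main_queue(), ^{
            [self closeWindowForKey:messageNumber];
        });
    } else {
        // Window already open: overwrite, keeping only the latest value.
        _pending[messageNumber] = value;
    }
}

- (void)closeWindowForKey:(NSNumber *)messageNumber {
    [_openWindows removeObject:messageNumber];
    id latest = _pending[messageNumber];
    if (latest) {
        [_pending removeObjectForKey:messageNumber];
        if (self.output) self.output(messageNumber, latest);
    }
}

@end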
