How to disable Trickle Timer in Contiki-NG?

I want to disable the Trickle timer from the rpl-mrhof.c file. I have defined a flag named Trickle_flag. When Trickle_flag is equal to 1, the Trickle timer should be disabled and DIO transmissions suppressed; when Trickle_flag is equal to 0, DIO transmissions should continue. I want to stop DIO transmissions on all nodes. Any idea how to change the Trickle function?

The correct way to disable a Trickle timer is to call the trickle_timer_stop(tt) macro, where tt is the timer to stop. See the documentation of that macro.
Stopping DIO transmissions is a different question, though. The Contiki RPL implementation was developed before the Trickle timer library existed, so it does not use the Trickle timer API; instead it implements the Trickle timers internally through callback timers (struct ctimer). Don't use ctimer_stop(&instance->dio_timer) for this either: it will not give reliable results, because rpl_reset_dio_timer may be called in response to various events and will restart the timer. Your best approach is to filter out DIO messages just before they are sent, rather than messing with the Trickle timers in a way the developers of the code did not anticipate. For example, you can add an if statement to dio_output that returns immediately when the DIO suppression flag is set.
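As a rough illustration, here is a minimal sketch of that guard, assuming the classic RPL implementation where DIOs are built in dio_output() in rpl-icmp6.c; the flag is the Trickle_flag from the question, and names and signatures may differ in other Contiki-NG versions (e.g. rpl-lite uses rpl_icmp6_dio_output()):

/* Sketch only: suppress DIOs when Trickle_flag is set.
 * Assumes the rpl-classic dio_output() in rpl-icmp6.c. */
extern uint8_t Trickle_flag;   /* 1 = suppress DIO transmissions, 0 = normal operation */

void
dio_output(rpl_instance_t *instance, uip_ipaddr_t *uc_addr)
{
  if(Trickle_flag == 1) {
    /* DIO suppression requested: skip building and sending this DIO. */
    return;
  }
  /* ... existing DIO construction and uip_icmp6_send() call ... */
}

The Trickle timer keeps running and keeps calling dio_output, but no DIO packets leave the node while the flag is set, and transmissions resume as soon as it is cleared.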

Related

ReactiveSwift buffered pipe

How can a buffered replay subject be implemented in ReactiveSwift?
I've looked at the replayLazily(upTo:) operator of SignalProducer, and also at the pipe() function of the Signal type, but I can't see a straightforward way to create something equivalent to an Rx ReplaySubject.
This brings up the following questions as well:
ReactiveSwift implements Subject with Signal.pipe(), but you can't specify a buffer for the pipe the way you can for an Rx ReplaySubject. Are there any workarounds?
The replayLazily(upTo:) operator is missing from the Signal type. I guess this is not so bad, since you can create a SignalProducer from a Signal, but why doesn't Signal have the same operator?
Has anyone encountered this problem before? Or am I missing something?
Any help would be much appreciated.
The Signal docs say:
An observer of a Signal will see the exact same sequence of events as all other observers. In other words, events will be sent to all observers at the same time.
This is in contrast to producers, which create a new signal each time they are started, which means it's possible for each observer to see different events.
The buffering scenario requires each observer to receive a list of current values in the buffer upon subscription, and other observers should not receive these values each time a new observer is added. Therefore, each observer needs their own signal, and that means the buffering mechanism must be implemented as a producer which can create a new signal for each subscriber.
There's a good discussion from 2016 when replayLazily was added that hopefully clarifies the thinking behind the operator and why it absolutely can't be a part of Signal.
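To make that concrete, here is a hedged sketch of such a replay "subject" built from Signal.pipe() plus a SignalProducer that replays the buffer to each new subscriber. ReplayPipe, its capacity handling, and the locking are invented for illustration (this is not a ReactiveSwift API), and it assumes a recent ReactiveSwift where Never is the no-error type:

import Foundation
import ReactiveSwift

// A replay "subject" sketch: values are multicast through a pipe, and each new
// subscriber first receives the buffered values -- which is exactly why the
// public face has to be a SignalProducer rather than a Signal.
final class ReplayPipe<Value> {
    private let pipe = Signal<Value, Never>.pipe()
    private var buffer: [Value] = []
    private let capacity: Int
    private let lock = NSLock()   // not reentrant; don't call send from an observer

    init(upTo capacity: Int) {
        self.capacity = capacity
    }

    func send(value: Value) {
        lock.lock()
        buffer.append(value)
        if buffer.count > capacity { buffer.removeFirst() }
        pipe.input.send(value: value)        // delivered to all current observers
        lock.unlock()
    }

    // Each started producer gets its own signal: buffered values first, then live ones.
    var producer: SignalProducer<Value, Never> {
        return SignalProducer { [weak self] observer, lifetime in
            guard let self = self else {
                observer.sendCompleted()
                return
            }
            self.lock.lock()
            self.buffer.forEach { observer.send(value: $0) }
            lifetime += self.pipe.output.observe(observer)
            self.lock.unlock()
        }
    }
}

A subscriber would call replayPipe.producer.startWithValues { ... } and immediately receive up to capacity buffered values before the live ones, without other subscribers seeing those replayed values again.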

How to make a function atomic in Swift?

I'm currently writing an iOS app in Swift, and I encountered the following problem: I have an object A. The problem is that while there is only one thread for the app (I didn't create separate threads), object A gets modified when
1) a certain NSTimer() triggers
2) a certain observeValueForKeyPath() triggers
3) a certain callback from Parse triggers.
From what I know, all three cases above work kind of like a software interrupt: as the code runs, if the NSTimer fires, observeValueForKeyPath() triggers, or the Parse callback arrives, the current code gets interrupted and execution jumps to the corresponding code. This is not a race condition (since there is just one thread), and I don't think something like this https://gist.github.com/Kaelten/7914a8128eca45f081b3 can solve this problem.
There is a specific function B called in all three cases to modify object A, so I'm thinking if I can make this function B atomic, then this problem is solved. Is there a way to do this?
You are making some incorrect assumptions. None of the things you mention interrupts the processor. Cases 1 and 2 both operate synchronously: the timer won't fire, and observeValueForKeyPath won't be called, until your code finishes and your app services the event loop.
Atomic properties and other synchronization techniques are only meaningful for concurrent (multi-threaded) code. If memory serves, atomic applies only to properties, not to other methods/functions.
I believe Parse uses completion blocks that run on a background thread, in which case your #3 *is* using separate threads, even though you didn't realize you were doing so. This is the only case where you need to worry about synchronization. In that case the simplest fix is to bracket your completion-block code inside a call to dispatch_async(dispatch_get_main_queue()), which makes all the code in the dispatch_async closure run on the main queue, avoiding concurrency issues entirely.
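A minimal sketch of that pattern, using Swift 2-era APIs. The Parse query and updateObjectA are illustrative stand-ins (updateObjectA playing the role of the shared "function B" from the question); only the dispatch back to the main queue is the point:

import Parse

// Hypothetical shared mutator (the "function B" from the question).
func updateObjectA(objects: [PFObject]?, error: NSError?) {
    // ... modify object A here, always on the main thread ...
}

let query = PFQuery(className: "SomeClass")      // illustrative query
query.findObjectsInBackgroundWithBlock { objects, error in
    // This completion block may run on a Parse background thread.
    dispatch_async(dispatch_get_main_queue()) {
        // Back on the main queue: now it is safe to call the same code
        // that the NSTimer and observeValueForKeyPath paths call.
        updateObjectA(objects, error: error)
    }
}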

ReactiveCocoa - Stop the triggering of subscribeNext until another signal has been completed

I'm pretty new to FRP and I'm facing a problem:
I subscribe to an observable that triggers subscribeNext every second.
In the subscribeNext block, I zip observables that execute asynchronous operations, and in the zip's completed block I perform an action with the result.
let signal: RACSignal
let asynchOperations: [RACSignal]   // several RACSignals
var val: AnyObject?

// subscribeNext is triggered every second
signal.subscribeNext { _ in
    // Perform the asynchronous operations
    RACSignal.zip(asynchOperations).subscribeNext({
        val = $0
    }, completed: {
        // perform actions with `val`
    })
}
I would like to stop the triggering of subscribeNext for signal (that is normally triggered every second) until completed (from the zip) has been reached.
Any suggestion?
It sounds like you want an RACCommand.
A command is an object that can perform asynchronous operations, but it only ever has one instance of its operation running at a time. As soon as you tell a command to start executing via execute:, it becomes "disabled," and it will automatically become enabled again when the operation completes.
(You can also make a command that's enabled based on other criteria than just "am I executing right now," but it doesn't sound like you need that here.)
Once you have that, you could derive a signal that "gates" the interval signal (for example, if:then:else: on the command's enabled signal toggling between RACSignal.empty and your actual signal -- I do this enough that I have a helper for it), or you can just check the canExecute property before invoking execute: in your subscription block.
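As a rough sketch of the second, simpler option, reusing signal and asynchOperations from the question (this is ReactiveCocoa 2.x's Objective-C RACCommand used from Swift; the bridged signatures shown are a best guess and may need adjusting for your RAC version). Rather than checking an enabled flag explicitly, the sketch relies on the command's default behavior of refusing to start a second execution while one is in flight:

// A command whose work is the zipped asynchronous operations. Until that
// signal completes, the command reports itself as executing and, by
// default, will not start a second execution.
let zipCommand = RACCommand(signalBlock: { _ -> RACSignal! in
    return RACSignal.zip(asynchOperations)
})

signal.subscribeNext { _ in
    // If a previous round is still running, the command is disabled and this
    // call just returns an error signal instead of kicking off new work.
    zipCommand.execute(nil)
}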
Note: you're doing a slightly weird thing with your inner subscription there -- capturing the value and then dealing with it in the completed block.
If you're doing that because it's more explicit, and you know that the signal will only send one value but you feel the need to encode that directly, then that's fine. I don't think it's standard, though -- if you have a signal that will only send one value, that's something that unfortunately can't be represented at the type level, but is nonetheless an assumption that you can make in your code (or at least, I find myself comfortable with that assumption. To each their own).
But if you're doing it for timing reasons, or because you actually only want the last value sent from the signal, you can use takeLast:1 instead to get a signal that will always send exactly one value right at the moment that the inner signal completes, and then only subscribe in the next block.
Slight word of warning: RACCommands are meant to be used from the main thread to back UI updates; if you want to use a command on a background thread you'll need to be explicit about the scheduler to deliver your signals on (check the docs for more details).
Another completely different approach to getting similar behavior is temporal recursion: perform your operation, then when it's complete, schedule the operation to occur again one second later, instead of having an ongoing timer.
This is slightly different in that you'll always wait one second between operations, whereas with the current approach you could be waiting anywhere between zero and one seconds, but if that's not a problem then this is a much simpler solution than using an RACCommand.
ReactiveCocoa's delay: method makes this sort of ad-hoc scheduling very convenient -- no manual NSTimer wrangling here.
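A hedged sketch of that temporal-recursion idea, reusing asynchOperations and val from the question (performRound is a name invented here). Note that it replaces the every-second signal entirely:

func performRound() {
    RACSignal.zip(asynchOperations).subscribeNext({
        val = $0
    }, completed: {
        // perform actions with `val`, then schedule the next round
        // one second after this one finished.
        RACSignal.empty().delay(1.0).subscribeCompleted {
            performRound()
        }
    })
}

performRound()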

What If the Updates Handler for CoreMotion Does Not Finish Fast Enough?

I am registering to receive updates from a CMMotionManager like so:
motionManager.startDeviceMotionUpdatesToQueue(deviceMotionQueue) { [unowned self] (deviceMotion, error) -> Void in
    // ... handle data ...
}
where deviceMotionQueue is an NSOperationQueue with the highest quality of service, i.e. the highest possible update rate:
self.deviceMotionQueue.qualityOfService = NSQualityOfService.UserInteractive
This means that I am getting updates often. Like really often. So I was wondering: what happens if I don't handle one update fast enough? What if the update interval is shorter than the execution time of "handle data"? Will the motion manager drop some information? Or will updates queue up and eventually run out of memory? Or is this not feasible at all?
It's hard to know what the internal CoreMotion implementation will do, and given that what it does is an "implementation detail", even if you could discern its current behavior, you wouldn't want to rely on that behavior moving forward.
I think the common solution to this is to do the minimum amount of work in the motion update handler, and then manage the work/rate-limiting/etc yourself. So, for instance, if you wanted to drop interstitial updates that arrived while you were processing the last update, you could have the update handler that you pass into CoreMotion do nothing but (safely) add a copy of deviceMotion to a mutable array, and then enqueue the "real" handler on a different queue. The real handler might then have a decision tree like:
if the array is empty, return immediately
otherwise (safely) take the last element, clear all elements from the array, and do the work based on the last element
This would have the effect of letting you process only the most recent reading, while also knowing how many updates were missed and, if it's useful, what those missed updates were. Depending on your app, it might be useful to batch-process the missed events as a group.
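Here is one hedged way to sketch that scheme, in Swift 2-era APIs to match the question. It reuses motionManager and deviceMotionQueue from the question; bufferLock, pendingSamples, processingQueue, and processLatestSample are names invented for illustration:

import Foundation
import CoreMotion

let bufferLock = NSLock()
var pendingSamples: [CMDeviceMotion] = []
let processingQueue = dispatch_queue_create("com.example.motion-processing", DISPATCH_QUEUE_SERIAL)

motionManager.startDeviceMotionUpdatesToQueue(deviceMotionQueue) { deviceMotion, error in
    guard let motion = deviceMotion else { return }

    // Do the minimum here: buffer the sample...
    bufferLock.lock()
    pendingSamples.append(motion)
    bufferLock.unlock()

    // ...and enqueue the "real" handler on a different queue.
    dispatch_async(processingQueue) { processLatestSample() }
}

// Runs on processingQueue; always works on the most recent reading only.
func processLatestSample() {
    bufferLock.lock()
    guard let latest = pendingSamples.last else {
        bufferLock.unlock()
        return                              // an earlier invocation already drained the buffer
    }
    let missed = pendingSamples.count - 1   // interstitial updates that were skipped
    pendingSamples.removeAll()
    bufferLock.unlock()

    // ... heavy per-sample work goes here ...
    print("processing sample at \(latest.timestamp), skipped \(missed) updates")
}

Because the heavy work happens off the CoreMotion queue and the buffer is drained in one go, a backlog of update handlers collapses into a single pass over the latest sample instead of piling up.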
But the takeaway is this: if you want to be sure about how a system like this behaves, you have to manage it yourself.

How do I slow down the game loop in the game_loop package

I am using the game_loop pub package to handle my eventloop. The problem is that it updates way too often. I don't need to update or redraw that often, and input key repetition is also too fast. I do not know much about eventloops or browser redraws, so I might think of it the wrong way, but is there a way to slow the loop down?
Run heavy tasks in a separate Isolate and keep the game loop itself as lightweight as possible. The game loop should be driven by window.animationFrame (see "How do I drive an animation loop at 60fps with Dart?"); requestAnimationFrame is the key to smooth animations.
Your game logic's speed should not depend on the browser's frame rate (FPS); use a scheduler instead (see "Does Dart have a scheduler?").
Try adding a timer and avoid adding an onUpdate or onRender handler to your GameLoop instance:
// the timer fires 20 times per second, as an example
gameLoop.addTimer(render, 0.05, periodic: true);
...
...
void render(GameLoopTimer timer) {
  // draw or update code here
}
