C++ equivalent of .NET's Task.Delay?

I'm writing a C++/CX component to be consumed by Windows Store apps. I'm looking for a way to accomplish what Task.Delay(1000) does in C#.

Old question, but still unanswered.
You can use
#include <chrono>
#include <thread>
std::this_thread::sleep_for(std::chrono::milliseconds(1000));
This will need C++11, which shouldn't be a problem when using C++/CX.

After one year of using C++/CX, I have a general and reasonably correct answer to this question.
This link (from the Visual C++ Parallel Patterns Library documentation) includes a snippet for a function called complete_after(). That function creates a task that will complete after the specified number of milliseconds. You can then define a continuation task that will execute afterwards:
void MyFunction()
{
    // ... Do a first thing ...
    concurrency::create_task(complete_after(1000))
        .then([]() {
            // Do the next thing, on the same thread.
        }, concurrency::task_continuation_context::use_current());
}
Or better yet, if you use Visual C++'s coroutine support, simply write:
concurrency::task<void> MyFunctionAsync()
{
    // ... Do a first thing ...
    co_await complete_after(1000);
    // Do the next thing.
    // Warning: if not on the UI thread (e.g., on a threadpool thread), this may resume on a different thread.
}

You could create a concurrency::task that waits 1000 milliseconds, then attach a continuation with its .then method. This ensures that at least 1000 milliseconds elapse between the time you create the task and the time the continuation executes.

I'm not going to claim to be a wizard - I'm still fairly new to UWP and C++/CX, but what I'm using is the following:
public ref class MyClass sealed {
public:
    MyClass()
    {
        m_timer = ref new Windows::UI::Xaml::DispatcherTimer;
        m_timer->Tick += ref new Windows::Foundation::EventHandler<Platform::Object^>(this, &MyClass::PostDelay);
    }

    void StartDelay()
    {
        // Interval is a value-type property, so assign a whole TimeSpan
        // (writing m_timer->Interval.Duration = ... would only modify a temporary).
        Windows::Foundation::TimeSpan interval;
        interval.Duration = 200 * 10000; // 200 ms expressed in 100-nanosecond ticks
        m_timer->Interval = interval;
        m_timer->Start();
    }

    void PostDelay(Platform::Object^ sender, Platform::Object^ args)
    {
        m_timer->Stop();
        // Do some stuff after the delay
    }

private:
    Windows::UI::Xaml::DispatcherTimer^ m_timer;
};
The main advantages over other approaches are:
it's non-blocking
you're guaranteed to be called back on the XAML UI thread

Related

How do I run some code only once in Dart?

I wonder if there's language sugar or an SDK utility function in Dart that allows you to protect a certain piece of code from running more than once?
E.g.
void onUserLogin() {
...
runOnce(() {
handleInitialMessage();
});
...
}
I know I can add a global or class-static boolean flag to check, but it would be accessible to other functions in the same scope, with a risk of accidental mix-ups in the future.
In C++ I could e.g. use a local static bool for this.
There is no built-in functionality to prevent code from running more than once. You need some kind of external state to know whether it actually did run.
You can't just remember whether the function itself has been seen before, because you use a function expression ("lambda") here, and every evaluation of that creates a new function object which is not even equal to other function objects created by the same expression.
So, you need something to represent the location of the call.
I guess you could hack up something using stack traces. I will not recommend that (very expensive for very little advantage).
So, I'd recommend something like:
class RunOnce {
  bool _hasRun = false;
  void call(void Function() function) {
    if (_hasRun) return;
    // Set after calling if you don't want a throw to count as a run.
    _hasRun = true;
    function();
  }
}
...
static final _runOnce = RunOnce();
void onUserLogin() {
  _runOnce(handleInitialMessage);
}
It's still just a static global that can be accidentally reused.

What is the difference between Flux.create() vs Flux.push() in project reactor?

What is the difference between Flux.create and Flux.push? I am looking--ideally with an example use case--to understand when I should use one or the other.
As the documentation says:
create: Programmatically create a Flux with the capability of emitting multiple elements in a synchronous or asynchronous manner through the FluxSink API. This includes emitting elements from multiple threads.
push: Programmatically create a Flux with the capability of emitting multiple elements from a single-threaded producer through the FluxSink API.
They are useful when you want to adapt some other asynchronous, external API without worrying about cancellation and backpressure (both are handled automatically by these methods).
Here is an example:
@Test
void testCreateToWrapMultiThreadsAsyncExternalAPI() {
    SequenceCreator sequenceCreator = new SequenceCreator();
    int numberOfElements = 10000;
    StepVerifier.create(sequenceCreator.createNumberSequence(numberOfElements))
        .expectNextCount(numberOfElements)
        .verifyComplete();
}

@Slf4j
class SequenceCreator {
    public Flux<Integer> createNumberSequence(Integer elementsToEmit) {
        return Flux.create(sharedSink -> multiThreadSource(elementsToEmit, sharedSink));
    }

    void multiThreadSource(Integer elementsToEmit, FluxSink<Integer> sharedSink) {
        Thread producingThread1 = new Thread(() -> emitElements(sharedSink, elementsToEmit / 2), "Thread_1");
        Thread producingThread2 = new Thread(() -> emitElements(sharedSink, elementsToEmit / 2), "Thread_2");
        producingThread1.start(); // Start emitting elements
        producingThread2.start();
        try {
            producingThread1.join(); // Wait for both threads to finish
            producingThread2.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        sharedSink.complete();
    }

    public void emitElements(FluxSink<Integer> sink, Integer count) {
        IntStream.range(1, count + 1).boxed().forEach(n -> {
            log.info("onNext {}", n);
            sink.next(n);
        });
    }
}
Here you have a source that emits elements in parallel. The source is composed of two threads, and each thread emits numberOfElements / 2 elements, i.e. 5000 elements each from Thread_1 and Thread_2 in this example. This source is wrapped with the create method, and the test verifies that 10,000 elements are emitted in total. The test passes.
Now try to replace create with push. The test won't pass (if it does pass, use a higher number for numberOfElements). That's because push expects that only one producing thread invokes next, complete, or error at a time; it does not serialize concurrent use of the FluxSink API.
Hopefully this toy example helps you understand when to use one rather than the other.
From the documentation at https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html
create()
Programmatically create a Flux with the capability of emitting multiple elements in a synchronous or asynchronous manner through the FluxSink API.
push()
Programmatically create a Flux with the capability of emitting multiple elements from a single-threaded producer through the FluxSink API.
With create() you can produce items from multiple threads. Use push() only if you do not intend to use multiple threads.

Caching streams in Functional Reactive Programming

I have an application which is written entirely using the FRP paradigm and I think I am having performance issues due to the way that I am creating the streams. It is written in Haxe but the problem is not language specific.
For example, I have this function which returns a stream that resolves every time a config file is updated for that specific section like the following:
function getConfigSection(section:String) : Stream<Map<String, String>> {
  return configFileUpdated()
    .then(filterForSectionChanged(section))
    .then(readFile)
    .then(parseYaml);
}
In promhx, the reactive programming library I am using, each step of the chain should remember its last resolved value, but I think every time I call this function I recreate the stream and reprocess each step. This is a problem with how I am using it rather than with the library.
Since this function is called everywhere, parsing the YAML every time it is needed is killing performance: it takes up over 50% of the CPU time according to profiling.
As a fix I have done something like the following using a Map stored as an instance variable that caches the streams:
function getConfigSection(section:String) : Stream<Map<String, String>> {
  var cachedStream = this._streamCache.get(section);
  if (cachedStream != null) {
    return cachedStream;
  }
  var stream = configFileUpdated()
    .filter(sectionFilter(section))
    .then(readFile)
    .then(parseYaml);
  this._streamCache.set(section, stream);
  return stream;
}
This might be a good solution to the problem but it doesn't feel right to me. I am wondering if anyone can think of a cleaner solution that maybe uses a more functional approach (closures etc.) or even an extension I can add to the stream like a cache function.
Another way I could do it is to create the streams beforehand and store them in fields that consumers can access. I don't like this approach because I don't want to make a field for every config section; I like being able to call a function with a specific section and get a stream back.
I'd love any ideas that could give me a fresh perspective!
Well, I think one answer is to just abstract away the caching like so:
class Test {
  static function main() {
    var sideeffects = 0;
    var cached = memoize(function (x) return x + sideeffects++);
    cached(1);
    trace(sideeffects); // 1
    cached(1);
    trace(sideeffects); // 1
    cached(3);
    trace(sideeffects); // 2
    cached(3);
    trace(sideeffects); // 2
  }

  @:generic static function memoize<In, Out>(f:In->Out):In->Out {
    var m = new Map<In, Out>();
    return function (input:In)
      return switch m[input] {
        case null: m[input] = f(input);
        case output: output;
      }
  }
}
You may be able to find a more "functional" implementation for memoize down the road. But the important thing is that it is a separate thing now and you can use it at will.
You may choose to memoize(parseYaml) so that toggling two states in the file actually becomes very cheap after both have been parsed once. You can also tweak memoize to manage the cache size according to whatever strategy proves the most valuable.

Dart: update UI during long computation

I have a small dart script which I intend to use in the following way:
CanvasElement canvas;

void main() {
  canvas = querySelector('#canvas');
  querySelector('#start-button').onClick.listen((_) => work());
}

void work() {
  var state; // some state of the computation
  for (int i = 0; i < /* big number */; i++) {
    // do some long computation
    render(state); // display intermediate result visualisation on canvas
  }
}

void render(var state) {
  canvas.context2D.... // draw on canvas based on state
}
that is listen for click on a button and on that click execute some long computation and from that computation display some intermediate results on the canvas live as the computation progresses.
However, this does not work - the canvas is updated only once after the whole computation completes.
Why is that? How should I arrange my code to work in a live, responsive way?
One solution is to put parts of your long computation onto Dart's event loop, i.e. to queue chunks of the computation by awaiting a future that returns immediately.
Please, see sample on DartPad.
But if you have lots of heavy computations, I think it would be better to start a new isolate and communicate with it about it's state.
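The await-based version is roughly of this shape (a standalone console sketch, so it runs without dart:html; compute and render here are stand-ins for the asker's canvas code, and the loop bound is shortened):

```dart
import 'dart:async';

double compute(int i) => i * 0.5; // one slice of the long computation

void render(double state) => print('render $state'); // stand-in for canvas drawing

Future<void> work() async {
  for (int i = 0; i < 3; i++) {
    render(compute(i));
    // Yield to the event loop so queued events (e.g. a browser repaint)
    // can run between slices of the computation.
    await Future(() {});
  }
}

void main() async {
  await work();
  print('done');
}
```

The key line is the await: without it the whole loop runs as one uninterrupted task and the UI only updates at the end, exactly as the question describes.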
Update
Here is an almost (but not exactly) equivalent work function, rewritten without using await; maybe it makes a bit clearer how this works:
for (int i = 0; i < 200; i++) {
  new Future(() => compute(i)).then((double result) => render(i + result * 50));
}
Here, we actually create 200 futures, queue them into the event loop (the Future constructor does that), and register a callback for each one.

Using polymer's `.job` in polymer.dart

In Polymer there is a this.job() function that handles the delayed processing of events. How do you access this functionality from polymer.dart?
@override
void attached() {
  super.attached();
  dom.window.onMouseMove.listen(mouseMoveHandler);
}

PolymerJob mouseMoveJob;

void mouseMoveHandler(dom.MouseEvent e) {
  print('mousemove');
  mouseMoveJob = scheduleJob(mouseMoveJob, onDone, new Duration(milliseconds: 500));
}

void onDone() {
  print('done');
}
If the job isn't rescheduled for 500ms it is executed.
In Polymer this is often used during initialization, when
xxxChanged(old);
is called several times in quick succession because xxx is updated on changes from several other states which are initialized one after the other, but it is enough for xxxChanged to be executed for the last update. (A much shorter timeout should be used then, like 0-20 ms, depending on whether xxxChanged is only called from sync code or also from async code.)
Another situation where I used this pattern (but not using PolymerJob) is where an @observable field is bound to a slider <input type="range" value='{{slider}}'>.
This invokes sliderChanged(oldVal, newVal) very often in a short interval when you move the knob. The execution of the update is expensive and can't be finished between two such calls; see http://bwu-dart.github.io/bwu_datagrid/example/e04_model.html for an example.
Without some delayed execution this would be very cumbersome to use.
Try using Future:
doJob() => print('hi');
new Future(doJob).then((_) => print('job is done'));
Here are the docs for the Future class.