Having an async publisher like the one below, is there a way with Project Reactor to wait until the entire stream has finished processing?
Of course, without having to add a sleep of unknown duration...
@Test
public void groupByPublishOn() throws InterruptedException {
    UnicastProcessor<Integer> processor = UnicastProcessor.create();
    List<Integer> results = new ArrayList<>();
    Flux<Flux<Integer>> groupPublisher = processor.publish(1)
            .autoConnect()
            .groupBy(i -> i % 2)
            .map(group -> group.publishOn(Schedulers.parallel()));
    groupPublisher.log()
            .subscribe(g -> g.log()
                    .subscribe(results::add));

    List<Integer> input = Arrays.asList(1, 3, 5, 2, 4, 6, 11, 12, 13);
    input.forEach(processor::onNext);
    processor.onComplete();

    Thread.sleep(500);
    Assert.assertTrue(results.size() == input.size());
}
You can replace these lines:
groupPublisher.log()
        .subscribe(g -> g.log()
                .subscribe(results::add));
with this
groupPublisher.log()
        .flatMap(g -> g.log()
                .doOnNext(results::add)
        )
        .blockLast();
flatMap is a better pattern than subscribe-within-subscribe, and it takes care of subscribing to each group for you.
doOnNext handles the consuming side effect (adding values to the collection), freeing you from having to do that in the subscription.
blockLast() replaces the subscription: rather than letting you provide handlers for the events, it blocks until completion (it also returns the last emitted item, but you have already handled each item in doOnNext).
The main problem with blockLast() is that it will never release your pipeline if your operations are unable to finish.
An alternative is to keep the Disposable returned by subscribe() and check whether the pipeline has finished, in which case isDisposed() returns true.
Then it's up to you to decide whether you want a timeout, like the lazy count implementation :)
int count = 0;

@Test
public void checkIfItDisposable() throws InterruptedException {
    Disposable subscribe = Flux.just(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
            .map(number -> {
                try {
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                return number;
            })
            .subscribeOn(Schedulers.newElastic("1"))
            .subscribe();

    while (!subscribe.isDisposed() && count < 100) {
        Thread.sleep(400);
        count++;
        System.out.println("Waiting......");
    }
    System.out.println("Is disposed: " + subscribe.isDisposed());
}
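If polling isDisposed() in a loop feels clunky, a plain java.util.concurrent.CountDownLatch released when the pipeline terminates (e.g. from a doOnTerminate callback) gives you a bounded wait without sleeps. A minimal stdlib-only sketch, with a worker thread standing in for the reactive pipeline:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchWait {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);

        // Stand-in for the async pipeline; with Reactor you would call
        // done.countDown() from doOnTerminate(...) instead.
        new Thread(() -> {
            // ... process the stream ...
            done.countDown(); // signal completion
        }).start();

        // Bounded wait: returns false instead of hanging forever.
        boolean finished = done.await(5, TimeUnit.SECONDS);
        System.out.println("Finished in time: " + finished);
    }
}
```

Unlike Thread.sleep, the latch wakes up the instant the pipeline signals completion, and the timeout only bounds the worst case.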
And in case you want to use blockLast, at least add a timeout
@Test
public void checkIfItDisposableBlocking() throws InterruptedException {
    Flux.just(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
            .map(number -> {
                try {
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                return number;
            })
            .subscribeOn(Schedulers.newElastic("1"))
            .blockLast(Duration.of(60, ChronoUnit.SECONDS));
    System.out.println("Completed before the timeout");
}
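The same rule of thumb applies to any blocking wait on the JVM, not just blockLast: always pass a timeout so a stuck pipeline surfaces as a TimeoutException instead of hanging forever. A small stdlib sketch using ExecutorService (the names here are illustrative, not taken from the question):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class BoundedWait {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Integer> result = pool.submit(() -> {
            Thread.sleep(100); // simulated work
            return 42;
        });

        // get(timeout) throws TimeoutException rather than blocking forever.
        System.out.println(result.get(5, TimeUnit.SECONDS));
        pool.shutdown();
    }
}
```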
You can see more Reactor examples here if you need more ideas: https://github.com/politrons/reactive
I am failing to understand why the error thrown from the addItem method in the code below is not caught in the try-catch block:
void main() async {
  var executor = Executor();
  var stream = Stream.fromIterable([0, 1, 2, 3, 4, 5, 6, 7]);
  try {
    await for (var _ in stream) {
      executor.submit(() => demoMethod());
    }
    await executor.execute();
  } catch (e) {
    print(e);
  }
}

Future<void> demoMethod() async {
  var list = [1, 2, 3, 1, 4, 5];
  var executor = Executor();
  var test = Test();
  for (var element in list) {
    executor.submit(() => test.addItem(element));
  }
  await executor.execute();
  test.list.forEach(print);
}

class Test {
  var list = <int>[];

  Future<void> addItem(int i) async {
    if (list.contains(i)) {
      throw Exception('Item exists');
    }
    list.add(i);
  }
}

class Executor {
  final List<Future<void>> _futures = [];
  bool _disposed = false;

  void submit(Future<void> Function() computation) {
    if (!_disposed) {
      _futures.add(computation());
    } else {
      throw Exception('Executor is already disposed');
    }
  }

  Future<void> execute() async {
    await Future.wait(_futures, eagerError: true);
    _disposed = true;
  }
}
but the code below catches the error properly:
void main() async {
  var executor = Executor();
  try {
    for (var i = 0; i < 10; i++) {
      executor.submit(() => demoMethod());
    }
    await executor.execute();
  } catch (e) {
    print(e);
  }
}
I am guessing it has something to do with the stream processing.
It's the stream.
In your other example, you synchronously run through a loop and call Executor.submit with all the computations, then immediately call executor.execute().
There is no asynchronous gap between calling the function which returns a future and Future.wait starting to wait for that future.
In the stream code, each stream event starts an asynchronous computation by calling Executor.submit. That creates a future, stores it in a list, and goes back to waiting for the stream.
If that future completes with an error before the stream ends and Future.wait gets called, then there is no error handler attached to the future yet. The error is then considered unhandled and is reported to the current Zone's uncaught-error handler. Here that's the root zone, which means it's a global uncaught error, which may crash your entire program.
You need to make sure the future doesn't consider its error unhandled.
The easiest way to do that is to change submit to:
void submit(Future<void> Function() computation) {
  if (!_disposed) {
    _futures.add(computation()..ignore());
  } else {
    throw StateError('Executor is already disposed');
  }
}
The ..ignore() tells the future that it's OK to not have an error handler.
You know, because the code will later come back and call executor.execute, that any errors will still be reported, so it should be safe to just postpone them a little. That's what Future.ignore is for.
(Also changed Exception to StateError, because that's what you should use to report people using objects that have been disposed or otherwise decommissioned.)
I'm trying to check each movement in a separate task and then check whether there was a collision. In some iterations it throws an exception: "Source array was not long enough. Check srcIndex and length, and the array's lower bounds."
If I run sequentially in a for loop the error does not occur, but I need to run in parallel to increase performance.
While debugging I noticed that the error always occurs when calling cd.DoWork():
private void btn_Tasks_Click(object sender, EventArgs e)
{
    // The source of your work items: create a sequence of Task instances.
    Task[] tasks = Enumerable.Range(0, tabelaPosicao.Count).Select(i =>
        // Create the task here.
        Task.Run(() =>
        {
            VerifiCollision(i);
        })
        // No signalling, no anything.
    ).ToArray();

    // Wait on all the tasks.
    Task.WaitAll(tasks);
}
private void VerifiCollision(object x)
{
    int xx = (int)x;
    int AuxIncrMorsa = Convert.ToInt32(tabelaPosicao[xx].Posicao) * -1;
    bRef_BaseMorsa.Transformation = new Translation(0, AuxIncrMorsa, 0);
    CollisionDetection cd = new CollisionDetection(
        new List<Entity>() { bRef_BaseMorsa },
        new List<Entity>() { bRef_Matriz },
        model1.Blocks,
        true,
        CollisionDetection2D.collisionCheckType.OBWithSubdivisionTree,
        maxTrianglesNumForOctreeNode: 5);

    if (cd != null)
    {
        try
        {
            cd.DoWork();
        }
        catch (AggregateException ae)
        {
            var message = ae.Message;
        }
        catch (Exception e)
        {
            Console.WriteLine(e.StackTrace);
        }
    }

    model1.Entities.ClearSelection();
    if (cd.Result != null && cd.Result.Count > 0)
    {
        tabelaPosicao[xx].Tuple = new Tuple<string, string>(
            cd.Result[0].Item1.ParentName,
            cd.Result[0].Item2.ParentName);
    }
}
Before applying the transformation you need to clone the entity.
You can have a look at the "WORKFLOW" topic of this article.
I solved it by cloning the Blocks and BlockReference, so each iteration performed its transformation separately and there was no possibility of one iteration's transformation interfering with another's. Grateful for the help.
Let's assume I have a Stream<int> emitting integers in different time deltas i.e. between 5ms and 1000ms.
When the delta is <= 50ms I want to merge them. For example:
3, (delta:100) 5, (delta:27) 6, (delta:976) 3
I want to consume: 3, 11(merged using addition), 3.
Is this possible?
You can use the debounceBuffer stream transformer from the stream_transform package.
stream
.transform(debounceBuffer(const Duration(milliseconds: 50)))
.map((list) => list.fold(0, (t, e) => t + e))
You can write that easily enough yourself:
Stream<int> debounce(
    Stream<int> source, Duration limit, int combine(int a, int b)) async* {
  int prev;
  var stopwatch;
  await for (var event in source) {
    if (stopwatch == null) {
      // First event.
      prev = event;
      stopwatch = Stopwatch()..start();
    } else {
      if (stopwatch.elapsed < limit) {
        prev = combine(prev, event);
      } else {
        yield prev;
        prev = event;
      }
      stopwatch.reset();
    }
  }
  // If any event, yield prev.
  if (stopwatch != null) yield prev;
}
Playing with Dart: is it possible to create a delay when constructing a Future?

Future<String>.value("Hello").then((newsDigest) {
  print(newsDigest);
}); // .delayed(Duration(seconds: 5))
Yes, this is possible; this is how the SDK implements Future.delayed:

factory Future.delayed(Duration duration, [FutureOr<T> computation()]) {
  _Future<T> result = new _Future<T>();
  new Timer(duration, () {
    try {
      result._complete(computation?.call());
    } catch (e, s) {
      _completeWithErrorCallback(result, e, s);
    }
  });
  return result;
}
As you have already discovered, the Future.delayed constructor creates a future that runs after a delay.
From the docs:
Future<T>.delayed(
Duration duration,
[ FutureOr<T> computation()
])
The computation will be executed after the given duration has passed, and the future is completed with the result of the computation.
If computation returns a future, the future returned by this constructor will complete with the value or error of that future.
For the sake of simplicity, taking a future that completes immediately with a value, this snippet creates a delayed future that completes after 3 seconds:
import 'dart:async';

main() {
  var future = Future<String>.value("Hello");
  var delayedFuture = Future.delayed(Duration(seconds: 3), () => future);
  delayedFuture.then((value) {
    print("Done: $value");
  });
}
What could be the trigger that makes the filter function throw or rethrow an error?
someArray.filter(includeElement: (Self.Generator.Element) throws -> Bool )
The "trigger" is the presence of some code inside the closure that can throw an error, i.e. there is a try inside the closure.
The filter method is defined to not only accept closures that might throw an error, but also to rethrow any errors thrown by its closure. So, if you call filter with a closure that throws an error (i.e. the closure has a try statement), you can wrap the whole filter call in its own do-try-catch to gracefully handle any errors the closure may throw.
do {
    let result = try array.filter {
        // some code with `try` in it here
        return success
    }
} catch {
    print(error)
}
For example, let's imagine you had some Fraction type that throws a custom Error when you calculateValue with a denominator of zero.
enum MathError: Error {
    case divideByZero
}

struct Fraction {
    let numerator: Int
    let denominator: Int

    func calculateValue() throws -> Double {
        if denominator == 0 {
            throw MathError.divideByZero
        }
        return Double(numerator) / Double(denominator)
    }
}
You can then do something like:
let fractions = [
    Fraction(numerator: 1, denominator: 3),
    Fraction(numerator: 5, denominator: 7),
    Fraction(numerator: 4, denominator: 0)
]

do {
    let biggerThanOneHalf = try fractions.filter {
        try $0.calculateValue() > 0.5
    }
    print(biggerThanOneHalf)
} catch {
    print(error)
}
Clearly, if the closure that you supply to filter doesn't throw any errors (i.e. there is no try in the closure), then you do not have to worry about filter rethrowing anything, and no do-catch block is needed at all:
let numbers = [0, 1, 2, 3, 4, 5]
let evenNumbers = numbers.filter { $0 % 2 == 0 }
For the Swift 2 rendition, see the previous revision of this answer.