I am using Reactor in a project, and one of the features calls a blocking service which connects to a device and gets back an infinite stream of events.
I am trying to do a load test to see how many calls I can make to the blocking service.
I am generating around 1000 requests to the blocking service:
Flux.just("ip1", "ip2", "ip3", "ip4")
.repeat(250)
The problem is that Reactor only processes the first 256 requests; after that it doesn't make any more.
When I added the .log("preConnect") I can see the downstream subscriber making only a single request(256).
I don't understand what I am doing wrong.
I am attaching a simplified example which reproduces the issue.
package test.reactor;
import org.junit.jupiter.api.Test;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;
import java.util.stream.Stream;
public class ReactorTest {

    @Test
    void testLoad() throws InterruptedException {
        AtomicInteger id = new AtomicInteger(0);
        Flux.just("ip1", "ip2", "ip3", "ip4")
                .repeat(250) // will create a total of 1004 messages
                .map(str -> str + " id: " + id.incrementAndGet())
                .log("preConnect")
                .flatMap(this::blocking)
                .log()
                .subscribeOn(Schedulers.parallel())
                .subscribe();
        new CountDownLatch(1).await();
    }

    private Flux<String> blocking(String ip) {
        Mono<String> connectMono = Mono.fromCallable(this::connect)
                .subscribeOn(Schedulers.boundedElastic())
                .map(msg -> "Connected: " + ip + msg);
        Flux<String> streamFlux = Mono.fromCallable(this::infiniteNetworkStream)
                .subscribeOn(Schedulers.boundedElastic())
                .flatMapMany(Flux::fromStream)
                .map(msg -> ip + msg);
        return connectMono.concatWith(streamFlux);
    }

    private Stream<String> infiniteNetworkStream() {
        return Stream.generate(new Supplier<String>() {
            @Override
            public String get() {
                try {
                    Thread.sleep(5000);
                } catch (InterruptedException e) {
                    throw new RuntimeException(e);
                }
                return "Hello";
            }
        });
    }

    private String connect() throws Exception {
        Thread.sleep(100);
        return "success";
    }
}
Figured out the issue: it has to do with flatMap. The default concurrency for flatMap is 256, and it will not request more items from the upstream publisher until the number of active inner subscriptions drops below 256.
In my case, since each inner Flux is infinite, no subscription ever completes, so nothing was processed after the first 256.
The solution I found was to increase the concurrency:
Flux.just("ip1", "ip2", "ip3", "ip4")
        .repeat(250) // will create a total of 1004 messages
        .map(str -> str + " id: " + id.incrementAndGet())
        .log("preConnect")
        .flatMap(this::blocking, 1000) // added 1000 here to increase concurrency
        .log()
        .subscribeOn(Schedulers.parallel())
        .subscribe();
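For reference, flatMap has overloads that make these limits explicit; the default of 256 comes from Reactor's small buffer size, which (if I recall correctly) can also be tuned globally via the reactor.bufferSize.small system property. A minimal, self-contained sketch (not part of the original test) showing the concurrency-only and concurrency-plus-prefetch variants:

import reactor.core.publisher.Flux;

public class FlatMapConcurrencyDemo {
    public static void main(String[] args) {
        // flatMap(mapper, concurrency): at most 1000 inner publishers in flight.
        Flux.range(1, 1000)
                .flatMap(i -> Flux.just(i * 2), 1000)
                .blockLast();

        // flatMap(mapper, concurrency, prefetch): also bounds how many items
        // are requested up front from each inner publisher.
        Flux.range(1, 1000)
                .flatMap(i -> Flux.just(i * 2), 1000, 32)
                .blockLast();
    }
}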
My goal is to process GUI events in parallel, but only when I have processing power available, and the last event must always be processed. I have a panel which can be resized; every resize produces a new event. I want to process the panel's new width/height on a computation thread pool (Scheduler computation = Schedulers.newParallel("Computation", 4);) in an ordered fashion. If none of the threads is available, I need to drop the oldest GUI events, and when a thread becomes available it should take the latest event from the backpressure queue.
I wrote a test app and I have several issues. After GUI events stop being produced, processing continues for a considerable time, which will ultimately manifest as an unwanted animation effect. My guess was that the backpressure queue (size = 256) kept the old events and was still processing them, but that does not match the result logs: after producing 561 events, only 33 events were processed (why not 256?), with ids [0-32, 560]. Is there a way to change the backpressure buffer size (I could not find one), or is there a totally different way I should approach this task?
I am attaching test code to reproduce this.
import java.util.Random;
import java.util.concurrent.TimeUnit;
import javafx.application.Application;
import javafx.beans.value.ChangeListener;
import javafx.scene.Scene;
import javafx.scene.layout.StackPane;
import javafx.stage.Stage;
import reactor.core.publisher.Flux;
import reactor.core.publisher.FluxSink.OverflowStrategy;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;
public class BackpressureApp extends Application {

    public static void main(String[] args) {
        launch(args);
    }

    private int id = 0;

    @Override
    public void start(Stage stage) throws Exception {
        Scheduler computation = Schedulers.newParallel("Computation", 4);
        Flux<Width> flux = Flux.create(sink -> {
            stage.widthProperty().addListener((ChangeListener<Number>) (observable, oldValue, newValue) -> {
                Width width = new Width(id++, newValue.doubleValue());
                System.out.println("[" + Thread.currentThread().getName() + "] PUBLISH width=" + width);
                sink.next(width);
            });
        }, OverflowStrategy.LATEST);

        flux.concatMap(width -> Mono.just(width).subscribeOn(computation).map(this::process))
                .publishOn(Schedulers.single())
                .subscribe(width -> {
                    System.out.println("[" + Thread.currentThread().getName() + "] RECEIVED width=" + width);
                });

        stage.setScene(new Scene(new StackPane()));
        stage.show();
    }

    public Width process(Width width) {
        Random random = new Random();
        int next = random.nextInt(1000);
        try {
            TimeUnit.MILLISECONDS.sleep(next);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("[" + Thread.currentThread().getName() + "] PROCESS width=" + width + " sleep=" + next);
        return width;
    }
}

class Width {

    private final int id;
    private final double width;

    public Width(int id, double width) {
        this.id = id;
        this.width = width;
    }

    public int getId() {
        return id;
    }

    public double getWidth() {
        return width;
    }

    @Override
    public String toString() {
        return "Width[id=" + id + ", width=" + width + "]";
    }
}
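Not from the original thread, but one direction worth sketching: Reactor's onBackpressureLatest() keeps only the newest element when the downstream is busy, and the concatMap(mapper, prefetch) overload lets you shrink the prefetch from its default of 32 down to 1 (that default of 32 would also roughly match seeing ids 0-32 plus the latest one). A hypothetical variant of the subscription above, reusing flux, computation, and process from the code:

// Hypothetical variant: keep only the newest event while a computation
// is in flight, by reducing concatMap's prefetch to 1.
flux.onBackpressureLatest()
        .concatMap(width -> Mono.just(width).subscribeOn(computation).map(this::process), 1)
        .publishOn(Schedulers.single())
        .subscribe(width ->
                System.out.println("[" + Thread.currentThread().getName() + "] RECEIVED width=" + width));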
I was using Flux.concatDelayError because I want to subscribe to multiple Monos one by one, and I also want to know if something has failed.
However, now I would also like to short-circuit if one of my Monos completes with a specific type of error.
Is this possible easily?
Using the onErrorResume operator, you can configure a conditional fallback to Mono.empty() for each Mono:
package com.example;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import static java.util.function.Predicate.not;
public class ReactorExample
{
    public static void main(String[] args)
    {
        Mono<String> mono = Mono.just("first").doOnNext(a -> System.out.println(a + " was called."));

        Mono<String> mono2 = Mono.<String>error(new RuntimeException("Not terminating error."))
                .onErrorResume(not(ShortCircuitingException.class::isInstance), e -> Mono.empty());

        Mono<String> mono3 = Mono.just("third").doOnNext(a -> System.out.println(a + " was called."));

        Mono<String> mono4 = Mono.<String>error(new ShortCircuitingException())
                .onErrorResume(not(ShortCircuitingException.class::isInstance), e -> Mono.empty());

        Mono<String> mono5 = Mono.just("fifth").doOnNext(a -> System.out.println(a + " was called."));

        Flux.concat(mono, mono2, mono3, mono4, mono5)
                .collectList()
                .block();
    }

    private static class ShortCircuitingException extends RuntimeException
    {
    }
}
Output:
first was called.
third was called.
Exception in thread "main" com.example.ReactorExample$ShortCircuitingException
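One caveat: java.util.function.Predicate.not was only added in Java 11. On older JDKs the same conditional fallback can be written with a plain lambda via the onErrorResume(Predicate, Function) overload; a drop-in sketch for mono2 above:

// Pre-Java-11 version of the conditional fallback: resume with an empty
// Mono unless the error is the short-circuiting type.
Mono<String> mono2 = Mono.<String>error(new RuntimeException("Not terminating error."))
        .onErrorResume(e -> !(e instanceof ShortCircuitingException), e -> Mono.empty());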
My requirement is as follows:
There are multiple client IoT devices. They send data to a server, receive messages from the server, and change their behaviour accordingly. There are various front ends that want to monitor data from devices and send commands to devices.
I was reading about MQTT and understand it to have subscribers, publishers and a broker in between.
My question is: can I register my devices as both publishers and subscribers to the same broker? Is this advisable? Thanks.
I do not see a problem with that.
To keep things separate, you may want to use different topics for transmitting data and for control messages.
Yes, there should be no issue with MQTT publish and subscribe in the same client.
Here is sample Java code for an MQTT client that both publishes and subscribes:
import org.eclipse.paho.client.mqttv3.*;
import org.eclipse.paho.client.mqttv3.persist.MqttDefaultFilePersistence;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class MQTTClient {

    private static final Logger logger = LoggerFactory.getLogger(MQTTClient.class);

    public static void main(String[] args) {
        MqttClient mqttClient;
        String tmpDir = System.getProperty("java.io.tmpdir");
        String subscribeTopicName = "echo";
        String publishTopicName = "thing";
        String payload;
        MqttDefaultFilePersistence dataStore = new MqttDefaultFilePersistence(tmpDir);
        try {
            mqttClient = new MqttClient("tcp://localhost:1883", "thing1", dataStore);
            MqttConnectOptions mqttConnectOptions = new MqttConnectOptions();
            mqttConnectOptions.setUserName("/:guest");
            mqttConnectOptions.setPassword("guest".toCharArray());
            mqttConnectOptions.setCleanSession(false);
            mqttClient.connect(mqttConnectOptions);
            logger.info("Connected to Broker");

            mqttClient.subscribe(subscribeTopicName);
            logger.info(mqttClient.getClientId() + " subscribed to topic: {}", subscribeTopicName);

            mqttClient.setCallback(new MqttCallback() {
                @Override
                public void connectionLost(Throwable throwable) {
                    logger.info("Connection lost to MQTT Broker");
                }

                @Override
                public void messageArrived(String topic, MqttMessage message) throws Exception {
                    logger.info("-------------------------------------------------");
                    logger.info("| Received ");
                    logger.info("| Topic: {}", topic);
                    logger.info("| Message: {}", new String(message.getPayload()));
                    logger.info("| QoS: {}", message.getQos());
                    logger.info("-------------------------------------------------");
                }

                @Override
                public void deliveryComplete(IMqttDeliveryToken iMqttDeliveryToken) {
                    logger.info("Delivery Complete");
                }
            });

            MqttMessage message = new MqttMessage();
            for (int i = 1; i < 6; i++) {
                payload = "Message " + i + " from Thing";
                message.setPayload(payload.getBytes());
                logger.info("Set Payload: {}", payload);
                logger.info(mqttClient.getClientId() + " published to topic: {}", publishTopicName);
                // QoS 1
                mqttClient.publish(publishTopicName, message);
            }
        } catch (MqttException me) {
            logger.error("reason: {}", me.getReasonCode());
            logger.error("msg: {}", me.getMessage());
            logger.error("loc: {}", me.getLocalizedMessage());
            logger.error("cause: {}", me.getCause());
            logger.error("excep: {}", me);
            me.printStackTrace();
        }
    }
}
In the code above, see mqttClient.subscribe for subscribing and mqttClient.publish for publishing.
I have explained how this works end to end with RabbitMQ as the MQTT broker in a blog post, and the working sample code I used is available on GitHub. Please check: http://softwaredevelopercentral.blogspot.com/2017/12/iot-internet-of-things-tutorial.html
There should be only one broker in the overall setup. For scalability you may run multiple brokers that work in unison, but as a whole they still present themselves as a single broker; even in a multi-broker setup, an edge client connects to only one broker.
Make sure you keep one unique publishing topic and one unique subscribing topic per device, for device scalability, lower edge processing, and easier human understanding.
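As an illustration of that per-device layout (the topic names here are hypothetical, not prescribed by MQTT), the client code above could derive its topics from a unique device id:

// Hypothetical per-device topic scheme: one publish topic and one
// subscribe topic per device, keyed by a unique device id.
String deviceId = "thing1";
String dataTopic = "devices/" + deviceId + "/data";        // device publishes here
String commandTopic = "devices/" + deviceId + "/commands"; // device subscribes here

mqttClient.publish(dataTopic, message);
mqttClient.subscribe(commandTopic);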
Again, there are always trade-offs based on the use case.
Cheers,
Ranjith
I was trying out the Reactor library and I'm not able to figure out why the Mono below never comes back with an onNext or onComplete call. I think I'm missing something very trivial. Here's a sample code.
MyServiceService service = new MyServiceService();
service.save("id")
        .map(myUserMono -> new MyUser(myUserMono.getName().toUpperCase(), myUserMono.getId().toUpperCase()))
        .subscribe(new Subscriber<MyUser>() {
            @Override
            public void onSubscribe(Subscription s) {
                System.out.println("Subscribed!" + Thread.currentThread().getName());
            }

            @Override
            public void onNext(MyUser myUser) {
                System.out.println("OnNext on thread " + Thread.currentThread().getName());
            }

            @Override
            public void onError(Throwable t) {
                System.out.println("onError!" + Thread.currentThread().getName());
            }

            @Override
            public void onComplete() {
                System.out.println("onCompleted!" + Thread.currentThread().getName());
            }
        });
}

private static class MyServiceService {
    private Repository myRepo = new Repository();

    public Mono<MyUser> save(String userId) {
        return myRepo.save(userId);
    }
}

private static class Repository {
    public Mono<MyUser> save(String userId) {
        return Mono.create(myUserMonoSink -> {
            // exe is an ExecutorService, e.g. Executors.newFixedThreadPool(4);
            // the ListenableFuture types come from Guava (com.google.common.util.concurrent)
            Future<MyUser> submit = exe.submit(() -> this.blockingMethod(userId));
            ListenableFuture<MyUser> myUserListenableFuture = JdkFutureAdapters.listenInPoolThread(submit);
            Futures.addCallback(myUserListenableFuture, new FutureCallback<MyUser>() {
                @Override
                public void onSuccess(MyUser result) {
                    myUserMonoSink.success(result);
                }

                @Override
                public void onFailure(Throwable t) {
                    myUserMonoSink.error(t);
                }
            });
        });
    }

    private MyUser blockingMethod(String userId) throws InterruptedException {
        Thread.sleep(5000);
        return new MyUser("blocking", userId);
    }
}
The above code only prints Subscribed!main. What I can't figure out is why the future callback is not pushing values through myUserMonoSink.success.
The important thing to keep in mind is that a Flux or Mono is asynchronous, most of the time.
Once you subscribe, the asynchronous processing of saving the user starts in the executor, but execution continues in your main code after .subscribe(...).
So the main thread exits, terminating your test before anything was pushed to the Mono.
[sidebar]: when is it ever synchronous?
When the source of data is a Flux/Mono synchronous factory method. BUT with the added pre-requisite that the rest of the chain of operators doesn't switch execution context. That could happen either explicitly (you use a publishOn or subscribeOn operator) or implicitly (some operators like time-related ones, eg. delayElements, run on a separate Scheduler).
Simply put, your source runs on the ExecutorService thread of exe, so the Mono is indeed asynchronous. Your snippet, on the other hand, runs on main.
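To make the sidebar concrete, a minimal sketch: with a synchronous source and no scheduler switch, the whole pipeline runs on the calling thread.

import reactor.core.publisher.Flux;

public class SynchronousDemo {
    public static void main(String[] args) {
        // No subscribeOn/publishOn and no time-based operators: everything
        // executes on the main thread, before subscribe() returns.
        Flux.just(1, 2, 3)
                .map(i -> i * 10)
                .subscribe(i -> System.out.println(Thread.currentThread().getName() + " -> " + i));
        // prints "main -> 10", "main -> 20", "main -> 30"
    }
}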
How to fix the issue
To observe the correct behavior of Mono in an experiment (as opposed to fully async code in production), several possibilities are available:
keep the subscribe with the System.out.printlns, but add a new CountDownLatch(1) that is counted down inside both onComplete and onError, and await on the latch after the subscribe (see the sketch after this list).
use .log().block() instead of .subscribe(...). You lose the customization of what to do on each event, but log() will print those out for you (provided you have a logging framework configured). block() will revert to blocking mode and do pretty much what I suggested with the CountDownLatch above. It returns the value once available or throws an Exception in case of error.
instead of log() you can customize logging or other side effects using .doOnXXX(...) methods (there's one for pretty much every type of event + combinations of events, eg. doOnSubscribe, doOnNext...)
If you're doing a unit test, use StepVerifier from the reactor-tests project. It will subscribe to the flux/mono and wait for events when you call .verify(). See the reference guide chapter on testing (and the rest of the reference guide in general).
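A minimal sketch of the CountDownLatch option, reusing the service and MyUser types from the question (doOnTerminate fires on both completion and error):

import java.util.concurrent.CountDownLatch;

CountDownLatch latch = new CountDownLatch(1);
service.save("id")
        .map(u -> new MyUser(u.getName().toUpperCase(), u.getId().toUpperCase()))
        .doOnTerminate(latch::countDown) // counts down on complete or error
        .subscribe(
                user -> System.out.println("OnNext on thread " + Thread.currentThread().getName()),
                err -> System.out.println("onError! " + err),
                () -> System.out.println("onCompleted!"));
latch.await(); // keeps the main thread alive until the Mono terminates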
The issue is that the onSubscribe method in your anonymous class does nothing; it never requests any data from the Subscription.
If you look at the implementation of LambdaSubscriber, you'll see it requests some number of events on subscription.
It's also easier to extend BaseSubscriber, as it comes with some predefined request logic.
So your subscriber implementation would be:
MyServiceService service = new MyServiceService();
service.save("id")
        .map(myUserMono -> new MyUser(myUserMono.getName().toUpperCase(), myUserMono.getId().toUpperCase()))
        .subscribe(new BaseSubscriber<MyUser>() {
            @Override
            protected void hookOnSubscribe(Subscription subscription) {
                System.out.println("Subscribed!" + Thread.currentThread().getName());
                request(1); // or requestUnbounded();
            }

            @Override
            protected void hookOnNext(MyUser myUser) {
                System.out.println("OnNext on thread " + Thread.currentThread().getName());
                // request(1); // needed here if you didn't call requestUnbounded() above
            }

            @Override
            protected void hookOnComplete() {
                System.out.println("onCompleted!" + Thread.currentThread().getName());
            }

            @Override
            protected void hookOnError(Throwable throwable) {
                System.out.println("onError!" + Thread.currentThread().getName());
            }
        });
Maybe it's not the best implementation; I'm new to Reactor too.
Simon's answer has a pretty good explanation about testing asynchronous code.
My goal is to collect all tweets containing the words "France" and "Germany", and to also collect associated metadata (e.g., the geo coordinates attached to the tweet). I know that this metadata is available, but I can't figure out how to access it with the Java library I'm using: twitter4j.
Ok, so what I have so far is taken from code samples on the twitter4j site. It prints out all tweets containing my chosen keywords, as they are provided in real-time by Twitter's Streaming API. I call the filter method on my TwitterStream object, and this provides the stream. But I need more control. Namely, I would like to be able to:
1) write the tweets to a file;
2) only print out the first 1000 tweets;
3) access other metadata attached to the tweet (the filter method just prints out the username and the tweet itself).
Here is the code I have so far:
import twitter4j.FilterQuery;
import twitter4j.Status;
import twitter4j.StatusDeletionNotice;
import twitter4j.StatusListener;
import twitter4j.TwitterException;
import twitter4j.TwitterStream;
import twitter4j.TwitterStreamFactory;
import twitter4j.conf.ConfigurationBuilder;
public class Stream {

    public static void main(String[] args) throws TwitterException {
        ConfigurationBuilder cb = new ConfigurationBuilder();
        cb.setDebugEnabled(true);
        cb.setOAuthConsumerKey("bbb");
        cb.setOAuthConsumerSecret("bbb");
        cb.setOAuthAccessToken("bbb");
        cb.setOAuthAccessTokenSecret("bbb");
        TwitterStream twitterStream = new TwitterStreamFactory(cb.build()).getInstance();

        StatusListener listener = new StatusListener() {
            public void onStatus(Status status) {
                System.out.println("@" + status.getUser().getScreenName() + " - " + status.getText());
            }

            public void onDeletionNotice(StatusDeletionNotice statusDeletionNotice) {
                System.out.println("Got a status deletion notice id:" + statusDeletionNotice.getStatusId());
            }

            public void onTrackLimitationNotice(int numberOfLimitedStatuses) {
                System.out.println("Got track limitation notice:" + numberOfLimitedStatuses);
            }

            public void onScrubGeo(long userId, long upToStatusId) {
                System.out.println("Got scrub_geo event userId:" + userId + " upToStatusId:" + upToStatusId);
            }

            public void onException(Exception ex) {
                ex.printStackTrace();
            }
        };

        FilterQuery fq = new FilterQuery();
        String[] keywords = {"France", "Germany"};
        fq.track(keywords);

        twitterStream.addListener(listener);
        twitterStream.filter(fq);
    }
}
After looking at this with fresh eyes I realised the solution (which was pretty obvious). Editing the following part of the code:
public void onStatus(Status status) {
    System.out.println("@" + status.getUser().getScreenName() + " - " + status.getText());
}
allows me to access other metadata. For example, if I want to access the tweet's date, I simply need to add the following:
System.out.println(status.getCreatedAt());
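The same approach extends to the other goals. A rough sketch of the listener pieces covering all three (the writer and counter here are my additions, not from the original sample; getGeoLocation() returns null unless the tweet is actually geotagged):

// Assumes a PrintWriter named 'writer' opened in main, and that 'writer'
// and 'twitterStream' are effectively final so the listener can capture them.
final AtomicInteger count = new AtomicInteger();

public void onStatus(Status status) {
    GeoLocation geo = status.getGeoLocation(); // null unless geotagged
    writer.println(status.getUser().getScreenName()
            + "\t" + status.getText()
            + "\t" + status.getCreatedAt()
            + "\t" + (geo != null ? geo.getLatitude() + "," + geo.getLongitude() : "no-geo"));
    if (count.incrementAndGet() >= 1000) {
        twitterStream.shutdown(); // stop after the first 1000 tweets
    }
}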
Error 401 occurs when the API tries to access information that it is unable to fetch at present. So you need to check the permissions allowed on Twitter; change them to READ, WRITE and ... for full API access. Alternatively, the problem might be that you are behind a proxy server. In that case, set the proxy details as follows:
System.getProperties().put("http.proxyHost", "10.3.100.211");
System.getProperties().put("http.proxyPort", "8080");
To write tweets to a file:
FileWriter file = new FileWriter(....);

public void onStatus(Status status) {
    System.out.println("@" + status.getUser().getScreenName() + " - " + status.getText() + " -> " + status.getCreatedAt());
    try {
        file.write(status.getUser().getScreenName() + " - " + status.getText() + " -> " + status.getCreatedAt() + "\n");
        file.flush();
    } catch (IOException e) {
        e.printStackTrace();
    }
}