EMQX: publishing to an MQTT topic with a unique identifier takes much more time than a static MQTT topic

I am trying to publish messages to an EMQX broker on different topics. Publishing on a dynamic topic with one client takes much longer than publishing the same messages on a static topic.
I have posted the results and the code below.
I am using the EMQX broker with the Eclipse Paho client (v3) and QoS level 1.
Times for 100 simple publishes (consider id as dynamic here):
Total time pattern 1: /config/{id}/outward :: 36 sec (dynamic topic; {id} is a variable whose value changes in the loop shown in the code below)
Total time pattern 2: /config/test :: 1.2 sec (static topic)
How should I publish messages with different ids so that topic creation does not take so much time?
import java.math.BigDecimal;

import org.eclipse.paho.client.mqttv3.IMqttClient;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MwttPublish {

    static String mqttHostUrl = "tcp://localhost:1883"; // placeholder broker URL
    static IMqttClient instance = null;

    public static IMqttClient getInstance() {
        try {
            if (instance == null) {
                instance = new MqttClient(mqttHostUrl, "SimpleTestMQTT");
            }
            if (!instance.isConnected()) {
                MqttConnectOptions options = new MqttConnectOptions();
                options.setUserName("test");
                options.setPassword("test".toCharArray());
                options.setAutomaticReconnect(true);
                options.setCleanSession(false);
                options.setConnectionTimeout(10);
                instance.connect(options);
            }
        } catch (final Exception e) {
            System.out.println("Exception in mqtt: " + e.getMessage());
        }
        return instance;
    }

    public static void publishMessage() throws MqttException {
        IMqttClient iMqttClient = getInstance();
        MqttMessage mqttMessage = new MqttMessage("Hello".getBytes());
        mqttMessage.setQos(1);
        mqttMessage.setRetained(true);

        System.out.println("Publish Start for pattern 1");
        int i = 0;
        final BigDecimal mqttmsgPublishstartTime = new BigDecimal(System.currentTimeMillis());
        do {
            iMqttClient.publish("/config/" + i + "/outward", mqttMessage);
            i++;
        } while (i < 100);
        System.out.println("Total time pattern 1 /config/i/outward::"
                + new BigDecimal(System.currentTimeMillis()).subtract(mqttmsgPublishstartTime));

        System.out.println("Publish Start for pattern 2");
        final BigDecimal mqttmsgPublishstartTime1 = new BigDecimal(System.currentTimeMillis());
        i = 0;
        do {
            iMqttClient.publish("/config/test", mqttMessage);
            i++;
        } while (i < 100);
        System.out.println("Total time pattern 2 /config/test::"
                + new BigDecimal(System.currentTimeMillis()).subtract(mqttmsgPublishstartTime1));
    }
}

This is not a valid test; you've fallen into many of the classic micro-benchmark traps, e.g.:
Way too small a sample size
No account taken of JVM JIT warm-up or GC overhead
Not comparing like with like, e.g. the time taken to concatenate the strings for the dynamic topics
Please check out the following: https://stackoverflow.com/a/2844291/504554
Also, from an MQTT point of view, topics are ephemeral: they only really "exist" for the instant a message is published, while the broker checks for subscribed clients with a matching pattern.
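For a fairer comparison, here is a rough sketch of how the test could be restructured (illustrative only, not from the original answer; a proper benchmark would use a harness such as JMH). It warms up the client, uses a larger sample, and builds the dynamic topic strings outside the timed section, reusing the IMqttClient and MqttMessage from the question:

// Sketch: assumes the same Paho imports and setup as the question's class.
public static void fairerComparison(IMqttClient client, MqttMessage msg) throws MqttException {
    int n = 10_000; // larger sample size

    // Warm-up: let the JIT compile the hot path and the connection settle.
    for (int i = 0; i < 1_000; i++) {
        client.publish("/config/warmup", msg);
    }

    // Pre-build the dynamic topic names so string concatenation is not timed.
    String[] topics = new String[n];
    for (int i = 0; i < n; i++) {
        topics[i] = "/config/" + i + "/outward";
    }

    long t0 = System.nanoTime();
    for (int i = 0; i < n; i++) {
        client.publish(topics[i], msg);
    }
    System.out.println("dynamic topics: " + (System.nanoTime() - t0) / 1_000_000 + " ms");

    long t1 = System.nanoTime();
    for (int i = 0; i < n; i++) {
        client.publish("/config/test", msg);
    }
    System.out.println("static topic:   " + (System.nanoTime() - t1) / 1_000_000 + " ms");
}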

Related

Flink Count of Events using metric

I have a topic in Kafka where I am getting multiple types of events in JSON format. I have created a StreamingFileSink to write these events to S3 with bucketing.
FlinkKafkaConsumer errorTopicConsumer = new FlinkKafkaConsumer(ERROR_KAFKA_TOPICS,
        new SimpleStringSchema(),
        properties);

final StreamingFileSink<Object> errorSink = StreamingFileSink
        .forRowFormat(new Path(outputPath + "/error"), new SimpleStringEncoder<>("UTF-8"))
        .withBucketAssigner(new EventTimeBucketAssignerJson())
        .build();

env.addSource(errorTopicConsumer)
        .name("error_source")
        .setParallelism(1)
        .addSink(errorSink)
        .name("error_sink").setParallelism(1);

public class EventTimeBucketAssignerJson implements BucketAssigner<Object, String> {

    @Override
    public String getBucketId(Object record, Context context) {
        StringBuffer partitionString = new StringBuffer();
        Tuple3<String, Long, String> tuple3 = (Tuple3<String, Long, String>) record;
        try {
            partitionString.append("event_name=")
                    .append(tuple3.f0).append("/");
            String timePartition = TimeUtils.getEventTimeDayPartition(tuple3.f1);
            partitionString.append(timePartition);
        } catch (Exception e) {
            partitionString.append("year=").append(Constants.DEFAULT_YEAR).append("/")
                    .append("month=").append(Constants.DEFAULT_MONTH).append("/")
                    .append("day=").append(Constants.DEFAULT_DAY);
        }
        return partitionString.toString();
    }

    @Override
    public SimpleVersionedSerializer<String> getSerializer() {
        return SimpleVersionedStringSerializer.INSTANCE;
    }
}
Now I want to publish an hourly count of each event as a metric to Prometheus and build a Grafana dashboard on top of it.
So please help me: how can I achieve an hourly count for each event using Flink metrics and publish it to Prometheus?
Thanks
Normally, this is done by simply creating a counter for requests and then using the rate() function in Prometheus; this will give you the rate of requests over the given time window.
If you want to do this on your own for some reason, then you can do something similar to what has been done in org.apache.kafka.common.metrics.stats.Rate. In this case you would need to gather a list of samples with the time at which they were collected, along with the window size you want to use for the rate calculation; then you simply do the calculation, i.e. remove the samples that have gone out of scope and expired, and count how many samples remain in the window.
You could then set the Gauge to the calculated value.
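As a minimal sketch of the first approach (not part of the original answer; names such as EventCountingMapper and events_processed are illustrative): register a Flink Counter in a rich function, expose it through the Prometheus reporter, and let rate()/increase() in PromQL do the hourly aggregation.

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

// Hypothetical operator that counts events while passing them through unchanged.
public class EventCountingMapper extends RichMapFunction<String, String> {

    private transient Counter eventCounter;

    @Override
    public void open(Configuration parameters) {
        // The counter is registered in this operator's metric group and picked up by the reporter.
        this.eventCounter = getRuntimeContext()
                .getMetricGroup()
                .counter("events_processed");
    }

    @Override
    public String map(String value) {
        eventCounter.inc();
        return value;
    }
}

With the Prometheus reporter enabled (for example metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter in flink-conf.yaml; the exact configuration depends on your Flink version), a query such as increase(&lt;exported_metric_name&gt;[1h]) in Grafana would then give the hourly count; the exported metric name depends on your metric scope configuration. To get a count per event type, you could register one counter per type or add the event name as a user-scoped metric group.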

Using Spring AMQP consumer in spring-webflux

I have an app that's using Spring Boot 2.0 with WebFlux and has an endpoint returning a Flux of ServerSentEvent. The events are created by leveraging spring-amqp to consume messages off a RabbitMQ queue. My question is: how do I best bridge the MessageListener's configured listener method to a Flux that can be passed up to my controller?
Project Reactor's create section mentions that it "can be very useful to bridge an existing API with the reactive world - such as an asynchronous API based on listeners", but I'm unsure how to hook into the message listener directly since it's wrapped in the DirectMessageListenerContainer and MessageListenerAdapter. Their example from the create section:
Flux<String> bridge = Flux.create(sink -> {
    myEventProcessor.register(
        new MyEventListener<String>() {

            public void onDataChunk(List<String> chunk) {
                for (String s : chunk) {
                    sink.next(s);
                }
            }

            public void processComplete() {
                sink.complete();
            }
        });
});
So far, the best option I have is to create a Processor and simply call onNext() each time in the RabbitMQ listener method to manually produce an event.
I have something like this:
@SpringBootApplication
@RestController
public class AmqpToWebfluxApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext applicationContext = SpringApplication.run(AmqpToWebfluxApplication.class, args);

        RabbitTemplate rabbitTemplate = applicationContext.getBean(RabbitTemplate.class);
        for (int i = 0; i < 100; i++) {
            rabbitTemplate.convertAndSend("foo", "event-" + i);
        }
    }

    private TopicProcessor<String> sseFluxProcessor = TopicProcessor.share("sseFromAmqp", Queues.SMALL_BUFFER_SIZE);

    @GetMapping(value = "/sseFromAmqp", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> getSeeFromAmqp() {
        return this.sseFluxProcessor;
    }

    @RabbitListener(id = "fooListener", queues = "foo")
    public void handleAmqpMessages(String message) {
        this.sseFluxProcessor.onNext(message);
    }
}
TopicProcessor.share() allows many concurrent subscribers, which is what we get when we return this TopicProcessor as a Flux from the /sseFromAmqp REST endpoint via WebFlux.
The @RabbitListener just delegates its received messages to that TopicProcessor.
In main() there is code to confirm that I can publish to the TopicProcessor even if there are no subscribers.
Tested with two separate curl sessions, publishing messages to the queue via the RabbitMQ Management Plugin.
By the way, I use share() because of: https://projectreactor.io/docs/core/release/reference/#_topicprocessor
"from multiple upstream Publishers when created in the shared configuration"
That's because the @RabbitListener really can be called from different ListenerContainer threads, concurrently.
UPDATE
Also I moved this sample to my Sandbox: https://github.com/artembilan/sendbox/tree/master/amqp-to-webflux
Let's suppose you want to have a single RabbitMQ listener that somehow puts messages into one or more Fluxes. Flux.create is indeed a good way to create such a Flux.
Let's start with the Messaging with RabbitMQ Spring guide and try to adapt it.
The original Receiver would have to be modified so that it can put received messages into a FluxSink.
@Component
public class Receiver {

    /**
     * Collection of sinks enables more than one subscriber.
     * Have to keep in mind that the FluxSink instance that the emitter works with is provided per subscriber.
     * A CopyOnWriteArrayList is used so a sink can be removed safely while iterating
     * (a plain ArrayList would throw ConcurrentModificationException here).
     */
    private final List<FluxSink<String>> sinks = new CopyOnWriteArrayList<>();

    /**
     * Adds a sink to the collection. From now on, new messages will be put to the sink.
     * Method will be called when a new Flux is created by calling the Flux.create method.
     */
    public void addSink(FluxSink<String> sink) {
        sinks.add(sink);
    }

    public void receiveMessage(String message) {
        sinks.forEach(sink -> {
            if (!sink.isCancelled()) {
                sink.next(message);
            } else {
                // If cancelled, don't put any new messages to the sink.
                // A sink is cancelled when a subscriber cancels the subscription.
                sinks.remove(sink);
            }
        });
    }
}
Now we have a receiver that puts RabbitMQ messages into the sinks. Creating a Flux is then rather simple.
@Component
public class FluxFactory {

    private final Receiver receiver;

    public FluxFactory(Receiver receiver) {
        this.receiver = receiver;
    }

    public Flux<String> createFlux() {
        return Flux.create(receiver::addSink);
    }
}
The Receiver bean is autowired into the factory. Of course, you don't have to create a special factory; this only demonstrates the idea of how to use the Receiver to create the Flux.
The rest of the application from the Messaging with RabbitMQ guide may stay the same, including the bean instantiation.
@SpringBootApplication
public class Application {

    ...

    @Bean
    SimpleMessageListenerContainer container(ConnectionFactory connectionFactory,
            MessageListenerAdapter listenerAdapter) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setQueueNames(queueName);
        container.setMessageListener(listenerAdapter);
        return container;
    }

    @Bean
    MessageListenerAdapter listenerAdapter(Receiver receiver) {
        return new MessageListenerAdapter(receiver, "receiveMessage");
    }

    ...
}
I used a similar design to adapt the Twitter streaming API successfully, though there may be a nicer way to do it.
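For completeness, a hypothetical WebFlux controller (not part of the original answer; the class and path names are assumptions) that exposes the factory's Flux as server-sent events:

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class EventController {

    private final FluxFactory fluxFactory;

    public EventController(FluxFactory fluxFactory) {
        this.fluxFactory = fluxFactory;
    }

    // Each request gets its own FluxSink registered with the Receiver.
    @GetMapping(value = "/events", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> events() {
        return fluxFactory.createFlux();
    }
}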

Read from multiple Pubsub subscriptions using ValueProvider

I have multiple Cloud Pub/Sub subscriptions to read, selected by a prefix pattern, using Apache Beam. I extend the PTransform class and implement the expand() method to read from the subscriptions and apply a Flatten transformation to the PCollectionList (one PCollection per subscription). I have a problem passing the subscription prefix as a ValueProvider into the expand() method, since expand() is called at template creation time, not when the job is launched. However, if I only use one subscription, I can pass a ValueProvider into PubsubIO.readStrings().fromSubscription().
Here's some sample code.
public class MultiPubSubIO extends PTransform<PBegin, PCollection<PubsubMessage>> {

    private ValueProvider<String> prefixPubsub;

    public MultiPubSubIO(@Nullable String name, ValueProvider<String> prefixPubsub) {
        super(name);
        this.prefixPubsub = prefixPubsub;
    }

    @Override
    public PCollection<PubsubMessage> expand(PBegin input) {
        List<String> myList = null;
        try {
            // prefixPubsub.get() will return an error
            myList = PubsubHelper.getAllSubscription("projectID", prefixPubsub.get());
        } catch (Exception e) {
            LogHelper.error(String.format("Error getting list of subscriptions: %s", e.toString()));
        }
        List<PCollection<PubsubMessage>> collectionList = new ArrayList<PCollection<PubsubMessage>>();
        if (myList != null && !myList.isEmpty()) {
            for (String subs : myList) {
                PCollection<PubsubMessage> pCollection = input
                        .apply("ReadPubSub", PubsubIO.readMessagesWithAttributes().fromSubscription(subs));
                collectionList.add(pCollection);
            }
            PCollection<PubsubMessage> pubsubMessagePCollection = PCollectionList.of(collectionList)
                    .apply("FlattenPcollections", Flatten.pCollections());
            return pubsubMessagePCollection;
        } else {
            LogHelper.error(String.format("No subscription with prefix %s found", prefixPubsub));
            return null;
        }
    }

    public static MultiPubSubIO read(ValueProvider<String> prefixPubsub) {
        return new MultiPubSubIO(null, prefixPubsub);
    }
}
So I'm wondering how to read from a ValueProvider the same way PubsubIO.read().fromSubscription() does. Or am I missing something?
Searched links:
extract-value-from-valueprovider-in-apache-beam - the answer talks about using a DoFn, while I need a PTransform that receives a PBegin.
Unfortunately this is not possible currently:
It is not possible for the value of a ValueProvider to affect transform expansion - at expansion time, it is unknown; by the time it is known, the pipeline shape is already fixed.
There is currently no transform like PubsubIO.read() that can accept a PCollection of topic names. Eventually there will be (it is enabled by Splittable DoFn), but it will take a while - nobody is working on this currently.
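If the subscription list can be resolved at pipeline construction time (i.e. the prefix is an ordinary pipeline option rather than a runtime ValueProvider), the multi-read-plus-Flatten shape from the question does work. A minimal sketch under that assumption (listSubscriptions is a hypothetical helper; the Beam imports mirror the question's code):

// Resolve the subscriptions before building the pipeline, then flatten the reads.
List<String> subscriptions = listSubscriptions("projectID", "my-prefix"); // hypothetical helper

PCollectionList<PubsubMessage> reads = PCollectionList.empty(pipeline);
for (String subscription : subscriptions) {
    reads = reads.and(
            pipeline.apply("Read " + subscription,
                    PubsubIO.readMessagesWithAttributes().fromSubscription(subscription)));
}
PCollection<PubsubMessage> allMessages = reads.apply("FlattenPCollections", Flatten.pCollections());

This trades templating flexibility for a fixed pipeline shape, which is exactly the limitation described above.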
You can use MultipleReadFromPubSub from the Apache Beam Python I/O module: https://beam.apache.org/releases/pydoc/2.27.0/_modules/apache_beam/io/gcp/pubsub.html
topic_1 = PubSubSourceDescriptor('projects/myproject/topics/a_topic')
topic_2 = PubSubSourceDescriptor(
    'projects/myproject2/topics/b_topic',
    'my_label',
    'my_timestamp_attribute')
subscription_1 = PubSubSourceDescriptor(
    'projects/myproject/subscriptions/a_subscription')

results = pipeline | MultipleReadFromPubSub(
    [topic_1, topic_2, subscription_1])

waitForConfirmsOrDie vs PublisherCallbackChannel.Listener

I need to achieve the effect of waitForConfirmsOrDie from the core Java client in Spring. In core Java it is achievable per request (channel.confirmSelect, set mandatory, publish, and Channel.waitForConfirmsOrDie(10000) will wait for 10 seconds).
I implemented template.setConfirmCallback (I hope it is the same as PublisherCallbackChannel.Listener) and it works great, but the ack/nack arrives in a common place (the confirm callback); the individual sender has nothing like waitForConfirmsOrDie, where it knows the ack hasn't arrived within the timeout and can take action.
Do the send methods in Spring wait internally for a specified period, like waitForConfirmsOrDie, if the ack hasn't arrived and publisherConfirms is enabled?
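For reference, a minimal sketch of the plain RabbitMQ Java client approach just described (connection setup omitted; the queue name is a placeholder):

import java.util.concurrent.TimeoutException;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.MessageProperties;

// Per-request publisher confirms with the core Java client.
public static void publishAndWait(Connection connection) throws Exception {
    try (Channel channel = connection.createChannel()) {
        channel.confirmSelect(); // enable publisher confirms on this channel
        channel.basicPublish("", "some-queue", true /* mandatory */,
                MessageProperties.PERSISTENT_TEXT_PLAIN, "hello".getBytes());
        try {
            // Waits up to 10 seconds; throws (and the channel is closed) on a nack or timeout.
            channel.waitForConfirmsOrDie(10_000);
        } catch (TimeoutException e) {
            // No ack arrived within the timeout - take corrective action here.
        }
    }
}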
There is currently no equivalent of waitForConfirmsOrDie in the Spring API.
Using a connection factory with publisher confirms enabled calls confirmSelect() on its channels; together with a template confirm callback, you can achieve the same functionality by keeping a count of sends yourself and adding a method to your callback to wait - something like...
@Autowired
private RabbitTemplate template;

private void runDemo() throws Exception {
    MyCallback confirmCallback = new MyCallback();
    this.template.setConfirmCallback(confirmCallback);
    this.template.setMandatory(true);
    for (int i = 0; i < 10; i++) {
        template.convertAndSend(queue().getName(), "foo");
    }
    confirmCallback.waitForConfirmsOrDie(10, 10_000);
    System.out.println("All ack'd");
}

private static class MyCallback implements ConfirmCallback {

    private final BlockingQueue<Boolean> queue = new LinkedBlockingQueue<>();

    @Override
    public void confirm(CorrelationData correlationData, boolean ack, String cause) {
        queue.add(ack);
    }

    public void waitForConfirmsOrDie(int count, long timeout) throws Exception {
        int remaining = count;
        while (remaining-- > 0) {
            Boolean ack = queue.poll(timeout, TimeUnit.MILLISECONDS);
            if (ack == null) {
                throw new TimeoutException("timed out waiting for acks");
            }
            else if (!ack) {
                System.err.println("Received a nack");
            }
        }
    }
}
One difference, though, is that the channel won't be force-closed.
Also, in a multi-threaded environment, you either need a dedicated template/callback per thread, or you can use CorrelationData to correlate the acks with the sends, e.g. put the thread id into the correlation data and use it in the callback (see the sketch below).
I have opened AMQP-717 for us to consider providing something like this out of the box.
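A minimal sketch of the CorrelationData idea (not from the original answer; the class and method names are illustrative): tag each send with an id and complete a per-send future in the callback, so an individual sender can wait for its own ack.

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

import org.springframework.amqp.rabbit.connection.CorrelationData; // org.springframework.amqp.rabbit.support in Spring AMQP 1.x
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.amqp.rabbit.core.RabbitTemplate.ConfirmCallback;

// Correlate each send with its ack via a CorrelationData id.
public class CorrelatingSender implements ConfirmCallback {

    private final Map<String, CompletableFuture<Boolean>> pending = new ConcurrentHashMap<>();
    private final RabbitTemplate template;

    public CorrelatingSender(RabbitTemplate template) {
        this.template = template;
        template.setConfirmCallback(this);
    }

    public boolean sendAndWait(String routingKey, Object payload, long timeoutMillis) throws Exception {
        String id = Thread.currentThread().getId() + "-" + System.nanoTime();
        CompletableFuture<Boolean> future = new CompletableFuture<>();
        pending.put(id, future);
        template.convertAndSend(routingKey, payload, new CorrelationData(id));
        // true = ack, false = nack; throws TimeoutException if nothing arrives in time.
        return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    @Override
    public void confirm(CorrelationData correlationData, boolean ack, String cause) {
        if (correlationData != null) {
            CompletableFuture<Boolean> future = pending.remove(correlationData.getId());
            if (future != null) {
                future.complete(ack);
            }
        }
    }
}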

jedis pubsub and timeouts: how to listen infinitely as subscriber?

I'm struggling with the concept of creating a Jedis client which listens indefinitely as a subscriber to a Redis pub/sub channel and handles messages when they come in.
My problem is that after a while of inactivity the server silently stops responding. I think this is due to a timeout occurring on the Jedis client I subscribe with.
Is this likely to be the case? If so, is there a way to configure this particular Jedis client not to time out (while other Jedis pools aren't affected by some globally set timeout)?
Alternatively, is there another (best practice) way to achieve what I'm trying to do?
This is my code (modified/stripped for display):
executed during web-server startup:
new Thread(AkkaStarter2.getSingleton()).start();
AkkaStarter2.java
private Jedis sub;
private AkkaListener akkaListener;

public static AkkaStarter2 getSingleton() {
    if (singleton == null) {
        singleton = new AkkaStarter2();
    }
    return singleton;
}

private AkkaStarter2() {
    sub = new Jedis(REDISHOST, REDISPORT);
    akkaListener = new AkkaListener();
}

public void run() {
    // blocking
    sub.psubscribe(akkaListener, AKKAPREFIX + "*");
}

class AkkaListener extends JedisPubSub {
    ....
    public void onPMessage(String pattern, String akkaChannel, String jsonSer) {
        ...
    }
}
Thanks.
Ermmm, the code below solves it all. Indeed it was a Jedis thing.
private AkkaStarter2() {
    // 0 specifies no timeout - overlooked this 100 times
    sub = new Jedis(REDISHOST, REDISPORT, 0);
    akkaListener = new AkkaListener();
}
