Can I, in a @RabbitListener consumer, find out if any messages are prefetched for this consumer? - spring-amqp

I need to know if more messages are coming for this consumer.
Right now I count the messages on the queue, but that only gives me what is left on the queue, not what has already been prefetched.
@RabbitListener(queues = QUEUENAME)
public void receive(Message message, Channel channel) throws IOException {
    long messagesOnQueue = channel.messageCount(QUEUENAME);
    if (messagesOnQueue > 1) {
        // add message to list
    } else {
        // save the list
    }
}
It would be really great if there were a way to tell whether messages have been prefetched for this consumer. Is that possible? If I can get that count, then I don't care whether there are also messages on the queue.
After receiving suggestions from Gary I changed the implementation to the following, and it works.
When manually acknowledging a message, it has to be done on the same channel on which you received the message, but you can save a reference to that channel in case you need it in another thread.
In your Spring Boot application.yml, add this:

spring:
  rabbitmq:
    listener:
      direct:
        prefetch: 200
      simple:
        prefetch: 200
        acknowledgeMode: MANUAL
Code from the consumer:

// The list we build up and save in one transaction
private Set<PayloadDto> unhandledPayloads = new HashSet<>();
private long latestTag = 0L;
private Channel latestChannel;

@RabbitListener(queues = QUEUE_NAME, id = "consumerId")
public void receive(Message message, Channel channel) throws IOException {
    PayloadDto payloadDto = parse(message.getBody());
    unhandledPayloads.add(payloadDto);
    latestTag = message.getMessageProperties().getDeliveryTag();
    latestChannel = channel;
    if (unhandledPayloads.size() > UNHANDLED_PAYLOADS_LIMIT) {
        service.createOrUpdate(unhandledPayloads);
        unhandledPayloads.clear();
        // multiple ack: acknowledges everything up to and including latestTag
        channel.basicAck(latestTag, true);
    }
}
@EventListener(condition = "event.listenerId == 'consumerId'")
public void onApplicationEvent(ListenerContainerIdleEvent event) {
    if (!unhandledPayloads.isEmpty()) {
        service.createOrUpdate(unhandledPayloads);
        unhandledPayloads.clear();
        // ack on the channel the messages were received on
        latestChannel.basicAck(latestTag, true);
    }
}
The reason we build up a list before saving is to be able to do a batch insert, which makes it run faster.
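For illustration only: service.createOrUpdate could perform the batch insert with something like Spring's JdbcTemplate.batchUpdate. This is a minimal sketch under assumptions of mine (the table, columns, and PayloadDto accessors are not from the original code, and the upsert syntax shown is PostgreSQL-style):

// Hypothetical batch upsert for the collected payloads; jdbcTemplate is an
// injected org.springframework.jdbc.core.JdbcTemplate, and the table/column
// names and PayloadDto getters are illustrative assumptions.
public void createOrUpdate(Set<PayloadDto> payloads) {
    List<PayloadDto> batch = new ArrayList<>(payloads);
    jdbcTemplate.batchUpdate(
        "INSERT INTO payload (id, body) VALUES (?, ?) "
            + "ON CONFLICT (id) DO UPDATE SET body = EXCLUDED.body",
        new BatchPreparedStatementSetter() {
            @Override
            public void setValues(PreparedStatement ps, int i) throws SQLException {
                ps.setLong(1, batch.get(i).getId());
                ps.setString(2, batch.get(i).getBody());
            }

            @Override
            public int getBatchSize() {
                return batch.size();
            }
        });
}

A single batched statement round-trips to the database once per batch instead of once per row, which is where the speedup comes from.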

Not currently, but it wouldn't be hard to add such a feature. Open a GitHub issue to request it. However, I am not sure how useful it would be: if there are still messages in the queue, consuming a prefetched message will fetch another.

Related

Using Spring AMQP consumer in spring-webflux

I have an app that's using Boot 2.0 with webflux, and has an endpoint returning a Flux of ServerSentEvent. The events are created by leveraging spring-amqp to consume messages off a RabbitMQ queue. My question is: How do I best bridge the MessageListener's configured listener method to a Flux that can be passed up to my controller?
Project Reactor's create section mentions that it "can be very useful to bridge an existing API with the reactive world - such as an asynchronous API based on listeners", but I'm unsure how to hook into the message listener directly since it's wrapped in the DirectMessageListenerContainer and MessageListenerAdapter. Their example from the create section:
Flux<String> bridge = Flux.create(sink -> {
    myEventProcessor.register(
        new MyEventListener<String>() {

            public void onDataChunk(List<String> chunk) {
                for (String s : chunk) {
                    sink.next(s);
                }
            }

            public void processComplete() {
                sink.complete();
            }
        });
});
So far, the best option I have is to create a Processor and simply call onNext() each time in the RabbitMQ listener method to manually produce an event.
I have something like this:
@SpringBootApplication
@RestController
public class AmqpToWebfluxApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext applicationContext = SpringApplication.run(AmqpToWebfluxApplication.class, args);
        RabbitTemplate rabbitTemplate = applicationContext.getBean(RabbitTemplate.class);
        for (int i = 0; i < 100; i++) {
            rabbitTemplate.convertAndSend("foo", "event-" + i);
        }
    }

    private TopicProcessor<String> sseFluxProcessor = TopicProcessor.share("sseFromAmqp", Queues.SMALL_BUFFER_SIZE);

    @GetMapping(value = "/sseFromAmqp", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> getSseFromAmqp() {
        return this.sseFluxProcessor;
    }

    @RabbitListener(id = "fooListener", queues = "foo")
    public void handleAmqpMessages(String message) {
        this.sseFluxProcessor.onNext(message);
    }
}
The TopicProcessor.share() allows many concurrent subscribers, which is what we get when we return this TopicProcessor as a Flux to our /sseFromAmqp REST request via WebFlux.
The @RabbitListener just delegates its received messages to that TopicProcessor.
In main() I have code to confirm that I can publish to the TopicProcessor even when there are no subscribers.
Tested with two separate curl sessions and published messages to the queue via RabbitMQ Management Plugin.
By the way, I use share() because of this note in the reference documentation (https://projectreactor.io/docs/core/release/reference/#_topicprocessor): a TopicProcessor can accept input "from multiple upstream Publishers when created in the shared configuration".
That's because the @RabbitListener really can be called from different listener container threads, concurrently.
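As a side note (not part of the original answer): in recent Reactor versions TopicProcessor is deprecated, and the same bridge could be sketched with the Sinks API instead:

// A sketch with Reactor's Sinks API (Reactor 3.4+). multicast() supports
// several subscribers; emitNext with FAIL_FAST assumes emissions are not
// concurrent - with several container threads you would need to serialize
// emissions or use a retrying EmitFailureHandler.
private final Sinks.Many<String> sseSink = Sinks.many().multicast().onBackpressureBuffer();

@GetMapping(value = "/sseFromAmqp", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<String> getSseFromAmqp() {
    return this.sseSink.asFlux();
}

@RabbitListener(id = "fooListener", queues = "foo")
public void handleAmqpMessages(String message) {
    this.sseSink.emitNext(message, Sinks.EmitFailureHandler.FAIL_FAST);
}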
UPDATE
Also, I moved this sample to my sandbox: https://github.com/artembilan/sendbox/tree/master/amqp-to-webflux
Let's suppose you want to have a single RabbitMQ listener that somehow puts messages into one or more Fluxes. Flux.create is indeed a good way to create such a Flux.
Let's start with the Messaging with RabbitMQ Spring guide and try to adapt it.
The original Receiver has to be modified so that it can put received messages into a FluxSink.
@Component
public class Receiver {

    /**
     * A collection of sinks enables more than one subscriber.
     * Keep in mind that the FluxSink instance the emitter works with is provided per subscriber.
     * A CopyOnWriteArrayList is used because sinks are added and removed from different
     * threads, and it allows safe removal while iterating (a plain ArrayList would throw
     * ConcurrentModificationException in receiveMessage below).
     */
    private final List<FluxSink<String>> sinks = new CopyOnWriteArrayList<>();

    /**
     * Adds a sink to the collection. From now on, new messages will be put into the sink.
     * Called when a new Flux is created via the Flux.create method.
     */
    public void addSink(FluxSink<String> sink) {
        sinks.add(sink);
    }

    public void receiveMessage(String message) {
        sinks.forEach(sink -> {
            if (!sink.isCancelled()) {
                sink.next(message);
            } else {
                // The sink is cancelled when a subscriber cancels its subscription;
                // stop delivering messages to it.
                sinks.remove(sink);
            }
        });
    }
}
Now we have a receiver that puts RabbitMQ messages into the sinks. Creating a Flux is then rather simple:
@Component
public class FluxFactory {

    private final Receiver receiver;

    public FluxFactory(Receiver receiver) {
        this.receiver = receiver;
    }

    public Flux<String> createFlux() {
        return Flux.create(receiver::addSink);
    }
}
The Receiver bean is autowired into the factory. Of course, you don't have to create a special factory; this only demonstrates the idea of using the Receiver to create the Flux.
The rest of the application from the Messaging with RabbitMQ guide may stay the same, including the bean instantiation:
@SpringBootApplication
public class Application {

    ...

    @Bean
    SimpleMessageListenerContainer container(ConnectionFactory connectionFactory,
            MessageListenerAdapter listenerAdapter) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setQueueNames(queueName);
        container.setMessageListener(listenerAdapter);
        return container;
    }

    @Bean
    MessageListenerAdapter listenerAdapter(Receiver receiver) {
        return new MessageListenerAdapter(receiver, "receiveMessage");
    }

    ...
}
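To round this off, the created Flux could then be exposed from a WebFlux controller as server-sent events. A small sketch of mine (the controller class and endpoint path are illustrative, not from the guide):

// Hypothetical controller exposing the bridged RabbitMQ messages as SSE.
@RestController
public class MessageStreamController {

    private final FluxFactory fluxFactory;

    public MessageStreamController(FluxFactory fluxFactory) {
        this.fluxFactory = fluxFactory;
    }

    @GetMapping(value = "/messages", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> streamMessages() {
        // Each subscriber gets its own FluxSink registered with the Receiver.
        return fluxFactory.createFlux();
    }
}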
I used a similar design to adapt the Twitter streaming API successfully, though there may be a nicer way to do it.

How are Firebase offline capabilities supposed to detect when the cache is outdated? [duplicate]

Whenever I use addListenerForSingleValueEvent with setPersistenceEnabled(true), I only manage to get a local offline copy of the DataSnapshot and NOT the updated DataSnapshot from the server.
However, if I use addValueEventListener with setPersistenceEnabled(true), I can get the latest copy of the DataSnapshot from the server.
Is this normal for addListenerForSingleValueEvent? Does it only search the DataSnapshot locally (offline) and remove its listener after successfully retrieving the DataSnapshot ONCE (whether offline or online)?
Update (2021): There is a new method call (get on Android and getData on iOS) that implements the behavior you'll likely want: it first tries to get the latest value from the server, and only falls back to the cache when it can't reach the server. The recommendation to use persistent listeners still applies, but at least there's a cleaner option for getting data once, even when you have local caching enabled.
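On Android, that could look roughly like this (a sketch; the path and error handling are illustrative):

// get() asks the server first and falls back to the cache only when the
// server cannot be reached.
FirebaseDatabase.getInstance().getReference("serverTime")
        .get()
        .addOnCompleteListener(task -> {
            if (task.isSuccessful()) {
                DataSnapshot snapshot = task.getResult();
                System.out.println(snapshot.getValue());
            } else {
                System.err.println("Failed to read value: " + task.getException());
            }
        });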
How persistence works
The Firebase client keeps a copy of all data you're actively listening to in memory. Once the last listener disconnects, the data is flushed from memory.
If you enable disk persistence in a Firebase Android application with:
Firebase.getDefaultConfig().setPersistenceEnabled(true);
The Firebase client will keep a local copy (on disk) of all data that the app has recently listened to.
What happens when you attach a listener
Say you have the following ValueEventListener:
ValueEventListener listener = new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot snapshot) {
        System.out.println(snapshot.getValue());
    }

    @Override
    public void onCancelled(FirebaseError firebaseError) {
        // No-op
    }
};
When you add a ValueEventListener to a location:
ref.addValueEventListener(listener);
// OR
ref.addListenerForSingleValueEvent(listener);
If the value of the location is in the local disk cache, the Firebase client will invoke onDataChange() immediately for that value from the local cache. It will then also initiate a check with the server, asking for any updates to the value. It may subsequently invoke onDataChange() again if the data on the server has changed since it was last added to the cache.
What happens when you use addListenerForSingleValueEvent
When you add a single value event listener to the same location:
ref.addListenerForSingleValueEvent(listener);
The Firebase client will (as in the previous situation) immediately invoke onDataChange() for the value from the local disk cache. It will not invoke onDataChange() again, even if the value on the server turns out to be different. Do note that the updated data will still be requested and returned on subsequent requests.
This was covered previously in How does Firebase sync work, with shared data?
Solution and workaround
The best solution is to use addValueEventListener(), instead of a single-value event listener. A regular value listener will get both the immediate local event and the potential update from the server.
A second solution is to use the new get method (introduced in early 2021), which doesn't have this problematic behavior. Note that this method always tries to fetch the value from the server first, so it will take longer to complete. If your value never changes, it might still be better to use addListenerForSingleValueEvent (but you probably wouldn't have ended up on this page in that case).
As a workaround you can also call keepSynced(true) on the locations where you use a single-value event listener. This ensures that the data is updated whenever it changes, which drastically improves the chance that your single-value event listener will see the current value.
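That workaround could look like this (a sketch; the path is illustrative):

// keepSynced(true) keeps this location's local cache up to date in the
// background, so a later single-value read is far more likely to be fresh.
DatabaseReference scoresRef = FirebaseDatabase.getInstance().getReference("scores");
scoresRef.keepSynced(true);
scoresRef.addListenerForSingleValueEvent(listener);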
So I have a working solution for this. All you have to do is use a ValueEventListener and remove the listener after 0.5 seconds, to make sure you've grabbed the updated data by then if needed. The Realtime Database has very good latency, so this is reasonably safe. See the code example below:
public class FirebaseController {

    private static FirebaseController sInstance;
    private DatabaseReference mRootRef;
    private Handler mHandler = new Handler();

    private FirebaseController() {
        FirebaseDatabase.getInstance().setPersistenceEnabled(true);
        mRootRef = FirebaseDatabase.getInstance().getReference();
    }

    public static FirebaseController getInstance() {
        if (sInstance == null) {
            sInstance = new FirebaseController();
        }
        return sInstance;
    }
Then, in some method where you would have liked to use addListenerForSingleValueEvent:
public void getTime(final OnTimeRetrievedListener listener) {
    DatabaseReference ref = mRootRef.child("serverTime");
    ref.addValueEventListener(new ValueEventListener() {
        @Override
        public void onDataChange(DataSnapshot dataSnapshot) {
            if (listener != null) {
                // This can be called twice if the data changed on the server - SO DEAL WITH IT!
                listener.onTimeRetrieved(dataSnapshot.getValue(Long.class));
            }
            removeListenerAfter2(ref, this);
        }

        @Override
        public void onCancelled(DatabaseError databaseError) {
            removeListenerAfter2(ref, this);
        }
    });
}
// ValueEventListener version of the workaround for addListenerForSingleValueEvent not working.
private void removeListenerAfter2(DatabaseReference ref, ValueEventListener listener) {
    mHandler.postDelayed(new Runnable() {
        @Override
        public void run() {
            HelperUtil.logE("removing listener", FirebaseController.class);
            ref.removeEventListener(listener);
        }
    }, 500);
}

// ChildEventListener version of the workaround for addListenerForSingleValueEvent not working.
private void removeListenerAfter2(DatabaseReference ref, ChildEventListener listener) {
    mHandler.postDelayed(new Runnable() {
        @Override
        public void run() {
            HelperUtil.logE("removing listener", FirebaseController.class);
            ref.removeEventListener(listener);
        }
    }, 500);
}
Even if the app is closed before the handler executes, the listener will be removed anyway.
Edit: this can be abstracted to keep track of added and removed listeners in a HashMap, using the reference path as the key and the DataSnapshot as the value. You can even wrap it in a fetchData method that has a boolean flag for "once": if true, it applies this workaround to get the data once; otherwise it continues as normal. A rough sketch follows.
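One way to realize that idea (my illustration; the class and method names are hypothetical, here tracking the listeners by path rather than the snapshots, and removeListenerAfter2 is the helper from the answer above):

// Tracks active listeners by reference path so they can be removed later.
public class FirebaseFetcher {

    private final Map<String, ValueEventListener> activeListeners = new HashMap<>();
    private final DatabaseReference rootRef = FirebaseDatabase.getInstance().getReference();

    // If "once" is true, apply the delayed-removal workaround; otherwise keep
    // listening until removeListener(path) is called.
    public void fetchData(String path, final boolean once, final ValueEventListener delegate) {
        final DatabaseReference ref = rootRef.child(path);
        ValueEventListener listener = new ValueEventListener() {
            @Override
            public void onDataChange(DataSnapshot snapshot) {
                delegate.onDataChange(snapshot);
                if (once) {
                    removeListenerAfter2(ref, this); // delayed-removal helper from above
                }
            }

            @Override
            public void onCancelled(DatabaseError error) {
                delegate.onCancelled(error);
            }
        };
        activeListeners.put(path, listener);
        ref.addValueEventListener(listener);
    }

    public void removeListener(String path) {
        ValueEventListener listener = activeListeners.remove(path);
        if (listener != null) {
            rootRef.child(path).removeEventListener(listener);
        }
    }
}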
You're welcome!
You can create a transaction and abort it; onComplete will then be called whether you are online (online data) or offline (cached data).
I had previously created a function that worked only if the database had a connection long enough to sync. I fixed the issue by adding a timeout. I will keep working on this and testing it. Maybe in the future, when I get free time, I will create an Android library and publish it, but until then, here is the code in Kotlin:
/**
 * @param databaseReference reference to the parent database node
 * @param callback callback with a mutable list of objects and a boolean telling whether the data came from the cache
 * @param timeOutInMillis if not set, it will wait indefinitely to get the data online. If set, cached data (if any) is returned when the timeout occurs.
 */
fun readChildrenOnlineElseLocal(databaseReference: DatabaseReference, callback: ((mutableList: MutableList<@kotlin.UnsafeVariance T>, isDataFromCache: Boolean) -> Unit), timeOutInMillis: Long? = null) {
    var countDownTimer: CountDownTimer? = null

    val transactionHandlerAbort = object : Transaction.Handler { // for cache load
        override fun onComplete(p0: DatabaseError?, p1: Boolean, data: DataSnapshot?) {
            val listOfObjects = ArrayList<T>()
            data?.let {
                data.children.forEach {
                    val child = it.getValue(aClass)
                    child?.let {
                        listOfObjects.add(child)
                    }
                }
            }
            callback.invoke(listOfObjects, true)
        }

        override fun doTransaction(p0: MutableData?): Transaction.Result {
            return Transaction.abort()
        }
    }

    val transactionHandlerSuccess = object : Transaction.Handler { // for online load
        override fun onComplete(p0: DatabaseError?, p1: Boolean, data: DataSnapshot?) {
            countDownTimer?.cancel()
            val listOfObjects = ArrayList<T>()
            data?.let {
                data.children.forEach {
                    val child = it.getValue(aClass)
                    child?.let {
                        listOfObjects.add(child)
                    }
                }
            }
            callback.invoke(listOfObjects, false)
        }

        override fun doTransaction(p0: MutableData?): Transaction.Result {
            return Transaction.success(p0)
        }
    }

    // Wiring, reconstructed from the description below (the original snippet was
    // truncated here): on timeout, fall back to the aborting transaction, which
    // returns whatever data the client has (most likely the cache); the "success"
    // transaction completes only once the server responds.
    timeOutInMillis?.let { timeout ->
        countDownTimer = object : CountDownTimer(timeout, timeout) {
            override fun onTick(millisUntilFinished: Long) {}
            override fun onFinish() {
                databaseReference.runTransaction(transactionHandlerAbort, false)
            }
        }.start()
    }
    databaseReference.runTransaction(transactionHandlerSuccess, false)
}
In the code, if a timeout is set, I set up a timer that will run the aborting transaction. That transaction will be invoked even when offline, and will provide online or cached data (at that point there is a high chance the data is cached).
Then I run the transaction with success. Its onComplete will be called ONLY when we get a response from the Firebase database. We can then cancel the timer (if not null) and send the data to the callback.
This implementation makes the developer 99% sure whether the data came from the cache or from the server.
If you want to make it faster when offline (to avoid pointlessly waiting for the timeout when the database is clearly not connected), check whether the database is connected before using the function above:
DatabaseReference connectedRef = FirebaseDatabase.getInstance().getReference(".info/connected");
connectedRef.addValueEventListener(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot snapshot) {
        boolean connected = snapshot.getValue(Boolean.class);
        if (connected) {
            System.out.println("connected");
        } else {
            System.out.println("not connected");
        }
    }

    @Override
    public void onCancelled(DatabaseError error) {
        System.err.println("Listener was cancelled");
    }
});
When working with persistence enabled, I counted the number of times the listener received a call to onDataChange() and stopped listening after 2 calls. It worked for me; maybe it helps:
private int timesRead;
private ValueEventListener listener;
private DatabaseReference ref;

private void readFB() {
    timesRead = 0;
    if (ref == null) {
        ref = mFBDatabase.child("URL");
    }
    if (listener == null) {
        listener = new ValueEventListener() {
            @Override
            public void onDataChange(DataSnapshot dataSnapshot) {
                // process dataSnapshot
                timesRead++;
                if (timesRead == 2) {
                    ref.removeEventListener(listener);
                }
            }

            @Override
            public void onCancelled(DatabaseError databaseError) {
            }
        };
    }
    ref.removeEventListener(listener);
    ref.addValueEventListener(listener);
}

waitForConfirmsOrDie vs PublisherCallbackChannel.Listener

I need to achieve the effect of waitForConfirmsOrDie from the core Java client in Spring. With the core Java client it is achievable per request (channel.confirmSelect, set mandatory, publish, and Channel.waitForConfirmsOrDie(10000) will wait up to 10 seconds).
I implemented template.setConfirmCallback (I believe it is the same as PublisherCallbackChannel.Listener) and it works great, but the ack/nack arrives at a common place (the confirm callback). The individual sender has no equivalent of waitForConfirmsOrDie by which to be sure that no ack arrived within the given time and take action.
Do the send methods in Spring internally wait for a specified period, like waitForConfirmsOrDie, if no ack has arrived and publisher confirms are enabled?
There is currently no equivalent of waitForConfirmsOrDie in the Spring API.
Using a connection factory with publisher confirms enabled calls confirmSelect() on its channels; together with a template confirm callback, you can achieve the same functionality by keeping a count of sends yourself and adding a method to your callback to wait - something like...
@Autowired
private RabbitTemplate template;

private void runDemo() throws Exception {
    MyCallback confirmCallback = new MyCallback();
    this.template.setConfirmCallback(confirmCallback);
    this.template.setMandatory(true);
    for (int i = 0; i < 10; i++) {
        template.convertAndSend(queue().getName(), "foo");
    }
    confirmCallback.waitForConfirmsOrDie(10, 10_000);
    System.out.println("All ack'd");
}

private static class MyCallback implements ConfirmCallback {

    private final BlockingQueue<Boolean> queue = new LinkedBlockingQueue<>();

    @Override
    public void confirm(CorrelationData correlationData, boolean ack, String cause) {
        queue.add(ack);
    }

    public void waitForConfirmsOrDie(int count, long timeout) throws Exception {
        int remaining = count;
        while (remaining-- > 0) {
            Boolean ack = queue.poll(timeout, TimeUnit.MILLISECONDS);
            if (ack == null) {
                throw new TimeoutException("timed out waiting for acks");
            } else if (!ack) {
                System.err.println("Received a nack");
            }
        }
    }
}
One difference, though, is that the channel won't be force-closed.
Also, in a multi-threaded environment, you either need a dedicated template/callback per thread, or you can use CorrelationData to correlate the acks with the sends (e.g. put the thread id into the correlation data and use it in the callback).
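A minimal sketch of the CorrelationData approach (the futures map and method names are my own illustration, not Spring API):

// Correlate each send with a unique id so the confirm callback can signal
// the right sender; the futures map is hypothetical bookkeeping.
private final ConcurrentMap<String, CompletableFuture<Boolean>> futures = new ConcurrentHashMap<>();

private boolean sendAndWaitForConfirm(String exchange, String routingKey, Object payload) throws Exception {
    String id = UUID.randomUUID().toString();
    CompletableFuture<Boolean> future = new CompletableFuture<>();
    futures.put(id, future);
    template.convertAndSend(exchange, routingKey, payload, new CorrelationData(id));
    return future.get(10, TimeUnit.SECONDS); // true = ack, false = nack; times out like waitForConfirmsOrDie
}

// Registered once with template.setConfirmCallback(this::confirm)
private void confirm(CorrelationData correlationData, boolean ack, String cause) {
    if (correlationData != null) {
        CompletableFuture<Boolean> future = futures.remove(correlationData.getId());
        if (future != null) {
            future.complete(ack);
        }
    }
}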
I have opened AMQP-717 for us to consider providing something like this out of the box.

How do I get the current attempt number on a background job in Hangfire?

There are some database operations I need to execute before the end of the final attempt of my Hangfire background job (I need to delete the database record related to the job).
My current job is set with the following attribute:
[AutomaticRetry(Attempts = 5, OnAttemptsExceeded = AttemptsExceededAction.Delete)]
With that in mind, I need to determine what the current attempt number is, but am struggling to find any documentation in that regard from a Google search or Hangfire.io documentation.
Simply add PerformContext to your job method; you'll also be able to access your job id from this object. For the attempt number, this still relies on a magic string, but it's a little less flaky than the other answer's approach:
public void SendEmail(PerformContext context, string emailAddress)
{
    string jobId = context.BackgroundJob.Id;
    int retryCount = context.GetJobParameter<int>("RetryCount");
    // send an email
}
(NB! This is a solution to the OP's problem. It does not answer the question "How to get the current attempt number". If that is what you want, see the accepted answer for instance)
Use a job filter and the OnStateApplied callback:
public class CleanupAfterFailureFilter : JobFilterAttribute, IServerFilter, IApplyStateFilter
{
    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        try
        {
            var failedState = context.NewState as FailedState;
            if (failedState != null)
            {
                // Job has finally failed (retry attempts exceeded)
                // *** DO YOUR CLEANUP HERE ***
            }
        }
        catch (Exception)
        {
            // Unhandled exceptions can cause an endless loop.
            // Therefore, catch and ignore them all.
            // See notes below.
        }
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        // Must be implemented, but can be empty.
    }
}
Add the filter directly to the job function:

[CleanupAfterFailureFilter]
public static void MyJob()

or add it globally:

GlobalJobFilters.Filters.Add(new CleanupAfterFailureFilter());

or like this:

var options = new BackgroundJobServerOptions
{
    FilterProvider = new JobFilterCollection { new CleanupAfterFailureFilter() }
};
app.UseHangfireServer(options, storage);
Or see http://docs.hangfire.io/en/latest/extensibility/using-job-filters.html for more information about job filters.
NOTE: This is based on the accepted answer: https://stackoverflow.com/a/38387512/2279059
The difference is that OnStateApplied is used instead of OnStateElection, so the filter callback is invoked only after the maximum number of retries has been exceeded. A downside of this method is that the state transition to "failed" cannot be intercepted, but that is not needed in this case, nor in most scenarios where you just want to do some cleanup after a job has failed.
NOTE: Empty catch handlers are bad, because they can hide bugs and make them hard to debug in production. The catch is necessary here so the callback doesn't get called repeatedly forever. You may want to log exceptions for debugging purposes. It is also advisable to reduce the risk of exceptions in a job filter. One possibility is, instead of doing the cleanup work in place, to schedule a new background job that runs if the original job failed. Be careful not to apply the CleanupAfterFailureFilter to it, though: don't register it globally, or add some extra logic to it...
You can use the OnPerforming or OnPerformed method of IServerFilter if you want to check the attempts, or you can just wait on OnStateElection of IElectStateFilter. I don't know exactly what your requirement is, so it's up to you. Here's the code you want :)
public class JobStateFilter : JobFilterAttribute, IElectStateFilter, IServerFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        // All jobs that have failed after their retry attempts come through here.
        var failedState = context.CandidateState as FailedState;
        if (failedState == null) return;
    }

    public void OnPerforming(PerformingContext filterContext)
    {
        // do nothing
    }

    public void OnPerformed(PerformedContext filterContext)
    {
        // You also have the option to move all of this code to OnPerforming if you want.
        var api = JobStorage.Current.GetMonitoringApi();
        var job = api.JobDetails(filterContext.BackgroundJob.Id);
        foreach (var history in job.History)
        {
            // Check the Reason property; you will find a string like:
            // "Retry attempt 3 of 3: The method or operation is not implemented."
        }
    }
}
How to add your filter:

GlobalJobFilters.Filters.Add(new JobStateFilter());

----- or

var options = new BackgroundJobServerOptions
{
    FilterProvider = new JobFilterCollection { new JobStateFilter() }
};
app.UseHangfireServer(options, storage);

jedis pubsub and timeouts: how to listen infinitely as subscriber?

I'm struggling with the concept of creating a Jedis client which listens indefinitely as a subscriber to a Redis pub/sub channel and handles messages as they come in.
My problem is that after a while of inactivity the server stops responding silently. I think this is due to a timeout occurring on the Jedis client I subscribe with.
Is this likely to be the case? If so, is there a way to configure this particular Jedis client not to time out (while other JedisPools aren't affected by some globally set timeout)?
Alternatively, is there another (best-practice) way of achieving what I'm trying to do?
This is my code (modified/stripped for display):
executed during web-server startup:
new Thread(AkkaStarter2.getSingleton()).start();
AkkaStarter2.java:

public class AkkaStarter2 implements Runnable {

    private static AkkaStarter2 singleton;

    private Jedis sub;
    private AkkaListener akkaListener;

    public static AkkaStarter2 getSingleton() {
        if (singleton == null) {
            singleton = new AkkaStarter2();
        }
        return singleton;
    }

    private AkkaStarter2() {
        sub = new Jedis(REDISHOST, REDISPORT);
        akkaListener = new AkkaListener();
    }

    public void run() {
        // blocking
        sub.psubscribe(akkaListener, AKKAPREFIX + "*");
    }

    class AkkaListener extends JedisPubSub {
        ....
        public void onPMessage(String pattern, String akkaChannel, String jsonSer) {
            ...
        }
    }
}
Thanks.
Erm, the code below solves it all. Indeed, it was a Jedis thing:
private AkkaStarter2() {
    // 0 means no timeout; overlooked this 100 times
    sub = new Jedis(REDISHOST, REDISPORT, 0);
    akkaListener = new AkkaListener();
}
