Spring AMQP and messages in queue

In a Spring AMQP project, I would like to get the number of messages in a certain RabbitMQ queue in real time, to make decisions based on that number (I can't use the management plugin).
The basic configuration is this:
@Bean(name = "managementServerHandler")
public ManagementServerHandler managementServerHandler() {
    return new ManagementServerHandler();
}

@Bean
public MessageListenerAdapter broadcastManagementServerHandler() {
    return new MessageListenerAdapter(managementServerHandler(), "handleMessage");
}

@Bean(name = "broadcastManagementMessageListenerContainer")
public SimpleMessageListenerContainer broadcastManagementMessageListenerContainer() {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(_connectionFactory());
    container.setQueueNames(REQUEST_MANAGEMENT_QUEUE);
    container.setMessageListener(broadcastManagementServerHandler());
    container.setAcknowledgeMode(AcknowledgeMode.AUTO);
    container.setAutoDeclare(true);
    container.setAutoStartup(true);
    container.setConcurrentConsumers(1);
    container.setRabbitAdmin((RabbitAdmin) _amqpAdmin());
    container.setPrefetchCount(50);
    container.setDeclarationRetries(3);
    container.setMissingQueuesFatal(true);
    container.setFailedDeclarationRetryInterval(1000);
    container.setRecoveryInterval(400);
    return container;
}
Where the "ManagementServerHandler" is just:
public class ManagementServerHandler implements ServletContextAware, MessageListener {

    @Override
    public void onMessage(Message msg) {
        ....
    }
}
I need the number of queued messages inside the onMessage method, but I can't find a way to get it.
I asked this question before, but I don't know how to get hold of the AMQP channel:
RabbitMQ and queue data
Thanks!

Use RabbitAdmin.getQueueProperties(queue):
/**
 * Returns 3 properties {@link #QUEUE_NAME}, {@link #QUEUE_MESSAGE_COUNT},
 * {@link #QUEUE_CONSUMER_COUNT}, or null if the queue doesn't exist.
 */
@Override
public Properties getQueueProperties(final String queueName) {
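For example, you could inject the admin into your handler and query it inside onMessage. A minimal sketch, assuming the AmqpAdmin bean and the queue-name constant from your configuration are visible to the handler (note that each call is a round trip to the broker):

@Autowired
private AmqpAdmin amqpAdmin;

@Override
public void onMessage(Message msg) {
    // Passive declare against the broker; returns null if the queue doesn't exist
    Properties props = amqpAdmin.getQueueProperties(REQUEST_MANAGEMENT_QUEUE);
    if (props != null) {
        int messageCount = (Integer) props.get(RabbitAdmin.QUEUE_MESSAGE_COUNT);
        // make decisions based on messageCount
    }
}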

Related

Dependency Injection in Apache Storm topology

Little background: I am working on a topology using Apache Storm and thought, why not use dependency injection in it? But I was not sure how it would behave in a cluster environment once the topology is deployed. I started looking for answers on whether DI is a good option to use in Storm topologies, and came across some threads about Apache Spark where it was mentioned that serialization is going to be a problem, plus some responses for Apache Storm along the same lines. So finally I decided to write a sample topology with Google Guice to see what happens.
I wrote a sample topology with two bolts and used Google Guice to inject dependencies. The first bolt is configured to receive a tick tuple; on each tick it creates a message, prints the message to the log, and calls some injected classes which do the same. The message is then emitted to the second bolt, which has the same printing logic.
First Bolt
public class FirstBolt extends BaseRichBolt {

    private OutputCollector collector;
    private static int count = 0;
    private FirstInjectClass firstInjectClass;

    @Override
    public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
        collector = outputCollector;
        Injector injector = Guice.createInjector(new Module());
        firstInjectClass = injector.getInstance(FirstInjectClass.class);
    }

    @Override
    public void execute(Tuple tuple) {
        count++;
        String message = "Message count " + count;
        firstInjectClass.printMessage(message);
        log.error(message);
        collector.emit("TO_SECOND_BOLT", new Values(message));
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        outputFieldsDeclarer.declareStream("TO_SECOND_BOLT", new Fields("MESSAGE"));
    }

    @Override
    public Map<String, Object> getComponentConfiguration() {
        Config conf = new Config();
        conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 10);
        return conf;
    }
}
Second Bolt
public class SecondBolt extends BaseRichBolt {

    private OutputCollector collector;
    private SecondInjectClass secondInjectClass;

    @Override
    public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
        collector = outputCollector;
        Injector injector = Guice.createInjector(new Module());
        secondInjectClass = injector.getInstance(SecondInjectClass.class);
    }

    @Override
    public void execute(Tuple tuple) {
        String message = (String) tuple.getValue(0);
        secondInjectClass.printMessage(message);
        log.error("SecondBolt {}", message);
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
    }
}
Class in which dependencies are injected
public class FirstInjectClass {

    FirstInterface firstInterface;
    private final String prepend = "FirstInjectClass";

    @Inject
    public FirstInjectClass(FirstInterface firstInterface) {
        this.firstInterface = firstInterface;
    }

    public void printMessage(String message) {
        log.error("{} {}", prepend, message);
        firstInterface.printMethod(message);
    }
}
Interface used for binding
public interface FirstInterface {
    void printMethod(String message);
}
Implementation of the interface
public class FirstInterfaceImpl implements FirstInterface {

    private final String prepend = "FirstInterfaceImpl";

    public void printMethod(String message) {
        log.error("{} {}", prepend, message);
    }
}
In the same way, another class receives its dependency via DI
public class SecondInjectClass {

    SecondInterface secondInterface;
    private final String prepend = "SecondInjectClass";

    @Inject
    public SecondInjectClass(SecondInterface secondInterface) {
        this.secondInterface = secondInterface;
    }

    public void printMessage(String message) {
        log.error("{} {}", prepend, message);
        secondInterface.printMethod(message);
    }
}
Another interface for binding
public interface SecondInterface {
    void printMethod(String message);
}
Implementation of the second interface
public class SecondInterfaceImpl implements SecondInterface {

    private final String prepend = "SecondInterfaceImpl";

    public void printMethod(String message) {
        log.error("{} {}", prepend, message);
    }
}
Module class
public class Module extends AbstractModule {

    @Override
    protected void configure() {
        bind(FirstInterface.class).to(FirstInterfaceImpl.class);
        bind(SecondInterface.class).to(SecondInterfaceImpl.class);
    }
}
Nothing fancy here, just two bolts and a couple of classes for DI. I deployed it on a server and it works just fine. The catch, though, is that I have to initialize an Injector in each bolt, which makes me wonder what the side effects of that are going to be.
This implementation is simple, just two bolts. What if I have more bolts? What impact would it have on the topology if I have to initialize an Injector in every bolt?
If I try to initialize the Injector outside the prepare method, I get a serialization error.
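One common workaround (a sketch of my own, not from the original thread; InjectorHolder is a hypothetical helper) is to keep the Injector in a lazily initialized static holder, so each worker JVM builds it only once while the bolts still look it up inside prepare:

public final class InjectorHolder {

    // Static state is per-JVM and is not serialized with the bolt instances
    private static volatile Injector injector;

    private InjectorHolder() {
    }

    public static Injector get() {
        if (injector == null) {
            synchronized (InjectorHolder.class) {
                if (injector == null) {
                    injector = Guice.createInjector(new Module());
                }
            }
        }
        return injector;
    }
}

Each prepare method then becomes firstInjectClass = InjectorHolder.get().getInstance(FirstInjectClass.class); because the field is assigned in prepare (which runs on the worker) and the holder is static, nothing non-serializable travels with the serialized topology.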

SimpleRabbitListenerContainerFactory and defaultRequeueRejected

As per the docs, defaultRequeueRejected's default value is true, but looking at the code it seems it's false. I am not sure whether I am missing something or whether it has to be changed in SimpleRabbitListenerContainerFactory.java.
EDIT
Sample code below. After putting a message in the test queue, I expect it to stay in the queue since processing fails, but it is thrown out. I want the message to be retried, so I configured retry in the container factory; if it still fails after the retries, I want it back in the queue. I am sure I am misunderstanding something here.
@SpringBootApplication
public class MsgRequeExampleApplication {

    public static void main(String[] args) {
        SpringApplication.run(MsgRequeExampleApplication.class, args);
    }

    @Bean(name = "myContainerFactory")
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setMessageConverter(new Jackson2JsonMessageConverter());
        factory.setMissingQueuesFatal(false);
        FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
        backOffPolicy.setBackOffPeriod(500);
        factory.setAdviceChain(new Advice[] { org.springframework.amqp.rabbit.config.RetryInterceptorBuilder.stateless()
                .maxAttempts(2).backOffPolicy(backOffPolicy).build() });
        return factory;
    }

    @RabbitListener(queues = "test", containerFactory = "myContainerFactory")
    public void processAdvisory(Message message) throws MyBusinessException {
        try {
            // Simulating an exception while processing the message
            String nullString = null;
            nullString.length();
        } catch (Exception ex) {
            throw new MyBusinessException(ex.getMessage());
        }
    }

    public class MyBusinessException extends Exception {
        public MyBusinessException(String msg) {
            super(msg);
        }
    }
}
There is a good description in the SimpleMessageListenerContainer JavaDocs:
/**
 * Set the default behavior when a message is rejected, for example because the listener
 * threw an exception. When true, messages will be requeued, when false, they will not. For
 * versions of Rabbit that support dead-lettering, the message must not be requeued in order
 * to be sent to the dead letter exchange. Setting to false causes all rejections to not
 * be requeued. When true, the default can be overridden by the listener throwing an
 * {@link AmqpRejectAndDontRequeueException}. Default true.
 * @param defaultRequeueRejected true to reject by default.
 */
public void setDefaultRequeueRejected(boolean defaultRequeueRejected) {
    this.defaultRequeueRejected = defaultRequeueRejected;
}
Does it make sense to you?
UPDATE
To requeue after the retries are exhausted, you need to configure a custom MessageRecoverer on the RetryInterceptorBuilder, with code like:
.recoverer((message, cause) -> {
    ReflectionUtils.rethrowRuntimeException(cause);
})
This way the exception will be thrown to the listener container and, according to its defaultRequeueRejected setting, the message will be requeued or not.
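Wired into the factory from your sample, that could look roughly like this (a sketch reusing the backOffPolicy variable from your configuration; be aware that rethrowing after the retries are exhausted makes the broker redeliver immediately, so without dead-lettering this can loop forever):

factory.setAdviceChain(RetryInterceptorBuilder.stateless()
        .maxAttempts(2)
        .backOffPolicy(backOffPolicy)
        // Rethrow so the container rejects the delivery; with the default
        // defaultRequeueRejected = true the broker puts the message back on the queue
        .recoverer((message, cause) -> ReflectionUtils.rethrowRuntimeException(cause))
        .build());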

Not able to receive onNext and onComplete call on subscribed mono

I was trying out the Reactor library and I'm not able to figure out why the Mono below never comes back with an onNext or onComplete call. I think I'm missing something very trivial. Here's a sample:
MyServiceService service = new MyServiceService();
service.save("id")
        .map(myUserMono -> new MyUser(myUserMono.getName().toUpperCase(), myUserMono.getId().toUpperCase()))
        .subscribe(new Subscriber<MyUser>() {
            @Override
            public void onSubscribe(Subscription s) {
                System.out.println("Subscribed!" + Thread.currentThread().getName());
            }

            @Override
            public void onNext(MyUser myUser) {
                System.out.println("OnNext on thread " + Thread.currentThread().getName());
            }

            @Override
            public void onError(Throwable t) {
                System.out.println("onError!" + Thread.currentThread().getName());
            }

            @Override
            public void onComplete() {
                System.out.println("onCompleted!" + Thread.currentThread().getName());
            }
        });
private static class MyServiceService {

    private Repository myRepo = new Repository();

    public Mono<MyUser> save(String userId) {
        return myRepo.save(userId);
    }
}

private static class Repository {

    public Mono<MyUser> save(String userId) {
        return Mono.create(myUserMonoSink -> {
            Future<MyUser> submit = exe.submit(() -> this.blockingMethod(userId));
            ListenableFuture<MyUser> myUserListenableFuture = JdkFutureAdapters.listenInPoolThread(submit);
            Futures.addCallback(myUserListenableFuture, new FutureCallback<MyUser>() {
                @Override
                public void onSuccess(MyUser result) {
                    myUserMonoSink.success(result);
                }

                @Override
                public void onFailure(Throwable t) {
                    myUserMonoSink.error(t);
                }
            });
        });
    }

    private MyUser blockingMethod(String userId) throws InterruptedException {
        Thread.sleep(5000);
        return new MyUser("blocking", userId);
    }
}
The above code only prints Subscribed!main. What I'm not able to figure out is why the future callback is not pushing values through myUserMonoSink.success.
The important thing to keep in mind is that a Flux or Mono is asynchronous, most of the time.
Once you subscribe, the asynchronous processing of saving the user starts in the executor, but execution continues in your main code after .subscribe(...).
So the main thread exits, terminating your test before anything is pushed to the Mono.
[sidebar]: when is it ever synchronous?
When the source of data is a Flux/Mono synchronous factory method, BUT with the added prerequisite that the rest of the chain of operators doesn't switch execution context. That could happen either explicitly (you use a publishOn or subscribeOn operator) or implicitly (some operators, like time-related ones, e.g. delayElements, run on a separate Scheduler).
Simply put, your source runs in the ExecutorService thread of exe, so the Mono is indeed asynchronous, while your snippet runs on main.
How to fix the issue
To observe the correct behavior of the Mono in an experiment (as opposed to fully async code in production), several possibilities are available:
keep subscribe with the System.out.printlns, but add a new CountDownLatch(1) that you countDown() inside onComplete and onError; await() on the latch after the subscribe (see the sketch after this list).
use .log().block() instead of .subscribe(...). You lose the customization of what to do on each event, but log() will print those out for you (provided you have a logging framework configured). block() reverts to blocking mode and does pretty much what I suggested with the CountDownLatch above: it returns the value once available, or throws an Exception in case of error.
instead of log() you can customize logging or other side effects using the .doOnXXX(...) methods (there's one for pretty much every type of event, plus combinations of events, e.g. doOnSubscribe, doOnNext...).
If you're doing a unit test, use StepVerifier from the reactor-test artifact. It will subscribe to the flux/mono and wait for events when you call .verify(). See the reference guide chapter on testing (and the rest of the reference guide in general).
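For illustration, the latch option from the first bullet might look like this (a sketch reusing MyServiceService from the question):

CountDownLatch latch = new CountDownLatch(1);
MyServiceService service = new MyServiceService();
service.save("id")
        .map(myUser -> new MyUser(myUser.getName().toUpperCase(), myUser.getId().toUpperCase()))
        .subscribe(
                user -> System.out.println("OnNext " + user.getName()),
                error -> latch.countDown(),   // onError terminates the Mono
                latch::countDown);            // onComplete does too
latch.await(); // keep main alive until the Mono terminates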
The issue is that the onSubscribe method in your anonymous class does nothing, so no demand is ever signalled upstream.
If you look at the implementation of LambdaSubscriber, it requests some number of events in onSubscribe.
It's also easier to extend BaseSubscriber, as it has some predefined logic.
So your subscriber implementation would be:
MyServiceService service = new MyServiceService();
service.save("id")
        .map(myUserMono -> new MyUser(myUserMono.getName().toUpperCase(), myUserMono.getId().toUpperCase()))
        .subscribe(new BaseSubscriber<MyUser>() {
            @Override
            protected void hookOnSubscribe(Subscription subscription) {
                System.out.println("Subscribed!" + Thread.currentThread().getName());
                request(1); // or requestUnbounded();
            }

            @Override
            protected void hookOnNext(MyUser myUser) {
                System.out.println("OnNext on thread " + Thread.currentThread().getName());
                // request(1); // needed here if requestUnbounded() wasn't called
            }

            @Override
            protected void hookOnComplete() {
                System.out.println("onCompleted!" + Thread.currentThread().getName());
            }

            @Override
            protected void hookOnError(Throwable throwable) {
                System.out.println("onError!" + Thread.currentThread().getName());
            }
        });
Maybe it's not the best implementation; I'm new to Reactor too.
Simon's answer has a pretty good explanation of testing asynchronous code.

Very slow performance of spring-amqp consumer

I've been experiencing trouble with a Spring Boot consumer. I compared the work of two consumers.
First consumer:
import com.rabbitmq.client.*;

import java.io.IOException;

public class Recv {

    private final static String QUEUE_NAME = "hello";

    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        System.out.println(" [*] Waiting for messages. To exit press CTRL+C");
        Consumer consumer = new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                    AMQP.BasicProperties properties, byte[] body) throws IOException {
            }
        };
        channel.basicConsume(QUEUE_NAME, true, consumer);
    }
}
Second consumer:
@Controller
public class Consumer {

    @RabbitListener(queues = "hello")
    public void processMessage(Message message) {
    }
}
There are no config files for the Spring Boot consumer; everything runs with defaults.
On my computer the first one works 10 times faster. What might be the problem?
The default prefetch (basicQos) for Spring AMQP consumers is 1, which means only 1 message is outstanding at the consumer at any one time; configure the rabbitListenerContainerFactory @Bean to set the prefetchCount to something larger.
You will have to override the default Boot-configured @Bean.
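A minimal sketch of such an override (the prefetch value is illustrative; tune it to your workload):

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // Allow up to 250 unacknowledged messages per consumer instead of 1
    factory.setPrefetchCount(250);
    return factory;
}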

Queues not recreated after broker failure

I'm using Spring-AMQP-rabbit in one of my applications, which acts as a message consumer. The queues are created and bound to the exchange at startup.
My problem:
When the RabbitMQ server is restarted, or removed and added back completely, the queues are not recreated. The connection to the RabbitMQ server is restored, but not the queues.
I've tried to do the queue administration within a ConnectionListener, but that hangs on startup. I guess the admin is connection-aware and should do queue management when the connection is restored, shouldn't it?
My Queues are created by a service:
@Lazy
@Service
public class AMQPEventSubscriber implements EventSubscriber {

    private final ConnectionFactory mConnectionFactory;
    private final AmqpAdmin mAmqpAdmin;
    private final ObjectMapper mObjectMapper; // declaration was missing from the snippet but is assigned below

    @Autowired
    public AMQPEventSubscriber(final AmqpAdmin amqpAdmin,
            final ConnectionFactory connectionFactory,
            final ObjectMapper objectMapper) {
        mConnectionFactory = connectionFactory;
        mAmqpAdmin = amqpAdmin;
        mObjectMapper = objectMapper;
    }

    @Override
    public <T extends DomainEvent<?>> void subscribe(final Class<T> topic, final EventHandler<T> handler) {
        final EventName topicName = topic.getAnnotation(EventName.class);
        if (topicName != null) {
            final MessageListenerAdapter adapter = new MessageListenerAdapter(handler, "handleEvent");
            final Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
            converter.setJsonObjectMapper(mObjectMapper);
            adapter.setMessageConverter(converter);
            final Queue queue = new Queue(handler.getId(), true, false, false, QUEUE_ARGS);
            mAmqpAdmin.declareQueue(queue);
            final Binding binding = BindingBuilder.bind(queue).to(Constants.DOMAIN_EVENT_TOPIC).with(topicName.value());
            mAmqpAdmin.declareBinding(binding);
            final SimpleMessageListenerContainer listener = new SimpleMessageListenerContainer(mConnectionFactory);
            listener.setQueues(queue);
            listener.setMessageListener(adapter);
            listener.start();
        } else {
            throw new IllegalArgumentException("subscribed Event type has no exchange key!");
        }
    }
}
Part of my handler app:
@Component
public class FooEventHandler implements EventHandler<FooEvent> {

    private final UserCallbackMessenger mUserCallbackMessenger;
    private final HorseTeamPager mHorseTeamPager;

    @Autowired
    public FooEventHandler(final EventSubscriber subscriber) {
        subscriber.subscribe(FooEvent.class, this);
    }

    @Override
    public void handleEvent(final FooEvent event) {
        // do stuff
    }
}
I wonder why the out-of-the-box feature, with RabbitAdmin and beans for the broker entities, doesn't fit your requirements:
A further benefit of doing the auto declarations in a listener is that if the connection is dropped for any reason (e.g. broker death, network glitch, etc.) they will be applied again the next time they are needed.
See more info in the Reference Manual.
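For reference, declaring the broker entities as beans is enough for the RabbitAdmin to redeclare them on every new connection. A minimal sketch (queue, exchange, and routing-key names here are hypothetical):

@Bean
public Queue handlerQueue() {
    // Durable queue; RabbitAdmin redeclares it whenever the connection is re-established
    return new Queue("my-handler-queue", true);
}

@Bean
public TopicExchange domainEventTopic() {
    return new TopicExchange("domain-event-topic");
}

@Bean
public Binding handlerBinding(Queue handlerQueue, TopicExchange domainEventTopic) {
    return BindingBuilder.bind(handlerQueue).to(domainEventTopic).with("my.routing.key");
}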
