I'm currently trying to understand why my ConfirmCallback is called before I have invoked Channel.basicAck / Channel.basicNack in the ChannelAwareMessageListener.
Please find my current setup below:
@Component
public class MyMessageListener implements ChannelAwareMessageListener {

    private Logger LOGGER = LoggerFactory.getLogger(MyMessageListener.class);

    @Override
    public void onMessage(Message message, Channel channel) throws Exception {
        Thread.sleep(1000L);
        String arg = String.valueOf(message.getBody());
        LOGGER.info("Received message {}", arg);
        Thread.sleep(1000L);
        channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
    }
}
@Component
public class LoggingConfirmCallback implements RabbitTemplate.ConfirmCallback {

    private Logger LOGGER = LoggerFactory.getLogger(LoggingConfirmCallback.class);

    @Override
    public void confirm(CorrelationData correlationData, boolean ack, String cause) {
        LOGGER.info("Received confirm with result {}", ack);
    }
}
@SpringBootApplication
@Configuration
public class Application {

    @Autowired
    RabbitTemplate rabbitTemplate;

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(Application.class, args);
        context.getBean(Application.class).doIt();
    }

    public void doIt() {
        rabbitTemplate.convertAndSend(null, null, "hello", new CorrelationData(UUID.randomUUID().toString()));
    }

    @Bean
    @Qualifier("confirmConnectionFactory")
    ConnectionFactory confirmConnectionFactory() {
        CachingConnectionFactory factory = new CachingConnectionFactory();
        factory.setPublisherConfirms(true);
        factory.setHost("192.168.59.103");
        factory.setChannelCacheSize(5);
        return factory;
    }

    @Bean
    @Primary
    RabbitTemplate firstExchange(@Qualifier("confirmConnectionFactory") ConnectionFactory connectionFactory, LoggingConfirmCallback loggingConfirmCallback) {
        RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        rabbitTemplate.setConfirmCallback(loggingConfirmCallback);
        rabbitTemplate.setExchange("first");
        rabbitTemplate.setMandatory(true);
        return rabbitTemplate;
    }

    @Bean
    MessageListenerAdapter myMessageListenerAdapter(MyMessageListener receiver) {
        return new MessageListenerAdapter(receiver);
    }

    @Bean
    SimpleMessageListenerContainer myQueueListener(@Qualifier("confirmConnectionFactory") ConnectionFactory connectionFactory, MessageListenerAdapter listenerAdapter) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
        container.setQueueNames("first.queue");
        container.setMessageListener(listenerAdapter);
        return container;
    }
}
What I see in the logs is this:
2015-07-17 08:56:32.352 INFO 5892 --- [ main] com.coderskitchen.rmqdct.Application : Started Application in 1.393 seconds (JVM running for 1.704)
2015-07-17 08:56:32.372 INFO 5892 --- [168.59.103:5672] c.c.rmqdct.LoggingConfirmCallback : Received confirm with result true
2015-07-17 08:56:33.373 INFO 5892 --- [cTaskExecutor-1] c.c.rmqdct.MyMessageListener : Received message [B@67962299
But I expected to see the message
Received confirm with result true
after
Received message [B@67962299
Thanks in advance
Peter
Publisher confirms have nothing to do with message reception.
The broker confirms that it has taken responsibility for the message by successfully delivering it to the configured queue(s).
It is quite independent of the consumer acknowledging reception. If you need that, you will have to send an application-level message back to the producer.
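If you do need an end-to-end signal from the consumer, one option is RabbitTemplate's request/reply support. The snippet below is for illustration only; the routing key and the reply-returning listener are assumptions, not part of the setup above.

// Producer side: convertSendAndReceive() blocks until the listener returns a reply
// (or the reply timeout elapses), which acts as an application-level acknowledgement.
String reply = (String) rabbitTemplate.convertSendAndReceive("first", "first.queue", "hello");

// Consumer side: the return value of a @RabbitListener method is sent back to the
// producer automatically via the reply-to address.
@RabbitListener(queues = "first.queue")
public String handleAndReply(String payload) {
    return "processed " + payload;
}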
Related
I am using spring-amqp and using consumerBatchEnabled to receive a batch of events, as described in the link below:
https://docs.spring.io/spring-amqp/docs/2.2.5.RELEASE/reference/html/#receiving-batch
and registering the listener as below:
import org.springframework.messaging.Message;

@RabbitListener(queues = "batch.2", containerFactory = "consumerBatchContainerFactory")
public void consumerBatch2(List<org.springframework.messaging.Message<Invoice>> messages) {
    // code here to process events
}
I also have a config class defined as below:
@Bean
@Autowired
public ConnectionFactory connectionFactory() {
    CachingConnectionFactory factory = new CachingConnectionFactory();
    return factory;
}

@Bean
public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setMessageConverter(messageConverter());
    return rabbitTemplate;
}

@Bean
public SimpleRabbitListenerContainerFactory consumerBatchContainerFactory(
        SimpleRabbitListenerContainerFactoryConfigurer configurer,
        ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    factory.setPrefetchCount(1000);
    factory.setBatchListener(true);
    factory.setBatchSize(1000);
    factory.setConsumerBatchEnabled(true);
    factory.setReceiveTimeout(1000L);
    configurer.configure(factory, connectionFactory);
    return factory;
}

@Bean
public MessageConverter messageConverter() {
    return new Jackson2JsonMessageConverter();
}
However, when I publish the event, I get a ClassCastException: java.util.ArrayList cannot be cast to org.springframework.amqp.core.Message.
Stack trace:
WARN org.springframework.amqp.rabbit.listener.ConditionalRejectingErrorHandler [org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#0-1] : Execution of Rabbit message listener failed.
java.lang.ClassCastException: java.util.ArrayList cannot be cast to org.springframework.amqp.core.Message
	at brave.spring.rabbit.TracingRabbitListenerAdvice.invoke(TracingRabbitListenerAdvice.java:75)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
	at org.springframework.amqp.rabbit.listener.$Proxy210.invokeListener(Unknown Source)
	at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:1537)
	at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:1532)
	at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.executeListener(AbstractMessageListenerContainer.java:1472)
	at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.executeWithList(SimpleMessageListenerContainer.java:1037)
	at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doReceiveAndExecute(SimpleMessageListenerContainer.java:1026)
	at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.receiveAndExecute(SimpleMessageListenerContainer.java:923)
	at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$1600(SimpleMessageListenerContainer.java:83)
	at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.mainLoop(SimpleMessageListenerContainer.java:1298)
	at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1204)
	at java.lang.Thread.run(Thread.java:748)
2021-06-10T13:47:56.851+0000 : Restarting Consumer
There is no issue without consumerBatchEnabled; I am able to receive and process the Message:
@RabbitListener(queues = "batch.2")
public void consumerBatch2(org.springframework.messaging.Message<Invoice> message) {
    // code here
}
It is a known issue: https://github.com/openzipkin/brave/issues/1240.
The bug is resolved in library version 5.12.4.
I have multiple virtual hosts each with a request queue and a response queue. These virtual hosts serve different clients. The names for the request queue and the response queue remain the same across the virtual hosts.
I have created a SimpleRoutingConnectionFactory with clientName() + "ConnectionFactory" as the lookup key and the corresponding CachingConnectionFactory as the value in the map. I'm able to publish messages to the request queues by binding the RabbitTemplate to a virtual host before convertAndSend and then unbinding it.
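For reference, the publish side described above looks roughly like this (a simplified sketch; the lookup key and payload variable are illustrative, following the Spring AMQP SimpleResourceHolder pattern):

// Bind the routing connection factory to the client's lookup key for the current
// thread, publish, and unbind again.
SimpleResourceHolder.bind(rabbitTemplate.getConnectionFactory(), client.getName() + "ConnectionFactory");
try {
    rabbitTemplate.convertAndSend("orbit-exchange", "orbit-request", payload);
} finally {
    SimpleResourceHolder.unbind(rabbitTemplate.getConnectionFactory());
}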
I'm not able to consume messages from the response queues from different virtual hosts. I have created a SimpleRabbitListenerContainerFactory for each client. I implemented RabbitListenerConfigurer and registered a SimpleRabbitListenerEndpoint for each SimpleRabbitListenerContainerFactory. I also set the connectionFactory on each SimpleRabbitListenerContainerFactory as the client's CachingConnectionFactory.
@Configuration
public class RabbitConfiguration implements RabbitListenerConfigurer {

    @Autowired
    private ApplicationContext applicationContext;

    @Autowired
    private ClientList clients;

    @Bean
    @Primary
    public SimpleRoutingConnectionFactory routingConnectionFactory() {
        final var routingConnectionFactory = new SimpleRoutingConnectionFactory();
        final Map<Object, ConnectionFactory> routeMap = new HashMap<>();
        applicationContext.getBeansOfType(ConnectionFactory.class)
                .forEach((beanName, bean) -> {
                    routeMap.put(beanName, bean);
                });
        routingConnectionFactory.setTargetConnectionFactories(routeMap);
        return routingConnectionFactory;
    }

    @Bean
    public RabbitTemplate rabbitTemplate() {
        return new RabbitTemplate(routingConnectionFactory());
    }

    @Bean
    public DirectExchange orbitExchange() {
        return new DirectExchange("orbit-exchange");
    }

    @Bean
    public Queue requestQueue() {
        return QueueBuilder
                .durable("request-queue")
                .lazy()
                .build();
    }

    @Bean
    public Queue responseQueue() {
        return QueueBuilder
                .durable("response-queue")
                .lazy()
                .build();
    }

    @Bean
    public Binding requestBinding() {
        return BindingBuilder.bind(requestQueue())
                .to(orbitExchange())
                .with("orbit-request");
    }

    @Bean
    public Binding responseBinding() {
        return BindingBuilder.bind(responseQueue())
                .to(orbitExchange())
                .with("orbit-response");
    }

    @Override
    public void configureRabbitListeners(RabbitListenerEndpointRegistrar registrar) {
        clients.get()
                .stream()
                .forEach(client -> {
                    var endpoint = createEndpoint(client);
                    var listenerContainerFactory = (SimpleRabbitListenerContainerFactory) applicationContext
                            .getBean(client.getName() + "ListenerContainerFactory");
                    listenerContainerFactory.setConnectionFactory((ConnectionFactory) applicationContext.getBean(client.getName() + "ConnectionFactory"));
                    registrar.registerEndpoint(endpoint, listenerContainerFactory);
                });
    }

    private SimpleRabbitListenerEndpoint createEndpoint(Client client) {
        var endpoint = new SimpleRabbitListenerEndpoint();
        endpoint.setId(client.getName());
        endpoint.setQueueNames("response-queue");
        endpoint.setMessageListener(new MessageListenerAdapter(new MessageReceiver(), "receive"));
        return endpoint;
    }
}
However, I get org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer: Failed to check/redeclare auto-delete queue(s). java.lang.IllegalStateException: Cannot determine target ConnectionFactory for lookup key [null]
I'm not able to figure out what's causing this, as I'm not using the SimpleRoutingConnectionFactory for message consumption at all.
EDIT:
Full stack trace below:
ERROR [2020-07-09T04:12:38,028] [amdoListenerEndpoint-1] [TraceId:] org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer: Failed to check/redeclare auto-delete queue(s).
java.lang.IllegalStateException: Cannot determine target ConnectionFactory for lookup key [null]
at org.springframework.amqp.rabbit.connection.AbstractRoutingConnectionFactory.determineTargetConnectionFactory(AbstractRoutingConnectionFactory.java:120)
at org.springframework.amqp.rabbit.connection.AbstractRoutingConnectionFactory.createConnection(AbstractRoutingConnectionFactory.java:98)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.createConnection(ConnectionFactoryUtils.java:214)
at org.springframework.amqp.rabbit.core.RabbitTemplate.doExecute(RabbitTemplate.java:2089)
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:2062)
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:2042)
at org.springframework.amqp.rabbit.core.RabbitAdmin.getQueueInfo(RabbitAdmin.java:407)
at org.springframework.amqp.rabbit.core.RabbitAdmin.getQueueProperties(RabbitAdmin.java:391)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.attemptDeclarations(AbstractMessageListenerContainer.java:1836)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.redeclareElementsIfNecessary(AbstractMessageListenerContainer.java:1817)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.initialize(SimpleMessageListenerContainer.java:1349)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1195)
at java.base/java.lang.Thread.run(Thread.java:834)
EDIT2:
I used the routingConnectionFactory with every listener and used setLookupKeyQualifier. No more exceptions, but the listeners don't seem to be doing anything, i.e. the queues are not being listened to.
@Import(MqConfig.class)
// This imports the CachingConnectionFactory beans and SimpleRabbitListenerContainerFactory beans for all clients
@Configuration
public class RabbitConfiguration implements RabbitListenerConfigurer {

    @Autowired
    private ApplicationContext applicationContext;

    @Autowired
    private ClientList clients;

    @Bean
    @Primary
    public SimpleRoutingConnectionFactory routingConnectionFactory() {
        final var routingConnectionFactory = new SimpleRoutingConnectionFactory();
        final Map<Object, ConnectionFactory> routeMap = new HashMap<>();
        applicationContext.getBeansOfType(ConnectionFactory.class)
                .forEach((beanName, bean) -> {
                    routeMap.put(beanName + "[response-queue]", bean);
                });
        routingConnectionFactory.setTargetConnectionFactories(routeMap);
        return routingConnectionFactory;
    }

    @Bean
    public RabbitTemplate rabbitTemplate() {
        return new RabbitTemplate(routingConnectionFactory());
    }

    @Bean
    public DirectExchange orbitExchange() {
        return new DirectExchange("orbit-exchange");
    }

    @Bean
    public Queue requestQueue() {
        return QueueBuilder
                .durable("request-queue")
                .lazy()
                .build();
    }

    @Bean
    public Queue responseQueue() {
        return QueueBuilder
                .durable("response-queue")
                .lazy()
                .build();
    }

    @Bean
    public Binding requestBinding() {
        return BindingBuilder.bind(requestQueue())
                .to(orbitExchange())
                .with("orbit-request");
    }

    @Bean
    public Binding responseBinding() {
        return BindingBuilder.bind(responseQueue())
                .to(orbitExchange())
                .with("orbit-response");
    }

    @Override
    public void configureRabbitListeners(RabbitListenerEndpointRegistrar registrar) {
        clients.get()
                .stream()
                .forEach(client -> {
                    var endpoint = createEndpoint(client);
                    var listenerContainerFactory = getListenerContainerFactory(client);
                    listenerContainerFactory.setConnectionFactory((ConnectionFactory) applicationContext.getBean(client.getName() + "ConnectionFactory"));
                    registrar.registerEndpoint(endpoint, listenerContainerFactory);
                });
    }

    private SimpleRabbitListenerEndpoint createEndpoint(Client client) {
        var endpoint = new SimpleRabbitListenerEndpoint();
        endpoint.setId(client.getName());
        endpoint.setQueueNames("response-queue");
        endpoint.setMessageListener(new MessageListenerAdapter(new MessageReceiver(), "receive"));
        return endpoint;
    }

    private SimpleRabbitListenerContainerFactory getListenerContainerFactory(Client client) {
        var listenerContainerFactory = (SimpleRabbitListenerContainerFactory) applicationContext.getBean(client.getName() + "ListenerContainerFactory");
        listenerContainerFactory.setConnectionFactory(routingConnectionFactory());
        listenerContainerFactory.setContainerCustomizer(container -> {
            container.setQueueNames("response-queue");
            container.setLookupKeyQualifier(client.getName());
            container.setMessageListener(message -> log.info("Received message"));
        });
        return listenerContainerFactory;
    }
}
There is something very strange going on; [null] implies that when we call getRoutingLookupKey() the cf is not a routing cf but when we call getConnectionFactory() it is.
It's not obvious how that can happen. Perhaps you can figure out why in a debugger?
One solution would be to inject the routing cf and use setLookupKeyQualifier(...).
The lookup key will then be clientId[queueName].
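In other words (an illustrative sketch only; the client, clientConnectionFactory and routeMap names are placeholders), the container factory gets the routing connection factory and a lookup-key qualifier, and the routing map must contain a matching clientName[response-queue] entry:

// Routing map entry keyed as "<clientName>[response-queue]" so it matches the
// container's qualified lookup key.
routeMap.put(client.getName() + "[response-queue]", clientConnectionFactory);

// Container factory wired to the routing connection factory, with the lookup key
// qualified by the client name.
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(routingConnectionFactory());
factory.setContainerCustomizer(container -> container.setLookupKeyQualifier(client.getName()));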
I use the Spring Cloud service connector to connect to a RabbitMQ service on Cloud Foundry.
public class CloudConfig extends AbstractCloudConfig {

    @Bean
    public ConnectionFactory rabbitFactory() {
        return connectionFactory().rabbitConnectionFactory();
    }
}
But I need to declare a CachingConnectionFactory and set its publisherConfirms flag to true, because we need publisher confirms to check the ack when we send a message to a queue. I have no idea how to inject the connection factory obtained from the Spring Cloud service connector, or how to handle this situation.
The documentation includes examples of customizing details of the connection provided by Connectors.
In your case, you should be able to do something like this:
@Bean
public RabbitConnectionFactory rabbitFactory() {
    Map<String, Object> properties = new HashMap<String, Object>();
    properties.put("publisherConfirms", true);
    RabbitConnectionFactoryConfig rabbitConfig = new RabbitConnectionFactoryConfig(properties);
    return connectionFactory().rabbitConnectionFactory(rabbitConfig);
}
You can reconfigure the CCF created by the connector as follows:
@Bean
public SmartInitializingSingleton factoryConfigurer() {
    return new SmartInitializingSingleton() {

        @Autowired
        private CachingConnectionFactory connectionFactory;

        @Override
        public void afterSingletonsInstantiated() {
            this.connectionFactory.setPublisherConfirms(true);
        }
    };
}
You must be sure not to perform any RabbitMQ operations before the application context is fully initialized (which is best practice anyway).
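For example, rather than publishing directly from main(), the first publish could be deferred until the context is ready. This is a minimal sketch, assuming an ApplicationRunner bean; the exchange and routing key names are placeholders:

// Runs only after the application context is fully initialized, so the
// publisher-confirms setting applied in afterSingletonsInstantiated() is in effect.
@Bean
public ApplicationRunner sendAfterStartup(RabbitTemplate rabbitTemplate) {
    return args -> rabbitTemplate.convertAndSend("someExchange", "someRoutingKey", "hello",
            new CorrelationData(UUID.randomUUID().toString()));
}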
This is the RabbitTemplate:
@Bean
public RabbitTemplate rabbitTemplate() {
    RabbitTemplate template = new RabbitTemplate(connectionFactory);
    template.setMandatory(true);
    template.setMessageConverter(new Jackson2JsonMessageConverter());
    template.setConfirmCallback((correlationData, ack, cause) -> {
        if (!ack) {
            System.out.println("send message failed: " + cause + correlationData.toString());
        } else {
            System.out.println("Publisher Confirm" + correlationData.toString());
        }
    });
    return template;
}
This is the spring-cloud config:
@Bean
public ConnectionFactory rabbitConnectionFactory() {
    Map<String, Object> properties = new HashMap<String, Object>();
    properties.put("publisherConfirms", true);
    RabbitConnectionFactoryConfig rabbitConfig = new RabbitConnectionFactoryConfig(properties);
    return connectionFactory().rabbitConnectionFactory(rabbitConfig);
}
When I use this sender to send a message, the result is not as expected.
@Component
public class TestSender {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @Scheduled(cron = "0/5 * * * * ? ")
    public void send() {
        System.out.println("===============================================================");
        this.rabbitTemplate.convertAndSend(EXCHANGE, "routingkey", "hello world",
                (Message m) -> {
                    m.getMessageProperties().setHeader("tenant", "aaaaa");
                    return m;
                }, new CorrelationData(UUID.randomUUID().toString()));
        Date date = new Date();
        System.out.println("Sender Msg Successfully - " + date);
    }
}
I am trying to trace a Spring Cloud / Spring Integration AWS + AWS SQS app with Sleuth. The receiver receives a message from SQS after the message is added to the queue. The log has the app name, but no trace id and span id when receiving a message from the SQS queue. Here is a line from the log:
2017-07-28 16:24:02.352 INFO [sqs-sleuth-demo,,,] 9706 --- [enerContainer-2] com.example.demo.SQSMessageReceiver : dequeued message: hello world
I use Spring Boot '1.5.4.RELEASE' and Spring Cloud 'Dalston.SR1'. Here are the dependencies:
dependencies {
    compile("org.springframework.boot:spring-boot")
    compile("org.springframework.boot:spring-boot-starter")
    compile("org.springframework.boot:spring-boot-starter-web")
    compile("org.springframework.cloud:spring-cloud-aws-messaging")
    compile("org.springframework.cloud:spring-cloud-aws-autoconfigure")
    compile('org.springframework.cloud:spring-cloud-starter-sleuth')
    compile("org.springframework.integration:spring-integration-aws:1.0.0.RELEASE")
    compile("com.amazonaws:aws-java-sdk-sqs")
}

dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}"
    }
}
AppConfig.java
@Configuration
public class AppConfig {

    @Value("${amazon.aws.accesskey}")
    private String amazonAWSAccessKey;

    @Value("${amazon.aws.secretkey}")
    private String amazonAWSSecretKey;

    @Value("${amazon.sqs.endpoint}")
    private String amazonSqsEndpoint;

    @Value("${cloud.aws.region.static}")
    private String awsRegion;

    @Bean
    @Primary
    public AWSCredentialsProviderChain credentialsProviderChain() {
        return new DefaultAWSCredentialsProviderChain();
    }
}
SQSMessageReceiver.java
@Component
public class SQSMessageReceiver {

    private static final Logger LOGGER = LoggerFactory.getLogger(SQSMessageReceiver.class);

    @Autowired
    private RestTemplate restTemplate;

    @SqsListener(value = "${amazon.sqs.queue.name}", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
    public void receive(String message) throws Exception {
        LOGGER.info("dequeued message: " + message);
    }
}
and DemoApplication.java:
@SpringBootApplication
@EnableSqs
public class DemoApplication {

    private static final Logger LOGGER = LoggerFactory.getLogger(DemoApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
Is it possible to trace SQS events with Sleuth, or is there anything wrong with the settings/code?
Thanks,
Hi, I am developing a Spring Boot RabbitMQ application (Spring AMQP 1.6) and have a few queries. I have read the docs and browsed other Stack Overflow questions, but I cannot get a few things clear (might be because of my bad memory).
It would be great if someone could answer my questions.
1) Currently I have 4 producers and 4 consumers. A producer may produce millions of messages or events, so using a single connection for both producer and consumer will block the consumer from consuming the messages. So what I thought was to create separate connections for producer and consumer, so that neither blocks the other and we get some performance improvement. Am I correct with this approach?
2) I am using CachingConnectionFactory to create the connection used by the SimpleRabbitListenerContainerFactory. When a call is made to this factory, will it return a new connection for us? If we use CachingConnectionFactory, do we really need to write separate connection factories for both producer and consumer? Please find my code below.
1) Configuration class
@Configuration
@EnableRabbit
public class RabbitMqConfiguration {

    @Autowired
    private CachingConnectionFactory cachingConnectionFactory;

    @Value("${concurrent.consumers}")
    public int concurrent_consumers;

    @Value("${max.concurrent.consumers}")
    public int max_concurrent_consumers;

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(cachingConnectionFactory);
        factory.setConcurrentConsumers(concurrent_consumers);
        factory.setMaxConcurrentConsumers(max_concurrent_consumers);
        factory.setMessageConverter(jsonMessageConverter());
        return factory;
    }

    @Bean
    public MessageConverter jsonMessageConverter() {
        final Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
        return converter;
    }
}
2) Producer class
@Configuration
public class TaskProducerConfiguration extends RabbitMqConfiguration {

    @Value("${queue1}")
    public String queue1;

    @Value("${queue2}")
    public String queue2;

    @Value("${queue3}")
    public String queue3;

    @Value("${queue4}")
    public String queue4;

    @Value("${spring.rabbit.exchange}")
    public String exchange;

    @Autowired
    private CachingConnectionFactory cachingConnectionFactory;

    @Primary
    @Bean
    public RabbitTemplate getQueue1Template() {
        RabbitTemplate template = new RabbitTemplate(cachingConnectionFactory);
        template.setRoutingKey(this.queue1);
        template.setMessageConverter(jsonMessageConverter());
        return template;
    }

    @Bean
    public RabbitTemplate getQueue2Template() {
        RabbitTemplate template = new RabbitTemplate(cachingConnectionFactory);
        template.setRoutingKey(this.queue2);
        template.setMessageConverter(jsonMessageConverter());
        return template;
    }

    @Bean
    public RabbitTemplate getQueue3Template() {
        RabbitTemplate template = new RabbitTemplate(cachingConnectionFactory);
        template.setRoutingKey(this.queue3);
        template.setMessageConverter(jsonMessageConverter());
        return template;
    }

    @Bean
    public RabbitTemplate getQueue4Template() {
        RabbitTemplate template = new RabbitTemplate(cachingConnectionFactory);
        template.setRoutingKey(this.queue4);
        template.setMessageConverter(jsonMessageConverter());
        return template;
    }

    @Bean(name = "queue1Bean")
    public Queue queue1() {
        return new Queue(this.queue1);
    }

    @Bean(name = "queue2Bean")
    public Queue queue2() {
        return new Queue(this.queue2);
    }

    @Bean(name = "queue3Bean")
    public Queue queue3() {
        return new Queue(this.queue3);
    }

    @Bean(name = "queue4Bean")
    public Queue queue4() {
        return new Queue(this.queue4);
    }

    @Bean
    TopicExchange exchange() {
        return new TopicExchange(exchange);
    }

    @Bean
    List<Binding> bindings(Queue queue1Bean, Queue queue2Bean, Queue queue3Bean, Queue queue4Bean, TopicExchange exchange) {
        List<Binding> bindingList = new ArrayList<Binding>();
        bindingList.add(BindingBuilder.bind(queue1Bean).to(exchange).with(this.queue1));
        bindingList.add(BindingBuilder.bind(queue2Bean).to(exchange).with(this.queue2));
        bindingList.add(BindingBuilder.bind(queue3Bean).to(exchange).with(this.queue3));
        bindingList.add(BindingBuilder.bind(queue4Bean).to(exchange).with(this.queue4));
        return bindingList;
    }
}
3) Receiver class (I am sharing just one receiver class; the remaining three receiver classes are the same except for the queue name and routing key).
@Component
public class Queue1Receiver {

    @Autowired
    private TaskProducer taskProducer;

    @Value("${queue1}")
    public String queue1;

    @RabbitListener(id = "queue1", containerFactory = "rabbitListenerContainerFactory", queues = "#{queue1Bean}")
    public void handleQueue1Message(TaskMessage taskMessage, @Header(AmqpHeaders.CONSUMER_QUEUE) String queue) {
        System.out.println("Queue::" + queue);
        System.out.println("CustomerId: " + taskMessage.getCustomerID());
        if (taskMessage.isHasQueue2()) {
            taskProducer.sendQueue2Message(taskMessage);
        }
        if (taskMessage.isHasQueue3()) {
            taskProducer.sendQueue3Message(taskMessage);
        }
        if (taskMessage.isHasQueue4()) {
            taskProducer.sendQueue4Message(taskMessage);
        }
    }

    @Bean
    public Queue queue1Bean() {
        // This queue has the following properties:
        // name: my_durable, durable: true, exclusive: false, auto_delete: false
        return new Queue(queue1, true, false, false);
    }
}
Your help would be appreciated.
Note: Downvoters, please leave a comment before downvoting so that I can avoid the mistake in the future.
Edited based on comments by Gary Russell:
1) RabbitMqConfiguration
@Configuration
@EnableRabbit
public class RabbitMqConfiguration {

    @Value("${concurrent.consumers}")
    public int concurrent_consumers;

    @Value("${max.concurrent.consumers}")
    public int max_concurrent_consumers;

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory());
        factory.setConcurrentConsumers(concurrent_consumers);
        factory.setMaxConcurrentConsumers(max_concurrent_consumers);
        factory.setMessageConverter(jsonMessageConverter());
        return factory;
    }

    @Bean
    public CachingConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        connectionFactory.setUsername("guest");
        connectionFactory.setPassword("guest");
        connectionFactory.setCacheMode(CacheMode.CONNECTION);
        return connectionFactory;
    }

    @Bean
    public MessageConverter jsonMessageConverter() {
        final Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
        return converter;
    }
}
"using a single connection for both producer and consumer will block the consumer from consuming the messages"
What leads you to believe that? A single connection will generally be fine. If you really want separate connections, change the connection factory cacheMode to CONNECTION.
You can use connection pooling in the same case; keeping the pool size appropriate may solve the problem. As suggested in the above answer, both producer and consumer are using the same connection, so pooling might help you out instead.