Spring Boot RabbitMQ: no consumer connected - spring-amqp

I'm using Spring Boot and RabbitMQ to receive messages.
The first consumer I created works; it is declared as below:
@Component
public class UserConsumer {

    @Autowired
    private RabbitTemplate template;

    @RabbitListener(queues = MessagingConfig.CONSUME_QUEUE)
    public void consumeMessageFromQueue(MassTransitRequest userRequest) {
        ...
    }
}
I then needed a second consumer, so I duplicated the above under a different name:
@Component
public class PackConsumer {

    @Autowired
    private RabbitTemplate template;

    @RabbitListener(queues = MessagingConfig.CONSUME_QUEUE_CREATE_PACK)
    public void consumeMessageFromQueue(MassTransitRequest fileRequest) {
        ...
    }
}
Everything works locally on my machine; however, when I deploy it, the new queue does not process messages because there is no consumer connected to it. The UserConsumer continues to work.
Is there something else I should be doing to connect to the new queue at the same time as the original?
While learning I added a MessagingConfig class, shown below, though I believe it relates to sending messages rather than receiving them:
@Configuration
public class MessagingConfig {

    public static final String CONSUME_QUEUE = "merge-document-request";
    public static final String CONSUME_EXCHANGE = "merge-document-request";
    public static final String CONSUME_ROUTING_KEY = "";

    public static final String PUBLISH_QUEUE = "merge-document-response";
    public static final String PUBLISH_EXCHANGE = "merge-document-response";
    public static final String PUBLISH_ROUTING_KEY = "";

    public static final String CONSUME_QUEUE_CREATE_PACK = "create-pack-request";
    public static final String CONSUME_EXCHANGE_CREATE_PACK = "create-pack-request";
    public static final String CONSUME_ROUTING_KEY_CREATE_PACK = "";

    public static final String PUBLISH_QUEUE_CREATE_PACK = "create-pack-response";
    public static final String PUBLISH_EXCHANGE_CREATE_PACK = "create-pack-response";
    public static final String PUBLISH_ROUTING_KEY_CREATE_PACK = "";

    @Bean
    public Queue queue() {
        return new Queue(CONSUME_QUEUE);
    }

    @Bean
    public TopicExchange exchange() {
        return new TopicExchange(CONSUME_EXCHANGE);
    }

    @Bean
    public Binding binding(Queue queue, TopicExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange).with(CONSUME_ROUTING_KEY);
    }

    @Bean
    public MessageConverter converter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    public AmqpTemplate template(ConnectionFactory connectionFactory) {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        rabbitTemplate.setMessageConverter(converter());
        return rabbitTemplate;
    }
}
Thanks in advance
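One likely cause, assuming the broker in the deployed environment does not already have the create-pack-request queue: the MessagingConfig above only declares the first queue, exchange, and binding, so nothing declares the second queue on startup. A minimal sketch (bean names are illustrative) of the missing declarations to add to MessagingConfig:

@Bean
public Queue createPackQueue() {
    return new Queue(CONSUME_QUEUE_CREATE_PACK);
}

@Bean
public TopicExchange createPackExchange() {
    return new TopicExchange(CONSUME_EXCHANGE_CREATE_PACK);
}

@Bean
public Binding createPackBinding(Queue createPackQueue, TopicExchange createPackExchange) {
    // bind the create-pack queue to its exchange with the (empty) routing key
    return BindingBuilder.bind(createPackQueue).to(createPackExchange)
            .with(CONSUME_ROUTING_KEY_CREATE_PACK);
}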

Related

RabbitListenerConfigureIntegrationTests Example

I am looking for integration test examples for RabbitListenerConfigurer and RabbitListenerEndpointRegistrar: invoking a @RabbitListener-annotated method, testing the message conversion, and passing additional parameters such as the Channel and message properties.
Something like this:
@RunWith(SpringJUnit4ClassRunner.class)
public class RabbitListenerConfigureIntegrationTests {

    public final String sampleMessage = "{\"ORCH_KEY\":{\"inputMap\":{},\"outputMap\":{\"activityId\":\"10001002\",\"activityStatus\":\"SUCCESS\"}}}";

    @Test
    public void testRabbitListenerConfigurer() throws Exception {
        AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(
                EnableRabbitConfigWithCustomConversion.class);
        RabbitListenerConfigurer registrar = ctx.getBean(RabbitListenerConfigurer.class);
        /* I want to get the Listener instance here */
        Message message = MessageBuilder.withBody(sampleMessage.getBytes())
                .andProperties(MessagePropertiesBuilder.newInstance()
                        .setContentType("application/json")
                        .build())
                .build();
        /* call listener.onMessage(message); that in turn passes the call back to the
           @RabbitListener, and by then the registered MessageHandler should kick in
           and convert the message */
    }
    @Configuration
    @EnableRabbit
    public static class EnableRabbitConfigWithCustomConversion implements RabbitListenerConfigurer {

        @Override
        public void configureRabbitListeners(RabbitListenerEndpointRegistrar registrar) {
            registrar.setMessageHandlerMethodFactory(messageHandlerMethodFactory());
        }

        @Bean
        public ConnectionFactory mockConnectionFactory() {
            return mock(ConnectionFactory.class);
        }

        @Bean
        public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
            SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
            factory.setConnectionFactory(mockConnectionFactory());
            factory.setAutoStartup(false);
            return factory;
        }

        @Bean
        MessageHandlerMethodFactory messageHandlerMethodFactory() {
            DefaultMessageHandlerMethodFactory messageHandlerMethodFactory = new DefaultMessageHandlerMethodFactory();
            messageHandlerMethodFactory.setMessageConverter(consumerJackson2MessageConverter());
            return messageHandlerMethodFactory;
        }

        @Bean
        public MappingJackson2MessageConverter consumerJackson2MessageConverter() {
            return new MappingJackson2MessageConverter();
        }

        @Bean
        public Listener messageListener1() {
            return new Listener();
        }
    }
public class Listener {

    @RabbitListener(queues = "QUEUE")
    public void listen(ExchangeDTO dto, Channel channel) {
        System.out.println("Result:" + dto.getClass() + ":" + dto.toString());
        /* ExchangeDTO dto = (ExchangeDTO) messageConverter.fromMessage(message);
           System.out.println("dto:" + dto); */
    }
}
EDIT 2
I am not getting the ExchangeDTO populated with values; instead I get empty values.
Here is the log:
15:00:50.994 [main] DEBUG org.springframework.amqp.rabbit.listener.adapter.MessagingMessageListenerAdapter - Processing [GenericMessage [payload=byte[93], headers={contentType=application/json, id=8bf86bf1-7e45-d136-9126-69959f92f100, timestamp=1552680050993}]]
Result:class com.dsicover.dftp.scrubber.subscriber.ExchangeDTO:DTO [inputMap={}, outputMap={}]
public class ExchangeDTO implements Serializable {

    private static final long serialVersionUID = 1L;

    private HashMap<String, Object> inputMap = new HashMap<String, Object>();
    private HashMap<String, Object> outputMap = new HashMap<String, Object>();

    public HashMap<String, Object> getInputMap() {
        return inputMap;
    }

    public void setInputMap(HashMap<String, Object> inputMap) {
        this.inputMap = inputMap;
    }

    public HashMap<String, Object> getOutputMap() {
        return outputMap;
    }

    public void setOutputMap(HashMap<String, Object> outputMap) {
        this.outputMap = outputMap;
    }

    @Override
    public String toString() {
        return "DTO [inputMap=" + this.inputMap + ", outputMap=" + this.outputMap + "]";
    }
}
Is there anything I am missing in the Jackson2 message converter?
Give the @RabbitListener an id; then:
- RabbitListenerEndpointRegistry.getListenerContainer(id);
- cast the container to AbstractMessageListenerContainer;
- container.getMessageListener();
- cast the listener to ChannelAwareMessageListener;
- call onMessage();
- use a mock channel and verify the expected call.
EDIT
@Autowired
private RabbitListenerEndpointRegistry registry;

@Test
public void test() throws Exception {
    AbstractMessageListenerContainer listenerContainer =
            (AbstractMessageListenerContainer) this.registry.getListenerContainer("foo");
    ChannelAwareMessageListener listener =
            (ChannelAwareMessageListener) listenerContainer.getMessageListener();
    Channel channel = mock(Channel.class);
    listener.onMessage(new Message("foo".getBytes(),
            MessagePropertiesBuilder
                    .newInstance()
                    .setDeliveryTag(42L)
                    .build()), channel);
    verify(channel).basicAck(42L, false);
}
EDIT2
Your json does not look like a DTO, it looks like a Map<String, DTO>.
This works fine for me...
@SpringBootApplication
public class So55188061Application {

    public static void main(String[] args) {
        SpringApplication.run(So55188061Application.class, args);
    }

    @RabbitListener(id = "foo", queues = "foo")
    public void listen(Map<String, Foo> in, Channel channel, @Header(AmqpHeaders.DELIVERY_TAG) long tag) throws IOException {
        System.out.println(in);
        channel.basicAck(tag, false);
    }

    @Bean
    public MessageConverter converter() {
        return new Jackson2JsonMessageConverter();
    }

    public static class Foo {

        private HashMap<String, Object> inputMap = new HashMap<String, Object>();
        private HashMap<String, Object> outputMap = new HashMap<String, Object>();

        public HashMap<String, Object> getInputMap() {
            return this.inputMap;
        }

        public void setInputMap(HashMap<String, Object> inputMap) {
            this.inputMap = inputMap;
        }

        public HashMap<String, Object> getOutputMap() {
            return this.outputMap;
        }

        public void setOutputMap(HashMap<String, Object> outputMap) {
            this.outputMap = outputMap;
        }

        @Override
        public String toString() {
            return "Foo [inputMap=" + this.inputMap + ", outputMap=" + this.outputMap + "]";
        }
    }
}
and
@RunWith(SpringRunner.class)
@SpringBootTest
public class So55188061ApplicationTests {

    public final String sampleMessage =
            "{\"ORCH_KEY\":{\"inputMap\":{},"
            + "\"outputMap\":{\"activityId\":\"10001002\",\"activityStatus\":\"SUCCESS\"}}}";

    @Autowired
    private RabbitListenerEndpointRegistry registry;

    @Test
    public void test() throws Exception {
        AbstractMessageListenerContainer listenerContainer = (AbstractMessageListenerContainer) this.registry
                .getListenerContainer("foo");
        ChannelAwareMessageListener listener = (ChannelAwareMessageListener) listenerContainer.getMessageListener();
        Channel channel = mock(Channel.class);
        listener.onMessage(MessageBuilder.withBody(sampleMessage.getBytes())
                .andProperties(MessagePropertiesBuilder.newInstance()
                        .setContentType("application/json")
                        .setDeliveryTag(42L)
                        .build())
                .build(),
                channel);
        verify(channel).basicAck(42L, false);
    }
}
and
{ORCH_KEY=Foo [inputMap={}, outputMap={activityId=10001002, activityStatus=SUCCESS}]}
Given your requirement to have everything on board, I don't see how we can make this work with mock(ConnectionFactory.class); we would need to mock much more to get everything working.
Instead, I suggest looking into a real integration test against an existing RabbitMQ broker, or at least an embedded Qpid.
In addition, you might consider using @RabbitListenerTest to spy on your @RabbitListener invocation without interfering with your production code.
More info is in the Reference Manual: https://docs.spring.io/spring-amqp/docs/2.1.4.RELEASE/reference/#test-harness
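For completeness, a rough sketch of that test-harness approach, assuming a real broker and that the listener is given id "foo" (the Listener and ExchangeDTO types are taken from the snippets above; the harness bean is provided when a @Configuration class is annotated with @RabbitListenerTest):

@RunWith(SpringRunner.class)
@SpringBootTest
public class ListenerHarnessSketchTests {

    @Autowired
    private RabbitListenerTestHarness harness;

    @Autowired
    private RabbitTemplate template;

    @Test
    public void listenerIsInvoked() throws Exception {
        // the harness wraps the listener bean in a Mockito spy, keyed by the @RabbitListener id
        Listener spy = this.harness.getSpy("foo");
        this.template.convertAndSend("QUEUE", new ExchangeDTO());
        verify(spy, timeout(5000)).listen(any(ExchangeDTO.class), any(Channel.class));
    }
}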

java.lang.NoSuchMethodError using JDBI

public class MyApplication extends Application<MyConfiguration> {

    final static Logger LOG = Logger.getLogger(MyApplication.class);

    public static void main(final String[] args) throws Exception {
        new MyApplication().run(args);
    }

    @Override
    public String getName() {
        return "PFed";
    }

    @Override
    public void initialize(final Bootstrap<MyConfiguration> bootstrap) {
        // TODO: application initialization
        bootstrap.addBundle(new DBIExceptionsBundle());
    }

    @Override
    public void run(final MyConfiguration configuration,
            final Environment environment) {
        // TODO: implement application
        final DBIFactory factory = new DBIFactory();
        final DBI jdbi = factory.build(environment, configuration.getDataSourceFactory(), "postgresql");
        UserDAO userDAO = jdbi.onDemand(UserDAO.class);
        userDAO.findNameById(1);
        UserResource userResource = new UserResource(new UserService(userDAO));
        environment.jersey().register(userResource);
    }
}
I get the following error at findNameById:
java.lang.NoSuchMethodError: java.lang.Object.findNameById(I)Ljava/lang/String;
at org.skife.jdbi.v2.sqlobject.CloseInternalDoNotUseThisClass$$EnhancerByCGLIB$$a0e63670.CGLIB$findNameById$5()
public interface UserDAO {

    @SqlQuery("select userId from user where id = :email")
    User isEmailAndUsernameUnique(@Bind("email") String email);

    @SqlQuery("select name from something where id = :id")
    String findNameById(@Bind("id") int id);
}

Default to content_type application/json with overridden isFatal from DefaultExceptionStrategy

I'd like to not require my clients to provide content_type application/json, but just default to it. I got this working.
I also tried to combine it with another example to implement a custom isFatal(Throwable t) from ConditionalRejectingErrorHandler. I can get my custom error handler to fire, but then it seems to require the content_type property again. I can't figure out how to get both to work at the same time.
Any ideas?
The code below appeared to work without requiring content_type.
EDIT: The code below does not work as I thought; an old message in the queue that already had the content_type application/json property must have been pulled in.
@EnableRabbit
@Configuration
public class ExampleRabbitConfigurer implements RabbitListenerConfigurer {

    @Value("${spring.rabbitmq.host:'localhost'}")
    private String host;

    @Value("${spring.rabbitmq.port:5672}")
    private int port;

    @Value("${spring.rabbitmq.username}")
    private String username;

    @Value("${spring.rabbitmq.password}")
    private String password;

    @Autowired
    private MappingJackson2MessageConverter mappingJackson2MessageConverter;

    @Autowired
    private DefaultMessageHandlerMethodFactory messageHandlerMethodFactory;

    @Bean
    public MappingJackson2MessageConverter mappingJackson2MessageConverter() {
        return new MappingJackson2MessageConverter();
    }

    @Bean
    public DefaultMessageHandlerMethodFactory messageHandlerMethodFactory() {
        DefaultMessageHandlerMethodFactory factory = new DefaultMessageHandlerMethodFactory();
        factory.setMessageConverter(mappingJackson2MessageConverter);
        return factory;
    }

    @Override
    public void configureRabbitListeners(final RabbitListenerEndpointRegistrar registrar) {
        registrar.setMessageHandlerMethodFactory(messageHandlerMethodFactory);
    }
}
The code below works to override isFatal() in ConditionalRejectingErrorHandler. SimpleRabbitListenerContainerFactory.setMessageConverter() seems like it should serve the same purpose as DefaultMessageHandlerMethodFactory.setMessageConverter(), but obviously that is not the case.
@EnableRabbit
@Configuration
public class ExampleRabbitConfigurer {

    @Value("${spring.rabbitmq.host:'localhost'}")
    private String host;

    @Value("${spring.rabbitmq.port:5672}")
    private int port;

    @Value("${spring.rabbitmq.username}")
    private String username;

    @Value("${spring.rabbitmq.password}")
    private String password;

    @Autowired
    ConnectionFactory connectionFactory;

    @Autowired
    Jackson2JsonMessageConverter jackson2JsonConverter;

    @Autowired
    ErrorHandler amqpErrorHandlingExceptionStrategy;

    @Bean
    public Jackson2JsonMessageConverter jackson2JsonConverter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    public ErrorHandler amqpErrorHandlingExceptionStrategy() {
        return new ConditionalRejectingErrorHandler(new AmqpErrorHandlingExceptionStrategy());
    }

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setMessageConverter(jackson2JsonConverter);
        factory.setErrorHandler(amqpErrorHandlingExceptionStrategy);
        return factory;
    }

    public static class AmqpErrorHandlingExceptionStrategy extends ConditionalRejectingErrorHandler.DefaultExceptionStrategy {

        private final Logger LOGGER = org.slf4j.LoggerFactory.getLogger(getClass());

        @Override
        public boolean isFatal(Throwable t) {
            if (t instanceof ListenerExecutionFailedException) {
                ListenerExecutionFailedException lefe = (ListenerExecutionFailedException) t;
                LOGGER.error("Failed to process inbound message from queue "
                        + lefe.getFailedMessage().getMessageProperties().getConsumerQueue()
                        + "; failed message: " + lefe.getFailedMessage(), t);
            }
            return super.isFatal(t);
        }
    }
}
Use an "after receive" MessagePostProcessor to add the contentType header to the inbound message.
Starting with version 2.0, you can add the MPP to the container factory.
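With 2.0 or later, for example, something along these lines should work (a sketch mirroring the container factory beans shown earlier):

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // runs before the message converter, so we can default a missing content type
    factory.setAfterReceivePostProcessors(m -> {
        if (m.getMessageProperties().getContentType() == null) {
            m.getMessageProperties().setContentType("application/json");
        }
        return m;
    });
    return factory;
}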
For earlier versions you can reconfigure...
@SpringBootApplication
public class So47424449Application {

    public static void main(String[] args) {
        SpringApplication.run(So47424449Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(RabbitListenerEndpointRegistry registry, RabbitTemplate template) {
        return args -> {
            SimpleMessageListenerContainer container =
                    (SimpleMessageListenerContainer) registry.getListenerContainer("myListener");
            container.setAfterReceivePostProcessors(m -> {
                m.getMessageProperties().setContentType("application/json");
                return m;
            });
            container.start();
            // send a message with no content type
            template.setMessageConverter(new SimpleMessageConverter());
            template.convertAndSend("foo", "{\"bar\":\"baz\"}", m -> {
                m.getMessageProperties().setContentType(null);
                return m;
            });
            template.convertAndSend("foo", "{\"bar\":\"qux\"}", m -> {
                m.getMessageProperties().setContentType(null);
                return m;
            });
        };
    }

    @Bean
    public Jackson2JsonMessageConverter converter() {
        return new Jackson2JsonMessageConverter();
    }

    @RabbitListener(id = "myListener", queues = "foo", autoStartup = "false")
    public void listen(Foo foo) {
        System.out.println(foo);
        if (foo.bar.equals("qux")) {
            throw new MessageConversionException("test");
        }
    }

    public static class Foo {

        public String bar;

        public String getBar() {
            return this.bar;
        }

        public void setBar(String bar) {
            this.bar = bar;
        }

        @Override
        public String toString() {
            return "Foo [bar=" + this.bar + "]";
        }
    }
}
As you can see, since it modifies the source message, the modified header is available in the error handler...
2017-11-22 09:39:26.615 WARN 97368 --- [cTaskExecutor-1] ingErrorHandler$DefaultExceptionStrategy : Fatal message conversion error; message rejected; it will be dropped or routed to a dead letter exchange, if so configured: (Body:'{"bar":"qux"}' MessageProperties [headers={}, contentType=application/json, contentEncoding=UTF-8, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, redelivered=false, receivedExchange=, receivedRoutingKey=foo, deliveryTag=2, consumerTag=amq.ctag-re1kcxKV14L_nl186stM0w, consumerQueue=foo]), contentType=application/json, contentEncoding=UTF-8, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, redelivered=false, receivedExchange=, receivedRoutingKey=foo, deliveryTag=2, consumerTag=amq.ctag-re1kcxKV14L_nl186stM0w, consumerQueue=foo])

Google Dataflow: java.lang.IllegalArgumentException: Cannot setCoder(null)

I am trying to build a custom sink for unzipping files.
I have this simple code:
public static class ZipIO {

    public static class Sink extends com.google.cloud.dataflow.sdk.io.Sink<String> {

        private static final long serialVersionUID = -7414200726778377175L;
        private final String unzipTarget;

        public Sink withDestinationPath(String s) {
            if (s != "") {
                return new Sink(s);
            }
            else {
                throw new IllegalArgumentException("must assign destination path");
            }
        }

        protected Sink(String path) {
            this.unzipTarget = path;
        }

        @Override
        public void validate(PipelineOptions po) {
            if (unzipTarget == null) {
                throw new RuntimeException();
            }
        }

        @Override
        public ZipFileWriteOperation createWriteOperation(PipelineOptions po) {
            return new ZipFileWriteOperation(this);
        }
    }

    private static class ZipFileWriteOperation extends WriteOperation<String, UnzipResult> {

        private static final long serialVersionUID = 7976541367499831605L;
        private final ZipIO.Sink sink;

        public ZipFileWriteOperation(ZipIO.Sink sink) {
            this.sink = sink;
        }

        @Override
        public void initialize(PipelineOptions po) throws Exception {
        }

        @Override
        public void finalize(Iterable<UnzipResult> writerResults, PipelineOptions po) throws Exception {
            long totalFiles = 0;
            for (UnzipResult r : writerResults) {
                totalFiles += r.filesUnziped;
            }
            LOG.info("Unzipped {} Files", totalFiles);
        }

        @Override
        public ZipIO.Sink getSink() {
            return sink;
        }

        @Override
        public ZipWriter createWriter(PipelineOptions po) throws Exception {
            return new ZipWriter(this);
        }
    }

    private static class ZipWriter extends Writer<String, UnzipResult> {

        private final ZipFileWriteOperation writeOp;
        public long totalUnzipped = 0;

        ZipWriter(ZipFileWriteOperation writeOp) {
            this.writeOp = writeOp;
        }

        @Override
        public void open(String uID) throws Exception {
        }

        @Override
        public void write(String p) {
            System.out.println(p);
        }

        @Override
        public UnzipResult close() throws Exception {
            return new UnzipResult(this.totalUnzipped);
        }

        @Override
        public ZipFileWriteOperation getWriteOperation() {
            return writeOp;
        }
    }

    private static class UnzipResult implements Serializable {

        private static final long serialVersionUID = -8504626439217544799L;
        public long filesUnziped = 0;

        public UnzipResult(long filesUnziped) {
            this.filesUnziped = filesUnziped;
        }
    }
}
The processing fails with error:
Exception in thread "main" java.lang.IllegalArgumentException: Cannot setCoder(null)
at com.google.cloud.dataflow.sdk.values.TypedPValue.setCoder(TypedPValue.java:67)
at com.google.cloud.dataflow.sdk.values.PCollection.setCoder(PCollection.java:150)
at com.google.cloud.dataflow.sdk.io.Write$Bound.createWrite(Write.java:380)
at com.google.cloud.dataflow.sdk.io.Write$Bound.apply(Write.java:112)
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner$BatchWrite.apply(DataflowPipelineRunner.java:2118)
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner$BatchWrite.apply(DataflowPipelineRunner.java:2099)
at com.google.cloud.dataflow.sdk.runners.PipelineRunner.apply(PipelineRunner.java:75)
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner.apply(DataflowPipelineRunner.java:465)
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.apply(BlockingDataflowPipelineRunner.java:169)
at com.google.cloud.dataflow.sdk.Pipeline.applyInternal(Pipeline.java:368)
at com.google.cloud.dataflow.sdk.Pipeline.applyTransform(Pipeline.java:275)
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner.apply(DataflowPipelineRunner.java:463)
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.apply(BlockingDataflowPipelineRunner.java:169)
at com.google.cloud.dataflow.sdk.Pipeline.applyInternal(Pipeline.java:368)
at com.google.cloud.dataflow.sdk.Pipeline.applyTransform(Pipeline.java:291)
at com.google.cloud.dataflow.sdk.values.PCollection.apply(PCollection.java:174)
at com.mcd.de.tlogdataflow.StarterPipeline.main(StarterPipeline.java:93)
Any help is appreciated.
Thanks & BR
Philipp
This crash is caused by a bug in the Dataflow Java SDK (specifically, this line) which was also present in the Apache Beam (incubating) Java SDK.
The method Sink.WriteOperation#getWriterResultCoder() must always be overridden, but we failed to mark it abstract. It is fixed in Beam, but unchanged in the Dataflow SDK. You should override this method and return an appropriate coder.
You have some options to come up with the coder:
- Write your own small coder class, wrapping one of VarLongCoder or BigEndianLongCoder.
- Just use Long instead of the UnzipResult structure, so you can use those coders as-is.
- Less advisable due to the excess size, use SerializableCoder.of(UnzipResult.class).
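For instance, the last option amounts to a one-method override on ZipFileWriteOperation (a sketch using the SDK types from the question):

@Override
public Coder<UnzipResult> getWriterResultCoder() {
    // SerializableCoder works because UnzipResult implements Serializable,
    // at the cost of a larger encoded size than a hand-rolled long coder
    return SerializableCoder.of(UnzipResult.class);
}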

Handling Connections in Spring-Boot-RabbitMQ

Hi, I am developing with Spring Boot and RabbitMQ (Spring AMQP version 1.6) and have a few queries. I have read the docs and browsed other Stack Overflow questions, but a few things are still not clear to me.
It would be great if someone could answer my questions.
1) Currently I have 4 producers and 4 consumers. A producer may produce millions of messages or events, so using a single connection for both producer and consumer would block the consumer from consuming the messages. So what I thought of is creating separate connections for producer and consumer, so that neither blocks the other and we get some performance improvement. Am I correct with this approach?
2) I am using CachingConnectionFactory to create the connection used by SimpleRabbitListenerContainerFactory. When making a call to this factory, will it return a new connection for us? So if we use CachingConnectionFactory, do we really need to write separate connection factories for both producer and consumer? Please find my code below.
1) Configuration class
@Configuration
@EnableRabbit
public class RabbitMqConfiguration {

    @Autowired
    private CachingConnectionFactory cachingConnectionFactory;

    @Value("${concurrent.consumers}")
    public int concurrent_consumers;

    @Value("${max.concurrent.consumers}")
    public int max_concurrent_consumers;

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(cachingConnectionFactory);
        factory.setConcurrentConsumers(concurrent_consumers);
        factory.setMaxConcurrentConsumers(max_concurrent_consumers);
        factory.setMessageConverter(jsonMessageConverter());
        return factory;
    }

    @Bean
    public MessageConverter jsonMessageConverter() {
        final Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
        return converter;
    }
}
2) Producer class
@Configuration
public class TaskProducerConfiguration extends RabbitMqConfiguration {

    @Value("${queue1}")
    public String queue1;

    @Value("${queue2}")
    public String queue2;

    @Value("${queue3}")
    public String queue3;

    @Value("${queue4}")
    public String queue4;

    @Value("${spring.rabbit.exchange}")
    public String exchange;

    @Autowired
    private CachingConnectionFactory cachingConnectionFactory;

    @Primary
    @Bean
    public RabbitTemplate getQueue1Template() {
        RabbitTemplate template = new RabbitTemplate(cachingConnectionFactory);
        template.setRoutingKey(this.queue1);
        template.setMessageConverter(jsonMessageConverter());
        return template;
    }

    @Bean
    public RabbitTemplate getQueue2Template() {
        RabbitTemplate template = new RabbitTemplate(cachingConnectionFactory);
        template.setRoutingKey(this.queue2);
        template.setMessageConverter(jsonMessageConverter());
        return template;
    }

    @Bean
    public RabbitTemplate getQueue3Template() {
        RabbitTemplate template = new RabbitTemplate(cachingConnectionFactory);
        template.setRoutingKey(this.queue3);
        template.setMessageConverter(jsonMessageConverter());
        return template;
    }

    @Bean
    public RabbitTemplate getQueue4Template() {
        RabbitTemplate template = new RabbitTemplate(cachingConnectionFactory);
        template.setRoutingKey(this.queue4);
        template.setMessageConverter(jsonMessageConverter());
        return template;
    }

    @Bean(name = "queue1Bean")
    public Queue queue1() {
        return new Queue(this.queue1);
    }

    @Bean(name = "queue2Bean")
    public Queue queue2() {
        return new Queue(this.queue2);
    }

    @Bean(name = "queue3Bean")
    public Queue queue3() {
        return new Queue(this.queue3);
    }

    @Bean(name = "queue4Bean")
    public Queue queue4() {
        return new Queue(this.queue4);
    }

    @Bean
    TopicExchange exchange() {
        return new TopicExchange(exchange);
    }

    @Bean
    List<Binding> bindings(Queue queue1Bean, Queue queue2Bean, Queue queue3Bean, Queue queue4Bean, TopicExchange exchange) {
        List<Binding> bindingList = new ArrayList<Binding>();
        bindingList.add(BindingBuilder.bind(queue1Bean).to(exchange).with(this.queue1));
        bindingList.add(BindingBuilder.bind(queue2Bean).to(exchange).with(this.queue2));
        bindingList.add(BindingBuilder.bind(queue3Bean).to(exchange).with(this.queue3));
        bindingList.add(BindingBuilder.bind(queue4Bean).to(exchange).with(this.queue4));
        return bindingList;
    }
}
3) Receiver class (only one receiver class is shown; the other three are the same except for the queue name and routing key)
@Component
public class Queue1Receiver {

    @Autowired
    private TaskProducer taskProducer;

    @Value("${queue1}")
    public String queue1;

    @RabbitListener(id = "queue1", containerFactory = "rabbitListenerContainerFactory", queues = "#{queue1Bean}")
    public void handleQueue1Message(TaskMessage taskMessage, @Header(AmqpHeaders.CONSUMER_QUEUE) String queue) {
        System.out.println("Queue::" + queue);
        System.out.println("CustomerId: " + taskMessage.getCustomerID());
        if (taskMessage.isHasQueue2()) {
            taskProducer.sendQueue2Message(taskMessage);
        }
        if (taskMessage.isHasQueue3()) {
            taskProducer.sendQueue3Message(taskMessage);
        }
        if (taskMessage.isHasQueue4()) {
            taskProducer.sendQueue4Message(taskMessage);
        }
    }

    @Bean
    public Queue queue1Bean() {
        // This queue has the following properties:
        // name: my_durable, durable: true, exclusive: false, auto_delete: false
        return new Queue(queue1, true, false, false);
    }
}
Your help would be appreciated.
Note: down voters, please leave a comment when down voting so that I can avoid the mistake in future.
Edited based on comments by Gary Russell:
1) RabbitMqConfiguration
@Configuration
@EnableRabbit
public class RabbitMqConfiguration {

    @Value("${concurrent.consumers}")
    public int concurrent_consumers;

    @Value("${max.concurrent.consumers}")
    public int max_concurrent_consumers;

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory());
        factory.setConcurrentConsumers(concurrent_consumers);
        factory.setMaxConcurrentConsumers(max_concurrent_consumers);
        factory.setMessageConverter(jsonMessageConverter());
        return factory;
    }

    @Bean
    public CachingConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        connectionFactory.setUsername("guest");
        connectionFactory.setPassword("guest");
        connectionFactory.setCacheMode(CacheMode.CONNECTION);
        return connectionFactory;
    }

    @Bean
    public MessageConverter jsonMessageConverter() {
        final Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
        return converter;
    }
}
"using a single connection for both producer & consumer will block consumer to consume the messages"
What leads you to believe that? A single connection will generally be fine. If you really want separate connections, change the connection factory cacheMode to CONNECTION.
You can also use connection pooling in this case; keeping the pool size appropriate may solve the problem. As suggested in the answer above, both producer and consumer use the same connection, so pooling might help instead.
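If you nevertheless want fully separate connections without switching to CacheMode.CONNECTION, a minimal sketch (bean names are illustrative) is two CachingConnectionFactory beans, one wired into the SimpleRabbitListenerContainerFactory and the other into the RabbitTemplate beans:

@Bean
public CachingConnectionFactory consumerConnectionFactory() {
    // used by the @RabbitListener containers
    return new CachingConnectionFactory("localhost");
}

@Bean
public CachingConnectionFactory producerConnectionFactory() {
    // used by the RabbitTemplate beans for publishing
    return new CachingConnectionFactory("localhost");
}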
