We have a requirement in our project to listen to queues hosted on different machines; for example, we have 4 virtual hosts. I create a new instance of SimpleMessageListenerContainer for each host, but I use one instance of MessageListener (it implements ChannelAwareMessageListener so that I can ack manually). The MessageListener is a bean managed by Spring. I maintain a map of host to container instance when I create the containers. On receiving a message, I check whether I received the desired message from that host, get the container instance from the map (injected with @Resource), and stop listening to that host. I also manually ACK and store the message in the Cassandra database.
Right now, some of the messages don't get persisted in the database and are lost. I think it might be a race condition, or it may be because I am using only one instance of MessageListener, but I had to do that so that I could get the map (@Resource). Sorry if I am not making any sense; I am using AMQP for the first time and trying to understand it. Any suggestions would be great. Thank you!
Why do you need manual ack? It's generally better to let the container take care of acks (AUTO). It will ack the message on success and nack it if the listener throws an exception.
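A minimal sketch of that setup, assuming a CachingConnectionFactory and a hypothetical queue name; with AUTO mode the container acks on success and nacks (requeues by default) if the listener throws:

    import org.springframework.amqp.core.AcknowledgeMode;
    import org.springframework.amqp.core.MessageListener;
    import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
    import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

    public class AutoAckExample {

        public static void main(String[] args) {
            // hypothetical broker host and queue name
            CachingConnectionFactory cf = new CachingConnectionFactory("host1");

            SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(cf);
            container.setQueueNames("some.queue");
            container.setAcknowledgeMode(AcknowledgeMode.AUTO); // container acks/nacks for you
            container.setMessageListener((MessageListener) message -> {
                // store the message in Cassandra here; throwing an exception
                // makes the container nack the message instead of acking it
                System.out.println(new String(message.getBody()));
            });
            container.start();
        }
    }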
Can anyone tell me whether it is possible to override individual box.cfg parameters on a running instance, for example to add a replica? For several days I have been trying to deploy three replicas on three hosts via the Docker service stack.
When I bring the instances up by hand on each server, everything works; when they are deployed via the stack, they do not see each other and crash. I've tried all sorts of approaches: I put an endpoint on the target nodes that, when queried, returns the IP of the machine on which the container is starting; if that IP matches one of those listed in SEED, the container's internal IP is substituted instead (otherwise the instance cannot connect to itself).
In theory this all works as described, but I suspect it makes little difference; I believe the problem is that the instance does not bind its address until box.cfg is called. Unfortunately, I cannot get inside the container because it never comes up. My idea is to start all three nodes with minimal settings and, once they are up, listen on the subnet; as soon as a node finds another one, it would add it to replication and override box.cfg. Please correct me, anyone who has experience with this.
Some of the box.cfg parameters are dynamic, for example box.cfg{listen=}. You can set this one from Lua code as you wish. In your case, if the container gets its IP address later, you only need to specify the port in listen; that way Tarantool will listen on all available interfaces.
The replication_source is a bit trickier. You can set it dynamically, but your first (initializing) call to box.cfg must already include replication_source. This is because any instance that is initialized without this parameter creates its own replica set, and that makes it impossible to join it to another replica set later.
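A rough sketch of such a first call (in Lua, with hypothetical credentials and peer URIs; in recent Tarantool versions the parameter is named replication):

    -- first (initializing) call: the replication peers must already be listed,
    -- and listen gives only a port so the instance binds on every interface
    box.cfg{
        listen = 3301,
        replication_source = {                   -- 'replication' in newer versions
            'replicator:pass@tarantool1:3301',   -- hypothetical peer URIs
            'replicator:pass@tarantool2:3301',
            'replicator:pass@tarantool3:3301',
        },
    }

    -- dynamic parameters such as listen can be changed later at runtime
    box.cfg{ listen = 3302 }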
You can read more about Tarantool replication architecture here: https://www.tarantool.io/en/doc/latest/book/replication/repl_architecture/
On one computer, 2 Ubuntu virtual machines are installed. One of them hosts another virtual machine with the Fiware Orion Context Broker. Both VMs have ROS.
I am trying to make a simple publisher-subscriber ROS program that sends a message from one VM to the other through FIROS (FIROS is installed and configured). The problem is that the message from the publishing VM is being sent to FIROS (or rather, the topic is shared through FIROS), but somehow it is not received by the subscribing VM, and therefore I cannot see the message that was sent.
We are using the local network, so there shouldn't be an issue with port forwarding. Moreover, rostopic list shows that the fiwaretopics are present on both running VMs.
Can it be, that the issue lies in using Virtual Machines rather than 2 separate PCs?
Thank you in advance.
I solved this.
There were 2 problems. First, the IP address of the server in config.json must be that of the machine where FIROS is running, not of the machine I wanted to send to.
Second, FIROS has to be launched last, after all the other nodes are already running, so that it can subscribe to those topics and send the data. I was running FIROS first, and it failed to subscribe because there was nothing to subscribe to at that particular moment.
Is there any sort of way to broadcast an incoming request to all containers in a swarm?
EDIT: More info
I have a distributed application with many docker containers. The client can send requests to the swarm and have it respond. However, in some cases, the client needs to change a state on all server instances and therefore I would either need to be able to broadcast a message or have all the Docker containers talk to each other similar to MPI, which I'm trying to avoid.
There is no built-in way to turn a unicast packet into a multicast packet, nor any common third-party way of doing it (that I've seen or heard of).
I'm not sure what "change a state on all server instances" means. Are we talking about the running state on all containers in a single service?
Or the actual underlying OS? All containers on all services? etc.
Without knowing more about your use case, I'd say it's likely better to design something where the request is received by one Swarm service, and then it's stored in a queue system where a backend worker would pick it up and "change the state on all server instances" for you.
It depends on your specific use case. One way to do it is to run docker service update --force, which will cause all of the service's containers to restart. If your containers fetch the changed information at startup, this would have the required effect.
I have a project in which I use Spring AMQP. I have two SimpleMessageListenerContainers: one with a queue declared by the server (amq-gen) and one with a queue with a given name.
I use a SimpleRoutingConnectionFactory with two CachingConnectionFactory. For error detection I have a ConnectionListener, ListenerContainerConsumerFailedEvent, and a ConditionalExceptionLogger.
The idea is to switch between the two RabbitMQ servers once an error is detected in the AMQP connection, but when the connection fails, several errors are thrown in the ConditionalExceptionLogger and several ListenerContainerConsumerFailedEvents are published, which complicates switching between the servers automatically.
What would be the best way to do that switching automatically after a given number of retries?
Thank you
one with a queue declared by the server (amq-gen)
You can't do that; if you use broker-declared queue names, the second broker doesn't know about it, and the container will try to declare it, which is not allowed.
Instead use a Spring AMQP AnonymousQueue, which has the same characteristics as a broker declared queue (auto delete, not durable) but has a name generated by the framework so it can be declared when you fail over.
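For example, a minimal sketch with hypothetical bean names, where the container refers to the framework-named queue instead of a broker-generated one:

    import org.springframework.amqp.core.AnonymousQueue;
    import org.springframework.amqp.core.MessageListener;
    import org.springframework.amqp.core.Queue;
    import org.springframework.amqp.rabbit.connection.ConnectionFactory;
    import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class AnonQueueConfig {

        @Bean
        public Queue anonQueue() {
            // auto-delete and non-durable like a broker-declared queue, but the
            // name is generated by the framework, so it can be declared again
            // after failing over to the other broker
            return new AnonymousQueue();
        }

        @Bean
        public SimpleMessageListenerContainer container(ConnectionFactory routingConnectionFactory,
                                                        Queue anonQueue) {
            SimpleMessageListenerContainer container =
                    new SimpleMessageListenerContainer(routingConnectionFactory);
            container.setQueues(anonQueue);
            container.setMessageListener((MessageListener) m ->
                    System.out.println(new String(m.getBody())));
            return container;
        }
    }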
I created a queue connection factory in WebSphere using the WebSphere MQ messaging provider.
I use JNDI to get this resource and try to create a queue connection on the same host.
The first time everything works, but the second time it throws a JMSException:
javax.jms.JMSException: Failed to create queue connection
at com.ibm.ejs.jms.JMSCMUtils.mapToJMSException(JMSCMUtils.java:141)
at com.ibm.ejs.jms.JMSQueueConnectionFactoryHandle.createQueueConnection(JMSQueueConnectionFactoryHandle.java:90)
There is SO little information in the post it is hard to do anything but guess. First thing I'd look for is if the application or queue are set for exclusive use. Of course this assumes that you are opening the queue for input and that detail isn't mentioned in the question. Having the linked exception which would provide the actual WMQ reason and completion codes could tell you for sure but these also are not provided in the question.
Many shops consider it a Sev-1 defect if JMS code does not print linked exceptions. This is not a WMQ-specific thing but rather a case of printing out all the diagnostic information available, regardless of the transport provider. If you want more info on this, please see the WMQ Infocenter topic on JMS exception handling.
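For instance, a small sketch of what that could look like in plain JMS (the helper class and method names here are hypothetical):

    import javax.jms.JMSException;

    public class JmsErrorLogging {

        // call this from your catch blocks so the underlying WMQ reason and
        // completion codes show up in the logs
        public static void logJmsException(JMSException e) {
            e.printStackTrace();
            Exception linked = e.getLinkedException();
            if (linked != null) {
                // this usually carries the provider-specific detail (e.g. the MQ reason code)
                linked.printStackTrace();
            }
        }
    }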
There is a Max Connections setting in the WAS console. If more connections than Max Connections are requested and the resources (QueueConnection, QueueSender and QueueSession) are not released, then the next attempt to get a connection from the connection pool will fail. Only restarting the server releases those connections. This can be resolved by closing all the resources (QueueConnection, QueueSender and QueueSession) properly in the code.
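A minimal sketch of that cleanup, assuming the JMS 1.1 queue-domain API and hypothetical JNDI names, with everything closed in a finally block so the pooled connection is returned to WAS:

    import javax.jms.Queue;
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;
    import javax.jms.QueueSender;
    import javax.jms.QueueSession;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class PooledSendExample {

        public void send(String text) throws Exception {
            InitialContext ctx = new InitialContext();
            // hypothetical JNDI names for the factory and the queue
            QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("jms/myQCF");
            Queue queue = (Queue) ctx.lookup("jms/myQueue");

            QueueConnection connection = null;
            QueueSession session = null;
            QueueSender sender = null;
            try {
                connection = qcf.createQueueConnection();
                session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                sender = session.createSender(queue);
                TextMessage message = session.createTextMessage(text);
                sender.send(message);
            } finally {
                // close everything so the connection goes back to the WAS pool
                if (sender != null) try { sender.close(); } catch (Exception ignore) { }
                if (session != null) try { session.close(); } catch (Exception ignore) { }
                if (connection != null) try { connection.close(); } catch (Exception ignore) { }
            }
        }
    }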