I create a queue connection factory in WebSphere using the WebSphere MQ messaging provider.
I use JNDI to look up this resource and try to create a queue connection on the same host.
The first time everything works, but the second time it throws a JMSException:
javax.jms.JMSException: Failed to create queue connection
at com.ibm.ejs.jms.JMSCMUtils.mapToJMSException(JMSCMUtils.java:141)
at com.ibm.ejs.jms.JMSQueueConnectionFactoryHandle.createQueueConnection(JMSQueueConnectionFactoryHandle.java:90)
There is so little information in the post that it is hard to do anything but guess. The first thing I'd look for is whether the application or queue is set for exclusive use. Of course this assumes that you are opening the queue for input, and that detail isn't mentioned in the question. Having the linked exception, which would provide the actual WMQ reason and completion codes, could tell you for sure, but these also are not provided in the question.
Many shops consider it a Sev-1 defect if JMS code does not print linked exceptions. This is not a WMQ-specific thing but rather a case of printing out all the diagnostic information available, regardless of the transport provider. In case you want more info on this, please see the WMQ Infocenter JMS exception handling topic.
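For what it's worth, a minimal sketch of that pattern in JMS code might look like this (the method name and the use of System.err are illustrative; the key point is walking getLinkedException() so the WMQ reason and completion codes end up in your logs):

    import javax.jms.JMSException;
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;

    public class JmsDiag {
        // Create a connection, printing the linked exception (which carries the
        // WMQ reason/completion codes) before rethrowing.
        static QueueConnection connectWithDiagnostics(QueueConnectionFactory qcf) throws JMSException {
            try {
                return qcf.createQueueConnection();
            } catch (JMSException jmse) {
                System.err.println("JMS exception: " + jmse.getMessage());
                Exception linked = jmse.getLinkedException();
                if (linked != null) {
                    System.err.println("Linked exception: " + linked);
                }
                throw jmse;
            }
        }
    }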
The Max Connections setting is in the WAS console. If the application asks for more than Max Connections and does not release its resources (QueueConnection, QueueSender and QueueSession), then the next attempt will fail to get a connection from the connection pool, and only restarting the server will release the connections. This can be avoided by properly closing all of the resources (QueueConnection, QueueSender and QueueSession) in your code.
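A sketch of that cleanup pattern, assuming the classic JMS 1.1 queue API (the class name, JNDI lookup and message content are placeholders):

    import javax.jms.JMSException;
    import javax.jms.Queue;
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;
    import javax.jms.QueueSender;
    import javax.jms.QueueSession;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class QueueSendExample {
        // Send one message, always returning the connection/session/sender to the pool.
        static void send(QueueConnectionFactory qcf, Queue queue, String text) throws JMSException {
            QueueConnection conn = null;
            QueueSession session = null;
            QueueSender sender = null;
            try {
                conn = qcf.createQueueConnection();
                session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                sender = session.createSender(queue);
                TextMessage msg = session.createTextMessage(text);
                sender.send(msg);
            } finally {
                // Close in reverse order; each close is attempted even if an earlier one fails.
                if (sender != null)  try { sender.close(); }  catch (JMSException ignore) { }
                if (session != null) try { session.close(); } catch (JMSException ignore) { }
                if (conn != null)    try { conn.close(); }    catch (JMSException ignore) { }
            }
        }
    }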
Related
I have a project in which I use Spring AMQP. I have two SimpleMessageListenerContainer instances, one with a queue declared by the server (amq-gen) and one with a queue with a given name.
I use a SimpleRoutingConnectionFactory with two CachingConnectionFactory instances. For error detection I have a ConnectionListener, a ListenerContainerConsumerFailedEvent, and a ConditionalExceptionLogger.
The idea is to switch between the two Rabbit servers once an error is detected in the AMQP connection, but when the connection fails, several errors are logged by the ConditionalExceptionLogger and several ListenerContainerConsumerFailedEvent events are published, which complicates switching between servers automatically.
What would be the best way to do that switching automatically after a given number of retries?
Thank you
one with a queue declared by the server (amq-gen)
You can't do that; if you use broker-declared queue names, the second broker doesn't know about it, and the container will try to declare it, which is not allowed.
Instead use a Spring AMQP AnonymousQueue, which has the same characteristics as a broker declared queue (auto delete, not durable) but has a name generated by the framework so it can be declared when you fail over.
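A minimal sketch of that with Spring AMQP Java configuration (the bean and class names here are illustrative, not from the original project):

    import org.springframework.amqp.core.AnonymousQueue;
    import org.springframework.amqp.core.Message;
    import org.springframework.amqp.core.MessageListener;
    import org.springframework.amqp.rabbit.connection.ConnectionFactory;
    import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class ListenerConfig {

        @Bean
        public AnonymousQueue replyQueue() {
            // Framework-generated name, auto-delete, non-durable: it can be
            // (re)declared on whichever broker the container connects to.
            return new AnonymousQueue();
        }

        @Bean
        public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory,
                                                        AnonymousQueue replyQueue) {
            SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
            container.setQueues(replyQueue);
            container.setMessageListener(new MessageListener() {
                @Override
                public void onMessage(Message message) {
                    System.out.println("Received: " + new String(message.getBody()));
                }
            });
            return container;
        }
    }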
I have a server application which runs on a Linux machine. I can connect to this application from Windows/Linux machines and can send/receive data. After a few hours, something occurs and I get the following error on the client side.
On Windows: An existing connection was forcibly closed by the remote host
On Linux: Connection timed out
I have searched the web and found some posts which suggest increasing/decreasing the OS's keep-alive time. However, that didn't work for me.
Is there a solution to this problem, or should I simply try to reconnect to the server when the connection is forcibly closed?
EDIT: I have tracked down the situation. I sent data to the remote node and then sent more data after waiting 5 hours. The sending side sent the first data, but when it sent the second data there was no response. The sender's TCP/IP stack retried this 5 times, increasing the time between retries, and finally reset the connection. I can't be sure why this is happening (maybe because of a firewall or NAT - see Section 2.4), but I applied two different approaches to solve this problem:
Use TCP/IP keep-alive with setsockopt (Section 4.2)
Implement an application-level keep-alive. This is more reliable, since the first approach is OS-dependent. (Both approaches are sketched below.)
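For illustration only, here is roughly what both approaches can look like from Java (the original poster used setsockopt directly; the host, port, ping payload and interval below are placeholders, and the OS-level keep-alive timings are still configured outside the application):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class KeepAliveClient {
        public static void main(String[] args) throws IOException, InterruptedException {
            Socket socket = new Socket("example.com", 9000); // placeholder host/port

            // Approach 1: TCP-level keep-alive (the equivalent of setsockopt(SO_KEEPALIVE));
            // the idle time, interval and probe count are controlled by the OS.
            socket.setKeepAlive(true);

            // Approach 2: application-level keep-alive - send a small ping on a fixed
            // interval so middleboxes (firewall/NAT) keep seeing traffic on the connection.
            OutputStream out = socket.getOutputStream();
            while (!socket.isClosed()) {
                out.write("PING\n".getBytes(StandardCharsets.UTF_8));
                out.flush();
                Thread.sleep(60_000); // one minute; choose something below the firewall's idle timeout
            }
        }
    }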
It depends on what your application is supposed to do. A little more information, and perhaps the code you use for listening and handling connections, could be of help.
Regardless, a longer keep-alive time should technically prevent the OS from cutting you off, so perhaps it is something else causing the trouble.
Such a thing could be a router malfunction, or traffic causing your keep-alive packets to get lost.
If you aren't already testing it on a LAN (without heavy traffic), I suggest doing so.
It might also be due to how your socket is handled (which I can't determine from your question).
This article might help.
Non blocking socket with timeout
I'm not used to how connections are handled on Linux, but I expect the OS won't cut off a connection unnecessarily.
You can re-establish the connection as a recovery, but you need to take into account that not all disconnects are gentle, and therefore you could end up recovering a connection you actually wanted closed.
Since it is TCP, it will do its best to make a gentle disconnect, but you can send a custom message telling the server or client not to re-establish the connection right before disconnecting. That way you can be absolutely sure, even though it should be unnecessary.
I have hosted my ASP.NET MVC project on an Azure server and I am using Azure SQL. It works fine, but a number of times, while performing an operation (i.e. when calls are fired from a controller), it gives an error like:
"An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full Ip"
and after a few minutes it starts to work fine again.
Can anyone tell me why this error is thrown, or is there any solution for this?
This is most probably a client-side issue (on the ASP.NET app side). It can happen if you open a lot of simultaneous socket connections or do not dispose of connections properly. Please double-check your application and make sure that:
You properly close all database connections (use using() or call Dispose()).
You properly close any other socket connection (if any).
If your code is fine, you can try the Transient Fault Handling Application Block. It won't solve the issue itself but could help your app work around it.
I'm using Indy 10 for my communications, and sometimes when a client disconnects it raises an exception. I was wondering, what is the safest way to disconnect a connection (TIdContext)?
And what should I do in the OnDisconnect event and similar handlers?
Thanks.
Raising an exception is normal behavior. Indy is designed to make heavy use of exceptions, not only for error handling but also for internal notifications and such. OnDisconnect is fired when TIdTCPServer detects that the connection is finished, either because the client disconnected (and TIdTCPServer handled the exception for you) or because an uncaught exception occurred in your OnExecute handler code. Either way, use OnDisconnect to perform any cleanup you need. TIdTCPServer will close the socket for you after the OnDisconnect event handler exits.
I just want to add something I know about how sockets (TCP) work internally:
All a server and a client do is send pieces of data to each other. The server differs from the client only in that it is passive until a client sends a connection request first. If the client wants (or is forced) to break the connection, all it needs to do is stop sending data to the server. To close a connection gracefully, the client may send special data announcing this, like saying "goodbye" on the phone, but this is not strictly required. Imagine a phone call from you (the client) to some service (the server). You start the conversation with "hello" and the service worker responds. If you accidentally press reset on your phone, the call is lost, but the service continues its work and you can call again. Nothing bad happens from that.
All you need to care about is that the client and the server each work stably and correctly on their own. Check for incorrect sending and receiving of data. Have the client reconnect when needed. If an exception is thrown inside the client, it must be handled appropriately; it is a normal situation for the current connection to be lost by such forced events.
Everything else has already been answered by Remy Lebeau.
I'm looking to detect local connection loss. Is there a way to do that, as with the events on the Corelabs components?
Thanks
EDIT:
Sorry, I'm going to try to be more specific:
I'm currently designing a prototype using DataSnap 2009. So I've got a thin client, a stateless server app and a database server.
What I would like to be able to do is detect connection loss (internet connectivity) between the client and the application server and handle it appropriately, i.e. display an informative error message to the user, or detect a server shutdown and silently redirect to another application server.
In a 2-tier setup I used to manage that with the ODAC components; TOraSession has some events to handle these issues.
Normally there is no event fired when a connection is broken, unless a statement is fired against the database. This is because there is no way of knowing a connection loss unless there is some sort of is-alive pinging going on.
Many frameworks check if a connection is still valid by doing a very small query against the server. Could be getting the time from a server. Especially in a connection pooling environment.
You can implement a connection checking function in your application in some of the database events (beforeexecute?). Or make a timer that checks every 10 seconds.
Spawn a thread on the client which periodically sends some RPC 'Ping' or 'Heartbeat' commands to the server.
If this fails, the client knows that something has happened to the connection.
If the server does not hear from the client for some period of time (for example, twice the heartbeat interval), it can conclude that the client has disconnected. However, this requires a stateful server (and your design is stateless, so it would require event processing in a secondary system, which could be fed through a message queue).
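The general pattern is language-agnostic; a rough sketch of the client-side heartbeat thread in Java follows (ping and onConnectionLost are placeholders for whatever lightweight server call and recovery action your framework exposes, not a DataSnap API):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class HeartbeatMonitor {
        private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        private final Runnable ping;             // placeholder: any cheap round-trip call to the server
        private final Runnable onConnectionLost; // placeholder: show a message, switch servers, etc.

        public HeartbeatMonitor(Runnable ping, Runnable onConnectionLost) {
            this.ping = ping;
            this.onConnectionLost = onConnectionLost;
        }

        // Send a ping every 10 seconds; if it throws, treat the connection as lost.
        public void start() {
            scheduler.scheduleAtFixedRate(() -> {
                try {
                    ping.run();
                } catch (RuntimeException e) {
                    onConnectionLost.run();
                    scheduler.shutdown();
                }
            }, 10, 10, TimeUnit.SECONDS);
        }
    }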