I don't understand the purpose of the expiryTimeout field in ActiveMQ's PooledConnectionFactory. The Javadoc says it "allow[s] connections to expire, irrespective of load or idle time. This is useful with failover to force a reconnect from the pool, to reestablish load balancing or use of the master post recovery". Please give me an example of a real scenario in which the expiryTimeout field has an effect.
The expiry timeout option is a bit of a legacy feature of the pool and isn't all that useful in most applications these days. It works as follows: if you configure an expiration time, then a Connection that has been loaned out and is later closed will be completely closed and dropped, provided there are no other active users of it; otherwise it stays alive until all active users have closed it, at which point the underlying Connection object is closed.
This works slightly differently from the idle timeout, which applies to Connection instances sitting unused in the pool; those are closed after some length of time to release resources on the broker side.
These days you are better off using a failover URI in the PooledConnectionFactory, with broker-side support for rebalancing cluster clients enabled. That dynamically redistributes load across the broker cluster, whereas the expiry timeout only closes a Connection instance once everyone actively using it has released it by calling close.
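For concreteness, here is a minimal Java sketch of configuring both timeouts alongside a failover URI; the broker hostnames and the 30-second values are illustrative placeholders, not recommendations:

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class PoolConfig {
    public static void main(String[] args) {
        // Failover transport: the client reconnects to whichever broker
        // is available (broker hostnames here are placeholders).
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616)");

        PooledConnectionFactory pool = new PooledConnectionFactory();
        pool.setConnectionFactory(cf);
        // expiryTimeout: a pooled Connection older than 30 s is dropped
        // once the last borrower has called close() on it.
        pool.setExpiryTimeout(30_000L);
        // idleTimeout: a Connection sitting unused in the pool for 30 s
        // is closed to release broker-side resources.
        pool.setIdleTimeout(30_000);
        pool.start();
    }
}
```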
I'm looking at moving my app's deployment to Heroku, and I'd like to determine whether it can run correctly there on the basic plan before putting in the effort to migrate. The basic plan limits Redis to 20 connections.
I don't fundamentally understand the Rails/Redis connection architecture. Is there a single connection to ActionCable, which then distributes the data, or is there a connection per actual client (i.e. one connection for every browser tab)?
As per the docs,
An individual user will create one consumer-connection pair per browser tab, window, or device they have open.
ActionCable lets you identify a connection using a connection identifier, typically a global object called current_user in most cases. With this approach, you can later retrieve all open connections for a given user (and potentially disconnect them all if the user is deleted, is unauthorized, or has too many connections open).
Also, note that ActionCable uses a worker pool to run connection callbacks and channel actions in isolation from your server's main thread.
I'm about to use a Thales HSM just for doing some AES encryption/decryption, using http://www.pkcs11interop.net/
But one question has come to mind.
I have two ways to use the Thales HSM from my server application:
One way: whenever I need to do an AES operation, open a connection, do the job, and close the connection.
The other way: open the connection at the start of the server application, do AES operations over the lifetime of the server application, and close the connection when the server shuts down.
So my question is: which is the correct (or suggested) way of using an HSM?
It entirely depends on your needs and usage of the HSM. If you send one message every 5 minutes, it is better to open a connection for every AES operation and close it after finishing the job. Generally, if you send more than one message per minute, you should use persistent connections, because the HSM's limited connection resources can be depleted in a short time.
By default, Thales HSMs allow you to open at most 64 connections and check those connections at 60-minute intervals, so if a connection is closed, the HSM may only notice up to 60 minutes later.
If you open a connection for every request, you can reach the 64-connection limit in a short time, at which point the HSM generally stops allowing new connections. To get around this you can change the HSM settings to a 1-minute check interval for garbage collection of connections.
For heavy use of HSMs, I suggest using persistent connections (a pool) and renewing (closing and reopening) all connections at 20-minute intervals, as sketched below.
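As a rough illustration of that pattern, here is a minimal Java sketch of a persistent session pool with periodic renewal. HsmSession and the opener supplier are hypothetical stand-ins for whatever your PKCS#11 binding provides (Pkcs11Interop's session object on .NET, for instance):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Hypothetical wrapper around a PKCS#11 session.
interface HsmSession extends AutoCloseable {
    byte[] aesEncrypt(byte[] plaintext);
    @Override void close();
}

class HsmSessionPool {
    private final BlockingQueue<HsmSession> idle;
    private final Supplier<HsmSession> opener;
    private final ScheduledExecutorService renewer =
            Executors.newSingleThreadScheduledExecutor();

    HsmSessionPool(int size, Supplier<HsmSession> opener) {
        this.opener = opener;
        this.idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(opener.get());        // open all sessions up front
        }
        // Renew pooled sessions every 20 minutes, matching the
        // renewal interval suggested above.
        renewer.scheduleAtFixedRate(this::renewAll, 20, 20, TimeUnit.MINUTES);
    }

    byte[] encrypt(byte[] plaintext) throws InterruptedException {
        HsmSession session = idle.take();  // borrow (blocks if pool is empty)
        try {
            return session.aesEncrypt(plaintext);
        } finally {
            idle.put(session);             // always return to the pool
        }
    }

    private void renewAll() {
        List<HsmSession> drained = new ArrayList<>();
        idle.drainTo(drained);             // only renews sessions not in use
        for (HsmSession s : drained) {
            try { s.close(); } catch (Exception ignored) { }
            idle.add(opener.get());
        }
    }
}
```

Sessions that are borrowed at renewal time are skipped and renewed on a later cycle, so in-flight AES operations are never interrupted.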
If a connection is 'inactive', I'd expect the WebLogic internal data source manager to recover the connection automatically. Why is 'Inactive Connection Timeout' a configurable parameter? Is there a use case that requires WebLogic to wait a certain period before an inactive connection is recovered?
Thanks in advance.
This variation can occur depending on what downstream systems are doing, or when a system is sometimes under load and unable to respond in an adequate period of time.
A leaked connection is a connection that was not properly returned to the connection pool in the data source. To automatically recover leaked connections, you can specify a value for Inactive Connection Timeout on the JDBC Data Source (Configuration: Connection Pool page in the Administration Console). When you set a value for Inactive Connection Timeout, WebLogic Server forcibly returns a connection to the data source when there is no activity on a reserved connection for the number of seconds that you specify. When set to 0 (the default value), this feature is turned off.
After digging into this, and specific to the question "Why should it be configurable?": because this feature is turned off by default, you can turn it on if you believe the application is for some reason not returning connections to the pool, i.e. leaking connections. Depending on the application's use cases and the amount of resource leakage, one can then tune the duration. In other words, you don't want to time out connections too quickly, because that forces the app server to recreate them on the next request or causes unnecessary errors. But if the application is leaking a lot of connections, you can tighten the duration so the pool keeps cleaning up.
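For illustration, this is what a leaked connection looks like in plain JDBC, next to the try-with-resources form that always returns the connection to the pool; the DataSource wiring and the users table are hypothetical:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class LeakExample {
    // Leaks: the connection is never returned to the pool, either
    // because close() is simply never called or because an exception
    // skips it. Inactive Connection Timeout exists to reclaim these.
    static int countUsersLeaky(DataSource ds) throws SQLException {
        Connection con = ds.getConnection();
        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM users");
        rs.next();
        return rs.getInt(1);   // con is never closed -> leaked
    }

    // Correct: try-with-resources always returns the connection to
    // the pool, so the timeout never needs to step in.
    static int countUsers(DataSource ds) throws SQLException {
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM users")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}
```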
My server app uses a TIdTCPServer; several client apps use TIdTCPClients to connect to the server (all computers are on the same LAN).
Some of the clients only need to contact the server every couple of minutes, others once every second, and one will do this about 20 times a second.
If I keep the connection between a Client and the Server open, I'll save the re-connect, but have to check if the connection is lost.
If I close the connection after each transfer, it has to re-connect every time, but there's no need to check if the connection is still there.
What is the best way to do this?
Above what frequency of data transfers should I generally keep the connection open?
What are other advantages / disadvantages for both scenarios?
I would suggest a mix of the two. When a new connection is opened, start an idle timer for it. Whenever data is exchanged, reset the timer. If the timer elapses, close the connection (or send a command to the client asking if it wants the connection to remain open). If the connection has been closed when data needs to be sent, open a new connection and repeat. This way, less-often-used connections can be closed periodically, while more-often-used connections can stay open.
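Here is a minimal client-side sketch of the same idea in Java (rather than Delphi/Indy); the connect timeout and idle window are placeholder values:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

// Lazy-open / idle-close client: reconnects on demand and drops the
// connection once it has been idle past the window.
class IdleAwareClient {
    private final String host;
    private final int port;
    private final long idleMillis;
    private Socket socket;
    private long lastUsed;

    IdleAwareClient(String host, int port, long idleMillis) {
        this.host = host;
        this.port = port;
        this.idleMillis = idleMillis;
    }

    synchronized void send(byte[] data) throws IOException {
        // Close a connection that has sat idle too long.
        if (socket != null && System.currentTimeMillis() - lastUsed > idleMillis) {
            socket.close();
            socket = null;
        }
        // Reconnect on demand.
        if (socket == null) {
            socket = new Socket();
            socket.connect(new InetSocketAddress(host, port), 5_000);
        }
        try {
            OutputStream out = socket.getOutputStream();
            out.write(data);
            out.flush();
            lastUsed = System.currentTimeMillis();
        } catch (IOException e) {
            socket.close();   // drop the dead connection so the
            socket = null;    // next call reconnects from scratch
            throw e;
        }
    }
}
```

A rarely used client pays one reconnect per burst, while the 20-requests-per-second client effectively keeps its connection open permanently.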
Two cents from experience...
My first TCP/IP client/server application used a new connection and a new thread for each request... years ago...
Then I discovered (using Process Explorer) that it consumed network resources, because closed connections are not destroyed immediately but remain in a particular state (TIME_WAIT) for some time. A lot of threads were created, too...
I even had connection problems under a lot of concurrent requests: I didn't have enough ports on my server!
So I rewrote it, following the HTTP/1.1 scheme and its keep-alive feature. It's much more efficient, uses a small number of threads, and Process Explorer likes my new server. And I never ran out of ports again. :)
If the connection does have to be shut down after each request, I'd at least use a thread pool so as not to create a thread per client...
In short: if you can, keep your client connections alive for some minutes.
While it may be fine to connect and disconnect for an application that is active once every few minutes, the application that is communicating several times a second will see a performance boost by leaving the connection open.
Additionally, your code will be much simpler if you aren't trying to constantly open, close, or diagnose an open connection. With proper open and close logic, and structured exception handling (SEH) around your reads and writes, there's no reason to test whether the socket is still connected before using it; just use it, and it will tell you when there is a problem.
I'd lean towards keeping a single connection open in most enterprise applications. It generally leads to cleaner code that is easier to maintain.
/twocents
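A minimal sketch of that "just use it" approach, assuming a hypothetical Sender wrapper that reconnects internally after a failed send (as in the earlier sketch):

```java
import java.io.IOException;

// "Just use it": write without probing the connection first, and let
// a failed write signal the problem. Sender stands in for any
// transport wrapper that reconnects internally after a failed send.
interface Sender {
    void send(byte[] data) throws IOException;
}

class RetryingSender implements Sender {
    private final Sender delegate;

    RetryingSender(Sender delegate) { this.delegate = delegate; }

    @Override
    public void send(byte[] data) throws IOException {
        try {
            delegate.send(data);
        } catch (IOException firstFailure) {
            // The failed write told us the connection was dead; the
            // delegate reconnects, and we try exactly once more.
            delegate.send(data);
        }
    }
}
```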
I guess it all depends on your goal and the number of requests made to the server in a given time, not to mention the available bandwidth and the hardware on the server.
You also need to think about the future: is there any chance you will need connections left open later? If so, you've answered your own question.
I've implemented a chat system for a project in which ~50 people (a number growing every two months) are always connected. Besides chatting, it also includes data transfer, database manipulation using certain commands, etc. My implementation keeps the connection to the server open from application startup until the application is closed. No issues so far; if a connection is lost for some reason, it is automatically reestablished and everything continues flawlessly.
Overall, I suggest you try both (keeping the connection open and closing it after each use) and see which fits your needs best.
Unless you are scaling to many hundreds of concurrent connections, I would definitely keep it open; this is by far the better of the two options. Once you scale past hundreds into thousands of concurrent connections, you may have to drop and reconnect. I have architected my entire framework around this approach (http://www.csinnovations.com/framework_overview.htm), since it allows me to "push" data from the server to the client whenever required. You need to write a fair bit of code to ensure that the connection is up and working (handling network drop-outs, timed pings, etc.), but if you do this in your "framework", your application code can be written to assume that the connection is always "up".
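As an illustration of the "timed pings" part, a minimal Java sketch; Sender is the hypothetical transport interface from the earlier sketch, and the ping payload and period are placeholders:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Timed pings: a background heartbeat keeps exercising the connection
// so the application code can assume it is always "up".
class Heartbeat {
    private static final byte[] PING = { 0 };  // app-level no-op message

    static ScheduledExecutorService start(Sender sender, long periodSeconds) {
        ScheduledExecutorService timer =
                Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            try {
                sender.send(PING);
            } catch (Exception e) {
                // A failed ping is fine: the sender's reconnect logic
                // brings the connection back on the next attempt.
            }
        }, periodSeconds, periodSeconds, TimeUnit.SECONDS);
        return timer;
    }
}
```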
One caveat is the limit of threads per application, around 1,400, which caps you at roughly 1,300 clients connected at the same time, give or take.
When closing connections as a client, the port you used remains unavailable for a while, so at high volume you end up consuming a large number of different ports. For anything repetitive, I'd keep the connection open.
Which approach to connection management is better when developing a Windows-based application that uses a database as its data store? What about web-based applications?
1. When the user loads the first form of the application, a global connection opens, and on closing the last form of the application the connection is closed and disposed.
2. For each form within the application there is a local connection (form scope); when the user wants to perform an operation such as insert, update, delete, or search, the application uses that connection, and when the form unloads the connection is closed and disposed.
3. For every operation within a form of the application there is a local connection (procedure scope); when the user wants to perform an operation such as insert, update, delete, or search, the application uses the procedure's connection, and at the end of every procedure within the form the connection is closed and disposed.
Go with #3
You should try to only ever keep connections open for just as long as is required.
Also have a look at Understanding Connection Pooling and SQL Server Connection Pooling (ADO.NET):
Connecting to a database server typically consists of several time-consuming steps. A physical channel such as a socket or a named pipe must be established, the initial handshake with the server must occur, the connection string information must be parsed, the connection must be authenticated by the server, checks must be run for enlisting in the current transaction, and so on.
In practice, most applications use only one or a few different configurations for connections. This means that during application execution, many identical connections will be repeatedly opened and closed. To minimize the cost of opening connections, ADO.NET uses an optimization technique called connection pooling.
Connection pooling reduces the number of times that new connections must be opened. The pooler maintains ownership of the physical connection. It manages connections by keeping alive a set of active connections for each given connection configuration. Whenever a user calls Open on a connection, the pooler looks for an available connection in the pool. If a pooled connection is available, it returns it to the caller instead of opening a new connection. When the application calls Close on the connection, the pooler returns it to the pooled set of active connections instead of closing it. Once the connection is returned to the pool, it is ready to be reused on the next Open call.
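The same open-late/close-early idea, option #3 above, in JDBC terms as a minimal sketch; the pooled DataSource and the customers table are assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

// Option #3 in practice: open per operation, close immediately, and
// let the pool make the "open" cheap. The DataSource would be any
// pooled implementation (HikariCP, DBCP, the app server's pool, ...).
class CustomerDao {
    private final DataSource ds;

    CustomerDao(DataSource ds) { this.ds = ds; }

    void rename(long id, String newName) throws SQLException {
        // getConnection() borrows from the pool; close() returns to it.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "UPDATE customers SET name = ? WHERE id = ?")) {
            ps.setString(1, newName);
            ps.setLong(2, id);
            ps.executeUpdate();
        }   // connection returned to the pool here, not physically closed
    }
}
```

Because the pool owns the physical connection, each per-operation open/close costs a queue operation rather than a fresh handshake and authentication round trip.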
This is quite a broad question. But usually, for any database server and application environment, opening a new connection and keeping it open are expensive operations. That's why you definitely don't want to open multiple connections from a single client, and should stick to process-scoped connections.
In a desktop application using a database server, the strategy for handling its single connection depends a lot on the DB usage pattern. Say the app reads or writes a lot within 5 minutes and then does nothing with the DB for hours; it then makes no sense to keep the connection open all the time (assuming there are many other clients). You might introduce some kind of timeout for closing the connection.
The web server situation depends a lot on the technology used. In PHP, for instance, every request is a "fresh start" with respect to the database connection: you open and close a connection for each mouse click. Popular Java application servers, by contrast, have DB connection pools, reusing the same connection instances across many HTTP request-handling threads.