HSM Connection: Persistent or Non-persistent?

I'm about to use a Thales HSM just for some AES encryption/decryption, using http://www.pkcs11interop.net/
But one question has come to mind.
There are two ways I could use the Thales HSM from my server application.
One way: whenever I need to do an AES operation, open a connection, do the job, and close the connection.
The other way: open the connection when the server application starts, do AES operations throughout the lifetime of the server application, and close the connection when the server needs to shut down.
So my question is: which is the correct (or suggested) way of using the HSM?

It entirely depends on your needs and your usage of the HSM. If you send one message every 5 minutes, it is better to open a connection for every AES operation and close it after finishing the job. Generally, if you send more than one message a minute, you should use persistent connections, because the HSM's limited connection resources can be depleted in a short time.
By default, Thales HSM settings allow a maximum of 64 open connections and check those connections at 60-minute intervals, so if a connection is closed, the HSM may only notice it up to 60 minutes later.
If you open a connection for every request, you can hit the 64-connection limit in a short time, and the HSM generally stops allowing new connections. To avoid this, you can change the HSM settings to garbage-collect connections at 1-minute intervals.
For heavy use of HSMs I suggest persistent connections (a pool), renewing (closing and reopening) all connections at 20-minute intervals.
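To make that concrete, here is a minimal sketch of the suggested pattern: a fixed pool of persistent sessions plus a 20-minute renewal cycle. It is written in Java for neutrality, and HsmSession is a hypothetical stand-in for whatever session object your PKCS#11 wrapper hands you (e.g. an ISession in Pkcs11Interop); the pool size and intervals are illustrative.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.function.Supplier;

    // Hypothetical stand-in for the session object your PKCS#11 wrapper returns.
    interface HsmSession extends AutoCloseable {
        byte[] aesEncrypt(byte[] plaintext) throws Exception;
    }

    class HsmSessionPool {
        private final BlockingQueue<HsmSession> pool;
        private final Supplier<HsmSession> opener;
        private final ScheduledExecutorService renewer =
                Executors.newSingleThreadScheduledExecutor();

        HsmSessionPool(int size, Supplier<HsmSession> opener) {
            this.opener = opener;
            this.pool = new ArrayBlockingQueue<>(size);
            for (int i = 0; i < size; i++) pool.add(opener.get());
            // Renew (close and reopen) pooled sessions every 20 minutes, well
            // inside the HSM's 60-minute stale-connection sweep.
            renewer.scheduleAtFixedRate(this::renewAll, 20, 20, TimeUnit.MINUTES);
        }

        private void renewAll() {
            int n = pool.size();
            for (int i = 0; i < n; i++) {
                HsmSession old = pool.poll();
                if (old == null) break;            // session is borrowed; skip this cycle
                try { old.close(); } catch (Exception ignored) { }
                pool.add(opener.get());
            }
        }

        byte[] encrypt(byte[] plaintext) throws Exception {
            HsmSession s = pool.take();            // borrow; blocks if all are in use
            try {
                return s.aesEncrypt(plaintext);
            } finally {
                pool.put(s);                       // always return to the pool
            }
        }
    }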

Related

What's the meaning of expiry timeout in Activemq PooledConnectionFactory?

I don't understand the purpose of the expiryTimeout field in ActiveMQ's PooledConnectionFactory. The Javadoc says: "allow connections to expire, irrespective of load or idle time. This is useful with failover to force a reconnect from the pool, to reestablish load balancing or use of the master post recovery". Please give me an example: a real scenario in which the expiryTimeout field has an effect.
The expiry timeout option is a bit of a legacy feature of the pool that isn't all that useful in most applications these days. The way it works: if you configure an expiration time, then a Connection that has been loaned out and later closed will be completely closed and dropped, provided there are no other active users of it; otherwise it stays alive until all active instances are closed, at which point the underlying Connection object is closed.
This works slightly differently from the idle timeout, which applies to Connection instances that are sitting unused in the pool and are closed after some length of time to release resources on the broker side.
These days you are better off using a failover URI in the PooledConnectionFactory, with broker support for rebalancing cluster clients enabled. That dynamically redistributes the load in the broker cluster, as opposed to the expiry timeout, which only closes down a Connection instance once everyone actively using it has released it by calling close.
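As a sketch of that setup (broker URIs and timeout values are made up for illustration), the failover URI plus pool configuration looks like this:

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.pool.PooledConnectionFactory;

    public class PooledFactoryConfig {
        public static PooledConnectionFactory create() {
            // Failover URI: the client reconnects to a surviving broker, and with
            // broker-side rebalancing enabled the cluster can redistribute clients.
            ActiveMQConnectionFactory amq = new ActiveMQConnectionFactory(
                    "failover:(tcp://brokerA:61616,tcp://brokerB:61616)");

            PooledConnectionFactory pooled = new PooledConnectionFactory();
            pooled.setConnectionFactory(amq);
            pooled.setMaxConnections(8);
            pooled.setIdleTimeout(30_000);   // ms: closes connections idling in the pool
            pooled.setExpiryTimeout(60_000); // ms: caps lifetime once all users close it
            return pooled;
        }
    }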

Exception class EIdReadTimeout with message 'Read timed out' [Indy-IdFTP]

I want to download a file from FTP. If the file is small (usually under 1000 MB) it works. However, if the file is big I get an EIdReadTimeout. Why? Should I keep the connection alive? From what I know, reading data has its own channel, so I shouldn't have to keep the connection alive.
What is odd is that the exception appears at the end of the Get (after Get successfully downloads the whole file): FTP.Get(Name, TempGzFile, TRUE, FALSE)!
Documentation:
TIdFTP.ReadTimeout - Number of milliseconds to wait for an FTP protocol response.
TIdFTP.TransferTimeout - Timeout value for read operations on the data channel for the FTP client.
By default, ReadTimeout is set to 60 seconds and TransferTimeout to 10 seconds.
I am using Delphi XE7 (which I guess uses Indy 10). The Passive property for my IdFTP is set to false.
The FTP protocol uses multiple TCP/IP connections - one for the main command/response connection, and separate connections for data transfers. While a data transfer is in progress, the main command connection sits idle. Once the transfer is finished, the command connection receives a response.
If you are passing through a router/firewall that is not FTP-aware, the command connection is likely to get killed if it sits idle for too long during a large transfer. The connection is usually not killed "gracefully", so even the OS does not know the connection is gone. When TIdFTP then tries to read a transfer response that never arrives, it times out.
To account for that, use the TIdFTP.NATKeepAlive property to enable TCP/IP level keep-alives on the command connection during transfers. Set NATKeepAlive.UseKeepAlive to True, and set NATKeepAlive.IdleTimeMS (the idle timeout before keepalives start sending) and NATKeepAlive.IntervalMS (the interval between each keepalive) to suitable values.
Note, however, that IdleTimeMS and IntervalMS are only implemented for Windows 2000+, Linux, and BSD at this time. Other platforms use defaults provided by the OS (which tend to be very large). If you need to customize the values on those platforms, you can use the TIdFTP.OnDataChannelCreate and TIdFTP.OnDataChannelDestroy events to call TIdFTP.Socket.Binding.SetSocketOption() directly as needed.
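NATKeepAlive maps onto the standard TCP keepalive socket options that the OS exposes. For comparison, here is the same OS-level mechanism expressed in Java terms (hostname and values are illustrative; note these JDK options take seconds where Indy's NATKeepAlive properties take milliseconds):

    import java.io.IOException;
    import java.net.Socket;
    import java.net.StandardSocketOptions;
    import jdk.net.ExtendedSocketOptions;

    public class KeepAliveDemo {
        public static Socket openCommandConnection() throws IOException {
            Socket cmd = new Socket("ftp.example.com", 21);
            cmd.setOption(StandardSocketOptions.SO_KEEPALIVE, true);
            // Tunable on JDK 11+ where the OS supports it; elsewhere OS defaults apply.
            cmd.setOption(ExtendedSocketOptions.TCP_KEEPIDLE, 30);     // idle seconds before first probe
            cmd.setOption(ExtendedSocketOptions.TCP_KEEPINTERVAL, 10); // seconds between probes
            return cmd;
        }
    }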

idFTP takes too long to give connection result

I developed an application that uses Indy components to download updates from a remote server.
The problem is that if the FTP server is down or the IP address is not correct, IdFTP.Connect() takes too long to return a result (connection failure).
What is the best way to speed up the connection answer? Or maybe to check the IP address before connecting with IdFTP?
Thanks in advance.
You should set the ReadTimeout property; by default it is set to one minute.
By default, Indy clients wait as long as it takes for the OS to report whether the connection was successful or not. Yes, that can take a long time, if the OS has to look up the hostname with DNS, do network checks, deal with network latency, etc.
If you do not want to wait that long, you can use the Timeout parameter of Connect() in Indy 9 and earlier, or the ConnectTimeout property in Indy 10, to reduce the amount of time waited on. HOWEVER, that only applies to the actual socket connect attempt once the server IP has been determined.
If you set the Host property to a non-IP hostname, Indy asks the OS to perform a DNS lookup to get the hostname's IP, and there is no logic available in Connect() to control the time it takes to do that lookup. If you need that much control, then use TIdDNSResolver to get the IP manually and then assign it to the Host property before calling Connect().
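The same split exists in other stacks. As a Java sketch of the distinction (hostname and timeout value are illustrative): the DNS lookup happens while building the address and is not bounded by the connect timeout.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class ConnectTimeoutDemo {
        public static void probe() throws IOException {
            // The DNS lookup happens here, with no timeout control...
            InetSocketAddress addr = new InetSocketAddress("ftp.example.com", 21);
            try (Socket s = new Socket()) {
                s.connect(addr, 3000); // ...only the TCP connect attempt is capped (3 s)
            }
        }
    }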
Well, native connect() API timeouts are notoriously lengthy by design (to accommodate high-latency links like modems). Artificially shortening the timeout may result in premature failure notification (though as many developers have never seen a modem, it's not that much of a problem today :).
FTP is a reasonably complex transfer requiring two TCP connections and perhaps a DNS lookup - any of these could conceivably generate long connection delays. TIdFTP has an inherited 'ReadTimeout' property and a Connect() overload with a timeout parameter, but I'm not sure how effective they are.
Historically, I have always timed out such operations myself using a TTimer or similar - if the FTP thread does not respond in time with a suitable signal (e.g. TThread.Synchronize, or a user-defined Windows message SendMessage()'d to the GUI), the 'FTP failed' actions are taken and a flag is set in the FTP thread that tells it to ignore any replies and self-terminate. Don't use PostMessage - if you do, there is a small window of time in which a posted response may be queued up while the TTimer is firing - a race.
Oh - and if you are just plonking a TIdFTP onto the form (or creating one in TForm.FormCreate) and trying to run it from the main GUI thread (with or without TIdAntiFreeze), stop doing that and thread off the FTP.
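A generic version of "time the operation out yourself", sketched in Java with a worker thread and a bounded wait. The downloadUpdate task is a hypothetical blocking FTP call; note that abandoning the task does not by itself abort blocking socket I/O, so the worker should also close its socket or honor a self-terminate flag, as described above.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    public class FtpWithDeadline {
        public static boolean tryDownload(Runnable downloadUpdate) throws Exception {
            ExecutorService worker = Executors.newSingleThreadExecutor();
            Future<?> transfer = worker.submit(downloadUpdate);
            try {
                transfer.get(15, TimeUnit.SECONDS);  // bounded wait for the transfer
                return true;
            } catch (TimeoutException e) {
                transfer.cancel(true);               // flag the worker to give up
                return false;                        // take the "FTP failed" path
            } finally {
                worker.shutdown();
            }
        }
    }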

Best practice: Keep TCP/IP connection open or close it after each transfer?

My Server-App uses a TIdTCPServer, several Client apps use TIdTCPClients to connect to the server (all computers are in the same LAN).
Some of the clients only need to contact the server every couple of minutes, others once every second and one will do this about 20 times a second.
If I keep the connection between a Client and the Server open, I'll save the re-connect, but have to check if the connection is lost.
If I close the connection after each transfer, it has to re-connect every time, but there's no need to check if the connection is still there.
What is the best way to do this?
At which frequency of data transfers should I keep the connection open in general?
What are other advantages / disadvantages for both scenarios?
I would suggest a mix of the two. When a new connection is opened, start an idle timer for it. Whenever data is exchanged, reset the timer. If the timer elapses, close the connection (or send a command to the client asking if it wants the connection to remain open). If the connection has been closed when data needs to be sent, open a new connection and repeat. This way, less-often-used connections can be closed periodically, while more-often-used connections can stay open.
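Here is a sketch of that mix in Java (all names and the idle period are illustrative): reset an idle timer on every send, close the socket when the timer elapses, and reopen transparently on the next send.

    import java.io.IOException;
    import java.net.Socket;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;

    class IdleAwareClient {
        private final String host;
        private final int port;
        private final long idleMs;
        private final ScheduledExecutorService timer =
                Executors.newSingleThreadScheduledExecutor();
        private Socket socket;
        private ScheduledFuture<?> idleTask;

        IdleAwareClient(String host, int port, long idleMs) {
            this.host = host;
            this.port = port;
            this.idleMs = idleMs;
        }

        synchronized void send(byte[] data) throws IOException {
            if (socket == null || socket.isClosed()) {
                socket = new Socket(host, port);   // reconnect on demand
            }
            socket.getOutputStream().write(data);
            resetIdleTimer();                      // any traffic resets the timer
        }

        private void resetIdleTimer() {
            if (idleTask != null) idleTask.cancel(false);
            idleTask = timer.schedule(this::closeQuietly, idleMs, TimeUnit.MILLISECONDS);
        }

        private synchronized void closeQuietly() {
            try {
                if (socket != null) socket.close(); // idle too long: release the connection
            } catch (IOException ignored) { }
            socket = null;
        }
    }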
Two cents from experience...
My first TCP/IP client/server application used a new connection and a new thread for each request... years ago...
Then I discovered (using Process Explorer) that this consumed network resources, because closed connections are not in fact destroyed immediately, but remain in a particular state (TIME_WAIT) for some time. And a lot of threads were created...
I even had connection problems with a lot of concurrent requests: I didn't have enough ports on my server!
So I rewrote it, following the HTTP/1.1 scheme and its KeepAlive feature. It's much more efficient, uses a small number of threads, and Process Explorer likes my new server. And I never ran out of ports again. :)
If the client has to be shut down, I'll use a thread pool to, at the least, avoid creating a thread per client...
In short: if you can, keep your client connections alive for some minutes.
While it may be fine to connect and disconnect for an application that is active once every few minutes, the application that is communicating several times a second will see a performance boost by leaving the connection open.
Additionally, your code will be much simpler if you aren't trying to constantly open, close, or diagnose an open connection. With the proper open and close logic, and structured exception handling around your reads and writes, there's no reason to test whether the socket is still connected before using it; just use it. It will tell you when there is a problem.
I'd lean towards keeping a single connection open in most enterprise applications. It generally will lead to cleaner code, that is easier to maintain.
/twocents
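In code, "just use it and let the error tell you" amounts to attempting the write and letting the failure drive a single reconnect and retry. A Java sketch with illustrative names:

    import java.io.IOException;
    import java.net.Socket;

    class ResilientSender {
        private final String host;
        private final int port;
        private Socket socket;

        ResilientSender(String host, int port) throws IOException {
            this.host = host;
            this.port = port;
            this.socket = new Socket(host, port);
        }

        // No "is it still connected?" probe: try the write, and on failure
        // reconnect once and retry. A second failure propagates to the caller.
        synchronized void send(byte[] data) throws IOException {
            try {
                socket.getOutputStream().write(data);
            } catch (IOException firstFailure) {
                try { socket.close(); } catch (IOException ignored) { }
                socket = new Socket(host, port);
                socket.getOutputStream().write(data);
            }
        }
    }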
I guess it all depends on your goal and the number of requests made to the server in a given time, not to mention the available bandwidth and the hardware on the server.
You need to think about the future as well: is there any chance that you will need connections to be left open? If so, you've answered your own question.
I've implemented a chat system for a project in which ~50 people (the number grows every couple of months) are always connected, and besides chatting it also includes data transfer, database manipulation using certain commands, etc. My implementation keeps the connection to the server open from application startup until the application is closed; no issues so far. If a connection is lost for some reason, it is automatically reestablished and everything continues flawlessly.
Overall I suggest you try both (keeping the connection open, and closing it after use) and see which fits your needs best.
Unless you are scaling to many hundreds of concurrent connections I would definitely keep it open - this is by far the better of the two options. Once you scale past hundreds into thousands of concurrent connections you may have to drop and reconnect. I have architected my entire framework around this (http://www.csinnovations.com/framework_overview.htm) since it allows me to "push" data to the client from the server whenever required. You need to write a fair bit of code to ensure that the connection is up and working (network drop-outs, timed pings, etc), but if you do this in your "framework" then your application code can be written in such a way that you can assume that the connection is always "up".
The problem is the limit on threads per application, around 1400 threads; so a maximum of roughly 1300 clients connected at the same time, give or take.
When closing connections as a client, the port you used will be unavailable for a while. So at high volume you're using loads of different ports. For anything repetitive I'd keep the connection open.

Which strategy about connection management should we use when developing an application?

Which approach to connection management is better when developing a Windows-based application that uses a database as its data store? What about web-based applications?
1. When the user loads the first form of the application, a global connection opens, and on closing the last form of the application the connection closes and is disposed.
2. For each form within the application there is a local connection (form scope); when the user wants to perform an operation (insert, update, delete, search, ...), the application uses the connection, and on unloading the form the connection also closes and is disposed.
3. For every operation within a form of the application there is a local connection (procedure scope); when the user wants to perform an operation (insert, update, delete, search, ...), the application uses the procedure connection, and at the end of every procedure within the form the connection closes and is disposed.
Go with #3
You should keep connections open only for as long as they are required.
Also have a look at:
Understanding Connection Pooling
SQL Server Connection Pooling (ADO.NET)
Connecting to a database server typically consists of several time-consuming steps. A physical channel such as a socket or a named pipe must be established, the initial handshake with the server must occur, the connection string information must be parsed, the connection must be authenticated by the server, checks must be run for enlisting in the current transaction, and so on.
In practice, most applications use only one or a few different configurations for connections. This means that during application execution, many identical connections will be repeatedly opened and closed. To minimize the cost of opening connections, ADO.NET uses an optimization technique called connection pooling.
Connection pooling reduces the number of times that new connections must be opened. The pooler maintains ownership of the physical connection. It manages connections by keeping alive a set of active connections for each given connection configuration. Whenever a user calls Open on a connection, the pooler looks for an available connection in the pool. If a pooled connection is available, it returns it to the caller instead of opening a new connection. When the application calls Close on the connection, the pooler returns it to the pooled set of active connections instead of closing it. Once the connection is returned to the pool, it is ready to be reused on the next Open call.
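Since the thread spans ADO.NET and other stacks, here is the option-#3 pattern sketched with JDBC (the DataSource is assumed to be pool-backed, e.g. by the driver's pool or HikariCP; the table and names are made up): open per operation and let the pool absorb the cost.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class OrderDao {
        private final DataSource ds; // assumed pool-backed

        public OrderDao(DataSource ds) {
            this.ds = ds;
        }

        public int countOrders(int customerId) throws SQLException {
            // "Open" is cheap here: the pool hands back an existing connection.
            try (Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT COUNT(*) FROM orders WHERE customer_id = ?")) {
                ps.setInt(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    return rs.getInt(1);
                }
            } // close() returns the connection to the pool, not to the OS
        }
    }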
This is quite a broad question. But usually, for any database server and application environment, opening and keeping a new connection is an expensive operation. That's why you definitely don't want to open multiple connections from a single client, and should stick to process scope for connections.
In a desktop application using a database server, the strategy for handling its single connection depends a lot on the DB usage pattern. Say the app reads or writes a lot within 5 minutes and then does nothing with the DB for hours; then it makes no sense to keep the connection open all the time (assuming there are many other clients), and you may introduce some kind of timeout for closing the connection.
The web server situation depends a lot on the technology used. In PHP, for example, every request is a "fresh start" with respect to the database connection: you open and close a connection for each mouse click. Popular Java application servers, by contrast, maintain a DB connection pool, reusing the same connection instances across many HTTP request-handling threads.
