Two-way TCP communication in Indy 10? - delphi

I am using TIdCmdTCPClient and TIdCmdTCPServer. Suddenly I find that I might like to have bi-directional communication.
What would be best? Should I possibly use some other components? If so, which? Or should I kludge it and have the 'client' poll the 'server' to ask if it wishes to communicate anything?
This is a very small system. Two clients and ten servers, with a burst of one transaction every 30 to 60 seconds for a few minutes once a day, so the overhead of polling is inconsequential.
I'm just wondering if there is a 'correct' way.
Update: this really is an incredibly simple system. Very little traffic and all of it simple. All transmissions are an indication of an event type and an optional single parameter.
<event type> [ <parameter>] e.g. "HERE_IS_SOME_DATA 42"
This can be sent in both directions; however, there is no "reply" as such. Just fire off a message (and hope that it got there)? Receive an ack with no data? Does the absence of an exception indicate that the message was successfully sent?
Would it be possible (or would it be overkill) to use two TIdCmdTCPServer components?

Both TIdCmdTCPClient and TIdCmdTCPServer continuously poll their socket endpoints for inbound data during the lifetime of the connection. You do not have to do anything extra for that. So, as soon as a TIdCmdTCPClient connects to the TIdCmdTCPServer, both components will initially be in a reading state until one of them sends a command to the other.
Now, there is a problem with doing that: as soon as either component sends that first command, the receiving component will interpret it as a command and send back a reply, which the other component will interpret as a command and send back a reply, which will be interpreted as a command and answered with another reply, and so on, causing an endless cycle of replies back and forth. For that reason, it is not wise to use TIdCmdTCPClient and TIdCmdTCPServer together. You should either use TIdTCPClient with TIdCmdTCPServer, or TIdCmdTCPClient with TIdTCPServer. Depending on what exactly your protocol looks like, you may have to forgo TIdCmdTCPClient and TIdCmdTCPServer altogether and just use TIdTCPClient with TIdTCPServer so you have more control over reading and writing on both ends. It is hard to answer with actual code without first knowing what the communication protocol should look like.
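Given the "<event type> [<parameter>]" protocol sketched in the question, the TIdTCPClient-with-TIdCmdTCPServer pairing might look roughly like this. A minimal sketch, assuming Indy 10; the port, form, and handler names are illustrative, and the reply code is set explicitly so that SendCmd() sees a well-formed ack:

// Server side: register one handler per event type.
procedure TForm1.SetupServer;
var
  Handler: TIdCommandHandler;
begin
  IdCmdTCPServer1.DefaultPort := 6000;
  Handler := IdCmdTCPServer1.CommandHandlers.Add;
  Handler.Command := 'HERE_IS_SOME_DATA';
  Handler.NormalReply.Code := '200'; // the ack sent back automatically
  Handler.OnCommand := DoHereIsSomeData;
  IdCmdTCPServer1.Active := True;
end;

procedure TForm1.DoHereIsSomeData(ASender: TIdCommand);
begin
  // ASender.Params[0] holds the optional parameter, e.g. '42'
end;

// Client side: a plain TIdTCPClient, so nothing reads the socket behind your back.
procedure TForm1.SendEvent;
begin
  IdTCPClient1.Host := '192.168.0.10'; // illustrative address
  IdTCPClient1.Port := 6000;
  IdTCPClient1.Connect;
  IdTCPClient1.SendCmd('HERE_IS_SOME_DATA 42', 200); // raises if no 200 ack comes back
end;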

A single TCP socket connection can be used in two directions. The server can send data asynchronously to the client at any time. It is up to the client, however, to read the socket; for asynchronous processing this is done in a listener thread which reads from the socket and synchronizes incoming data operations with the main worker thread.
An example use case in the Indy components is the Telnet client component (TIdTelnet) which has a receive thread listening for server messages.
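A minimal sketch of that listener-thread pattern, assuming Indy 10 and a line-based protocol (the thread class, form, and HandleServerMessage names are illustrative):

type
  TListenThread = class(TThread)
  private
    FClient: TIdTCPClient;
    FLine: string;
    procedure PassLine;
  protected
    procedure Execute; override;
  public
    constructor Create(AClient: TIdTCPClient);
  end;

constructor TListenThread.Create(AClient: TIdTCPClient);
begin
  inherited Create(False);
  FClient := AClient;
end;

procedure TListenThread.Execute;
begin
  // A real thread would also catch the exception Indy raises on disconnect.
  while not Terminated do
  begin
    FLine := FClient.IOHandler.ReadLn; // blocks until a full line arrives
    Synchronize(PassLine);             // hand it to the main thread
  end;
end;

procedure TListenThread.PassLine;
begin
  Form1.HandleServerMessage(FLine); // runs in the main (GUI) thread
end;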
But you also asked about the 'correct' way, and then the answer depends on other factors such as network stability, guaranteed delivery, and how to handle temporary server outages. In enterprise environments, one central messaging hub is preferred in many use cases, so that all parties connect only to this central server, which is solely responsible for reliable message delivery and keeps messages until the recipient is available.

You can download the INDY 10 TCP server demo sample code here.

Related

How to use poll with multicast

I have used poll in the past where a server has multiple connected file descriptors, but how does one use poll in the case where one wants to listen to various multicast groups? From my understanding this would entail multiple UDP sockets that call recvfrom after joining a group but are never connected... would one just poll on these descriptors anyway and then call recvfrom when the events trigger? Is there any small, simple example of this on the web?
Thanks
The polling is exactly the same: you wait for any of your several sockets to become readable, figure out which one it is, and then call recv(2) or whatnot. The difference from TCP is that each read on a UDP socket de-queues exactly one datagram, so this is a bit easier.
The sockets you put into the poll set are usually set to non-blocking, in which case you need to handle the EWOULDBLOCK error from recv(2).
Also remember that UDP is not reliable, so if you are not consuming datagrams fast enough they fill the socket receive buffer and the kernel starts dropping them.

What do I gain from changing from blocking to non-blocking sockets?

We have an application server developed with Delphi 2010 and Indy 10. This server receives more than 50 requests per second and it works well. But in some cases, it seems to me that Indy is very obscure. Its components are good, but sometimes I found myself digging into the source code only to understand a simple thing. Indy lacks good documentation and good support.
The last thing that I came across was a big problem for me: I must detect when a client disconnects non-gracefully (when the client crashes or shuts down, for instance, without telling the server that it will disconnect) and Indy was not able to do that. If I want that, I will have to implement something like a heartbeat, polling, or TCP keep-alive. I do not want to spend more time doing what is, at least I think, the component's job. After some study, I found out that this is not Indy's fault; it is an issue with all blocking socket components.
Now I am really thinking of changing the core of the Server to another good suite. I must admit I am tending to use a non-blocking socket. Based on that, I have some questions:
What do I gain from changing from blocking to non-blocking sockets?
Will I be able to detect client disconnects (non-gracefully)?
What component suite has the best product? By best product I mean: fast, good support, good tools and easy to implement.
I know this must be a subjective question, but I really want to hear that from you. My first question is the one I care most. I do not care if I have to pay 100, 500, 1000, 10000 dollars, but I want a complete solution. For now, I am thinking about Ip*works .
EDIT
I think some guys do not understand what I want. I don't want to create my own socket library. I have been working with sockets for a long time and I am getting tired of it. Really.
And non-blocking sockets CAN detect client disconnects. That is a fact, and it is well documented all over the internet. A non-blocking socket checks the socket state for new incoming data all the time, which makes it possible to detect that the socket is no longer valid. This is not a heartbeat algorithm. A heartbeat algorithm is used on the client side: it periodically sends packets (aka keep-alives) to the server to tell it that the client is still alive.
EDIT
I am not making myself clear. Maybe because English is not my first language. I am not saying that it is possible to detect a dropped connection without trying to send or receive data on a socket. What I am saying is that every non-blocking socket is able to do that, because it constantly tries to read from the socket for new incoming data. Why is that so hard to understand? If you download and run the IP*Works demos, in particular the echoserver and echoclient ones (both use TCP), you can test it for yourselves. I have already tested it, and it works as I expected. Even if you use the old TCPSocketServer and TCPSocketClient in non-blocking mode you will see what I mean.
"What do a benefit from changing from blocking to non-blocking sockets? Will I be able to detect client disconnects (non gracefully)?"
Just my two cents to get the ball rolling on this question - I'm not a socket EXPERT, but I do have a good deal of experience with them. If I'm mistaken, I'm sure someone will correct me... :-)
I assume that since you're running a server using blocking sockets with 50 connections per second, you have a threading mechanism in place to handle client requests. If so, you don't really stand to gain anything from non-blocking sockets. On the contrary: you will have to change your server logic to be event-driven, based on events fired in your main thread by the non-blocking sockets, or use constant polling to know what your sockets are up to.
Non-blocking sockets can't detect clients disconnecting without notification any more than blocking sockets can; they don't have telepathic powers... The nature of the TCP/IP 'conversation' between client and server is the same: blocking versus non-blocking only concerns your application's interaction with the socket connection conducting the 'conversation'.
If you need to purge dead connections, you need to implement a heartbeat or timeout mechanism on your socket (I've never seen a modern socket implementation that didn't support timeouts).
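As a rough illustration of such a heartbeat, a minimal Indy 10 server-side sketch (the 'PING'/'PONG' strings and the 30-second interval are purely illustrative; note that a write to a dead peer may only raise once TCP gives up retransmitting):

procedure TMyServer.ServerExecute(AContext: TIdContext);
var
  Line: string;
begin
  // Wait up to 30 seconds for a line from the client.
  Line := AContext.Connection.IOHandler.ReadLn(LF, 30000);
  if AContext.Connection.IOHandler.ReadLnTimedOut then
  begin
    // Nothing received: probe the peer. WriteLn raises sooner or later
    // if the connection is dead, and TIdTCPServer then drops the context.
    AContext.Connection.IOHandler.WriteLn('PING');
    Exit;
  end;
  if Line = 'PONG' then
    Exit; // heartbeat reply, nothing else to do
  // ... handle a real command here ...
end;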
What do I gain from changing from blocking to non-blocking sockets?
Increased speed, availability, and throughput (from my experience). I had an IndySockets client that was getting about 15 requests per second, and when I went directly to asynchronous sockets the throughput increased to about 90 requests per second (on the same machine). In a separate benchmark test on a server at a data center with a 30 Mbit connection I was able to get more than 300 requests per second.
Will I be able to detect client disconnects (non-gracefully)?
That's one thing I haven't had to try yet, since all of my code has been on the client side.
What component suite has the best product? By best product I mean: fast, good support, good tools and easy to implement.
You can build your own socket client in a couple of days and it can be very robust and fast... much faster than most of the stuff I've seen "off the shelf". Feel free to take a look at my asynchronous socket client: http://codesprout.blogspot.com/2011/04/asynchronous-http-client.html
Update:
(Per Mikey's comments)
I'm asking you for a generic, technical explanation of how NBS increase throughput as opposed to a properly designed BS server.
Let's take a high-load server as an example: say your server is supposed to handle 1000 connections at any given time. With blocking sockets you would have to create 1000 threads, and even if they're mostly idle, the CPU will still spend a lot of time context switching. As the number of clients increases you will have to increase the number of threads to keep up, and the CPU will inevitably do more context switching. For every connection you establish with a blocking socket you incur the overhead of spawning a new thread, and eventually the overhead of cleaning up after it. Of course, the first thing that comes to mind is: why not use the ThreadPool? You can reuse the threads and reduce the overhead of creating and cleaning up threads.
Here is how this is handled on Windows (hence the .NET connection): sure you could, but the first thing you'll notice about the .NET ThreadPool is that it has two types of threads, and it's not a coincidence: user threads and I/O completion port threads. Asynchronous sockets use I/O completion ports, which "allows a single thread to perform simultaneous I/O operations on different handles, or even simultaneous read and write operations on the same handle."(1) The I/O completion port threads are specifically designed to handle I/O in a much more efficient way than you could ever achieve with the user threads in the ThreadPool, unless you wrote your own kernel-mode driver.
"The com­ple­tion port uses some spe­cial voodoo to make sure only a spe­cif­ic num­ber of threads can run at once — if one thread blocks in ker­nel-​mode, it will au­to­mat­i­cal­ly start up an­oth­er one."(2)
There are other advantages also: "in addition to the nonblocking advantage of the overlapped socket I/O, the other advantage is better performance because you save a buffer copy between the TCP stack buffer and the user buffer for each I/O call." (3)
I have been using the Indy and Synapse TCP libraries with good results for some years now, and have not found any showstoppers in them. I use the libraries in threads, on both the client and server side; stability and performance were not a problem. (Six thousand request and response messages per second and more, with the server running on the same system, are typical.)
Blocking sockets are very useful if the protocol is more advanced than a simple 'send a string / receive a string'. Non-blocking sockets cause a higher coupling of message protocol handlers with the socket read / write logic, so I quickly moved away from non-blocking code.
No library can overcome the limitations of the TCP/IP protocol regarding detection of connection loss. Only trying to read or send data can tell whether the connection is still present.
In Windows, there is a third option, which is overlapped I/O. Non-blocking sockets are essentially a model built on Windows messages, developed to keep single-threaded GUI apps from becoming "blocked" while waiting for data. A modern application IMHO would be better designed using threads and overlapped I/O.
See for example http://support.microsoft.com/kb/181611
Aahhrrgghh - the myth of being able to always detect "dropped" connections. If you pull the power on a machine with a client connection, then the server cannot tell, without sending data, that the connection is "dead". This is by design of the TCP protocol. Don't take my word for it: read this article (Detection of Half-Open (Dropped) TCP/IP Socket Connections).
This article explains the main differences between blocking and non-blocking:
Introduction to Indy, by Chad Z. Hower
Pros of Blocking
- Easy to program - Blocking is very easy to program. All user code can exist in one place, and in a sequential order.
- Easy to port to Unix - Since Unix uses blocking sockets, portable code can be written easily. Indy uses this fact to achieve its single-source solution.
- Work well in threads - Since blocking sockets are sequential they are inherently encapsulated and therefore very easily used in threads.
Cons of Blocking
- User interface "freeze" with clients - Blocking socket calls do not return until they have accomplished their task. When such calls are made in the main thread of an application, the application cannot process the user interface messages. This causes the user interface to "freeze" because the update, repaint and other messages cannot be processed until the blocking socket calls return control to the application's message processing loop.
He also wrote:
Blocking is NOT Evil
Blocking sockets have been repeatedly attacked without warrant. Contrary to popular belief, blocking sockets are not evil.
It is not an issue of all blocking socket components that they are unable to detect a client disconnect. There is no technical advantage on the side of non-blocking components in this area.

How do I use TIdTelnet to send commands?

I am trying to simulate the "new identity" button in Vidalia (the Tor GUI) from my program. I asked about that, and based on Rob Kennedy's answer, I tried this in my application:
IdTelnet1.Host:='127.0.0.1';
IdTelnet1.Port:=9051;
IdTelnet1.Connect(-1);
IdTelnet1.SendCmd('SIGNAL NEWNYM');
But it has not worked for me. Even after I send the command, I get the same proxy.
I am using Indy 9.
I don't know whether I don't know how to use TIdTelnet or don't know how to send that specific command.
You cannot use the SendCmd() method with TIdTelnet. TIdTelnet uses an internal reading thread that continuously reads from the socket (since Telnet is an asynchronous protocol that can receive data at any time). SendCmd() does its own internal reading to receive the sent command's response, and the two reading operations interfere with each other (this issue also exists in Indy 10's TIdCmdTCPClient component, for the same reason).
To send an outgoing command with TIdTelnet, you must use its SendCh() method to send each character individually (if you upgrade to Indy 10, TIdTelnet has a SendString() method which handles that for you) and then wait for the OnDataAvailable event to process the response as needed.
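For example, a small helper along the lines the answer describes. A sketch only: it assumes the Indy 9 TIdTelnet from the question, and that the control connection expects CRLF-terminated lines:

// Send a command one character at a time, then terminate the line.
procedure SendTelnetCommand(ATelnet: TIdTelnet; const ACmd: string);
var
  I: Integer;
begin
  for I := 1 to Length(ACmd) do
    ATelnet.SendCh(ACmd[I]);
  ATelnet.SendCh(#13); // CR
  ATelnet.SendCh(#10); // LF
end;

// Usage; the reply arrives later in the OnDataAvailable event:
SendTelnetCommand(IdTelnet1, 'SIGNAL NEWNYM');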
Unless TOR is actually using the real Telnet protocol (Telnet sequences and all), then you are better off using TIdTCPClient instead of TIdTelnet. TIdTelnet is a Telnet-specific client, not a general purpose TCP/IP client like TIdTCPClient is.

What is the best algorithm/technique to control client connections to the server?

I have over 50 clients connected to one server (a low-end server running Windows Server 2003). Every time there is a power failure or switch failure the clients disconnect from the server; the server might remain on during these incidents (if power backup is installed). When the clients come back they automatically detect the server and initiate a connection procedure, at which point the server starts dishing out the relevant data to the clients. It's at this point you realize some clients start freezing, because the server is not quick enough to dish out data and so it blocks the rest of the clients.
I have implemented a crude method to control this client storm, but I was asking whether you guys out there have better algorithms for this kind of task.
NB: I am using Asta socket components in a Delphi application, but I don't mind examples from different fields.
Similar to network collision-detection protocols, perhaps clients could wait a random period of time before initiating their connection at startup?
In addition to the random startup delay suggested by Bremen, implement some sort of "too busy; try again later" message in your protocol. Rejecting a client with a short message should not be a problem for 50, 100, or even 1000 clients. Have the clients respond by waiting a random delay and retrying with exponential backoff.
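A minimal sketch of that client-side retry loop, assuming Indy's TIdTCPClient (the delay constants are illustrative; call Randomize once at startup):

uses Windows, Math, SysUtils, IdTCPClient;

procedure ConnectWithBackoff(AClient: TIdTCPClient);
var
  Delay: Integer;
begin
  Delay := 1000 + Random(5000); // random initial delay, 1 to 6 seconds
  while True do
  begin
    Sleep(Delay);
    try
      AClient.Connect;
      Exit; // connected
    except
      on E: Exception do
        Delay := Min(Delay * 2, 60000); // double the wait, cap at 60 seconds
    end;
  end;
end;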
The solution depends on your preferences as well. Is it OK for you to drop connection requests or to send a busy message?
Another option is to start sending data to the clients in a round-robin manner. To this end you can have different threads responsible for sending data to different clients. The advantage of this is that none of the clients will be starved.

Delphi Network programming

I have a classic client/server (fat client and database) program written in Delphi 2006. When certain conditions are met in the client, I need to notify all the other clients very quickly. Up until now this has been done using UDP broadcasts, but this is no longer viable as clients now connect from outside the LAN and the UDP broadcast is limited to the local network.
I'm aware of the Indy libraries but am not really sure of which components to use and how to structure it. I'm guessing I'll need to have a server that the clients connect to which will receive and distribute the messages...? Any samples out there to get me started?
Are there any other component sets or technologies I should look at instead/as well?
The simple answer is that the standard protocols available in Delphi (and other tools) don't allow for notification in reverse. I looked into this for a project where I wanted to use SOAP. They all assume the client asks, the server responds, and that's it.
For me, the solution was the RemObjects SDK. This allows you to send notifications to clients, and the notification can carry any data you like (just like client-to-server calls). I use the SuperTCP connection myself, but it works with others too. It can still offer a SOAP interface for clients that must use it, but where you control both client and server it works extremely well.
There are a few really easy ways to do this with Delphi, although I am sure the RemObjects SDK works really well too.
1. Have a central server with a TIdTCPServer listening on it. Each client has a TIdTCPClient. They connect to the server and block on a read, waiting for the server to write. Once the server receives a notification via its listening socket, it broadcasts to each of the waiting clients (see the sketch below). This is pretty much immediate notification of all the clients.
2. Have a central server with a TIdTCPServer listening on it. Each client has a TIdTCPClient. The clients "ping" the server asking for updates at a regular interval (use a session token to maintain state). The frequency of the interval determines how quick the notification will be. When one of the clients needs to notify the others, it just notifies the server. The server then uses a message queue to make a list of all active client sessions and adds a notification for each. The next time each client connects, the server delivers its notification and removes it from the queue.
3. Maintain a session table in the database where each client regularly records that it has an active session, and removes itself when it disconnects. You will need a maintenance process that removes dead sessions. Then you have a message queue table that a client can write an update to, with one row for each currently active session. The other clients can regularly poll that table to see if there are any pending notifications for their session; if there are, they can read them, act on them and then remove them.
4. Some sort of peer-to-peer approach where the clients are aware of each other through information in the database, and then connect directly to each other to notify or ask for notifications (depending on firewall and NAT configurations). A little more complex, but possible.
Obviously the choice of implementation will depend on your setup and needs. Tuning will be necessary to achieve the best results.
The components you need for this are the TIdTCPServer (listener) and TIdTCPClient (sender). Both of which are in the Indy libraries in Delphi.
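For the first approach, the server-side broadcast might look roughly like this. A minimal sketch, assuming Indy 10 and a line-based protocol; the class and component names are illustrative, and Contexts.LockList() gives access to the currently connected clients:

procedure TNotifyServer.Broadcast(const AMsg: string);
var
  List: TList;
  I: Integer;
begin
  // Lock the list of connected clients while we walk it.
  List := IdTCPServer1.Contexts.LockList;
  try
    for I := 0 to List.Count - 1 do
      TIdContext(List[I]).Connection.IOHandler.WriteLn(AMsg);
  finally
    IdTCPServer1.Contexts.UnlockList;
  end;
end;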
ICS components from http://www.overbyte.be are great.
a.) Better compatibility than Indy
b.) Postcardware
Good examples and support. Use TClientSocket and TServerSocket.
The FirebirdSQL project uses the concept of notifications as server-to-client connections that send a string to the client. For this, the DB server uses another port, and requires the client to register its interest in receiving a certain type of notification through an API call.
You could use the same idea.
RabbitMQ should fit the bill. The server is free and ready to use; you just need a client side to connect, push/send messages, and get/pull notified messages.
Server: http://www.rabbitmq.com/download.html
Google for a client or implement one yourself.
Cheers
You should be able to use multicast UDP for the same purpose. The only difference is that every client has to join the multicast group.
http://en.wikipedia.org/wiki/IP_Multicast
http://en.wikipedia.org/wiki/Internet_Group_Management_Protocol
Edit: Just to clarify, multicast lets you join a given "group" associated with a multicast IP address. Any packet sent to that address will reach every client that has joined the group.
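As a rough illustration, joining a group and receiving datagrams with Indy's TIdIPMCastClient might look like this (a sketch, assuming Indy 10; the group address and port are illustrative):

procedure TForm1.StartMulticastListener;
begin
  IdIPMCastClient1.MulticastGroup := '239.0.0.1'; // illustrative group address
  IdIPMCastClient1.DefaultPort := 6000;
  IdIPMCastClient1.Active := True; // joins the group and starts listening
end;

procedure TForm1.IdIPMCastClient1IPMCastRead(Sender: TObject;
  const AData: TIdBytes; ABinding: TIdSocketHandle);
begin
  // AData holds one received datagram
end;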
You can look at the WeOnlyDo wodVPN component, which lets you do robust UDP hole punching and obtain port forwarding or a normal VPN (with a supplied network adapter), so you can connect two PCs that are both behind NAT.
I'm using this component in our communication program and it works very well.
