We have an application server developed with Delphi 2010 and Indy 10. This server receives more than 50 requests per second and it works well. But in some cases, Indy seems very obscure to me. Its components are good, but sometimes I found myself digging into the source code just to understand a simple thing. Indy lacks good documentation and good support.
The last thing I came across was a big problem for me: I must detect when a client disconnects non-gracefully (when the client crashes or shuts down, for instance, without telling the server that it is disconnecting), and Indy was not able to do that. If I want that, I will have to implement a mechanism like heartbeats, polling, or TCP keep-alive. I do not want to spend more time doing what is, at least in my view, the component's job. After some study, I found out that this is not Indy's fault; it is an issue with all blocking-socket components.
Now I am seriously thinking of changing the core of the server to another good suite. I must admit I am leaning toward a non-blocking socket. Based on that, I have some questions:
What do I gain from changing from blocking to non-blocking sockets?
Will I be able to detect client disconnects (non-graceful ones)?
What component suite has the best product? By best product I mean: fast, good support, good tools and easy to implement.
I know this must be a subjective question, but I really want to hear it from you. My first question is the one I care about most. I do not care if I have to pay 100, 500, 1000, or 10000 dollars, but I want a complete solution. For now, I am thinking about IP*Works.
EDIT
I think some of you did not understand what I want. I don't want to create my own socket component. I have been working with sockets for a long time and I am getting tired of it. Really.
And non-blocking sockets CAN detect client disconnects. That is a fact, and it is well documented all over the internet. A non-blocking socket checks the socket state for new incoming data all the time, which makes it possible to detect that the socket is no longer valid. This is not a heartbeat algorithm. A heartbeat algorithm is used on the client side: it periodically sends packets (aka keep-alives) to the server to tell it that the client is still alive.
EDIT
I am not making myself clear. Maybe because English is not my first language. I am not saying that it is possible to detect a dropped connection without trying to send or receive data on a socket. What I am saying is that every non-blocking socket is able to do that because it constantly tries to read from the socket for new incoming data. Why is that so hard to understand? If you download and run the IP*Works demos, in particular the echoserver and echoclient ones (both use TCP), you can test it for yourselves. I have already tested it, and it works as I expected. Even if you use the old TCPSocketServer and TCPSocketClient in non-blocking mode you will see what I mean.
"What do a benefit from changing from blocking to non-blocking sockets? Will I be able to detect client disconnects (non gracefully)?"
Just my two cents to get the ball rolling on this question - I'm not a socket EXPERT, but I do have a good deal of experience with them. If I'm mistaken, I'm sure someone will correct me... :-)
I assume that since you're running a server using blocking sockets with 50 connections per second, you have a threading mechanism in place to handle client requests. If so, you don't really stand to gain anything from non-blocking sockets. On the contrary, you will have to change your server logic to be event-driven, based on events fired in your main thread by the non-blocking sockets, or use constant polling to know what your sockets are up to.
Non-blocking sockets can't detect clients disconnecting without notification any more than blocking sockets can - they don't have telepathic powers... The nature of the TCP/IP 'conversation' between client and server is the same; blocking versus non-blocking only concerns your application's interaction with the socket connection conducting the 'conversation'.
If you need to purge dead connections, you need to implement a heartbeat or timeout mechanism on your socket (I've never seen a modern socket implementation that didn't support timeouts).
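As a sketch of what the timeout half can look like with Indy 10: ReadTimeout, ReadLn, and ReadLnTimedout are real Indy 10 members, while TMyServer, HandleCommand, the PING/PONG exchange, and the 30-second window are illustrative assumptions.

    uses
      IdTCPServer, IdContext;

    // TMyServer is assumed to be the form or data module hosting the
    // TIdTCPServer whose OnExecute event is wired to this handler.
    procedure TMyServer.ServerExecute(AContext: TIdContext);
    var
      Line: string;
    begin
      // If the client promises a heartbeat at least every 30 s,
      // silence for longer than that marks the connection as dead.
      AContext.Connection.IOHandler.ReadTimeout := 30000;
      Line := AContext.Connection.IOHandler.ReadLn;
      if AContext.Connection.IOHandler.ReadLnTimedout then
        AContext.Connection.Disconnect  // no data, no heartbeat: purge it
      else if Line = 'PING' then
        AContext.Connection.IOHandler.WriteLn('PONG')
      else
        HandleCommand(AContext, Line);  // hypothetical application handler
    end;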
What do I gain from changing from blocking to non-blocking sockets?
Increased speed, availability, and throughput (from my experience). I had an IndySockets client that was getting about 15 requests per second and when I went directly to asynchronous sockets the throughput increased to about 90 requests per second (on the same machine). In a separate benchmark test on a server at a data-center with a 30 Mbit connection I was able to get more than 300 requests per second.
Will I be able to detect client disconnects (non-graceful ones)?
That's one thing I haven't had to try yet, since all of my code has been on the client side.
What component suite has the best product? By best product I mean: fast, good support, good tools and easy to implement.
You can build your own socket client in a couple of days and it can be very robust and fast... much faster than most of the stuff I've seen "off the shelf". Feel free to take a look at my asynchronous socket client: http://codesprout.blogspot.com/2011/04/asynchronous-http-client.html
Update:
(Per Mikey's comments)
I'm asking you for a generic, technical explanation of how NBS increase throughput as opposed to a properly designed BS server.
Let's take a high-load server as an example: say your server is supposed to handle 1000 connections at any given time. With blocking sockets you would have to create 1000 threads, and even if they're mostly idle, the CPU will still spend a lot of time context switching. As the number of clients increases you will have to increase the number of threads in order to keep up, and the CPU will inevitably do more context switching. For every connection you establish with a blocking socket, you incur the overhead of spawning a new thread, and eventually you incur the overhead of cleaning up after the thread. Of course, the first thing that comes to mind is: why not use a thread pool? You can reuse the threads and reduce the overhead of creating and cleaning up threads.
Here is how this is handled on Windows (hence the .NET connection): sure you could, but the first thing you'll notice with the .NET ThreadPool is that it has two types of threads and it's not a coincidence: user threads and I/O completion port threads. Asynchronous sockets use the IO completion ports which "allows a single thread to perform simultaneous I/O operations on different handles, or even simultaneous read and write operations on the same handle."(1) The I/O completion port threads are specifically designed to handle I/O in a much more efficient way than you would ever be able to achieve if you used the user threads in ThreadPool, unless you wrote your own kernel-mode driver.
"The completion port uses some special voodoo to make sure only a specific number of threads can run at once — if one thread blocks in kernel-mode, it will automatically start up another one."(2)
There are other advantages also: "in addition to the nonblocking advantage of the overlapped socket I/O, the other advantage is better performance because you save a buffer copy between the TCP stack buffer and the user buffer for each I/O call." (3)
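Back in Delphi terms, the blocking-side equivalent of pooling is Indy's scheduler components. A minimal sketch, assuming Indy 10's TIdSchedulerOfThreadPool; the pool sizes and port are arbitrary:

    uses
      IdTCPServer, IdSchedulerOfThreadPool;

    procedure ConfigureServer(Server: TIdTCPServer);
    var
      Scheduler: TIdSchedulerOfThreadPool;
    begin
      Scheduler := TIdSchedulerOfThreadPool.Create(Server);
      Scheduler.PoolSize := 50;      // threads kept alive and reused
      Scheduler.MaxThreads := 1000;  // hard cap on concurrent connections
      Server.Scheduler := Scheduler;
      Server.DefaultPort := 8080;
      Server.Active := True;
    end;

This reuses a fixed set of threads instead of spawning one per connection, which addresses the creation/cleanup overhead, though not the context-switching cost of many concurrently blocked threads.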
I have been using the Indy and Synapse TCP libraries with good results for some years now, and have not found any showstoppers in them. I use the libraries in threads - client and server side - and stability and performance have not been a problem. (Six thousand request and response messages per second and more, with the server running on the same system, are typical.)
Blocking sockets are very useful if the protocol is more advanced than a simple 'send a string / receive a string'. Non-blocking sockets cause tighter coupling of message protocol handlers with the socket read/write logic, so I quickly moved away from non-blocking code.
No library can overcome the limitations of the TCP/IP protocol regarding detection of connection loss. Only trying to read or send data can tell whether the connection is still present.
In Windows, there is a third option, which is overlapped I/O. Non-blocking sockets are essentially a model using Windows messages, developed to prevent single-threaded GUI apps from becoming "blocked" while waiting for data. A modern application IMHO would be better designed using threads and overlapped I/O.
See for example http://support.microsoft.com/kb/181611
Aahhrrgghh - the myth of being able to always detect "dropped" connections. If you pull the power on a machine with a client connection, then the server cannot tell, without sending data, that the connection is "dead". This is by design of the TCP protocol. Don't take my word for it - read this article (Detection of Half-Open (Dropped) TCP/IP Socket Connections).
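What you can do is ask the TCP stack to probe the peer for you. A minimal sketch, assuming Indy 10 on Windows, where TIdSocketHandle exposes SetKeepAliveValues; the timings are arbitrary assumptions:

    uses
      IdContext;

    procedure EnableKeepAlive(AContext: TIdContext);
    begin
      // Probe after 10 s of idle time, then every 3 s; the OS reports the
      // connection as dead after enough unanswered probes, so the next
      // read or write on it fails instead of hanging.
      AContext.Binding.SetKeepAliveValues(True, 10000, 3000);
    end;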
This article explains the main differences between blocking and non-blocking:
Introduction to Indy, by Chad Z. Hower
Pros of Blocking
Easy to program - Blocking is very easy to program. All user code can exist in one place, and in a sequential order.
Easy to port to Unix - Since Unix uses blocking sockets, portable code can be written easily. Indy uses this fact to achieve its single source solution.
Work well in threads - Since blocking sockets are sequential they are inherently encapsulated and therefore very easily used in threads.
Cons of Blocking
User Interface "Freeze" with clients - Blocking socket calls do not
return until they have accomplished
their task. When such calls are made
in the main thread of an application,
the application cannot process the
user interface messages. This causes
the User Interface to "freeze" because
the update, repaint and other messages
cannot be processed until the blocking
socket calls return control to the
applications message processing loop.
He also wrote:
Blocking is NOT Evil
Blocking sockets have been repeatedly attacked without warrant. Contrary to popular belief, blocking sockets are not evil.
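The usual cure for the User Interface "freeze" listed among the cons above is to keep the blocking calls out of the main thread. A minimal sketch, assuming Indy's TIdTCPClient; TReaderThread and TLineEvent are illustrative names:

    uses
      Classes, IdTCPClient;

    type
      TLineEvent = reference to procedure(const ALine: string);

      TReaderThread = class(TThread)
      private
        FClient: TIdTCPClient;
        FOnLine: TLineEvent;
      protected
        procedure Execute; override;
      public
        constructor Create(AClient: TIdTCPClient; AOnLine: TLineEvent);
      end;

    constructor TReaderThread.Create(AClient: TIdTCPClient; AOnLine: TLineEvent);
    begin
      inherited Create(False);
      FClient := AClient;
      FOnLine := AOnLine;
    end;

    procedure TReaderThread.Execute;
    var
      Line: string;
    begin
      // A closed connection raises an Indy exception, ending the thread.
      while not Terminated do
      begin
        Line := FClient.IOHandler.ReadLn;  // blocks here, not in the UI thread
        Synchronize(
          procedure
          begin
            FOnLine(Line);  // marshal the data to the main thread for UI work
          end);
      end;
    end;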
Being unable to detect a client disconnect is not an issue specific to blocking-socket components. There is no technical advantage on the side of non-blocking components in this area.
Related
I have designed, programmed and implemented a server application based on the overlapped I/O network programming paradigm on the Windows operating system. It works with the expected performance, but I have observed that for arbitrarily selected clients, sometimes nothing happens and the data transfer seems to freeze. It does not even generate any TCP/IP error condition such as a timeout. In this case the server keeps that connection as an active connection, which in turn is a needless resource reservation. What may be the reason for this? As a resolution, how can I detect such connections? How can I reduce such situations?
Thanks
What may be the reason for this?
The client has stopped sending or receiving, it isn't clear from your question which.
As a resolution, how can I detect such connections? How can I reduce such situations?
Use a read timeout if you're reading, or non-blocking mode with a timed select() if you're writing.
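For the reading half, a minimal Indy 10 sketch; EIdReadTimeout is Indy's timeout exception, and the 5-second window and buffer size are arbitrary assumptions:

    uses
      IdTCPClient, IdGlobal, IdExceptionCore;

    procedure ReadWithTimeout(Client: TIdTCPClient);
    var
      Buf: TIdBytes;
    begin
      Client.IOHandler.ReadTimeout := 5000;  // 5 s
      try
        Client.IOHandler.ReadBytes(Buf, 128, False);  // blocks at most 5 s
      except
        on EIdReadTimeout do
          Client.Disconnect;  // nothing arrived in time: treat as stalled
      end;
    end;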
I have a server application that receives a special TCP packet from a client and needs to react to it as soon as possible by sending a high-level ACK to the client (the TCP ACK won't suit my needs).
However, this server is really network intensive and sometimes the packet takes too long to be sent (around 200 ms on a local network, where a simple server application can send it in less than 1 ms).
Is there a way to mark this packet with a high-priority tag or something like that in Delphi? Or maybe with the Win32 API?
Thanks in advance.
EDIT
Thanks for all the answers so far. I'll add some details. My product has the following setup: there are several devices mounted on vehicles with WiFi connectivity. When they arrive at the garage, those devices connect to my server and start to transmit data.
Because of hardware limitations, I implemented a high-level ACK to make the device aware that the last packet arrived successfully (please don't argue about this - the data may be broken even if I got a correct TCP ACK). However, if I use my server software, which communicates with a remote database, to issue this ACK, I get very long delays (>200 ms). If I use a dedicated program to do this task, I get small latencies (<1 ms). So I was wondering if I could just tell Windows to send those special packets first, as it seems to me that this packet is being delayed while the database-related ones get delivered.
That's the motivation behind my question.
EDIT 2
As requested: this is legacy software and I'm using the legacy dclsockets140.bpl package and Delphi 2010 (14.0.3593.25826).
IMO it is very difficult to achieve this. There is a lot of equipment and software involved. First of all, if you communicate between two different OSes you get latency. Second, software and hardware firewalls, antivirus programs - everything is filtering/delaying your packets.
You can also try to 'hack' the system (this requires very good knowledge of how frames/segments are packed and sent, flow control, congestion, etc.), either by altering it from code or by using tools like http://half-open.com/ or others.
In short, passing the MSG_OOB flag to the send function marks the data as "urgent". A detailed discussion of OOB in the context of Windows Sockets implementation specifics is available here.
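A minimal sketch of that call from Delphi with raw Winsock; S is assumed to be an already-connected TSocket. Note that most stacks deliver only a single urgent byte, and intermediaries often mishandle OOB data, so test it on your network first:

    uses
      SysUtils, WinSock;

    procedure SendUrgentByte(S: TSocket; B: Byte);
    begin
      // MSG_OOB marks the byte as urgent in the TCP stream
      if send(S, B, SizeOf(B), MSG_OOB) = SOCKET_ERROR then
        raise Exception.CreateFmt('send(MSG_OOB) failed: %d', [WSAGetLastError]);
    end;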
I am using TIdCmdTCPClient and TIdCmdTCPServer. Suddenly I find that I might like to have bi-directional communication.
What would be best? Should I possibly use some other components? If so, which? Or should I kludge it and have the 'client' poll the 'server' to ask if it wishes to communicate anything?
This is a very small system. Two clients and ten servers, with a burst of one transaction every 30 to 60 seconds for a few minutes once a day, so the overhead of polling is inconsequential.
I'm just wondering if there is a 'correct' way.
Update: this really is an incredibly simple system. Very little traffic and all of it simple. All transmissions are an indication of event type and an optional single parameter:
<event type> [ <parameter>] e.g. "HERE_IS_SOME_DATA 42"
This can be sent in both directions; however, there is no "reply" as such. Just fire off a message (and hope that it got there)? Receive an ACK with no data? Does not catching an exception indicate that the message was successfully sent?
Would it be possible (or would it be overkill) to use two TIdCmdTCPServers?
Both TIdCmdTCPClient and TIdCmdTCPServer continuously poll their socket endpoints for inbound data during the lifetime of the connection. You do not have to do anything extra for that. So, as soon as a TIdCmdTCPClient connects to the TIdCmdTCPServer, both components will initially be in a reading state until one of them sends a command to the other.
Now, there is a problem with doing that: as soon as either component sends that first command, the receiving component will interpret it as a command and send back a reply, which the other component will interpret as a command and send back a reply, which will be interpreted as a command and trigger another reply, and so on, causing an endless cycle of replies back and forth. For that reason, it is not wise to use TIdCmdTCPClient and TIdCmdTCPServer together. You should either use TIdTCPClient with TIdCmdTCPServer, or TIdCmdTCPClient with TIdTCPServer. Depending on what exactly your protocol looks like, you may have to forgo TIdCmdTCPClient and TIdCmdTCPServer altogether and just use TIdTCPClient with TIdTCPServer, so you have more control over reading and writing on both ends. It is hard to answer with actual code without first knowing what the communication protocol should look like.
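That said, here is a rough sketch of the TIdTCPClient + TIdCmdTCPServer pairing, using the HERE_IS_SOME_DATA message from the question; the port, reply code, and class names are illustrative:

    uses
      IdCommandHandlers, IdCmdTCPServer, IdTCPClient;

    type
      TEventServer = class
      public
        procedure HandleData(ASender: TIdCommand);
        procedure Setup(Server: TIdCmdTCPServer);
      end;

    procedure TEventServer.HandleData(ASender: TIdCommand);
    begin
      // ASender.Params[0] carries the optional parameter, e.g. '42'
      ASender.Reply.SetReply(200, 'OK');
    end;

    procedure TEventServer.Setup(Server: TIdCmdTCPServer);
    var
      Handler: TIdCommandHandler;
    begin
      Handler := Server.CommandHandlers.Add;
      Handler.Command := 'HERE_IS_SOME_DATA';
      Handler.OnCommand := HandleData;
      Server.DefaultPort := 6000;
      Server.Active := True;
    end;

    // Client side: a plain TIdTCPClient just writes the command line and
    // reads the reply line itself.
    procedure SendEvent;
    var
      Client: TIdTCPClient;
    begin
      Client := TIdTCPClient.Create(nil);
      try
        Client.Host := '127.0.0.1';
        Client.Port := 6000;
        Client.Connect;
        // if the server's Greeting is configured, read that line first
        Client.IOHandler.WriteLn('HERE_IS_SOME_DATA 42');
        Writeln(Client.IOHandler.ReadLn);  // e.g. '200 OK'
      finally
        Client.Free;
      end;
    end;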
A single TCP socket connection can be used in two directions. The server can send data asynchronously to the client at any time. It is up to the client, however, to read the socket; for asynchronous processing this is done in a listener thread which reads from the socket and synchronizes incoming data operations with the main worker thread.
An example use case in the Indy components is the Telnet client component (TIdTelnet) which has a receive thread listening for server messages.
But you also asked about the 'correct' way - and then the answer depends on other factors such as network stability, guaranteed delivery and how to handle temporary server outages. In enterprise environments, one central messaging hub is preferred in many use cases, so that all parties connect only to this central server which is only responsible for reliable message delivery, and keeps messages until the recipient is available.
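For the server-to-client direction with Indy, here is a sketch of pushing a message to every connected client by walking the server's context list; Broadcast is an illustrative name, and in Delphi 2010-era Indy 10, Contexts.LockList returns a TList:

    uses
      Classes, IdTCPServer, IdContext;

    procedure Broadcast(Server: TIdTCPServer; const Msg: string);
    var
      List: TList;
      I: Integer;
    begin
      List := Server.Contexts.LockList;  // freezes the client list
      try
        for I := 0 to List.Count - 1 do
          TIdContext(List[I]).Connection.IOHandler.WriteLn(Msg);
      finally
        Server.Contexts.UnlockList;
      end;
    end;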
You can download the INDY 10 TCP server demo sample code here.
I run a server monitoring site for a video game. It monitors thousands of servers (currently 15,000 or so).
My current setup is a bit janky, and I want to improve it. Currently I use cron to submit every server to a resque job queue. I refill the queue just as soon as it's empty, essentially creating a constantly working queue. The job will then simply try and open a socket connection to the server ip and port in question, and mark it down if it fails to connect.
I have 20 workers, and it gets the job done in about 5 minutes. I feel that this should be able to go MUCH faster.
Is there a better, quicker way of doing this?
So, what you are doing currently, I assume, is making a TCP socket connection which pings your game server. The problem with using TCP is that it is a lot slower than UDP.
What I would advise instead is creating a UDP socket that just checks for the game server port.
Here's a nice quote from another question:
> UDP is really faster than TCP, and the simple reason is because
> of its non-existent acknowledge packet (ACK) that permits a continuous
> packet stream, instead of TCP that acknowledges each packet.
Read this question here: UDP vs TCP, how much faster is it?
From my experience with game servers, the majority if not 100% of modern game servers allow you to query them on a UDP socket. The server will then respond with details about itself. (I used to host a lot of servers myself, too.)
So basically, make sure that you are using UDP rather than TCP...
Example Query
I'm just searching for this information now and will update my answer when I find a source. What game is it that you are trying to get information for?
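In the meantime, here is a generic sketch of a UDP probe using Indy's TIdUDPClient. The payload shown is the Source engine's A2S_INFO query as an example (other games use different query formats), and ToBytes/Indy8BitEncoding are Indy 10 helpers whose exact signatures may vary by version:

    uses
      IdGlobal, IdUDPClient;

    function ServerResponds(const AHost: string; APort: Word): Boolean;
    var
      UDP: TIdUDPClient;
    begin
      UDP := TIdUDPClient.Create(nil);
      try
        UDP.Host := AHost;
        UDP.Port := APort;
        // Example payload: Source engine A2S_INFO query
        UDP.SendBuffer(ToBytes(#$FF#$FF#$FF#$FF'TSource Engine Query'#0,
          Indy8BitEncoding));
        Result := UDP.ReceiveString(1000) <> '';  // any reply within 1 s = up
      finally
        UDP.Free;
      end;
    end;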
Use typical solutions for typical tasks. This case is about availability detection every n seconds - a daily sysadmin task. It should not be done over ICMP; use SNMP over UDP. Complete solutions include Nagios/Cacti/Zabbix, which have built-in functionality to combine everything about your servers - load average, HDD, RAM, IO and network - as well as availability detection.
You don't mention how you are making the socket connections, but you might want to try using ruby curl bindings: curb instead of net/http.
This will typically be much faster.
Ok I know this is pretty broad, but let me narrow it down a bit. I've done a little bit of client-server programming but nothing that would need to handle more than just a couple clients at a time. So I was wondering design-wise what the most mainstream approach to these servers is. And if people could reference either tutorials, books, or ebooks.
Haha, OK, I didn't really narrow it down. I guess what I'm looking for is a simple but literal example of how the server-side program is set up.
The way I see it: the client sends a command; the server receives the command and puts it into a queue; the server has either a single dedicated thread or a thread pool that constantly polls this queue, then sends the appropriate response back to the client. Is non-blocking I/O often used?
I suppose just tutorials, time and practice are really what I need.
EDIT: Thanks for your responses! Here is a little more of what I'm trying to do, I suppose.
This is mainly for the purpose of learning so I'd rather steer away from use of frameworks or libraries as much as I can. Take for example this somewhat made up idea:
There is a client program that does some function and constantly streams the output to a server (there can be many of these clients); the server then creates statistics and stores most of the data. And let's say there is an admin client that can log into the server; if any clients are streaming data to the server, it in turn would stream that data to each of the admin clients connected.
This is how I envision the server program logic:
The server would have 3 threads for managing incoming connections (one for each port it listens on), then spawn a thread to manage each connection:
1) ClientConnection, which would basically just receive output (which we'll just say is text)
2) AdminConnection, which would be for sending commands between the server and an admin client
3) AdminDataConnection, which would basically be for streaming client output to the admin client
When data comes in from a client to the server, the server parses what is relevant and puts that data in a queue, let's say adminDataQueue. In turn there is a thread that watches this queue and every 200 ms (or whatever) checks it to see if there is data; if there is, it cycles through the AdminDataConnections and sends the data to each.
Now for the AdminConnection: this would be for any commands or direct requests for data. So you could request statistics; the server side would receive the command for statistics, then send a command saying "incoming statistics", and immediately after that send a statistics object or the data.
As for the AdminDataConnection, it is just the output from the clients with maybe a few simple commands intertwined.
Aside from the bandwidth concerns and the logical problem of all the client data being funneled to each of the admin clients, what sort of problems would arise from this design due to scaling issues (again neglecting bandwidth between clients and server, and between admin clients and server)?
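For what it's worth, the adminDataQueue part of that design could look like the following Delphi sketch; TAdminBroadcaster and its callback are illustrative names, and any thread-safe queue would do:

    uses
      SysUtils, Classes, SyncObjs, Generics.Collections;

    type
      TAdminBroadcaster = class(TThread)
      private
        FLock: TCriticalSection;
        FQueue: TQueue<string>;
        FOnBroadcast: TProc<string>;  // fan-out to the admin connections
      protected
        procedure Execute; override;
      public
        constructor Create(AOnBroadcast: TProc<string>);
        destructor Destroy; override;
        procedure Push(const AData: string);  // called by connection threads
      end;

    constructor TAdminBroadcaster.Create(AOnBroadcast: TProc<string>);
    begin
      inherited Create(False);
      FLock := TCriticalSection.Create;
      FQueue := TQueue<string>.Create;
      FOnBroadcast := AOnBroadcast;
    end;

    destructor TAdminBroadcaster.Destroy;
    begin
      FQueue.Free;
      FLock.Free;
      inherited;
    end;

    procedure TAdminBroadcaster.Push(const AData: string);
    begin
      FLock.Enter;
      try
        FQueue.Enqueue(AData);
      finally
        FLock.Leave;
      end;
    end;

    procedure TAdminBroadcaster.Execute;
    begin
      while not Terminated do
      begin
        Sleep(200);  // the 200 ms poll interval from the design above
        FLock.Enter;
        try
          while FQueue.Count > 0 do
            FOnBroadcast(FQueue.Dequeue);
        finally
          FLock.Leave;
        end;
      end;
    end;

In production you would likely dequeue into a local list before fanning out, so a slow admin socket does not hold the lock while writing.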
There are a couple of basic approaches to doing this.
Worker threads or processes. Apache does this in most of its multiprocessing modes. In some versions of this, a thread or process is spawned for each request when the request arrives; in other versions, there's a pool of waiting threads which are assigned work as it arrives (avoiding the fork/thread create overhead when the request arrives).
Asynchronous (non-blocking) I/O and an event loop. This is basically using the UNIX select call (although both FreeBSD and Linux provide more optimized alternatives such as kqueue). lighttpd uses this approach and is able to achieve very high scalability, but any in-server computation blocks all other requests. Concurrent dynamic request handling is passed on to separate processes (via CGI) or waiting processes (via FastCGI or its equivalent).
I don't have any particular references handy to point you to, but looking at the web sites of open source projects that use the different approaches, for information on their design, wouldn't be a bad start.
In my experience, building a worker thread/process setup is easier when working from the ground up. If you have a good asynchronous framework that integrates fully with your other communications tasks (such as database queries), however, it can be very powerful and frees you from some (but not all) thread locking concerns. If you're working in Python, Twisted is one such framework. I've also been using Lwt for OCaml lately with good success.