I have created a simple Delphi service application that uses Indy's TIdMappedPortTCP to redirect connections from one port to another port on a remote server.
The application does not process the traffic in any way; it only redirects the traffic to a remote server using the DefaultPort, MappedPort and MappedHost properties.
I need to know the maximum number of connections it can handle and which factors affect performance, such as the OS, architecture (32-bit/64-bit), etc.
My current tests show that TIdMappedPortTCP can handle 1500 connections with over 1500 threads.
I expect the number of simultaneous connections to increase to 10,000 or even 20,000.
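For context, the sketch below (C# rather than Delphi/Indy, with placeholder host and port values) illustrates the thread-per-connection relay model that a mapped-port proxy like this uses, and why the connection count and the thread count track each other roughly 1:1. It is only an illustration, not Indy's actual implementation.

```csharp
// Rough sketch of a thread-per-connection TCP port relay (not Indy code).
// DefaultPort, MappedHost and MappedPort are placeholder values.
using System.Net;
using System.Net.Sockets;
using System.Threading;

class PortMapperSketch
{
    const int DefaultPort = 8080;                   // local listening port (assumed)
    const string MappedHost = "remote.example.com"; // hypothetical remote server
    const int MappedPort = 80;

    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, DefaultPort);
        listener.Start();
        while (true)
        {
            TcpClient inbound = listener.AcceptTcpClient();
            // One worker thread per mapped connection, hence ~N threads for N connections.
            new Thread(() => Relay(inbound)) { IsBackground = true }.Start();
        }
    }

    static void Relay(TcpClient inbound)
    {
        using (inbound)
        using (var outbound = new TcpClient(MappedHost, MappedPort))
        {
            Socket a = inbound.Client, b = outbound.Client;
            var buffer = new byte[8192];
            while (true)
            {
                // Poll both directions from the single per-connection thread.
                if (a.Poll(1000, SelectMode.SelectRead))
                {
                    int n = a.Receive(buffer);
                    if (n == 0) break;                  // client closed
                    b.Send(buffer, n, SocketFlags.None);
                }
                if (b.Poll(0, SelectMode.SelectRead))
                {
                    int n = b.Receive(buffer);
                    if (n == 0) break;                  // remote server closed
                    a.Send(buffer, n, SocketFlags.None);
                }
            }
        }
    }
}
```

Each such thread carries its own stack (1 MB reserved by default on Windows), which is usually the first resource to watch as the thread count climbs.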
## Update 1 ##
My tests show that the application uses about 50,000 KB (roughly 50 MB) for 1000 TCP connections (1000 threads), i.e. around 50 KB per connection.
## Update 2 ##
The application is currently using over 6000 threads (6000 TCP connections), and memory usage is only 250 MB, with no performance issues!
Related
We have a server that allows thousands of TCP connections to connect on port 52999 and exchange data. Each connection has its own thread, so there are thousands of threads.
If I keep the ThreadPool's default minimum thread setting, the server can only accept two connections per second. When there were 200 connections waiting to be accepted by the server, it took more than a minute, and most of the connections timed out.
So I did this:
ThreadPool.SetMinThreads(5000, 1000);
After I did this, the server was able to accept 20 connections per second.
Initially, the TCP server was started in Global.asax.cs in an ASP.NET MVC application on IIS. Everything was fine.
Then I decided to run the TCP server as a Windows Service. Once I changed over, ThreadPool.SetMinThreads returned false.
Why?
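Not part of the original question, but a minimal sketch of the call and its documented failure mode: SetMinThreads returns false when a requested value is negative or exceeds the corresponding maximum reported by GetMaxThreads, and those maximums depend on the host process and runtime configuration, so checking them is a reasonable first step.

```csharp
// Minimal sketch: raise the ThreadPool minimums and report why the call may fail.
using System;
using System.Threading;

class ThreadPoolSketch
{
    static void Main()
    {
        ThreadPool.GetMaxThreads(out int maxWorker, out int maxIo);
        Console.WriteLine($"Max worker threads: {maxWorker}, max completion-port threads: {maxIo}");

        bool ok = ThreadPool.SetMinThreads(5000, 1000);
        if (!ok)
        {
            // Per the documentation, the call fails if a requested minimum is
            // negative or greater than the corresponding maximum above.
            Console.WriteLine("SetMinThreads failed; compare the request against the maximums printed above.");
        }
    }
}
```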
Is it possible to run a WebSocket connection test with 100k users on a single node? If yes, how?
I am wondering how the 1-million-connections-per-node test was carried out, as claimed on the official EMQX site,
if the port limit of the OS itself is 65536.
The server only needs one port; each TCP connection is identified by the full (client IP, client port, server IP, server port) tuple, so the listening port is not the limit.
The client-side port limit of the operating system can be worked around (for example, by using multiple network cards or IP addresses on the client machines). You can use JMeter to test the connections; in order to simulate the clients, multiple servers may be required. Here is an official EMQ video which gives some of the operating procedure:
https://www.bilibili.com/video/BV1yp4y1S7zb
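Not from the video, but a hedged C# sketch of the client-side trick mentioned above: binding each outgoing socket to one of several local IP addresses so a single load generator is not capped at roughly 64K ephemeral ports. The local addresses and broker endpoint are hypothetical, and the MQTT/WebSocket handshake itself is not shown.

```csharp
// Sketch: spread outgoing connections across multiple local IP addresses.
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

class MultiHomedClientSketch
{
    static void Main()
    {
        IPAddress[] localAddresses =
        {
            IPAddress.Parse("192.168.1.10"),   // assumed addresses bound to the
            IPAddress.Parse("192.168.1.11")    // load generator's network cards
        };
        var broker = new IPEndPoint(IPAddress.Parse("203.0.113.5"), 1883); // hypothetical broker

        var sockets = new List<Socket>();
        for (int i = 0; i < 100_000; i++)
        {
            var s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            // Port 0 lets the OS pick a free ephemeral port; rotating the local
            // address gives each IP its own ~64K port range.
            s.Bind(new IPEndPoint(localAddresses[i % localAddresses.Length], 0));
            s.Connect(broker);
            sockets.Add(s);
        }
    }
}
```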
There is a legacy implementation (to an extent company-proprietary, written in Pascal and C with some Java macros) which processes TCP-socket-based requests from TCP client applications. It supports multiple client applications (around 5K) connecting over TCP sockets; however, it only supports a single socket connection to the backend (database). There are two instances of the server, so in total it supports 10K client applications over two TCP socket connections to the database. All database-related communication happens synchronously over the single socket connection.

There are massive issues in this application, especially high RTT (round-trip time) and occasional outages due to back-pressure. We have an ops team for such issues; they mostly resolve them by restarting the server. We have hardly any people on our team who know the coding details of this application, and there is not much documentation. As this is a critical application, we cannot afford to mess with it, so we don't want to touch the code, at least for now. This becomes even more critical due to a shift in business priorities: there is a need to add another 30K client applications from another business to this setup.
The task before us is to integrate it with another application that is based on a microservice architecture, with middleware using RabbitMQ. This is a customer-facing application sensitive to higher QoS; we cannot afford outages and downtime in it. As part of this integration, there is a need to process the request messages coming from the above legacy application over the TCP socket before passing them to the database. In other words, we want to introduce a component that processes the legacy application's requests before handing them over to the database. This additional processing is part of our client's requirements. Some of the processing is very intensive and resource-hungry in terms of CPU cycles, memory and socket I/O. As a result, there is a chance such processing may lead to server downtime and higher RTT. This layer of ours is very flexible: we can easily add more servers or replace faulty ones. But that doesn't sound very efficient in this integration, as we are limited by the legacy application's single socket connection, so in total we can have at most 2 (+6 for the new 30K client applications) servers. This is our cause of concern.
I want to know what options are available to address the high-availability, scalability and latency issues of such an integration. Especially given the limitation of a single TCP socket connection, how can we make this integration efficient, so that it can handle back-pressure, provide better application uptime, etc.?
We were thinking of leveraging RabbitMQ, a Layer 4 load balancer (like HAProxy or NGINX), IPVS, NAT, etc., but all of these lead toward making changes to the legacy code (or are not very efficient techniques), which we don't want.
I'm building a chat server with .NET. I have tried opening about 2000 client connections, and my Linksys WRT54GL router (with Tomato firmware) drops dead each time. The same thing happens when I have several connections open in my Azureus BitTorrent client.
I have three questions:
Is there a limit on the number of open sockets I can have in Windows Server 2003?
Is the Linksys router the problem? If so is there better hardware recommended?
Is there a way to possibly share sockets so that I can handle more open client connections with fewer resources?
As I've mentioned before, Raymond Chen has good advice on this sort of question: if you have to ask about OS limits, you're probably doing something wrong. TCP only allows for a maximum of 65535 ports, and many of these are reserved and not available for general use. I would suggest that your messaging protocols need to be thought out in more detail so that OS limits are not an issue. I'm sure there are many good resources describing such systems, and there are certainly people here who would have good ideas about it.
EDIT: I'm going to put down some thoughts on implementing a scalable chat server.
First off, designate a single port on the server for clients to communicate through. Whenever a client needs to update the chat state (a new user message, for example), do the following:
create message packet
open connection to server
send packet
close connection
The server then does the following:
connection request received
get packet
close connection
process packet
for each client that requires updating
open connection to the client
send update packet
close connection
When a new chat session is started, the client starting the session sends a 'new session' message to the server with the client's user details and IP address for responses. The server creates a new chat session and responds with the session ID. The client then sends packets containing the messages the user types; the server processes them and forwards each message to the other clients in the same session. When a client leaves the chat, it sends an 'end session' message to the server. The server removes the client from the session and destroys the session when there are no more clients in it.
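A minimal sketch of the connect, send, close flow described above (C#, with an assumed one-packet-per-connection text format); the session bookkeeping and the fan-out to other clients are only indicated in comments.

```csharp
// Sketch of the connect-send-close chat protocol; the message format is assumed.
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;

class ChatSketch
{
    // Client side: open a connection, send one packet, close.
    static void SendMessage(IPEndPoint server, string sessionId, string text)
    {
        using var client = new TcpClient();
        client.Connect(server);
        byte[] packet = Encoding.UTF8.GetBytes($"{sessionId}|{text}");
        client.GetStream().Write(packet, 0, packet.Length);
    } // disposing the TcpClient closes the connection

    // Server side: accept, read the whole packet, close, then process.
    static void RunServer(int port)
    {
        var listener = new TcpListener(IPAddress.Any, port);
        listener.Start();
        while (true)
        {
            using TcpClient conn = listener.AcceptTcpClient();
            using var buffer = new MemoryStream();
            conn.GetStream().CopyTo(buffer);  // read until the client closes its side
            string packet = Encoding.UTF8.GetString(buffer.ToArray());
            // Process the packet: look up the session, then open a connection to
            // each other client in that session and send the update (not shown).
        }
    }
}
```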
Hope that gets you thinking.
I have found some answers to this that I feel I should share:
Windows Server 2003 has a limit on the number of ports that may be used, but this is configurable via a registry tweak that changes the MaxUserPort setting from 5000 to, say, 65534 (the maximum).
Exploring further, I realized that the 64K port restriction is actually per IP address; hence a single server can obtain many more ports, and hence TCP connections, by either installing multiple network cards or binding more than one IP address to a network card. That way, you can scale your system to handle n x 64K ports.
For days I had a problem with the available sockets on my Windows 7 machine. After reading some articles about socket leaks in Windows 7, I applied a Windows patch; nothing changed.
Below is an article describing Windows connection problems in great detail:
http://technet.microsoft.com/en-us/magazine/2007.12.network.aspx
For me, the following worked:
Open Regedit
Under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters, create TcpNumConnections (REG_DWORD, decimal value 500; this can be set according to your needs) and EnableConnectionRateLimiting (REG_DWORD, value 0)
Under the same HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters key, create MaxUserPort (REG_DWORD, decimal value 65534)
Restart Windows
I am a newbie in network-related aspects. I have a few basic questions related to the TCP/IP protocol and networking.
If a network switch (in a LAN) between two PCs running a client and a server (that are communicating through asynchronous sockets) is powered down, will the client and server be notified that the socket connection is no longer active? The client and server are running on Windows XP and are coded in C#.
Does network topology play a role in the case of a half-open connection between the socket client and the socket server? For example, will a disconnect on either one or both sides be notified to the other end, and does that depend on the network topology?
Thanks in advance.
A network element such as a router/hub/switch does not actively cause anything to happen on the TCP layer if it goes down. The operating system might notice that the physical layer is down and error out all sockets bound on that network card, if it is a network element directly connected to the PCs that breaks; this will vary among operating systems, network cards and other things. Other than that, in order to detect that the connection has been severed, you'll have to send something and rely on the TCP timeout mechanisms to error out. This can be done implicitly by enabling TCP keepalives on the connection.
A disconnect on one side will only be noticed if those messages reach the other side. If the network topology changes or something breaks in the middle of the connection in such a way that messages no longer reach the other end, a disconnect won't be noticed. (NAT gateways are a big source of problems like this: they might time out a TCP connection they're tracking, and you'll never know the connection is no longer valid unless you try to write something to the connection, or enable TCP keepalives.) Note that most networking APIs require that you read from the connection to discover that the other end has closed it, assuming those "close" messages actually reach your side.
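As a concrete illustration of the keepalive suggestion, here is a minimal C# sketch; the probe timings are assumed values, and the SIO_KEEPALIVE_VALS tuning shown is Windows-specific.

```csharp
// Sketch: enable TCP keepalive probes on a socket so a severed path eventually
// errors out the connection instead of leaving it half-open indefinitely.
using System;
using System.Net.Sockets;

class KeepAliveSketch
{
    static void EnableKeepAlive(Socket socket)
    {
        socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

        // Windows-specific tuning via SIO_KEEPALIVE_VALS:
        // { onoff (u_long), keepalivetime (ms), keepaliveinterval (ms) }
        uint onOff = 1, timeMs = 30_000, intervalMs = 1_000;   // assumed values
        byte[] keepAliveValues = new byte[12];
        BitConverter.GetBytes(onOff).CopyTo(keepAliveValues, 0);
        BitConverter.GetBytes(timeMs).CopyTo(keepAliveValues, 4);
        BitConverter.GetBytes(intervalMs).CopyTo(keepAliveValues, 8);
        socket.IOControl(IOControlCode.KeepAliveValues, keepAliveValues, null);
    }
}
```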