ThreadPool.SetMinThreads returns false - windows-services

We have a server that accepts thousands of TCP connections on port 52999 and exchanges data with them. Each connection has its own thread, so there are thousands of threads.
If I keep the ThreadPool's default minimum thread settings, the server can only accept two connections per second. When 200 connections were waiting to be accepted, it took more than a minute, and most of them timed out.
So I did this:
ThreadPool.SetMinThreads(5000, 1000);
After I did this, the server was able to accept 20 connections per second.
Initially, the TCP server was started from Global.asax.cs in an ASP.NET MVC application on IIS, and everything was fine.
Then I decided to run the TCP server as a Windows Service. Once I changed over, ThreadPool.SetMinThreads returned false.
Why?
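For reference, the documented reason SetMinThreads returns false is that a requested value is out of range, typically greater than the corresponding maximum reported by GetMaxThreads, and those maximums can differ between an IIS-hosted process and a self-hosted Windows Service (a 32-bit service process, for instance, has a much lower worker-thread ceiling). A minimal C# diagnostic sketch:

    using System;
    using System.Threading;

    class ThreadPoolDiagnostic
    {
        static void Main()
        {
            // SetMinThreads is documented to return false when a requested
            // value is negative or exceeds the corresponding maximum, so
            // inspect the maximums first.
            ThreadPool.GetMaxThreads(out int maxWorker, out int maxIo);
            Console.WriteLine($"max worker = {maxWorker}, max IOCP = {maxIo}");

            // Clamp the request so it can never exceed the current maximums.
            int worker = Math.Min(5000, maxWorker);
            int io = Math.Min(1000, maxIo);

            bool ok = ThreadPool.SetMinThreads(worker, io);
            Console.WriteLine($"SetMinThreads({worker}, {io}) -> {ok}");
        }
    }

Comparing the printed maximums between the two hosting environments should show whether the 5000/1000 request is simply out of range in the service process.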

Related

Erlang with Chumak dying when being sent a large amount of data at once

I have an OTP-based program that uses Chumak for ZMQ. The server has a pull socket, and the client has a push socket. Everything works well in normal conditions.
The Python client sends several ZMQ messages per second. If the connection between the client and server breaks, the client queues the messages and sends them once the connection is reestablished.
If the connection breaks and is reestablished quickly (say, within a couple of minutes), everything is fine. If it stays down for a relatively long time (about 15 minutes in one test), the Erlang server dies with the message 'Killed'; no other details appear on the console.
Why might this be?

Indy's TIdMappedPortTCP maximum number of simultaneous connections?

I have created a simple Delphi service application that uses Indy's TIdMappedPortTCP to redirect connections from one port to another port on a remote server.
The application does not process the traffic in any way; it only redirects it to a remote server using the DefaultPort, MappedPort, and MappedHost properties.
I need to know the maximum number of connections it can handle and the factors that affect performance, such as OS and architecture (32-bit/64-bit).
My current tests show that TIdMappedPortTCP can handle 1500 connections with over 1500 threads.
I expect the number of simultaneous connections to grow to 10000 or even 20000.
## Update 1 ##
My tests show that the application uses 50000KB (50MB) for 1000 TCP connections (1000 threads).
## Update 2 ##
The application is currently using over 6000 threads (6000 TCP connections) and memory usage is only 250MB, with no performance issues!

Does the Redis client use a persistent connection?

Does the Redis client use a persistent (long-lived) connection? If not, why not use one to avoid the cost of establishing a new connection for every command?
Redis clients use a TCP connection that persists until either side terminates it, so how the connection is handled is up to the client library. I assume most clients keep the established connection (or multiple connections, in the case of a pool) open for the lifetime of the application, in order to avoid a handshake before every executed command.
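As a concrete illustration, StackExchange.Redis (a widely used .NET client) is designed around exactly this pattern: one ConnectionMultiplexer is created once and reused for the application's lifetime. A minimal sketch:

    using System;
    using StackExchange.Redis;

    class RedisExample
    {
        // One multiplexer for the whole application; it holds a persistent
        // TCP connection and reconnects automatically if the link drops.
        static readonly ConnectionMultiplexer Redis =
            ConnectionMultiplexer.Connect("localhost:6379");

        static void Main()
        {
            IDatabase db = Redis.GetDatabase();

            // Both commands reuse the already-established connection;
            // no handshake happens per command.
            db.StringSet("greeting", "hello");
            Console.WriteLine(db.StringGet("greeting"));
        }
    }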

Asynchronously send a file over a TCP connection

So I'm making an iOS app, but this is more of a general networking question.
One phone acts as the server and a bunch of other phones connect to it as clients. Basically it's a game/music sharer.
It's kind of hard to really get into the semantics of it, but that isn't important.
What is important is that the server and clients rapidly and repeatedly exchange commands and positions over a TCP connection, and sometimes a client wants to send the server a music file (usually about 4MB) to play as the music.
The problem I initially encountered was that sending the large file would stall the flow of commands from the client to the server.
My naive solution was to open a second socket to the server just to send the file. The server would check the IP address of the new socket; if it matched an existing connection, the server would tie the new socket to that connection, receive the file, and then disconnect the socket.
But this approach adds a 1-2 second delay while the socket connects, and I'm aware that it leaves room for man-in-the-middle attacks.
Is there a more elegant solution to this problem?
I would not call your solution naive; this is largely how FTP works. Separating the data and control paths is a good design pattern in my view.
I wouldn't worry about the man-in-the-middle issue. If you wanted, you could add a command the client answers over the data connection with a secret the server supplies; this would let you associate the two connections without relying on IP addresses (sketched in the code below).
If the delay is a problem, why not establish both connections at the start? The overhead of a few TCP connections is not usually significant for an operating system.
You could also use the two connections for both commands and data, alternating between them. Since both the server and the client know when a connection is busy, they can choose to use the idle one. The advantage is that this keeps both connections busy, so both are known to be working.
You should probably also use a separate thread for each socket, but I suspect you are already doing this, since it won't work well without it.
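To make the secret-token idea concrete, here is a rough C# sketch (the port, token format, and session bookkeeping are all hypothetical) of a server that binds an incoming data connection to an existing session by a server-issued token rather than by IP address:

    using System;
    using System.Collections.Concurrent;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class DataChannelServer
    {
        // Maps a server-issued one-time token to the session it belongs to.
        static readonly ConcurrentDictionary<string, string> Pending =
            new ConcurrentDictionary<string, string>();

        static void Main()
        {
            // Issue a token over the existing control connection (elided)
            // and remember which session it was issued to.
            string token = Guid.NewGuid().ToString("N"); // 32 hex chars
            Pending[token] = "session-42";

            var listener = new TcpListener(IPAddress.Any, 53000); // hypothetical data port
            listener.Start();

            while (true)
            {
                TcpClient data = listener.AcceptTcpClient();
                var stream = data.GetStream();

                // Read the 32-byte token; a real implementation should loop
                // until all 32 bytes have arrived.
                var buf = new byte[32];
                int read = stream.Read(buf, 0, buf.Length);
                string received = Encoding.ASCII.GetString(buf, 0, read);

                if (Pending.TryRemove(received, out string session))
                    Console.WriteLine($"Data connection bound to {session}; receiving file...");
                else
                    data.Close(); // unknown token: reject the connection
            }
        }
    }

The client simply writes the token as the first bytes on the new socket before streaming the file, so no IP matching is needed.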

How do I keep Advantage Database connections from timing out?

I have a Windows Service that works with an Advantage database and occasionally makes HTTP calls. On rare occasions these calls can run so long that my database connection times out. I'm not using a data module or anything; I'm just creating the connection manually.
My primary question is: what normally keeps the connection from timing out when I simply haven't used it in a while? Do the TAdsComponents send a keep-alive message in the background somehow? Is that dependent on the VCL, so that I don't get it in my service? Somehow I feel like creating a thread to make my HTTP call, and checking in the main thread every few seconds for it to finish, would keep the connection from dying. Is that ever true?
Yes, there is a keepalive mechanism as you expect. The client (for all communication types: TCP, UDP, shared memory) sends a "ping" to the server every so often to let the server know that the connection is still alive. The frequency of that keepalive ping is based on the server configuration parameter CLIENT_TIMEOUT. With the default settings, I believe the keepalive ping is sent every 30 seconds.
The keepalive logic runs in a separate thread that is started by the code that handles the communication. In other words, it does not depend on any of the VCL components; if you have a connection to the server, then that thread should be running.
One way to check if your connections are timing out is to look in the Advantage error log. There should be 7020 errors corresponding to timed out connections.
Some things that come to mind that might result in timed out connections include:
The client process being suspended for some reason so that the keepalive thread could not run. This seems unlikely.
The keepalive thread was killed for some reason. This also seems unlikely; you would have to go out of your way to make this happen.
A firewall may close the connection if there is no activity for a time. I would think, though, that a 30 second interval would be sufficient to prevent that.
A firewall may disallow the UDP keepalive packets. Firewalls, by nature, are "suspicious" of UDP packets. You might make sure you are using TCP/IP.
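The Advantage keepalive lives inside the client library, but the pattern described above, a dedicated background thread pinging on a fixed interval no matter what the main thread is doing, looks roughly like this C# sketch (illustrative only, not Advantage's actual code; SendPing is a stand-in):

    using System;
    using System.Threading;

    class KeepAlivePinger
    {
        static void Main()
        {
            // A dedicated background thread sends a ping on a fixed interval,
            // independent of whatever (possibly long) work the main thread does.
            var keepAlive = new Thread(() =>
            {
                while (true)
                {
                    SendPing(); // stand-in for the client library's ping
                    Thread.Sleep(TimeSpan.FromSeconds(30)); // cf. CLIENT_TIMEOUT
                }
            })
            { IsBackground = true };
            keepAlive.Start();

            // The main thread can block on a long HTTP call without starving
            // the keepalive, which is why a stalled main thread alone should
            // not time the connection out.
            Thread.Sleep(TimeSpan.FromMinutes(2)); // simulate the long call
        }

        static void SendPing() => Console.WriteLine($"ping at {DateTime.Now:T}");
    }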
