My app is hanging in the call to procedure TIdStackWindows.Connect. When the TCP/IP address exists there is no problem, but if it doesn't, the call hangs. The IP address is a literal, so no DNS lookup is involved. I expected the connection attempt to fail after the 1-second timeout I set via TCPClient.ConnectTimeout, but the app hangs for up to 30 seconds on this call. (The call from my app isn't threaded; I intend to move the TCP connection to a thread, but the long connect timeout will still be an issue.)
If I pause execution in the Delphi IDE while the app is unresponsive, I am positioned at:
ntdll.KiUserApcDispatcher:
7C90E450 8D7C2410 lea edi,[esp+$10]
I then F8 a couple of times until I see a stack frame. I am then at:
IdStack.TIdStack.RaiseSocketError(10038)
IdStack.TIdStack.RaiseLastSocketError
IdStack.TIdStack.CheckForSocketError(-1)
IdStackWindows.TIdStackWindows.Connect(912,'10.8.2.170',5001,Id_IPv4)
IdSocketHandle.TIdSocketHandle.Connect
IdIOHandlerStack.TIdConnectThread.Execute
:00451fc1 HookedTThreadExecute + $2D
Classes.ThreadProc($254B910)
System.ThreadWrapper($5456CB0)
:00451ea3 CallThreadProcSafe + $F
:00451f10 ThreadExceptFrame + $3C
:7c80b729 ; C:\WINDOWS\system32\kernel32.dll
After a bit of poking around, I see this topic has received some traffic. The common answer seems to be "put it in a thread". I intend to, but the long timeout will still be problematic. Why does the connect timeout not work? I'm using Indy 10.5.5 and Delphi 2006; if I upgrade to the latest build of Indy, will there be much migration involved?
Blocking sockets have no concept of a connect timeout at the API layer, so Indy's ConnectTimeout is a manually implemented timeout. Indy calls TIdStack.Connect() in an internal worker thread while TIdTCPClient.Connect() runs a sleep loop that waits for that thread to terminate. If the loop detects that the ConnectTimeout period has elapsed, it closes the socket, which is supposed to cause the blocked TIdStack.Connect() to exit immediately, but that is not guaranteed. There is also OS overhead in creating and terminating the thread.

It should definitely not take 30 seconds to react to a 1-second timeout, but on the other hand 1 second is usually too small; the worker thread may not even begin running within 1 second. You should typically set ConnectTimeout to 5-10 seconds at a minimum to give the OS enough time to do its work.
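The manual connect-timeout pattern described above can be sketched outside of Indy as well. Here is a rough Java equivalent (the class name, host, and port are made up for illustration): a worker thread blocks in connect(), and the waiting thread closes the socket once the deadline passes, which aborts the blocked call.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectWithTimeout {
    // Mimics Indy's approach: block in connect() on a worker thread and,
    // if the deadline passes first, close the socket from the waiting
    // thread so the blocked connect() fails and the worker exits.
    public static boolean tryConnect(String host, int port, int timeoutMs)
            throws InterruptedException {
        final Socket socket = new Socket();
        Thread worker = new Thread(() -> {
            try {
                socket.connect(new InetSocketAddress(host, port));
            } catch (IOException ignored) {
                // Closing the socket from outside makes connect() throw here.
            }
        });
        worker.start();
        worker.join(timeoutMs);              // wait up to timeoutMs
        if (worker.isAlive()) {
            try { socket.close(); } catch (IOException ignored) {}
            worker.join();                   // worker exits once the socket is closed
            return false;                    // timed out
        }
        return socket.isConnected();
    }
}
```

Note that, exactly as in the answer, the close-to-abort step is best effort: how quickly the blocked connect() reacts is up to the OS.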
Related
I need to terminate long-running SFTP put operations (sometimes the transfer hangs and takes 10-15 minutes to complete).
Is there a way to do this using Apache VFS? SftpFileSystemConfigBuilder has a setTimeout method, but that only affects the connect operation. What I need is the ability to terminate a put operation if it takes longer than X minutes.
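Absent a VFS-level knob for this, one generic approach is to run the transfer on a worker thread and give up after a deadline. This is only a sketch (runWithTimeout and the class name are hypothetical): Future.cancel(true) interrupts the worker but does not by itself abort blocking socket I/O, so for VFS you would additionally need to close the underlying FileObject or FileSystemManager from the timeout path.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimedOperation {
    // Runs a blocking task on a worker thread and throws TimeoutException
    // if it does not finish within the deadline. The put itself would be
    // the Callable body; the timeout path is where you would also close
    // the VFS connection to force the hung transfer to abort.
    public static <T> T runWithTimeout(Callable<T> task, long timeout, TimeUnit unit)
            throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<T> future = pool.submit(task);
        try {
            return future.get(timeout, unit);
        } catch (TimeoutException e) {
            future.cancel(true);   // interrupt; also close the connection here
            throw e;
        } finally {
            pool.shutdownNow();
        }
    }
}
```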
In my logs I have requests to signalr/poll and signalr/connect that take around 30 seconds.
The application recently had some issues caused by thread starvation. Might these requests be the root cause, or is this expected behaviour and is the duration normal?
When I request the site with Chrome I see WebSocket traffic, so I guess it is running fine for most clients.
The application is accessed via VPN and sometimes the connection is bad. Could this be a reason for falling back to long polling?
If you do not have enough threads, you end up in a deadlock and the app will start to error out and stop working properly. At that point you would be forced to restart your AppPool or web server. If your application is falling back to long polling and a client is connected but not doing anything, the poll will remain open until it gets a response, or, if a timeout is configured (the default is 30 seconds, I believe), it will close on timeout. I would try restarting your AppPool and see if that helps; if not, there is something wrong in the transport layer. It should only need to fall back to long polling in extreme circumstances.
I understand from reading HikariCP's documentation (see below) that idle connections should be retired from a connection pool.
My question is: why and when should an idle database connection be retired from the connection pool?
This is the part of HikariCP documentation that sparked my question:
idleTimeout:
This property controls the maximum amount of time (in milliseconds)
that a connection is allowed to sit idle in the pool. Whether a
connection is retired as idle or not is subject to a maximum variation
of +30 seconds, and average variation of +15 seconds. A connection
will never be retired as idle before this timeout. A value of 0 means
that idle connections are never removed from the pool. Default: 600000
(10 minutes)
Two main reasons:
a) they take up resources on the server (not terribly much since the connection is idle)
b) sometimes connections time out on their own after periods of inactivity. You want either to close them before that happens, or to run a periodic "ping" SQL statement to make sure they are still alive. Otherwise you would get an error on the next SQL statement you try to execute.
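To illustrate the retirement idea from reason (b) (this is not HikariCP's actual implementation; all names here are made up), a pool can stamp each returned connection with its last-used time and discard any entry that has sat idle past the limit before handing it out again:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of idle retirement: pooled entries remember when they
// were last used, and acquire() skips (retires) any entry that exceeded
// the idle timeout. A String stands in for a real JDBC connection.
public class IdleRetiringPool {
    static class Entry {
        final String conn; final long lastUsedMs;
        Entry(String conn, long lastUsedMs) { this.conn = conn; this.lastUsedMs = lastUsedMs; }
    }

    private final Deque<Entry> idle = new ArrayDeque<>();
    private final long idleTimeoutMs;

    public IdleRetiringPool(long idleTimeoutMs) { this.idleTimeoutMs = idleTimeoutMs; }

    public void release(String conn, long nowMs) { idle.push(new Entry(conn, nowMs)); }

    // Returns a still-fresh connection, or null if none survive the cutoff.
    public String acquire(long nowMs) {
        while (!idle.isEmpty()) {
            Entry e = idle.pop();
            if (nowMs - e.lastUsedMs <= idleTimeoutMs) return e.conn;
            // else: retired; a real pool would close the JDBC connection
            // here and a real acquire() would open a fresh one.
        }
        return null;
    }
}
```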
I have a weird problem with my threaded software.
I start 2 instances of the software. Each instance has 2 threads: one thread creates a socket, and the other uses the socket for communication.
When one of the threads in one instance calls sleep(3), the other threads in the other instance sleep too. The weirdest thing is that after I rebooted the computer it worked the first time, but on a second try it sleeps as described.
How is this possible? Is it using some shared resource?
Per POSIX, sleep() suspends only the calling thread, not the whole process, though some older threading implementations (e.g., LinuxThreads) did not always honour that. See sleep vs pthread_yield for more details.
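For what it's worth, on any modern threading implementation a sleeping thread does not stop its siblings. A quick demonstration (in Java here; the same holds for C with NPTL): one thread sleeps while another keeps incrementing a counter, and the counter advances during the sleep.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SleepOneThread {
    // Returns how many increments the worker thread managed while a
    // sibling thread was blocked in sleep(). A positive count shows that
    // sleep suspends only the calling thread.
    public static int countWhileOtherSleeps() throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        Thread sleeper = new Thread(() -> {
            try { Thread.sleep(300); } catch (InterruptedException ignored) {}
        });
        Thread worker = new Thread(() -> {
            while (sleeper.isAlive()) counter.incrementAndGet();
        });
        sleeper.start();
        worker.start();
        sleeper.join();
        worker.join();
        return counter.get();
    }
}
```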
Have you tried wrapping your sleep function in a prophylactic?
My application has a long-running request that takes over a minute. If I'm using Chrome or Firefox, I just need to be patient. If I use IE, however, at the one-minute mark I get a popup saying I've reached a Network Connection Timeout.
Why is that?
The default Internet Explorer timeout is 1 minute. Since your process is long-running, IceFaces doesn't send the response in time and the request times out.
You can avoid this by spawning a new thread for your long-running process and returning the response immediately. IceFaces has plenty of polling and push options available to let your client know when the long-running process is done.
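That hand-off can be sketched generically (this is not IceFaces-specific code; the class and status values are made up): kick the work off on an executor, return right away, and let the client poll a status flag until it reads DONE.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

public class LongJob {
    // Shared status field standing in for whatever state the polling or
    // push mechanism would expose to the client.
    private final AtomicReference<String> status = new AtomicReference<>("PENDING");
    private final ExecutorService pool = Executors.newSingleThreadExecutor();

    // Starts the work on a background thread and returns immediately,
    // so the original request can complete within the browser's timeout.
    public void start(Runnable work) {
        status.set("RUNNING");
        pool.submit(() -> {
            work.run();
            status.set("DONE");
            pool.shutdown();
        });
    }

    // What the client polls until the long-running process finishes.
    public String status() { return status.get(); }
}
```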