How can I send multiple POST requests using TIdHTTP at the same time?
lHTTP1.Post('http://'+cURL+'/build.php?',lParamList, ResponseContent);
lHTTP2.Post('http://'+cURL+'/build.php?',lParamList, ResponseContent);
lHTTP3.Post('http://'+cURL+'/build.php?',lParamList, ResponseContent);
I tried using three threads to do that, but there is a one-second delay between every POST message.
How can I send all the POST messages within the same second?
Since TIdHTTP is a blocking component, using separate threads is the correct approach. The 1s delay on each post could be related to how the OS schedules threads, to network delays, or to a version of Indy that has internal delays (for instance, if an HTTP server sends a 3xx response to a POST request, TIdHTTP waits up to 5s to make sure the server sends a proper response body; some buggy servers do not). It is difficult to know where your 1s delay is actually occurring. You will have to debug/profile your project to find out; we can't do that for you.
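For reference, a minimal sketch of the threaded approach (assuming a Delphi version with TThread.CreateAnonymousThread; PostInParallel is a made-up name). The key point is that each thread owns its own TIdHTTP instance and its own response stream:

    uses
      System.Classes, System.SysUtils, IdHTTP;

    procedure PostInParallel(const AURL: string; AParams: TStrings);
    var
      I: Integer;
    begin
      for I := 1 to 3 do
        TThread.CreateAnonymousThread(
          procedure
          var
            HTTP: TIdHTTP;
            Response: TMemoryStream;
          begin
            // each thread gets its own TIdHTTP and stream; a single Indy
            // component must not be used by multiple threads at once
            HTTP := TIdHTTP.Create(nil);
            Response := TMemoryStream.Create;
            try
              // AParams is only read here; do not modify it while threads run
              HTTP.Post(AURL, AParams, Response);
            finally
              Response.Free;
              HTTP.Free;
            end;
          end).Start;
    end;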
Is there an easy way to get the time it took Indy to connect, and the time it took to receive data, in a TIdHTTP.Get() or TIdHTTP.Put() operation?
EDIT:
I want to gather statistics to determine which values are best to use for the ReceiveTimeout and ConnectTimeout properties.
For timing a connect, you can use the OnStatus(hsConnecting) and OnStatus(hsConnected)/OnConnected events. Note that if a URL involves a hostname (which is the usual case), there is also an OnStatus(hsResolving) event that precedes the OnStatus(hsConnecting) event. However, DNS resolution does not play into ConnectTimeout handling at this time.
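A rough sketch of that (FConnectWatch/FConnectMs are assumed form fields; TStopwatch lives in System.Diagnostics, so older Delphi versions would use GetTickCount or Now instead):

    procedure TForm1.IdHTTP1Status(ASender: TObject; const AStatus: TIdStatus;
      const AStatusText: string);
    begin
      case AStatus of
        hsConnecting: FConnectWatch := TStopwatch.StartNew;
        hsConnected:  FConnectMs := FConnectWatch.ElapsedMilliseconds;
      end;
    end;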
For timing the receive, that is a bit trickier, since there are no events for detecting the end of sending a request, or the beginning/ending of reading a response [1]. Also, a given HTTP request may involve multiple steps (redirects, authentication, etc), which may in turn involve multiple disconnects/reconnects, since HTTP is a stateless protocol that does not depend on a persistent connection, unlike most other protocols. So, about the only way I can think of to accomplish this is to attach an Intercept component to the TIdHTTP.Intercept property and then manually parse the HTTP messages as they are being exchanged.
[1] Actually, that is not entirely true. There is a TIdHTTP.OnHeadersAvailable event, which is fired after the HTTP response headers have been read and before the HTTP response body is read. So, if you don't care about the timing of the headers, you can at least use that event to start timing the receipt of the body data, and stop the timing when Get()/Post() exits. For each step that requires TIdHTTP to repeat a request (a redirect, an authentication challenge, etc), you should get a new OnHeadersAvailable event, which you can use to reset your timer. That way, you end up with the time of the final response only.
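For illustration, a sketch of that idea (FBodyWatch is an assumed field, and the event signature is as declared in Indy 10's IdHTTP unit):

    procedure TForm1.IdHTTP1HeadersAvailable(Sender: TObject;
      AHeaders: TIdHeaderList; var VContinue: Boolean);
    begin
      FBodyWatch := TStopwatch.StartNew; // fires again on redirects/auth retries
      VContinue := True;
    end;

    // ...after the request call returns:
    IdHTTP1.Get(URL, ResponseStream);
    BodyMs := FBodyWatch.ElapsedMilliseconds; // read time of the final body only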
However, note that ReceiveTimeout is a per-byte timeout, so an alternative might be to receive the HTTP response data into a custom TStream (or Indy's TIdEventStream), and then time the durations between the individual writes to that stream by overriding its Write() method (or using the OnWrite event).
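A sketch of that alternative (TTimedStream is a made-up name; the actual logging is left as a comment):

    type
      TTimedStream = class(TMemoryStream)
      public
        function Write(const Buffer; Count: Longint): Longint; override;
      end;

    function TTimedStream.Write(const Buffer; Count: Longint): Longint;
    begin
      // log a timestamp (e.g. Now, or a TStopwatch delta) for each chunk here
      Result := inherited Write(Buffer, Count);
    end;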
I am making a client program in Delphi 7 with Indy 10.
It must connect to the server with TIdTCPClient and keep the connection alive for sending and receiving commands and replies until the program is closed.
The server maintains only one persistent connection per client for sending info-messages.
The TIdTCPClient listens through a reading thread.
QUESTION:
I send a request to the server (using WriteLn) from some procedure, to get a list of strings for example. How can I get the answer (reply) for that request in the same procedure, without leaving it, the way TIdHTTP works?
I see 2 solutions:
making the request in one procedure and handling the reply in another; the code and logic become more complicated.
for each request in a procedure, creating a new TIdTCPClient (Connect, WriteLn, ReadLn, Disconnect, Free) and handling the request there. But I do not like this solution, as it causes a lot of overhead.
Since a reading thread is involved, it does complicate things a little. The reading thread needs to be the one to receive all of the replies and then it can dispatch them to handlers as needed.
Your first solution is fine, if you don't mind breaking up your code. This is the simplest solution, and the best one if the main thread is the one making the requests. You should never block the main thread.
As you mentioned, your second solution is not a very good one.
Another solution would be to create a TEvent for each request, and put each request into a list/queue somewhere. Have the reading thread find and signal the appropriate event when a response is received. The sending procedure can then wait on the event until it is signaled (TThread.Synchronize() works this way, for example). If the procedure is running in the main thread, use MsgWaitForMultipleObjects() to do the wait, so you can still service the main message queue while waiting.
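A bare-bones sketch of that idea (all names are made up; RegisterPending()/UnregisterPending() stand for a lock-protected list that the reading thread searches to match replies and signal events; call this from a worker thread, not the main thread, per the note above):

    // uses SysUtils, Classes, SyncObjs, IdTCPClient
    type
      TPendingRequest = class
      public
        Tag: string;     // whatever lets the reading thread match the reply
        Reply: string;   // filled in by the reading thread before signaling
        Done: TEvent;
      end;

    function SendAndWait(AClient: TIdTCPClient; const ACmd: string): string;
    var
      Req: TPendingRequest;
    begin
      Req := TPendingRequest.Create;
      Req.Done := TEvent.Create(nil, True, False, '');
      try
        RegisterPending(Req);          // assumed helper: add to a locked list
        AClient.IOHandler.WriteLn(ACmd);
        // the reading thread sets Req.Reply and signals Req.Done
        if Req.Done.WaitFor(5000) <> wrSignaled then
          raise Exception.Create('timed out waiting for reply');
        Result := Req.Reply;
      finally
        UnregisterPending(Req);        // assumed helper: remove from the list
        Req.Done.Free;
        Req.Free;
      end;
    end;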
I have trouble with a Delphi XE2 app. Sometimes a WinInet call to an ASMX service blocks and never returns; the user must terminate the process from Task Manager to close the app.
To connect to the ASMX service, the app uses a proxy generated by the WSDLImp tool.
During its work, the app makes a lot of calls to the web service (~1000-2000). At some moment (last time it was the 782nd request; the first time it was near the end), the app freezes. After some digging and logging, I found out that the app blocks on
WinInetResult := HttpSendRequest(Request, nil, 0, DatStr.Bytes, DatStr.Size);
in the Soap.SOAPHTTPTrans unit.
My first guess was that it is some server-side problem: the server hangs while processing the request. But in my trials the server kept processing requests from other clients while the target one was blocked. Also, when Fiddler is used to debug the HTTP traffic from the app, everything works as expected, with no locks. WinInet's SendTimeout, ReceiveTimeout, and ConnectTimeout have no effect; there are no timeout errors. One more point: the app does not block on one specific method call, but on different ones.
After googling, I found out that HttpSendRequest can block when the maximum number of parallel connections is exceeded. But there is no parallel execution in the app; each action is performed in the main GUI thread.
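For reference, that per-server limit can be raised process-wide with InternetSetOption; a sketch (the constant's value is from wininet.h, since not every Delphi WinInet unit declares it):

    // uses Windows, WinInet, SysUtils
    const
      INTERNET_OPTION_MAX_CONNS_PER_SERVER = 73; // value from wininet.h

    procedure SetMaxConnsPerServer(AValue: DWORD);
    begin
      // nil handle = set the option process-wide
      if not InternetSetOption(nil, INTERNET_OPTION_MAX_CONNS_PER_SERVER,
        @AValue, SizeOf(AValue)) then
        RaiseLastOSError;
    end;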
My next try was to use Indy for the HTTP communication instead of WinInet. With Indy, the app does its work as it should, with no locks. But the downside is performance degradation: the app's work takes twice as long with Indy.
This is not very good, so I want to go back to WinInet. But for that I need to find the reason for the blocking. Does anybody know why HttpSendRequest can block?
P.S.
It is strange that Indy causes such performance degradation. Maybe there are some properties or parameters to increase performance?
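One commonly suggested tweak, assuming you can reach the underlying TIdHTTP instance, is to enable HTTP keep-alive so the TCP connection gets reused instead of re-established for every request (untested for this particular app):

    // assumes access to the underlying TIdHTTP instance (here: IdHTTP1)
    IdHTTP1.ProtocolVersion := pv1_1;
    IdHTTP1.HTTPOptions := IdHTTP1.HTTPOptions + [hoKeepOrigProtocol];
    IdHTTP1.Request.Connection := 'keep-alive';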
So, I have finally fixed this issue.
After all the trials with no success, I re-implemented the SOAP calls using WinHTTP instead of WinInet.
With WinHTTP, everything works normally.
I have this piece of code that I have been trying to port. The code works 100% fine on Windows using a WinHTTP implementation. On the iOS 7 simulator, I am using NSURLSession. Regular HTTPS get/post seems to work fine.
Things start breaking down when I use "streaming" HTTP. In this case, the content length is unknown, because the data streams in continuously.
I have a blocking synchronous call below that waits until the current request completes. When I use the first line, the synchronous loop exits after the delegate is hit. However, if I replace it with the commented-out second line, the synchronous loop hangs.
[m_pDelegate.session invalidateAndCancel];
// [m_pDelegate.session finishTasksAndInvalidate];
blockUntilOperationsComplete();
Eventually it will exit, and I do get my data callbacks. I believe the callbacks finally trigger MINUTES later because small keep-alive messages (16 bytes long) eventually overflow the buffer and trigger a delegate call. Is there a way to reduce the buffering threshold?
After wasting 2 days on this, I'll leave this for the next soul that comes by. There is no way to reduce this buffer through the existing NSURL* classes. It turns out that the current implementation (on iOS 7, and it seems it has been that way forever) buffers chunk-encoded incoming data by waiting for 512 bytes of chunk-encoded payload to gather, and only then do the callbacks occur; the important part is that this happens if the Content-Type is "text/html". After that, all callbacks triggered by subsequent traffic happen in real time.
However, if the server changes the Content-Type header to "application/json", the data is not buffered, and your callbacks fire as soon as something is actually received.
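A quick way to observe this behavior is a minimal probe delegate that logs when each chunk actually reaches the app (a sketch; StreamProbe is a made-up name):

    #import <Foundation/Foundation.h>

    // Minimal NSURLSession delegate that logs each chunk's arrival time.
    @interface StreamProbe : NSObject <NSURLSessionDataDelegate>
    @end

    @implementation StreamProbe
    - (void)URLSession:(NSURLSession *)session
              dataTask:(NSURLSessionDataTask *)dataTask
        didReceiveData:(NSData *)data {
        NSLog(@"%@ received %lu bytes", [NSDate date], (unsigned long)data.length);
    }
    @end

Creating a session with [NSURLSession sessionWithConfiguration:[NSURLSessionConfiguration defaultSessionConfiguration] delegate:probe delegateQueue:nil] and starting a data task against the streaming endpoint should show bursts of roughly 512 bytes for "text/html" and immediate callbacks for "application/json".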
This is an issue that's making me question my own sanity, but I'm posting the question in case it's something real rather than a problem of my own making.
I have an iOS app that is making use of the NSURLConnection class to send a request to a webserver. The object is instantiated and instructed to call back the delegate, which receives the corresponding notifications didReceiveResponse / didReceiveData / didFinishLoading / didFailWithError. Effectively the same code that is posted on Apple's dev page for using the class. The requests are all short POST transmissions with JSON data; the responses are also JSON-formatted, and come back from an Apache Tomcat Java Servlet.
For the most part it all works as advertised. The app sends a series of requests to the server in order to start a job and poll for partial results. Most of the exchanges are short, but the results can be up to about 100-200 KB when partial results are available.
The individual pieces of data get handed back by the operating system in chunks of about 10 KB each, give or take. The transport is essentially instantaneous, as it is talking to a test server on the LAN.
However: after a few dozen polling operations, the rate of transport grinds to a near standstill. The sequence of response/data.../finished proceeds normally: the webserver has delivered its payload, but the iOS app receives exactly 2896 bytes every 20-30 seconds. It is the correct data, and waiting about 5 minutes for 130 KB of data confirms that it's operating correctly.
Nothing I do seems to work around it. I tried switching to the "async" invocation method with a response block: same result. Talking to a remote website rather than my LAN test deployment: same result. Running in the simulator or on an iPhone: same result. The server returns a Content-Length and doesn't try to do anything weird like keeping the connection alive.
Changing the frequency of the polling achieves little, unless I crank the delay between polls up to 50 seconds, in which case everything works fine, presumably because it only ends up polling once or twice.
A hypothesis that fits this observation is that NSURLConnection objects hang around long after they have been released and chew up resources. Once a certain limit is hit, the transfer rate grinds to a near halt. If the slowed-down connection actually completes, subsequent connections work normally again, presumably because the old ones have been cleaned up.
So does this sound familiar to anyone?