When receiving data from an Indy TCPServer's OnExecute event, I generally handle it by reading the data as follows:
AContext.Connection.IOHandler.ReadLn;
and then process the data in the OnExecute event. Most of the data that comes through consists of small JSON strings.
In the future I will need to handle larger data chunks and was wondering what the best practice is for doing so.
Is it a good idea to add the incoming data to a TIdContext class and process it using some worker thread? Any thoughts or code samples would be appreciated. I use Indy 10 and Delphi XE3.
The OnExecute event is already triggered in a worker thread, so it doesn't matter how long it takes to receive the data. As long as the data only has one (CR)LF at the end of it, ReadLn() will not care how long the string actually is (subject to the IOHandler's MaxLineAction and MaxLineLength properties, which you can tweak if needed). However, if the data has more than one (CR)LF in it, then you will have to either (see the sketch below):
transmit the string length before transmitting the actual string, then use ReadLongInt() and ReadString() instead of ReadLn() in the receiving code.
terminate the string with a different delimiter than (CR)LF at the end, and then pass that delimiter to ReadLn() so it knows when to stop reading.
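As a rough sketch, here is what both options could look like on the receiving side. The handler name and ProcessJson() are placeholders, not part of the original question:

procedure TMyServer.IdTCPServer1Execute(AContext: TIdContext);
var
  Len: Integer;
  Data: string;
begin
  // Option 1: length-prefixed - the sender writes a 4-byte length first,
  // then the string itself.
  Len := AContext.Connection.IOHandler.ReadLongInt;
  Data := AContext.Connection.IOHandler.ReadString(Len);

  // Option 2: a custom terminator instead of (CR)LF, e.g. #3 (ETX):
  // Data := AContext.Connection.IOHandler.ReadLn(#3);

  ProcessJson(Data); // placeholder for your own processing
end;

On the sending side, the counterpart would be Write(Length(Data)) followed by Write(Data) for option 1 (send a byte count rather than a character count if non-ASCII text is involved), or Write(Data + #3) for option 2.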
If the server has to accept incoming requests at a guaranteed rate, without slow request processing blocking the clients, saving the big data chunks to a datastore (a file or database) for later processing could be a solution.
This would make the server's worker thread available for the next request as soon as possible.
It also allows the data processing to be done by multiple worker servers, with load balancing.
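A minimal sketch of that idea, assuming files in a spool folder act as the datastore (the folder, the naming scheme and the use of System.IOUtils are illustrative assumptions):

procedure TMyServer.IdTCPServer1Execute(AContext: TIdContext);
var
  Data, FileName: string;
  Id: TGUID;
begin
  Data := AContext.Connection.IOHandler.ReadLn;
  CreateGUID(Id);
  // One unique file per message, so concurrent OnExecute threads never collide
  FileName := TPath.Combine('C:\spool', GUIDToString(Id) + '.json');
  TFile.WriteAllText(FileName, Data);
  // Worker processes can pick the files up later and do the heavy work there
end;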
Related
Is there an easy way to get the time it took Indy to connect, and the time it took to receive data, in a TIdHTTP.Get() or TIdHTTP.Put() operation?
EDIT:
I want to get statistics to determine which timeouts are best to use for the ReceiveTimeout and ConnectTimeOut properties.
For timing a connect, you can use the OnStatus(hsConnecting) and OnStatus(hsConnected)/OnConnected events. Note that if a URL involves a hostname (which is the usual case), there is also an OnStatus(hsResolving) event that precedes the OnStatus(hsConnecting) event. However, DNS resolution does not play into ConnectTimeout handling at this time.
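For example, a rough sketch of timing the connect via OnStatus (FConnectStart and FConnectTime are assumed fields; GetTickCount wrap-around is ignored for brevity):

procedure TForm1.IdHTTP1Status(ASender: TObject; const AStatus: TIdStatus;
  const AStatusText: string);
begin
  case AStatus of
    hsConnecting: FConnectStart := GetTickCount;
    hsConnected:  FConnectTime  := GetTickCount - FConnectStart; // ms to connect
  end;
end;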
For timing the receive, that is a bit trickier, since there are no events for detecting the end of sending a request, or the beginning/ending of reading a response [1]. Also, a given HTTP request may involve multiple steps (redirects, authentication, etc.), which may in turn involve multiple disconnects/reconnects, since HTTP is a stateless protocol that does not depend on a persistent connection the way most other protocols do. So, about the only way I can think of accomplishing this is to attach an Intercept component to the TIdHTTP.Intercept property and then manually parse the HTTP messages as they are being exchanged.
[1] Actually, that is not entirely true. There is a TIdHTTP.OnHeadersAvailable event, which is fired after the HTTP response headers have been read and before the HTTP response body is read. So, if you don't care about the timing of the headers, you can use that event to start timing the receiving of the body data, and then stop the timing when Get()/Post() exits. For each step that requires TIdHTTP to repeat a request, you will get a new OnHeadersAvailable event, which you can use to reset your timer. That way, you end up with the time of the final response only.
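A minimal sketch of that approach (FBodyStart and FBodyTime are assumed fields, and the timer is again a simple GetTickCount):

procedure TForm1.IdHTTP1HeadersAvailable(Sender: TObject;
  AHeaders: TIdHeaderList; var VContinue: Boolean);
begin
  // Fired once per response; restarting the timer here means only the
  // final response (after redirects/authentication) ends up being measured.
  FBodyStart := GetTickCount;
  VContinue := True;
end;

procedure TForm1.TimedGet(const AURL: string; ADest: TStream);
begin
  IdHTTP1.Get(AURL, ADest);
  FBodyTime := GetTickCount - FBodyStart; // ms spent receiving the body
end;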
However, note that ReceiveTimeout is a per-byte timeout, so an alternative might be to use a custom TStream (or Indy's TIdEventStream) to receive the HTTP response data into; you can then time the durations between individual writes to that stream by overriding its Write() method (or using the OnWrite event).
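For instance, here is a sketch of a custom stream that records the longest gap between writes; TTimingStream is just an illustrative name:

type
  TTimingStream = class(TMemoryStream)
  private
    FLastTick: Cardinal;
    FLongestGap: Cardinal;
  public
    function Write(const Buffer; Count: Longint): Longint; override;
    property LongestGap: Cardinal read FLongestGap; // worst per-chunk delay, in ms
  end;

function TTimingStream.Write(const Buffer; Count: Longint): Longint;
var
  Tick: Cardinal;
begin
  Tick := GetTickCount;
  if (FLastTick <> 0) and (Tick - FLastTick > FLongestGap) then
    FLongestGap := Tick - FLastTick;
  FLastTick := Tick;
  Result := inherited Write(Buffer, Count);
end;

Passing an instance of this stream to TIdHTTP.Get() then gives a feel for how long the slowest chunk took, which is roughly what ReceiveTimeout has to cover.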
I am making a client program in Delphi 7 with Indy 10.
It must connect to the server with TIdTCPClient and keep alive the connection for sending and getting commands and replies until the program is closed.
The server can maintain only one constant connection per client to send info-messages.
TIdTCPClient is listening through a reading thread.
QUESTION:
I am sending a request to the server (using WriteLn) from some procedure to get a list of strings, for example. How can I get the answer (reply) for that request in the same procedure, without leaving it, just like when using TIdHTTP?
I see 2 solutions:
making the request from one procedure and handling it in another - the code and logic will be more complicated.
for each request in a procedure, create a new TIdTCPClient (Connect, WriteLn, ReadLn, Disconnect, Free) and handle the request there. But I do not like this solution, as it causes a large overhead.
Since a reading thread is involved, it does complicate things a little. The reading thread needs to be the one to receive all of the replies and then it can dispatch them to handlers as needed.
Your first solution is fine, if you don't mind breaking up your code. This is the simplest solution, and the best one if the main thread is the one making the requests. You should never block the main thread.
As you mentioned, your second solution is not a very good one.
Another solution would be to create a TEvent for each request, and put each request into a list/queue somewhere. Have the reading thread find and signal the appropriate event when a response is received. The sending procedure can then wait on the event until it is signaled (TThread.Synchronize() works this way, for example). If the procedure is running in the main thread, use MsgWaitForMultipleObjects() to do the wait, so you can still service the main message queue while waiting.
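A stripped-down sketch of that event-per-request idea (uses Classes, SysUtils and SyncObjs; TPendingRequest, FPending, FClient and the 5-second timeout are all illustrative, and the reading thread is assumed to call DispatchReply() for each received line):

type
  TPendingRequest = class
  public
    Event: TEvent;    // signaled by the reading thread
    Reply: string;
    constructor Create;
    destructor Destroy; override;
  end;

constructor TPendingRequest.Create;
begin
  inherited Create;
  Event := TEvent.Create(nil, True, False, '');
end;

destructor TPendingRequest.Destroy;
begin
  Event.Free;
  inherited Destroy;
end;

// Called by the code that sends a request and wants the reply in place.
function TMyClient.SendRequest(const ACmd: string): string;
var
  Pending: TPendingRequest;
begin
  Pending := TPendingRequest.Create;
  try
    FPending.Add(Pending);                 // FPending: TThreadList
    FClient.IOHandler.WriteLn(ACmd);       // FClient: TIdTCPClient
    if Pending.Event.WaitFor(5000) <> wrSignaled then
      raise Exception.Create('No reply within 5 seconds');
    Result := Pending.Reply;
  finally
    FPending.Remove(Pending);
    Pending.Free;
  end;
end;

// Called by the reading thread whenever a reply line arrives.
procedure TMyClient.DispatchReply(const ALine: string);
var
  List: TList;
  Pending: TPendingRequest;
begin
  List := FPending.LockList;
  try
    if List.Count > 0 then
    begin
      Pending := TPendingRequest(List[0]);  // oldest outstanding request
      Pending.Reply := ALine;
      Pending.Event.SetEvent;
    end;
  finally
    FPending.UnlockList;
  end;
end;

If SendRequest() can run in the main thread, replace the plain WaitFor() with a MsgWaitForMultipleObjects() loop as described above, so the message queue keeps being serviced.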
Most server frameworks/examples using sockets and I/O completion ports post notifications in a way whose purpose I couldn't completely figure out.
Upon a read, packets are processed; usually they are reordered to work around thread-scheduling issues that would otherwise process packets out of order, even though IOCP itself guarantees a FIFO queue.
The problem is when a socket is closed, gracefully or due to an error. I have seen, in both situations and again because of the OS thread scheduler, that the close notification may be delivered to the application (e.g. an HTTP server using the framework) before the notification for data that was read earlier.
I think that the close notification should be queued in such a way that the application receives it after the earlier reads.
Is this behaviour in most of the code I have seen intentional, or could the behaviour I describe be correct, depending on the situation?
What you suggest makes sense, and I would imagine that any code that handles a graceful close (a read returning 0 bytes) would do so by processing it after any preceding successful read. Errors coming out of GetQueuedCompletionStatus(), such as connection reset errors, etc., are harder to integrate into the receive flow, as they occur out of band as far as the receive data is concerned. Your question is a bit vague and depends very much on the code you're using and how you (or the people who wrote that code) want to handle these things. There is no single correct way, IMHO.
I am using a VCL TCPServer component which fires an event every time data is received on a TCP port. Within the event, the data is available in the Text parameter of the procedure. Because I want to save this data into a MySQL database, I am wondering which is the best approach to this problem: directly execute an INSERT SQL command within the event handler for every piece of data received, or store the data in memory (e.g. a TStrings) and then, every X minutes (using a Timer), call a function that executes the INSERT command?
Thanks
It's not really an answer to your question, but do consider the risk of your application failing (for any reason) between receiving the data and executing the INSERT.
If you use a local store of some kind as an intermediate to mitigate this risk, consider the risk of a crash while that store is being updated.
RDBMS vendors go to great lengths to ensure that data that has been accepted by successful completion of an INSERT, UPDATE or similar command will not be lost or corrupted. Can you put in similar effort?
Generally speaking, if you are accepting data piecemeal rather than in bulk, I would probably keep an open connection to the database and insert data as you receive it, and only send an acknowledgement back to the consumer of your application once it has been pushed to the database.
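A rough sketch of that insert-as-you-receive approach; FireDAC's TFDQuery is assumed here purely for illustration (any MySQL-capable data access layer works the same way), and TDataSink, FQuery and the table/column names are hypothetical:

procedure TDataSink.StoreMessage(const AText: string);
begin
  // FQuery runs over a connection that stays open for the server's lifetime
  FQuery.SQL.Text :=
    'INSERT INTO messages (received_at, payload) VALUES (:ts, :payload)';
  FQuery.ParamByName('ts').AsDateTime := Now;
  FQuery.ParamByName('payload').AsString := AText;
  FQuery.ExecSQL;
  // Only after ExecSQL has succeeded, send an acknowledgement back to the
  // sender so it can safely discard its copy of the data.
end;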
I am writing an app that sends e-mail messages using Indy.
Every message is sent by a thread.
Currently I am connecting to TIdSMTP inside the thread, so for sending 10 mails, I need 10 threads and I connect 10 times.
Is it safe (what are the drawbacks?) to have a single TIdSMTP (outside of the threads), call Connect once, and then call TIdSMTP.Send inside each thread?
Will TIdSMTP manage things correctly?
Note: the idea is to avoid connecting every time (if possible); when many emails have to be sent this could be an advantage. (Does it make sense to worry about this, or is calling Connect in every thread perfectly OK?)
Why don't you use only one thread, in which you have a TIdSMTP and a TList in which you store TIdMessages, and after each send you free the TIdMessage and remove it from the list? This way you avoid the overhead and keep it simple.
What if you want to send 200 e-mails? Well, if you start 200 threads then your application will use over 200 MB just for those 200 threads, not to mention that there can be problems starting that many threads in your application.
Bottom line: add a TList in which you temporarily store prepared TIdMessages, and inside the thread a while loop that checks whether the list has any messages to send; if it does, grab one, send it, and remove it from the list.
Technically, you can call Connect() in one thread and then call Send() in other threads. However, you would have to serialize access to Send(), otherwise the sending threads can overlap each other and corrupt the SMTP communication. Dorin's suggestion to move all of the SMTP traffic to a single thread with a queue is the best choice. However, the queue itself needs to be accessed in a thread-safe manner, so using a plain TList or TQueue by itself is not good enough. Either use TThreadList (or Indy's own TIdThreadSafeList) instead of TList, or wrap the TQueue with a separate TCriticalSection.
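A compact sketch of that single-sender design, using a standard TThreadList as the queue (uses Classes, SysUtils, IdSMTP and IdMessage; the class and field names are illustrative, FSmtp's host/credentials are assumed to be configured elsewhere, and SMTP error handling/reconnection is omitted):

type
  TMailSenderThread = class(TThread)
  private
    FSmtp: TIdSMTP;
    FQueue: TThreadList;   // holds prepared TIdMessage instances
  protected
    procedure Execute; override;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Enqueue(AMsg: TIdMessage);  // callable from any thread
  end;

constructor TMailSenderThread.Create;
begin
  inherited Create(False);
  FSmtp := TIdSMTP.Create(nil);  // Host/Port/Username/Password set elsewhere
  FQueue := TThreadList.Create;
end;

destructor TMailSenderThread.Destroy;
begin
  FQueue.Free;
  FSmtp.Free;
  inherited Destroy;
end;

procedure TMailSenderThread.Enqueue(AMsg: TIdMessage);
begin
  FQueue.Add(AMsg);  // TThreadList locks internally
end;

procedure TMailSenderThread.Execute;
var
  List: TList;
  Msg: TIdMessage;
begin
  FSmtp.Connect;  // one connection for the whole batch
  try
    while not Terminated do
    begin
      Msg := nil;
      List := FQueue.LockList;
      try
        if List.Count > 0 then
        begin
          Msg := TIdMessage(List[0]);
          List.Delete(0);
        end;
      finally
        FQueue.UnlockList;
      end;
      if Msg <> nil then
      begin
        try
          FSmtp.Send(Msg);
        finally
          Msg.Free;  // done with this message, whether it sent or not
        end;
      end
      else
        Sleep(100);  // queue empty; avoid a busy loop
    end;
  finally
    FSmtp.Disconnect;
  end;
end;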