Should I use TIdSchedulerOfThreadDefault or TIdSchedulerOfThreadPool with TIdTCPServer, and where could they improve my application?
I ask because maybe they can improve the speed/responsiveness of TIdTCPServer, since I use a queue.
TIdTCPServer runs a thread for every client that is connected. Those threads are managed by the TIdScheduler that is assigned to the TIdTCPServer.Scheduler property. If you do not assign a scheduler of your own, a default TIdSchedulerOfThreadDefault is created internally for you.
The difference between TIdSchedulerOfThreadDefault and TIdSchedulerOfThreadPool is:
TIdSchedulerOfThreadDefault creates a new thread when a client connects, and then terminates that thread when the client disconnects.
TIdSchedulerOfThreadPool maintains a pool of idle threads. When a client connects, a thread is pulled out of the pool if one is available, otherwise a new thread is created. When the client disconnects, the thread is put back in the pool for reuse if the scheduler's PoolSize will not be exceeded, otherwise the thread is terminated.
From the OS's perspective, creating a new thread is an expensive operation. So in general, using a thread pool is usually preferred for better performance, but at the cost of using memory and resources for idle threads hanging around waiting to be used.
Whichever component you decide to use will not have much effect on how the server performs while processing active clients, only how it performs while handling socket connects/disconnects.
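For example, switching to the pooled scheduler only takes a few lines before the server is activated. A minimal sketch, assuming a form (TServerForm here) with a component named IdTCPServer1 and an arbitrary PoolSize value (IdSchedulerOfThreadPool must be in the uses clause):

    procedure TServerForm.StartServer;
    var
      Scheduler: TIdSchedulerOfThreadPool;
    begin
      // Create a pooled scheduler owned by the server component.
      Scheduler := TIdSchedulerOfThreadPool.Create(IdTCPServer1);
      Scheduler.PoolSize := 50;             // idle threads kept around for reuse (assumed value)
      IdTCPServer1.Scheduler := Scheduler;  // assign before activating the server
      IdTCPServer1.Active := True;
    end;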
Related
I'm making a Rails 4 app, where I need to keep multiple TCP connections open towards another server. What would be the best way to implement the background threads serving these connections? Or should I use non-blocking sockets?
I have features that require the HTTP-serving thread to wait for a response on the background connection, but at other times the background threads need to write to the DB.
Most libraries handling background threads are pretty much job-oriented (e.g. Sidekiq); it seems they cannot handle spawning a new thread for each new connection. I tried to build it with the Thread class and mutexes, but strange things seem to happen (like background threads stopping for no reason, then reappearing, though that might be a quirk of byebug).
I have a Windows service which runs a separate background thread. Inside the thread, it starts a TCP server that listens for clients using TcpListener.
I'd like to know how I can close the service down gracefully when there is a blocking read like so:
listener.AcceptTcpClient();
I've found that apparently a Windows service can abort any other threads as long as they are set up as background threads, but what if one of the threads is blocking? Does this make a difference, and if so, what is the best way to handle this situation?
The best way is to call listener.Stop() (which closes the listening socket) in the service's stop event. It will abort the blocking call with a SocketException.
The state of the thread (blocked or running) does not affect the fact that it is a background thread. So if you call listener.AcceptTcpClient() from a background thread, it will still be aborted when the service stops.
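A minimal sketch of that pattern, assuming a ServiceBase-derived service and an arbitrary port; the accept loop exits when OnStop stops the listener and the pending AcceptTcpClient() throws:

    using System.Net;
    using System.Net.Sockets;
    using System.ServiceProcess;
    using System.Threading;

    public class TcpService : ServiceBase
    {
        private TcpListener listener;
        private Thread serverThread;

        protected override void OnStart(string[] args)
        {
            listener = new TcpListener(IPAddress.Any, 9000); // port is an assumption
            listener.Start();
            serverThread = new Thread(AcceptLoop) { IsBackground = true };
            serverThread.Start();
        }

        private void AcceptLoop()
        {
            try
            {
                while (true)
                {
                    // Blocks until a client connects or the listener is stopped.
                    TcpClient client = listener.AcceptTcpClient();
                    // ... hand the client off to a worker thread ...
                }
            }
            catch (SocketException)
            {
                // Thrown when OnStop stops the listener; exit the loop cleanly.
            }
        }

        protected override void OnStop()
        {
            listener.Stop(); // aborts the blocking AcceptTcpClient() call
        }
    }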
Suppose there is no kernel-level support for threads. A process has 10 threads running and one of them requests I/O.
Does the pthread library issue the I/O request to the kernel right away, or does it start executing the threads in its ready queue?
(If it issues the I/O request, the process will be preempted from the CPU, rendering multi-threading useless for I/O-intensive tasks.)
First, I think that although there are 10 threads "running", in fact only one of them is active at any given time, and context switching creates the illusion of concurrent execution.
Therefore, only the currently running thread can issue an I/O request to the kernel.
If it is blocking I/O, then yes, the whole process is stuck waiting for the I/O response.
If it is non-blocking I/O, the thread library can switch to the next ready thread and continue execution. Only when the I/O has finished is an interrupt generated, informing the kernel that the previous thread can be put back into the ready queue.
But I/O-intensive tasks are indeed slow.
I have an ASP.NET application, and I need to send a lot of emails.
When I call http://localhost:70/sendemails, the system sends the emails one by one in separate threads (like an async operation).
I don't know if this is the best method,
but what I want to know is: I schedule 10000 emails to be sent, and after I call the link I close the browser (which means the session will be closed too).
Will the threads I created also be terminated?
What is the best method to send lots of emails?
I would suggest that the ASP.NET application writes the information into a queue of some description - whether that's a message queue or just a table in the database.
Then have a separate service running to process the queue and send the emails. That way you don't need to wait until the emails have been sent before you respond, but you can still be sure that by the time the page responds, the request has been persisted.
Another alternative would just be to start a new thread to do the email sending within the ASP.NET application, but that means that if the application were to fall over (or be recycled) the request would be lost. It's easier to put the persistence and fault tolerance in a separate service, IMO.
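As a rough sketch of the queue-table approach (the EmailQueue table, its columns, and the connection string are assumptions): the web request only persists the work, and a separate service polls the table and does the actual SMTP sending.

    using System.Data.SqlClient;

    public static class EmailQueueWriter
    {
        public static void Enqueue(string connectionString, string to, string subject, string body)
        {
            // Insert one row per pending email; the sender service marks or
            // deletes rows as it processes them.
            const string sql =
                "INSERT INTO EmailQueue (Recipient, Subject, Body, QueuedAt) " +
                "VALUES (@to, @subject, @body, GETUTCDATE())";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@to", to);
                command.Parameters.AddWithValue("@subject", subject);
                command.Parameters.AddWithValue("@body", body);
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }

This way the page can respond as soon as the insert commits, and a crash or recycle of the web application cannot lose emails that have already been queued.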
The thread shouldn't be terminated by the closing of the browser. What could terminate the thread would be a recycle of the application (if memory grows too much, or other specific conditions arise).
The best method for mail sending would be to have a separate Windows service, but if that's not possible then using a thread might be a workable idea, provided that you have a mechanism to restart the thread on an application recycle and resume the sending from where it left off. The problem with the restart is that you need a request to a page to get the thread back - you could do this from a computer you control using a scheduled task, for example. It could work, but it is not very reliable compared to a Windows service.
I have a controller action that aggregates data from multiple sources: web service, database, file lookups, etc., and passes the results to the view. In order to render the page, all tasks must have completed. Currently they are performed sequentially, but as they are independent I am thinking of running them in parallel, as this could improve performance.
So what would be the best approach to achieve this? Start a new thread for each task and block the main thread until all tasks are finished? Should I use threads from the thread pool or spawn new threads manually? Using threads from the thread pool would limit my web server's ability to serve new requests, so this might not be a good idea. Spawning new threads manually could be expensive, so at the end of the day would there be a net gain in performance from parallelizing these tasks, or should I just leave them running sequentially?
If it's between spawning your own threads or using the thread pool threads, I'd say use the ones from the thread pool. You can always adjust your server settings to allow for more threads in the pool if you find that you are running out of threads.
The only way to answer your final question would be to actually test it out, as we don't know how complicated the separate aggregation tasks are. If you want to give the illusion of a responsive UI, you could always display the loading page and kick off the aggregation with AJAX. Even non-threaded, this may placate your users sufficiently.
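A minimal sketch of the thread-pool approach (on .NET 4.5 or later; the Fetch* methods are placeholders for the web service, database, and file lookups): Task.Run queues each lookup onto a thread-pool thread, and Task.WaitAll blocks the request thread until all of them have completed.

    using System.Threading.Tasks;

    public class AggregationExample
    {
        // Placeholder lookups standing in for the real web service, database,
        // and file access.
        private string FetchFromWebService() { return "service data"; }
        private string FetchFromDatabase()   { return "database data"; }
        private string FetchFromFiles()      { return "file data"; }

        public object LoadAggregatedData()
        {
            // Queue the three independent lookups onto thread-pool threads.
            var serviceTask  = Task.Run(() => FetchFromWebService());
            var databaseTask = Task.Run(() => FetchFromDatabase());
            var fileTask     = Task.Run(() => FetchFromFiles());

            // Block until all three have finished, then combine the results.
            Task.WaitAll(serviceTask, databaseTask, fileTask);

            return new
            {
                ServiceData  = serviceTask.Result,
                DatabaseData = databaseTask.Result,
                FileData     = fileTask.Result
            };
        }
    }

Whether this actually beats the sequential version depends on how long each lookup takes and how loaded the server is, so measure before committing to it.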