Does Mono.block() or Flux.block() block the thread (and keep it waiting) or only block execution flow and release the thread in reactive spring? - project-reactor

I have a RESTful API microservice that calls another remote REST API and responds back to the clients. The remote API takes time for each request, and with the traditional RestTemplate the threads are blocked until the response is received. I want to improve performance by using Reactor. Does Mono.block() or Flux.block() block the thread (and keep it waiting), or does it only block the execution flow and release the thread in reactive Spring?
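A minimal sketch of the two call styles in question, assuming a WebClient-based client (the class name and URL below are hypothetical):

import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class RemoteApiClient {

    // Hypothetical remote endpoint; the URL and response type are assumptions.
    private final WebClient webClient = WebClient.create("http://remote-api.example.com");

    // Blocking style: block() parks the calling thread until the response arrives,
    // so nothing is gained over RestTemplate in terms of thread usage.
    public String fetchBlocking() {
        return webClient.get()
                .uri("/data")
                .retrieve()
                .bodyToMono(String.class)
                .block();
    }

    // Non-blocking style: return the Mono and let the framework (e.g. WebFlux)
    // subscribe to it; no thread sits idle waiting for the remote response.
    public Mono<String> fetchNonBlocking() {
        return webClient.get()
                .uri("/data")
                .retrieve()
                .bodyToMono(String.class);
    }
}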

Related

How to share variable value between requests in Rails?

I'm using Ruby on Rails 4.2. In a controller I have a method which takes a lot of time to complete because it performs some heavy calculations. I want to inform the user of the calculation progress. My idea was to have a @progress variable which is updated during the calculations and is read by a different action processing AJAX requests from the frontend. But this idea fails: I always get the default value of 0 in the AJAX action while the variable is being updated in the long-running method. I've tried @@progress, $progress and session[:progress], but with exactly the same results.
Now I'm considering making a model for storing the progress in the database and reading it from there, but I can't believe it can't be done by simpler means.
Please share your thoughts!
Theoretical:
The usual approach for these cases is to perform the job asynchronously from the HTTP handler process (so the end-user is not waiting too long for a response from the webserver).
This means:
delegate the heavy work to a background job,
somehow make the client-side aware of when the job is done (2 options here).
Practical (application of the theoretical above in a context of a Rails app):
Background job: The Rails community provides a wide variety of gems (plus the built-in solution ActiveJob) to do async jobs (= background tasks). They can be divided into two main categories:
persisted state: the job state is written to durable storage (typically the database) so the queue can be resumed if the server reboots (DelayedJob, Que)
in-memory state: usually faster, but the queue is lost if the server reboots (Resque, Sidekiq)
surface to client-side:
There are two main options here:
polling: client-side AJAX call to the back-end every X seconds to check if the background job is done
subscribing via WebSocket: the client connects to the server via WebSocket and listens for an event triggered when the job is done (e.g. ActionCable, as pointed out by @Vasilisa)
Opinion-based:
If you want to keep it simple, I would go with a very simple implementation: Resque for the back-end and a polling system for the front-end.
If you want something complete, capable of resisting server reboots and restoring the queue where it was before the crash, I would use a persisted version (DelayedJob for example) or wrap the in-memory solution with your own persisting logic.
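The answer above is Rails-specific, but the job-plus-polling shape itself is language-agnostic. A minimal, hypothetical sketch of that shape in plain Java (class, method and step names are made up; a real Rails app would use one of the gems above instead): a background worker updates shared progress state, and a separate "progress" handler, the stand-in for the AJAX-polled action, reads it.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ProgressJobSketch {

    private final AtomicInteger progress = new AtomicInteger(0);
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    // Stand-in for enqueuing the heavy calculation as a background job.
    public void startHeavyCalculation() {
        worker.submit(() -> {
            for (int step = 1; step <= 100; step++) {
                doOneStep();        // hypothetical unit of work
                progress.set(step); // shared state the poller can read
            }
        });
    }

    // Stand-in for the action the front-end polls every X seconds.
    public int currentProgress() {
        return progress.get();
    }

    private void doOneStep() {
        try {
            Thread.sleep(50);       // simulate heavy work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}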

TIdSchedulerOfThreadDefault or TIdSchedulerOfThreadPool: why use them and what do they do?

Should I use them with TIdTCPServer, and where can they improve something in my application?
I ask because maybe they can make TIdTCPServer faster or more responsive, since I use a queue.
TIdTCPServer runs a thread for every client that is connected. Those threads are managed by the TIdScheduler that is assigned to the TIdTCPServer.Scheduler property. If you do not assign a scheduler of your own, a default TIdSchedulerOfThreadDefault is created internally for you.
The difference between TIdSchedulerOfThreadDefault and TIdSchedulerOfThreadPool is:
TIdSchedulerOfThreadDefault creates a new thread when a client connects, and then terminates that thread when the client disconnects.
TIdSchedulerOfThreadPool maintains a pool of idle threads. When a client connects, a thread is pulled out of the pool if one is available, otherwise a new thread is created. When the client disconnects, the thread is put back in the pool for reuse if the scheduler's PoolSize will not be exceeded, otherwise the thread is terminated.
From the OS's perspective, creating a new thread is an expensive operation. So in general, using a thread pool is usually preferred for better performance, but at the cost of using memory and resources for idle threads hanging around waiting to be used.
Whichever component you decide to use will not have much effect on how the server performs while processing active clients, only how it performs while handling socket connects/disconnects.
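The same trade-off can be illustrated outside Indy. A rough, hypothetical Java sketch (the per-client handler is a placeholder) of a thread-per-client accept loop next to one backed by a reusable thread pool:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AcceptLoopSketch {

    // Thread-per-client: a new thread is created on every connect and dies on
    // disconnect (roughly what TIdSchedulerOfThreadDefault does).
    static void serveWithNewThreads(ServerSocket server) throws IOException {
        while (true) {
            Socket client = server.accept();
            new Thread(() -> handle(client)).start();
        }
    }

    // Pooled threads: an idle thread is reused if one is available, otherwise a
    // new one is created and kept around for later reuse (a rough analogue of
    // TIdSchedulerOfThreadPool).
    static void serveWithPool(ServerSocket server) throws IOException {
        ExecutorService pool = Executors.newCachedThreadPool();
        while (true) {
            Socket client = server.accept();
            pool.submit(() -> handle(client));
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client) {
            // per-client protocol work would go here; closing ends the session
        } catch (IOException e) {
            // ignored for the sketch
        }
    }
}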

Spring AMQP sync to asynchronous by time

RabbitMQ Spring AMQP sync to async conversion
I need to convert a synchronous message invocation into an asynchronous one if the request is running too long. Is there a way to achieve this using the Spring AMQP client for RabbitMQ? Thanks.
You can hand off to another thread at any time in your code and the container thread will immediately ack the message. But the logic to do the hand off has to be in your listener.
If you want to control the ack, use MANUAL acknowledge mode and you'll need a ChannelAwareMessageListener so you have access to the Channel to do the basicAck.
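A rough sketch of the manual-ack option (the queue name, the container wiring, and the point at which the ack is sent are assumptions, not taken from the answer above); the hand-off option would simply submit the work to an executor inside the listener and let the container ack as usual.

import com.rabbitmq.client.Channel;
import org.springframework.amqp.core.AcknowledgeMode;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.amqp.rabbit.listener.api.ChannelAwareMessageListener;

public class ManualAckSketch implements ChannelAwareMessageListener {

    public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("requests");                  // hypothetical queue name
        container.setAcknowledgeMode(AcknowledgeMode.MANUAL); // we decide when to ack
        container.setMessageListener(this);
        return container;
    }

    @Override
    public void onMessage(Message message, Channel channel) throws Exception {
        long deliveryTag = message.getMessageProperties().getDeliveryTag();
        try {
            process(message);                            // potentially long-running work
            channel.basicAck(deliveryTag, false);        // ack only once the work is done
        } catch (Exception e) {
            channel.basicNack(deliveryTag, false, true); // requeue on failure
        }
    }

    private void process(Message message) {
        // the actual request handling would go here
    }
}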

Execute task after several async connections

I have several concurrent asynchronous network operations and want to be notified after all of them have finished receiving their data.
The current thread must not be blocked, synchronous connections aren't an option, and all operations should be executed concurrently.
How can I achieve that? Or is it more performant to execute all network operations sequentially, particularly on mobile devices?
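The question doesn't name a platform, but one common shape for this, sketched here in Java with hypothetical download logic, is to start every operation as a future and attach a single completion callback to their combination, so nothing blocks the calling thread:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WhenAllSketch {

    private final ExecutorService io = Executors.newFixedThreadPool(4);

    public void fetchAll(List<String> urls, Runnable onAllDone) {
        List<CompletableFuture<byte[]>> futures = new ArrayList<>();
        for (String url : urls) {
            // each download runs on the I/O pool, not on the calling thread
            futures.add(CompletableFuture.supplyAsync(() -> download(url), io));
        }
        // completes when every future has completed; the callback runs on one of
        // the pool threads, so the calling thread never waits
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
                .thenRun(onAllDone);
    }

    private byte[] download(String url) {
        // placeholder for the real asynchronous network call
        return new byte[0];
    }
}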

How to terminate windows service with blocking call using TcpListener

I have a Windows service which runs a separate background thread. Inside the thread it starts a TCP server which listens for clients using TcpListener.
I'd like to know how I can close the service down gracefully when there is a blocking read like so:
listener.AcceptTcpClient();
I've found that apparently a Windows service can abort any other threads as long as they are set up as background threads, but what if one of the threads is blocking? Does this make a difference and if so, what is the best way to handle this situation?
The best way is to call listener.Stop() in the service's stopping event. It will abort the blocking call with a SocketException.
The state of the thread (blocked or running) does not affect the fact that the thread is a background thread. So if you call listener.AcceptTcpClient() from a background thread, it will still be aborted when the service stops.
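The same unblock-by-closing technique exists in other socket APIs. As a hypothetical illustration in Java (the call above is the actual fix for TcpListener), closing a ServerSocket from the stop handler makes a blocked accept() throw a SocketException that the listening thread can treat as a shutdown signal:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketException;

public class StoppableListener {

    private final ServerSocket serverSocket;
    private final Thread acceptThread;

    public StoppableListener(int port) throws IOException {
        serverSocket = new ServerSocket(port);
        acceptThread = new Thread(this::acceptLoop, "accept-loop");
        acceptThread.setDaemon(true);   // rough analogue of a background thread
        acceptThread.start();
    }

    private void acceptLoop() {
        try {
            while (true) {
                Socket client = serverSocket.accept(); // blocks here
                handle(client);
            }
        } catch (SocketException e) {
            // accept() was unblocked because stop() closed the socket: exit cleanly
        } catch (IOException e) {
            // other I/O failure; log and exit in real code
        }
    }

    // Called from the service's stopping event.
    public void stop() throws IOException {
        serverSocket.close();           // unblocks the pending accept()
    }

    private void handle(Socket client) {
        // per-client work would go here
    }
}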
