How to lock an ASP.NET MVC action? - asp.net-mvc

I've written a controller and action that I use as a service.
This service runs quite a costly action.
I'd like to limit access to this action if an invocation of it is already running.
Is there any built-in way to lock an ASP.NET MVC action?
Thanks

Are you looking for something like this?
public class MyController : Controller
{
    private static readonly object Lock = new object();

    public ActionResult MyAction()
    {
        lock (Lock)
        {
            // do your costly action here, then return a result
            return View();
        }
    }
}
The above will prevent any other threads from executing the action if a thread is currently processing code within the lock block.
Update: here is how this works
Method code is always executed by a thread. On a heavily-loaded server, it is possible for 2 or more different threads to enter and begin executing a method in parallel. According to the question, this is what you want to prevent.
Note how the private Lock object is static. This means it is shared across all instances of your controller. So, even if there are 2 instances of this controller constructed on the heap, both of them share the same Lock object. (The object doesn't even have to be named Lock, you could name it Jerry or Samantha and it would still serve the same purpose.)
Here is what happens. Without any synchronization, nothing restricts how many threads execute a given block of code: thread A can begin executing a code block, and thread B can begin executing the same block before A has finished. So in theory you can have 2 threads executing the same method (or any block of code) at the same time.
The lock keyword can be used to prevent this. When a thread enters a block of code wrapped in a lock section, it "picks up" the lock object (whatever is in parentheses after the lock keyword, a.k.a. Lock, Jerry, or Samantha, which should be declared as a static field). For as long as the locked section is being executed, it "holds onto" the lock object. When the thread exits the locked section, it "gives up" the lock object. From the time the thread picks up the lock object until it gives it up, all other threads are prevented from entering the locked section of code. In effect, they are "paused" until the currently executing thread gives up the lock object.
So thread A picks up the lock object at the beginning of your MyAction method. Before it gives up the lock object, thread B also tries to execute this method. However, it cannot pick up the lock object because it is already held by thread A. So it waits for thread A to give up the lock object. When it does, thread B then picks up the lock object and begins executing the block of code. When thread B is finished executing the block, it gives up the lock object for the next thread that is delegated to handle this method.
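To make the thread A / thread B description concrete, here is a minimal console sketch (separate from the controller; the names and timings are illustrative, not part of the original answer) showing that the second thread cannot enter the locked block until the first one gives the lock object up:
using System;
using System.Threading;

class LockDemo
{
    // Shared across all callers, just like the static Lock field in the controller.
    private static readonly object Lock = new object();

    static void Work(string name)
    {
        lock (Lock)
        {
            Console.WriteLine($"{name} entered at {DateTime.Now:HH:mm:ss.fff}");
            Thread.Sleep(2000); // simulate the costly action
            Console.WriteLine($"{name} leaving at {DateTime.Now:HH:mm:ss.fff}");
        }
    }

    static void Main()
    {
        var a = new Thread(() => Work("Thread A"));
        var b = new Thread(() => Work("Thread B"));
        a.Start();
        b.Start();
        a.Join();
        b.Join();
        // The second thread's "entered" line prints roughly 2 seconds after the
        // first one's, because it had to wait for the lock object to be released.
    }
}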
... but I'm not sure if this is what you are looking for...
Using this approach will not necessarily make your code run any faster. It only ensures that a block of code can only be executed by 1 thread at a time. It is usually used for concurrency reasons, not performance reasons. If you can provide more information about your specific problem in the question, there may be a better answer than this one.
Remember that the code I presented above will cause other threads to wait before executing the block. If this is not what you want, and you want the entire action to be "skipped" if it is already being executed by another thread, then use something more like Oshry's answer. You can store this info in cache, session, or any other data storage mechanism.

I prefer to use SemaphoreSlim because it supports async operations.
If you need to coordinate readers and writers, you can use ReaderWriterLockSlim instead (a sketch follows the SemaphoreSlim example below).
The following snippet uses SemaphoreSlim:
public class DemoController : Controller
{
    private static readonly SemaphoreSlim ProtectedActionSemaphore = new SemaphoreSlim(1);

    [HttpGet("paction")] //--or post, put, delete...
    public IActionResult ProtectedAction()
    {
        ProtectedActionSemaphore.Wait();
        try
        {
            //--call your protected action here
        }
        finally
        {
            ProtectedActionSemaphore.Release();
        }

        return Ok(); //--or any other response
    }

    [HttpGet("paction2")] //--or post, put, delete...
    public async Task<IActionResult> ProtectedActionAsync()
    {
        await ProtectedActionSemaphore.WaitAsync();
        try
        {
            //--call your protected action here
        }
        finally
        {
            ProtectedActionSemaphore.Release();
        }

        return Ok(); //--or any other response
    }
}
I hope it helps.
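For the read/write case mentioned above, here is a minimal sketch using ReaderWriterLockSlim (the ReportController, its routes, and the cached string are illustrative and not from the original answer; note that ReaderWriterLockSlim has thread affinity, so do not hold it across an await):
public class ReportController : Controller
{
    private static readonly ReaderWriterLockSlim ReportLock = new ReaderWriterLockSlim();
    private static string _cachedReport; // illustrative shared state

    [HttpGet("report")]
    public IActionResult ReadReport()
    {
        ReportLock.EnterReadLock(); // many readers can hold the lock at once
        try
        {
            return Ok(_cachedReport);
        }
        finally
        {
            ReportLock.ExitReadLock();
        }
    }

    [HttpPost("report")]
    public IActionResult RebuildReport()
    {
        ReportLock.EnterWriteLock(); // exclusive: blocks readers and other writers
        try
        {
            _cachedReport = DateTime.UtcNow.ToString("O"); // stand-in for the costly rebuild
            return Ok();
        }
        finally
        {
            ReportLock.ExitWriteLock();
        }
    }
}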

Having read and agreed with the above answer, I wanted a slightly different solution:
If you want to detect a second call to the action (and reject it rather than queue it), use Monitor.TryEnter with a zero timeout:
if (!Monitor.TryEnter(Lock, new TimeSpan(0)))
{
    throw new ServiceBusyException("Locked!");
}
try
{
    ...
}
finally
{
    Monitor.Exit(Lock);
}
Use the same static Lock object as detailed by #danludwig

You can create a custom attribute like [UseLock] as per your requirements and apply it to your action; a sketch is shown below.
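A minimal sketch of such an attribute for classic ASP.NET MVC (System.Web.Mvc), assuming you want to reject a concurrent request with an HTTP 429 rather than make it wait; the attribute name, the flag field, and the Items key are illustrative:
using System.Threading;
using System.Web.Mvc;

public class UseLockAttribute : ActionFilterAttribute
{
    // 0 = free, 1 = an action marked with [UseLock] is currently running
    private static int _running;

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Atomically flip the flag; remember whether this request acquired it.
        bool acquired = Interlocked.CompareExchange(ref _running, 1, 0) == 0;
        filterContext.HttpContext.Items["UseLock.Acquired"] = acquired;
        if (!acquired)
        {
            filterContext.Result = new HttpStatusCodeResult(429, "Action is already running");
        }
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        // Only the request that actually acquired the flag releases it.
        if (true.Equals(filterContext.HttpContext.Items["UseLock.Acquired"]))
        {
            Interlocked.Exchange(ref _running, 0);
        }
    }
}
Usage is then just [UseLock] on the costly action.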

I have two suggestions for this:
1- https://github.com/madelson/DistributedLock
A system-wide (and even cross-machine) lock solution.
2- Hangfire's BackgroundJob.Enqueue with the [DisableConcurrentExecution(1000)] attribute.
Both solutions make the second request wait for the running process to finish; I don't want to throw an error when requests arrive at the same time.
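For the Hangfire option, a sketch might look like the following (the CostlyJobs class, the controller, and the 1000-second timeout are illustrative; the attribute makes a second run of the job wait, up to the timeout, for the running one to finish):
using Hangfire;
using System.Web.Mvc;

public class CostlyJobs
{
    // Hangfire will not execute two instances of this job concurrently;
    // a second job waits up to 1000 seconds for the running one to finish.
    [DisableConcurrentExecution(1000)]
    public void RunCostlyJob()
    {
        // ... the costly work goes here ...
    }
}

public class ServiceController : Controller
{
    public ActionResult Start()
    {
        // Enqueue instead of running inline: the request returns immediately
        // and the Hangfire server processes the job in the background.
        BackgroundJob.Enqueue<CostlyJobs>(j => j.RunCostlyJob());
        return new HttpStatusCodeResult(202, "Queued");
    }
}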

The simplest way to do that would be to save a Boolean value to the cache indicating whether the action is already running the required business logic:
if (!(System.Web.HttpContext.Current.Cache["IsProcessRunning"] as bool? ?? false))
{
    System.Web.HttpContext.Current.Cache["IsProcessRunning"] = true;
    // run your logic here
    System.Web.HttpContext.Current.Cache["IsProcessRunning"] = false;
}
Note that the check and the update are not atomic, so two requests arriving at exactly the same moment could still both get through; the lock-based answers above do not have that problem.
Of course you can do this, or something similar, as an attribute as well.

Related

How to restore runOn Scheduler used in previous operator?

Folks, is it possible to obtain the currently used Scheduler within an operator?
The problem I have is that Mono.fromFuture() is being executed on a native thread (the AWS CRT HTTP client in my case). As a result, all subsequent operators are also executed on that thread. Later code then wants to obtain the context class loader, which is obviously null. I realize that I can call .publishOn(originalScheduler) after .fromFuture(), but I don't know which scheduler is used to materialize the Mono returned by my function.
Is there elegant way to deal with this?
fun myFunction(): Mono<String> {
    return Mono.just("example")
        .flatMap { value ->
            Mono.fromFuture {
                // invocation of a 3rd-party library that executes the Future on a thread created in native code
            }
        }
        .map {
            val resource = Thread.currentThread().getContextClassLoader().getResources("META-INF/services/blah_blah")
            // NullPointerException because Thread.currentThread().getContextClassLoader() returns NULL
            resource.asSequence().first().toString()
        }
}
It is not possible, because there's no guarantee that there is a Scheduler at all.
The place where the subscription is made and the data starts flowing could simply be a Thread. There is no mechanism in Java that allows an external actor to submit a task to an arbitrary thread (you have to provide the Runnable at Thread construction).
So no, there's no way of "returning to the previous Scheduler".
Usually, this shouldn't be an issue at all. If your code is reactive it should also be non-blocking and thus able to "share" whichever thread it currently runs on with other computations.
If your code is blocking, it should off-load the work to a blocking-compatible Scheduler anyway, which you should explicitly choose. Typically: publishOn(Schedulers.boundedElastic()). This is also true for CPU-intensive tasks, by the way.

How are Cold Streams able to work properly in a concurrent environment, while obeying "Item 79" in "Effective Java"?

In summary:
The cascading nature of the Cold Stream, from inactive to active, is in itself an "alien" execution (alien to the reactive design) that MUST be executed within the synchronized region, and this is unavoidable, going against Item 79 of Effective Java.
Effective Java: Item 79:
"..., to avoid deadlock and data corruption, never call an alien
method from within a synchronized region. More generally, keep the
amount of work that you do from within synchronized to a minimum."
never call an alien
method from within a synchronized region
An add(Consumer<T> observer) and remove(Consumer<T> observer) WILL be concurrent (because of switchMaps that react to asynchronous changes in values/states), but according to Item 79 it should not even be possible for a subscribe(Publisher p) method to exist.
Since a subscribe(publisher) MUST work as a callback function that reacts to additions and removals of observers...
private final Object lock = new Object();
private volatile Consumer<Boolean> subscriber;

public void subscribe(Publisher p) {
    synchronized (lock) {
        subscriber = isActive -> {
            if (isActive) p.add(this);
            else p.remove(this);
        };
    }
}

public void add(Consumer<T> observer) {
    synchronized (lock) {
        observers.add(observer);
        if (observers.size() > 0) subscriber.accept(true);
    }
}
I would argue that using a volatile mediator is better than holding on to the Publisher directly, but in practice it makes no difference: by altering the publisher's state (when adding ourselves to it) we are still triggering the functions within it (other possible subscriptions to publishers).
Doing it via indirection is the proper answer, and that is the main idea behind the separation-of-concerns principle.
Instead, what Item 79 is asking is that each time an observer is added, we manually synchronize from the outside (the "alien" side) and deliberately check whether a subscription must be performed:
synchronized (alienLock) {
    observable.add(observer);
    if (observable.getObserverSize() > 0) {
        publisher.add(observable);
    }
}
and each time an observer is removed:
synchronized (alienLock) {
    observable.remove(observer);
    if (observable.getObserverSize() == 0) {
        publisher.remove(observable);
    }
}
Imagine those lines repeated each and every time a node forks or joins onto a new one (in the reactive graph); it would be an insane amount of boilerplate, defeating the entire purpose.
Reading the item carefully, you can see that the rule is there to prevent something the user does wrong from hanging the thread and blocking access.
The rest of this answer is me trying to justify why this is possible, but also a non-issue in this case.
Binary Events.
In this case an event involves only 2 states: isActive == true || false.
This means that if the consumption gets "hung" on true, the only other option that may be waiting is false, but even worse...
If one of the two becomes deadlocked, it means the entire system is deadlocked anyway... in reality the issue is outside the design, not inside it.
What I mean is that, out of the 2 possible options (true or false), the time it takes for either of them to execute is meaningless, since the only other option is still required to wait regardless.
Enclosed functionality of the lock.
Even if subscribe(Publisher p) methods can be concatenated, the only thing the user has access to is not the lock per se, but the method.
So even if we are executing "alien" methods (as functions) inside our synchronized bodies, those functions are still ours; we know what is inside them, how they work, and exactly what they will do.
In this case the only uncertainty in the system is not what the functions will do, but how many concatenations the system has.
What's wrong in my code:
Finally, the only thing I see wrong is that observers and subscriptions most definitely need to work under separate locks, because observers must not, under any circumstances, allow themselves to get locked while a subscription domino effect is taking place.
I believe that's all...

Does async operation in iOS create a new thread internally, and allocate task to it?

Does an async operation in iOS internally create a new thread and allocate a task to it?
An async operation is capable of internally creating a new thread and allocating a task to it. But for this to happen, you need to run an async operation that actually creates a new thread and allocates a task to it. In other words: there is no direct correlation.
I assume that by async you mean something like DispatchQueue.main.async { <#code here#> }. This does not create a new thread, as the main thread should already be present. How and why this works can be explained (in an oversimplified way) with an array of operations and an endless loop, which is basically what RunLoop is there for. Imagine the following:
Array<Operations> allOperations;

int main() {
    bool continueRunning = true;
    for (; continueRunning;) {
        allOperations.forEach { $0.run(); }
        allOperations.clear();
    }
    return 0;
}
And when you call something like DispatchQueue.main.async, it basically creates a new operation and inserts it into allOperations. The same thread will eventually reach a new iteration of the loop and call your operation asynchronously. Again, keep in mind that this is all over-simplified, just to illustrate the idea behind it. From this you can also imagine how, for instance, timers work: the operation checks whether the current time is greater than the time of the next scheduled execution, and if so it triggers the timer's operation. That is also why timers cannot be very precise: they depend on the rest of the execution, and the thread may be busy.
A new thread, on the other hand, may be spawned when you create a new queue: DispatchQueue(label: "Will most likely run on a new thread"). When (if) exactly a thread will be created is not fixed. It may vary between implementations and the systems being run on. The tool only guarantees to perform what it is designed for, not how it will do it.
And then there is also the Thread class, which can create a new thread. But the deal is the same as for the previous one: it might internally create a new thread instantly, or it might do it later, lazily. All it guarantees is that it will honor its public interface.
I am not saying that these things do change over time, implementation, or the system they run on. I am only saying that they potentially could, and they might already have.

Why private static variable becomes null at some point and what can I do to resolve?

A picture is worth a thousand words:
On first page load, result is not null, but at some point, after some time, when the Gmail action is called from JavaScript, it becomes null (after one of these 10-minute interval calls). It is declared as private static, initialized in the Index action, and should stay alive (not null) all the time.
I managed to catch it by leaving the app running in Debug mode for a few hours.
Thank you.
Why don't you just save the cancellation token and recreate the "result" instance on each Gmail() call?
private static CancellationToken token; // assigned in the Index action

public ActionResult Gmail()
{
    result = new Authresult(token);
    ...
}
To diagnose the problem, it is first worth double-checking whether you are accessing the variable in the same AppDomain where it was initialized; to check this you could just add some logging. It is possible that this is a different AppDomain, because some event triggered an IIS application pool (AppDomain) recycle.
If it is the case, then you have 2 options:
either store the state using another mechanism or
have lazy initialization on demand with a null check, so the value can be initialized each time it's needed
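If you go with the lazy-initialization option, a minimal sketch could look like this (AuthResult here is just an illustrative stand-in for whatever the original result object is; the important part is that the factory can rebuild it from durable state after a recycle):
using System;
using System.Threading;
using System.Web.Mvc;

// Illustrative stand-in for the expensive object the question calls "result".
public class AuthResult
{
    public DateTime CreatedUtc { get; } = DateTime.UtcNow;
}

public class HomeController : Controller
{
    // Lazy<T> runs the factory once per AppDomain; after an app-pool recycle the
    // static field starts out fresh and the first access rebuilds the value,
    // so callers never observe null.
    private static readonly Lazy<AuthResult> Result =
        new Lazy<AuthResult>(CreateAuthResult, LazyThreadSafetyMode.ExecutionAndPublication);

    private static AuthResult CreateAuthResult()
    {
        // Rebuild from durable state (config, database, a persisted token, ...).
        return new AuthResult();
    }

    public ActionResult Gmail()
    {
        var result = Result.Value; // rebuilt on demand, never null
        return Json(result.CreatedUtc, JsonRequestBehavior.AllowGet);
    }
}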

To wait or not to wait inside an AsyncController's Async method

I've seen 2 flavors of working with asynchronous operations in MVC controllers.
First:
public void GetNewsAsync()
{
    AsyncManager.OutstandingOperations.Increment();

    using (ManualResetEvent mre = new ManualResetEvent(false))
    {
        // Perform the actual operation in a worker thread
        ThreadPool.QueueUserWorkItem((object _mre) =>
        {
            // do some work in GetFeed that takes a long time
            var feed = GetFeed();
            AsyncManager.Parameters["Feed"] = feed;
            AsyncManager.OutstandingOperations.Decrement();
            mre.Set();
        }, mre);

        // Wait for the worker thread to finish
        mre.WaitOne(TimeSpan.FromSeconds(SomeNumberOfSecondsToWait));
    }
}
Second:
public void GetNewsAsync()
{
    AsyncManager.OutstandingOperations.Increment();

    // Perform the actual operation in a worker thread
    ThreadPool.QueueUserWorkItem((object x) =>
    {
        // do some work in GetFeed that takes a long time
        var feed = GetFeed();
        AsyncManager.Parameters["Feed"] = feed;
        AsyncManager.OutstandingOperations.Decrement();
    }, null);
}
The first blocks GetNewsAsync for up to SomeNumberOfSecondsToWait; the second does not. Both perform the work inside a worker thread, and the results are passed to GetNewsCompleted.
So my question is: which is the correct way to handle an Ajax call to GetNews, wait or not wait?
I don't know where you saw the first example, but it's a total anti-pattern that completely defeats the purpose of an asynchronous controller. The whole point of an asynchronous operation is to execute asynchronously and free the main thread as fast as possible.
That being said, if GetFeed is a blocking call (which its name suggests it is), you get strictly zero benefit from an asynchronous controller, so the second example is also wrong in my view. You could use a standard synchronous controller action in this case. With the second example you draw a thread from the pool, and instead of blocking inside the main thread you block inside the other thread, so the net effect is almost the same as (in reality, worse than) if you had used a standard synchronous controller action.
So both those examples will bring more overhead than any benefit.
Where asynchronous controllers are useful is when you have some I/O-intensive API, such as a database or web service call, where you can take advantage of I/O Completion Ports. The following article provides a good example of this scenario. The news service used there provides truly asynchronous methods, and there is no blocking during the network I/O call; no worker thread is tied up.
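For contrast, here is a sketch of the kind of pattern that does benefit (the feed URL, controller name, and parameter key are illustrative, not from the referenced article): the request thread registers a callback and returns immediately, and the completion fires from the I/O callback rather than from a blocked worker thread.
public class NewsController : AsyncController
{
    public void GetNewsAsync()
    {
        AsyncManager.OutstandingOperations.Increment();

        var client = new WebClient();
        client.DownloadStringCompleted += (sender, e) =>
        {
            // Runs when the network I/O completes; no thread was blocked waiting.
            AsyncManager.Parameters["feed"] = e.Result;
            AsyncManager.OutstandingOperations.Decrement();
        };
        client.DownloadStringAsync(new Uri("http://example.com/feed")); // illustrative URL
    }

    public ActionResult GetNewsCompleted(string feed)
    {
        // Invoked by MVC once OutstandingOperations reaches zero.
        return Content(feed);
    }
}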
I would also recommend reading the following article. Even though it is about classic WebForms, it still contains some very useful information.
