Multiple threads access the same code in a multithreaded application - windows-services

I am in the middle of a multithreaded Windows service project and need some input to get it running successfully. Below is the code, along with a description of what I am trying to do and of the problem.
// I created a new thread from the Main method and call MyTimerMethod() on it.
private void MyTimerMethod()
{
    timer = new System.Timers.Timer(5000);
    timer.Elapsed += new ElapsedEventHandler(OnElapsedTime);
    timer.Start();
    // keep this thread running so the timer keeps firing.
    Application.Run();
}
private void OnElapsedTime(object source, ElapsedEventArgs e)
{
    for (int i = 0; i < SomeNum; i++) // SomeNum > 0
        ThreadPool.QueueUserWorkItem(new WaitCallback(MyWorkingMethod), null);
}
private void MyWorkingMethod(object state)
{
    // Each thread needs to go and check the status, and print if currentStatus = true.
    // If currentStatus = true, then that job is ready to print.
    // FYI: ReadStatusFromDB() comes from the base class, so I cannot modify it.
    ReadStatusFromDB(); // ReadStatusFromDB() contains the jobs to be printed.
    // After doing some work, a stored procedure updates currentStatus = false.
    // do more stuff.
}
Long story short: the program runs every five seconds and checks whether there is more work to do. If there is, it takes a thread from the thread pool and pushes a work item into the queue. Now my problem is when there is more than one thread in the queue: even when currentStatus = false, multiple threads grab the same job and try to print it.
Let me know if you need further information.

I would suggest creating a BlockingCollection of work items, and structure your program as a producer/consumer application. When a job is submitted (either by the timer tick, or perhaps some other way), it's just added to the collection.
You then have one or more persistent threads that are waiting on the collection with a TryTake. When an item is added to the collection, one of those waiting threads will get and process it.
This structure has several advantages. First, it prevents multiple threads from working on the same item. Second, it limits the number of threads that will be processing items concurrently. Third, the threads do non-busy waits on the collection, meaning that they're not consuming CPU resources. The only drawback is that you have multiple persistent threads. But if you're processing items most of the time anyway, the persistent threads aren't a problem at all.
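A minimal sketch of that structure in C# (PrintJob, GetReadyJobs, and ProcessJob are hypothetical placeholders for your job type and logic, not names from the question):

// using System.Collections.Concurrent; using System.Threading;
private static readonly BlockingCollection<PrintJob> workItems =
    new BlockingCollection<PrintJob>();

// Producer: the timer tick adds each ready job exactly once.
private void OnElapsedTime(object source, ElapsedEventArgs e)
{
    foreach (var job in GetReadyJobs()) // hypothetical: jobs with currentStatus = true
        workItems.Add(job);
}

// Consumer: run this on one or more persistent threads. TryTake with an
// infinite timeout blocks without busy-waiting until an item arrives, and
// returns false (ending the loop) once CompleteAdding() has been called.
private void ConsumerLoop()
{
    PrintJob job;
    while (workItems.TryTake(out job, Timeout.Infinite))
        ProcessJob(job); // hypothetical: print the job, then mark it done
}

Because only one consumer ever takes a given item, no job can be printed twice, no matter how many jobs the timer finds.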

Related

Safety of using an empty reference instance across multiple threads

Background
I have a class Data that stores multiple input parameters and a single output value.
The output value is recalculated whenever one of the input parameters is mutated.
The calculation takes a non-trivial amount of time so it is performed asynchronously.
If one of the input parameters changes during recalculation, the current calculation is cancelled, and a new one is begun.
The cancellation logic is implemented via a serialized queue of calculation operations and a key (a reference instance), Data.key. Data.key is set to a new reference instance every time a new recalculation is added to the queue, and only a single recalculation can occur at a time, due to the queue. Any executing recalculation repeatedly checks whether it is still the most recently initiated one by comparing the key it was created with against the currently existing key. If they differ, a new recalculation has been queued since it began, and it terminates.
This will trigger the next recalculation in the queue to begin, repeating the process.
The basis for my question
The reassignment of Data.key is done on the main thread.
The current calculation constantly checks to see if its key is the same as the current one. This means another thread is constantly accessing Data.key.
Question(s)
Is it safe for me to leave Data.key vulnerable to being read/written to at the same time?
Is it even possible for a property to be read and written to simultaneously?
Yes, Data.key is vulnerable to being read and written at the same time.
Here is an example where I write key from the main thread and read it from MySerialQueue. If you run this code, it will sometimes crash.
The crash happens because the reader dereferences a pointer to memory that was released while the main queue was writing.
Xcode has a feature called Thread Sanitizer that helps catch exactly this kind of problem.
Discussion about race conditions:
import Foundation

class MyClass {}

var key = MyClass()
let key2 = MyClass()

func writer() {
    for _ in 0..<1000000 {
        key = MyClass()
    }
}

func reader() {
    for _ in 0..<1000000 {
        if key === key2 {}
    }
}

func experiment() {
    DispatchQueue(label: "MySerialQueue").async {
        print("reader begin")
        reader()
        print("reader end")
    }

    DispatchQueue.main.async {
        print("writer begin")
        writer()
        print("writer end")
    }
}
Q:
Is it safe for me to leave Data.key vulnerable to being read/written to at the same time?
A:
No
Q:
Is it even possible for a property to be read and written to simultaneously?
A:
Yes. To make it safe, create a separate serial queue for Data.key and access it only through that queue. As long as every operation (get/set) is restricted to this queue, you can read or write from anywhere with thread safety.
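For illustration, the same serialize-all-access idea sketched in C# (the language used elsewhere on this page), with a lock standing in for the private serial queue; the class and member names are hypothetical:

public class Data
{
    private readonly object keyGate = new object();
    private object key = new object();

    // Every get and set goes through the same gate, so a reader can never
    // observe a half-completed write. The lock plays the role that the
    // private serial queue plays in the GCD version.
    public object Key
    {
        get { lock (keyGate) { return key; } }
        set { lock (keyGate) { key = value; } }
    }
}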

Does async operation in iOS create a new thread internally, and allocate task to it?

Does an async operation in iOS internally create a new thread and allocate a task to it?
An async operation is capable of internally creating a new thread and allocating a task to it. But for that to happen, you need to run an async operation that actually does so. In other words: there is no direct correlation.
I assume that by async you mean something like DispatchQueue.main.async { <#code here#> }. This does not create a new thread, since the main thread is already present. How and why this works can be explained (in oversimplified form) with an array of operations and an endless loop, which is basically what RunLoop is there for. Imagine the following:
Array<Operation> allOperations;

int main() {
    bool continueRunning = true;
    while (continueRunning) {
        for (op in allOperations) { op.run(); }
        allOperations.clear();
    }
    return 0;
}
And when you call something like DispatchQueue.main.async, it basically creates a new operation and inserts it into allOperations. The same thread will eventually come around on a new pass of the loop and run your operation asynchronously. Again, keep in mind that this is all oversimplified, just to illustrate the idea behind it. You can also imagine from this how, for instance, timers work: the operation checks whether the current time is greater than the time of the next scheduled execution, and if so, it triggers the timer's operation. That is also why timers cannot be very precise: they depend on the rest of the execution, and the thread may be busy.
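To make the timer point concrete, here is a toy C# sketch of a loop-driven timer check. It is purely illustrative (not RunLoop's actual implementation), and all names are hypothetical:

using System;
using System.Collections.Generic;

class TimerEntry
{
    public DateTime NextDue;
    public TimeSpan Interval;
    public Action Callback;
}

class ToyEventLoop
{
    readonly List<TimerEntry> timers = new List<TimerEntry>();
    readonly Queue<Action> pending = new Queue<Action>();

    public void Run()
    {
        while (true)
        {
            // Timers only fire when the loop gets around to checking them,
            // which is why a busy thread makes timers late.
            foreach (var timer in timers)
            {
                if (DateTime.UtcNow >= timer.NextDue)
                {
                    timer.Callback();
                    timer.NextDue = DateTime.UtcNow + timer.Interval;
                }
            }
            // Drain the queued "async" operations, like allOperations above.
            while (pending.Count > 0)
                pending.Dequeue()();
        }
    }
}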
A new thread, on the other hand, may be spawned when you create a new queue: DispatchQueue(label: "Will most likely run on a new thread"). Exactly when (or whether) a thread will be created is not fixed; it may vary across implementations and the systems they run on. The tool only guarantees to perform what it is designed for, not how it will do it.
And then there is also the Thread class, which can produce a new thread. The deal is the same as above: it might create a new thread immediately, or it might do so later, lazily. All it guarantees is its public interface.
I am not saying that these things do change over time or across implementations and systems; I am only saying that they potentially could, and might already have.

Does await Task.Delay; really enable web server to process more simultaneous requests?

From Pro Asynchronous Programming with .NET:
for (int nTry = 0; nTry < 3; nTry++)
{
    try
    {
        AttemptOperation();
        break;
    }
    catch (OperationFailedException) { }

    Thread.Sleep(2000);
}
While sleeping, the thread doesn’t consume any CPU-based resources,
but the fact that the thread is alive means that it is still consuming
memory resources. On a desktop application this is probably no big
deal, but on a server application, having lots of threads sleeping is
not ideal because if more work arrives on the server, it may have to
spin up more threads, increasing memory pressure and adding additional
resources for the OS to manage.
Ideally, instead of putting the thread to sleep, you would like to
simply give it up, allowing the thread to be free to serve other
requests. When you are ready to continue using CPU resources again,
you can obtain a thread ( not necessarily the same one ) and continue
processing. You can solve this problem by not putting the thread to
sleep, but rather using await on a Task that is deemed to be completed
in a given period.
for (int nTry = 0; nTry < 3; nTry++)
{
    try
    {
        AttemptOperation();
        break;
    }
    catch (OperationFailedException) { }

    await Task.Delay(2000);
}
I don't follow the author's reasoning. While it's true that calling await Task.Delay will release this thread (which is processing a request), isn't it also true that the task created by Task.Delay will occupy some other thread to run on? So does this code really enable the server to process more simultaneous requests, or is the text wrong?
Task.Delay does not occupy some other thread. It gives you a task without blocking. It starts a timer that completes that task in its callback. The timer does not use any thread while waiting.
It is a common myth that async actions like delays or IO just push work to a different thread. They do not. They use OS facilities to use truly zero threads while the operation is in progress. (They obviously need to use some thread to initiate and complete the operation.)
If async just pushed work to a different thread, it would be mostly useless: its value would be limited to keeping the UI responsive in client apps, and on the server it would only cause harm. It is not so.
The value of async IO is to reduce memory usage (fewer thread stacks), context switching, and thread-pool utilization.
The async version of the code you posted would scale to literally tens of thousands of concurrent requests (if you increase the ASP.NET limits appropriately, which is a simple web.config change) with small memory usage.
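You can see this for yourself with a small console sketch (illustrative only): the thread ID printed before and after the await typically differs, and no thread is blocked while the two-second delay is pending.

using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        Console.WriteLine($"Before await: thread {Thread.CurrentThread.ManagedThreadId}");
        await Task.Delay(2000); // driven by a timer; no thread is consumed while waiting
        Console.WriteLine($"After await: thread {Thread.CurrentThread.ManagedThreadId}");
    }
}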

Launching multiple async futures in response to events

I would like to launch a fairly expensive operation in response to a user clicking on a canvas element.
mouseDown(MouseEvent e) {
  print("entering event handler");
  var future = new Future<int>(expensiveFunction);
  future.then((int value) => redrawCanvas(value));
  print("done event handler");
}

int expensiveFunction() {
  for (int i = 0; i < 1000000000; i++) {
    // do something insane here
  }
  return 0;
}

redrawCanvas(int value) {
  // do stuff here
  print("redrawing canvas");
}
My understanding of M4 Dart is that this future constructor should launch expensiveFunction asynchronously, i.e., on a different thread from the main one. And it does appear that way: "done event handler" is immediately printed to my output window in the IDE, and some time later "redrawing canvas" is printed. However, if I click on the element again, nothing happens until my expensiveFunction is done running from the previous click.
How do I use futures to launch a compute-intensive function on a new thread, such that I can have multiple of them queued up in response to multiple clicks, even if the first future is not complete yet?
Thanks.
As mentioned in a different answer, Futures are just a "placeholder for a value that is made available in the future". They don't necessarily imply concurrency.
Dart has a concept of isolates for concurrency. You can spawn an isolate to run some code in a parallel thread or process.
dart2js can compile isolates into Web Workers. A Web Worker can run in a separate thread.
Try something like this:
import 'dart:isolate';

// Hypothetical stand-in for the expensive work.
int doExpensiveThing() {
  var sum = 0;
  for (var i = 0; i < 1000000000; i++) {
    sum += i;
  }
  return sum;
}

expensiveOperation(SendPort replyTo) {
  var result = doExpensiveThing();
  replyTo.send(result);
}

main() async {
  var receive = new ReceivePort();
  await Isolate.spawn(expensiveOperation, receive.sendPort);
  var result = await receive.first;
  print(result);
}
(I haven't tested the above, but something like it should work.)
Event Loop & Event Queue
You should note that Futures are not threads. They do not run concurrently, and in fact, Dart is single-threaded. All Dart code runs in an event loop.
The event loop is a loop that runs as long as the current Dart isolate is alive. When you call main() to start a Dart application, the isolate is created, and it is no longer alive after the main method is completed and all items on the event queue are completed as well.
The event queue is the set of all functions that still need to finish executing. Because Dart is single threaded, all of these functions need to run one at a time. So when one item in the event queue is completed, another one begins. The exact timing and scheduling of the event queue is something that's way more complicated than I can explain myself.
Therefore, asynchronous processing is important to prevent the single thread from being blocked by some long running execution. In a UI, a long process can cause visual jankiness and hinder your app.
Futures
Futures represent a value that will be available sometime in the Future, hence the name. When a Future is created, it is returned immediately, and execution continues.
The callback associated with that Future (in your case, expensiveFunction) is "started" by being added to the event queue. When control returns to the event loop, the callback runs, and as soon as possible afterwards, so does the code registered with then.
Streams
Because your Futures are by definition asynchronous, and you don't know when they return, you want to queue up your callbacks so that they remain in order.
A Stream is an object that emits events that can be subscribed to. When you write canvasElement.onClick.listen(...) you are asking for the onClick Stream of MouseEvents, which you then subscribe to with listen.
You can use Streams to queue up events and register a callback on those events to run the code you'd like.
What to Write
import 'dart:async';
import 'dart:html';

main() {
  // Used to add events to a stream.
  var controller = new StreamController<Future>();
  // Pause when we get an event so that we take one value at a time.
  StreamSubscription subscription;
  subscription = controller.stream.listen((_) => subscription.pause());
  var canvas = new CanvasElement();
  canvas.onClick.listen((MouseEvent e) {
    print("entering event handler");
    var future = new Future<int>(expensiveFunction);
    // Resume the subscription after our callback is called.
    controller.add(future.then(redrawCanvas).then((_) => subscription.resume()));
    print("done event handler");
  });
}
int expensiveFunction() {
  for (int i = 0; i < 1000000000; i++) {
    // do something insane here
  }
  return 0;
}
redrawCanvas(int value) {
//do stuff here
print("redrawing canvas");
}
Here we are queuing up our redrawCanvas callbacks by pausing after each mouse click, and then resuming after redrawCanvas has been called.
More Information
See also this great answer to a similar question.
A great place to start reading about Dart's asynchrony is the first part of this article about the dart:io library and this article about the dart:async library.
For more information about Futures, see this article about Futures.
For Streams information, see this article about adding to Streams and this article about creating Streams.

How to lock an ASP.NET MVC action?

I've written a controller and action that I use as a service.
This service runs quite a costly action.
I'd like to limit the access to this action if there is already a currently running action.
Is there any built in way to lock an asp.net mvc action?
Thanks
Are you looking for something like this?
public class MyController : Controller
{
    private static object Lock = new object();

    public ActionResult MyAction()
    {
        lock (Lock)
        {
            // do your costly action here
        }
        return View(); // or whatever result is appropriate
    }
}
The above will prevent any other threads from executing the action if a thread is currently processing code within the lock block.
Update: here is how this works
Method code is always executed by a thread. On a heavily-loaded server, it is possible for 2 or more different threads to enter and begin executing a method in parallel. According to the question, this is what you want to prevent.
Note how the private Lock object is static. This means it is shared across all instances of your controller. So, even if there are 2 instances of this controller constructed on the heap, both of them share the same Lock object. (The object doesn't even have to be named Lock, you could name it Jerry or Samantha and it would still serve the same purpose.)
Here is what happens. Under normal circumstances, nothing stops two threads from entering the same section of code at once: thread A could begin executing a code block, and then thread B could begin executing it as well. So in theory you can have 2 threads executing the same method (or any block of code) at the same time.
The lock keyword can be used to prevent this. When a thread enters a block of code wrapped in a lock section, it "picks up" the lock object (whatever is in parentheses after the lock keyword, a.k.a. Lock, Jerry, or Samantha, which should be marked as a static field). For the duration of time that the locked section is executing, it "holds onto" the lock object. When the thread exits the locked section, it "gives up" the lock object. From the time the thread picks up the lock object until it gives it up, all other threads are prevented from entering the locked section of code. In effect, they are "paused" until the currently executing thread gives up the lock object.
So thread A picks up the lock object at the beginning of your MyAction method. Before it gives up the lock object, thread B also tries to execute this method. However, it cannot pick up the lock object because it is already held by thread A. So it waits for thread A to give up the lock object. When it does, thread B then picks up the lock object and begins executing the block of code. When thread B is finished executing the block, it gives up the lock object for the next thread that is delegated to handle this method.
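For reference, lock (Lock) { ... } is essentially compiler shorthand for a Monitor call wrapped in try/finally; the C# compiler expands it roughly like this:

// Roughly what the compiler generates for lock (Lock) { ... }:
bool lockTaken = false;
try
{
    Monitor.Enter(Lock, ref lockTaken);
    // do your costly action here
}
finally
{
    if (lockTaken)
        Monitor.Exit(Lock);
}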
... but I'm not sure if this is what you are looking for...
Using this approach will not necessarily make your code run any faster. It only ensures that a block of code can only be executed by 1 thread at a time. It is usually used for concurrency reasons, not performance reasons. If you can provide more information about your specific problem in the question, there may be a better answer than this one.
Remember that the code I presented above will cause other threads to wait before executing the block. If this is not what you want, and you want the entire action to be "skipped" if it is already being executed by another thread, then use something more like Oshry's answer. You can store this info in cache, session, or any other data storage mechanism.
I prefer to use SemaphoreSlim because it supports async operations.
If you need to control reads and writes separately, you can use ReaderWriterLockSlim.
The following code snippet uses SemaphoreSlim:
public class DemoController : Controller
{
    private static readonly SemaphoreSlim ProtectedActionSemaphore =
        new SemaphoreSlim(1);

    [HttpGet("paction")] //--or post, put, delete...
    public IActionResult ProtectedAction()
    {
        ProtectedActionSemaphore.Wait();
        try
        {
            //--call your protected action here
        }
        finally
        {
            ProtectedActionSemaphore.Release();
        }
        return Ok(); //--or any other response
    }

    [HttpGet("paction2")] //--or post, put, delete...
    public async Task<IActionResult> ProtectedActionAsync()
    {
        await ProtectedActionSemaphore.WaitAsync();
        try
        {
            //--call your protected action here
        }
        finally
        {
            ProtectedActionSemaphore.Release();
        }
        return Ok(); //--or any other response
    }
}
I hope it helps.
Having read and agreed with the above answer, I wanted a slightly different solution:
If you want to detect a second call to an action, use Monitor.TryEnter:
if (!Monitor.TryEnter(Lock, new TimeSpan(0)))
{
    throw new ServiceBusyException("Locked!");
}
try
{
    ...
}
finally
{
    Monitor.Exit(Lock);
}
Use the same static Lock object as detailed by @danludwig.
You can create a custom attribute like [UseLock] as per your requirements and put it on your action, as sketched below.
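A possible sketch of such an attribute for classic ASP.NET MVC (System.Web.Mvc); the attribute name, the 503 response, and the Items key are choices of this example, not a standard API. It rejects concurrent calls instead of queuing them:

using System.Threading;
using System.Web.Mvc;

public class UseLockAttribute : ActionFilterAttribute
{
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);
    private const string AcquiredKey = "UseLock.Acquired"; // arbitrary per-request marker

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        if (Gate.Wait(0)) // don't block: proceed only if nothing else holds the gate
            filterContext.HttpContext.Items[AcquiredKey] = true;
        else
            filterContext.Result = new HttpStatusCodeResult(503, "Action is busy");
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Release only if this request actually acquired the gate.
        if (filterContext.HttpContext.Items.Contains(AcquiredKey))
            Gate.Release();
    }
}

Usage would then just be [UseLock] on the costly action.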
I have two suggestions for that:
1- https://github.com/madelson/DistributedLock - a system-wide lock solution.
2- Hangfire's BackgroundJob.Enqueue with the [DisableConcurrentExecution(1000)] attribute.
Both solutions make a concurrent request wait for the running process to finish; I don't want to throw an error when requests arrive at the same time.
The simplest way to do that would be to save a Boolean value to the cache indicating whether the action is already running the required business logic:
var cache = System.Web.HttpContext.Current.Cache;
// Note: this check-then-set is not atomic; wrap it in a lock if that matters.
if (!Equals(cache["IsProcessRunning"], true))
{
    cache["IsProcessRunning"] = true;
    // run your logic here
    cache["IsProcessRunning"] = false;
}
Of course you can do this, or something similar, as an attribute as well.
