Sorry for the length of this post but I don't want to waste the time of anyone trying to help me because I left out relevant details. That said, it may still happen. :)
Now the details. I am working on a service that subscribes to a couple of ActiveMQ topics. Two of the topics are somewhat related: one is a "company update" and one is a "product update", and the "ID" for both is the CompanyID. The company topic includes the data from the product topic; this is required because other subscribers need the product data but don't want or need to subscribe to the product topic. Since my service is multi-threaded (a requirement beyond our discretion), as the messages arrive I add a Task to process each one to a ConcurrentDictionary using AddOrUpdate, where the update delegate is simply a ContinueWith (see below). This is done to prevent simultaneous updates for the same CompanyID, which could happen because these topics and subscribers are "durable": if my listener service goes offline (for whatever reason), we could end up with multiple pending messages (company and/or product) for the same CompanyID.
Now, my actual question (finally!): after the Task (whether just one task, or the last in a chain of ContinueWith tasks) is finished, I want to remove it from the ConcurrentDictionary (obviously). How? I have thought of, and gotten from coworkers, some ideas, but I don't really like any of them. I am not going to list those ideas, because your answer might be one of the ideas I have but don't like, and it may still end up being the best one.
I have tried to compress the code snippet to prevent you from having to scroll up and down too much, unlike my description. :)
nrtq = Not Relevant To Question
public interface IMessage
{
    long CompanyId { get; set; }
    void Process();
}

public class CompanyMessage : IMessage
{ /* implementation, nrtq */ }

public class ProductMessage : IMessage
{ /* implementation, nrtq */ }

public class Controller
{
    private static ConcurrentDictionary<long, Task> _workers = new ConcurrentDictionary<long, Task>();
    //other needed declarations, nrtq

    public Controller() { /* constructor stuff, nrtq */ }

    public void StartSubscribers()
    {
        //other code, nrtq
        _companySubscriber.OnMessageReceived += HandleCompanyMsg;
        _productSubscriber.OnMessageReceived += HandleProductMsg;
    }

    private void HandleCompanyMsg(string msg)
    {
        try {
            //other code, nrtq
            QueueItUp(new CompanyMessage(msg));
        } catch (Exception ex) { /* other code, nrtq */ }
    }

    private void HandleProductMsg(string msg)
    {
        try {
            //other code, nrtq
            QueueItUp(new ProductMessage(msg));
        } catch (Exception ex) { /* other code, nrtq */ }
    }

    private static void QueueItUp(IMessage message)
    {
        _workers.AddOrUpdate(message.CompanyId,
            x => {
                var task = new Task(message.Process);
                task.Start();
                return task;
            },
            (x, y) => y.ContinueWith(z => message.Process())
        );
    }
}
Thanks!
I won't "Accept" this answer for a while because I am eager to see if anyone else can come up with a better solution.
A coworker came up with a solution which I tweaked a little bit. Yes, I am aware of the irony (?) of using the lock statement with a ConcurrentDictionary. I don't really have the time right now to see if there would be a better collection type to use. Basically, instead of just doing a ContinueWith() for existing tasks, we replace the task with itself plus another task tacked on the end using ContinueWith().
What difference does that make? Glad you asked! :) If we had just called ContinueWith() and left the original task in the dictionary, then worker.Value.IsCompleted would return true as soon as the first task in the chain completed. By replacing the entry with the chained task, though, the collection sees only one task, and worker.Value.IsCompleted won't return true until all tasks in the chain are complete.
I admit I was a little concerned about replacing a task with itself plus a new task, because what if the task happened to be running while it was being replaced? Well, I tested the living daylights out of this and did not run into any problems. I believe what is happening is that since the task runs on its own thread and the collection just holds a reference to it, the running task is unaffected. By replacing the entry with itself+(new task) we keep a reference to the executing chain and get the "notification" when it completes, so that the next task can "continue" or IsCompleted returns true.
Also, the way the "clean up" loop works, and where it is located, means that we will have "completed" tasks hanging around in the collection but only until the next time the "clean up" runs which is the next time a message is received. Again, I did a lot of testing to see if I could cause a memory problem due to this but my service never used more than 20 MB of RAM, even while processing hundreds of messages per second. We would have to receive some pretty big messages and have a lot of long running tasks for this to ever cause a problem but it is something to keep in mind as your situation may differ.
As above, in the code below, nrtq = not relevant to question.
public interface IMessage
{
    long CompanyId { get; set; }
    void Process();
}

public class CompanyMessage : IMessage
{ /* implementation, nrtq */ }

public class ProductMessage : IMessage
{ /* implementation, nrtq */ }

public class Controller
{
    private static ConcurrentDictionary<long, Task> _workers = new ConcurrentDictionary<long, Task>();
    //other needed declarations, nrtq

    public Controller() { /* constructor stuff, nrtq */ }

    public void StartSubscribers()
    {
        //other code, nrtq
        _companySubscriber.OnMessageReceived += HandleCompanyMsg;
        _productSubscriber.OnMessageReceived += HandleProductMsg;
    }

    private void HandleCompanyMsg(string msg)
    {
        //other code, nrtq
        QueueItUp(new CompanyMessage(msg));
    }

    private void HandleProductMsg(string msg)
    {
        //other code, nrtq
        QueueItUp(new ProductMessage(msg));
    }

    private static void QueueItUp(IMessage message)
    {
        lock (_workers)
        {
            // Sweep out any chains that have finished since the last message arrived.
            foreach (var worker in _workers)
            {
                if (!worker.Value.IsCompleted) continue;
                Task task;
                _workers.TryRemove(worker.Key, out task);
            }

            var id = message.CompanyId;
            if (_workers.ContainsKey(id))
            {
                // Replace the stored task with itself plus the new work tacked on the end.
                _workers[id] = _workers[id].ContinueWith(x => message.Process());
            }
            else
            {
                var task = new Task(y => message.Process(), id);
                _workers.TryAdd(id, task);
                task.Start();
            }
        }
    }
}
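For what it's worth, another pattern sometimes used for this kind of cleanup (not from the original post, just a hedged sketch with a made-up method name) is to let each stored chain remove its own entry when it finishes, instead of sweeping on the next message. ConcurrentDictionary's explicit ICollection<KeyValuePair<...>>.Remove overload only removes the entry if the key still maps to that exact task, so a chain that has since been extended is left alone. The race between "chain just finished" and "new message wants to extend it" that the lock above guards against still needs thought; this is only an illustration.

private static void QueueItUpSelfCleaning(IMessage message)
{
    // Add a new task, or chain the work onto the existing one, as before.
    var stored = _workers.AddOrUpdate(
        message.CompanyId,
        key =>
        {
            var task = new Task(message.Process);
            task.Start();
            return task;
        },
        (key, existing) => existing.ContinueWith(t => message.Process()));

    // When the task we just stored completes, remove the entry only if the
    // dictionary still holds that exact task (i.e. no newer chain has replaced it).
    stored.ContinueWith(t =>
        ((ICollection<KeyValuePair<long, Task>>)_workers)
            .Remove(new KeyValuePair<long, Task>(message.CompanyId, stored)));
}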
There are some database operations I need to execute before the end of the final attempt of my Hangfire background job (I need to delete the database record related to the job).
My current job is set with the following attribute:
[AutomaticRetry(Attempts = 5, OnAttemptsExceeded = AttemptsExceededAction.Delete)]
With that in mind, I need to determine what the current attempt number is, but am struggling to find any documentation in that regard from a Google search or Hangfire.io documentation.
Simply add PerformContext to your job method; you'll also be able to access your JobId from this object. For attempt number, this still relies on magic strings, but it's a little less flaky than the current/only answer:
public void SendEmail(PerformContext context, string emailAddress)
{
    string jobId = context.BackgroundJob.Id;
    int retryCount = context.GetJobParameter<int>("RetryCount");
    // send an email
}
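If what you ultimately need is to act only on the last attempt, a rough sketch along those lines (the 5 must be kept in sync with your AutomaticRetry attribute, and the exact off-by-one behaviour of "RetryCount" is worth verifying against the Hangfire version you run):

public void SendEmail(PerformContext context, string emailAddress)
{
    const int maxAttempts = 5; // keep in sync with [AutomaticRetry(Attempts = 5, ...)]
    int retryCount = context.GetJobParameter<int>("RetryCount");

    try
    {
        // send an email
    }
    catch
    {
        // "RetryCount" starts at 0 and is incremented each time a retry is scheduled,
        // so this should only fire on the final attempt - verify the boundary yourself.
        if (retryCount >= maxAttempts)
        {
            // delete the database record related to the job here
        }
        throw; // rethrow so Hangfire still records the failure
    }
}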
(NB! This is a solution to the OP's problem. It does not answer the question "How to get the current attempt number". If that is what you want, see the accepted answer for instance)
Use a job filter and the OnStateApplied callback:
public class CleanupAfterFailureFilter : JobFilterAttribute, IServerFilter, IApplyStateFilter
{
    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        try
        {
            var failedState = context.NewState as FailedState;
            if (failedState != null)
            {
                // Job has finally failed (retry attempts exceeded)
                // *** DO YOUR CLEANUP HERE ***
            }
        }
        catch (Exception)
        {
            // Unhandled exceptions can cause an endless loop.
            // Therefore, catch and ignore them all.
            // See notes below.
        }
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        // Must be implemented, but can be empty.
    }
}
Add the filter directly to the job function:
[CleanupAfterFailureFilter]
public static void MyJob()
or add it globally:
GlobalJobFilters.Filters.Add(new CleanupAfterFailureFilter ());
or like this:
var options = new BackgroundJobServerOptions
{
    FilterProvider = new JobFilterCollection { new CleanupAfterFailureFilter() }
};
app.UseHangfireServer(options, storage);
Or see http://docs.hangfire.io/en/latest/extensibility/using-job-filters.html for more information about job filters.
NOTE: This is based on the accepted answer: https://stackoverflow.com/a/38387512/2279059
The difference is that OnStateApplied is used instead of OnStateElection, so the filter callback is invoked only after the maximum number of retries. A downside to this method is that the state transition to "failed" cannot be interrupted, but this is not needed in this case and in most scenarios where you just want to do some cleanup after a job has failed.
NOTE: Empty catch handlers are bad, because they can hide bugs and make them hard to debug in production. It is necessary here, so the callback doesn't get called repeatedly forever. You may want to log exceptions for debugging purposes. It is also advisable to reduce the risk of exceptions in a job filter. One possibility is, instead of doing the cleanup work in-place, to schedule a new background job which runs if the original job failed. Be careful to not apply the filter CleanupAfterFailureFilter to it, though. Don't register it globally, or add some extra logic to it...
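As a sketch of that last suggestion (CleanupJobs.DeleteRecordFor is a hypothetical method you would write yourself; the point is just that the filter does nothing but enqueue):

public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
{
    if (context.NewState is FailedState)
    {
        var jobId = context.BackgroundJob.Id;

        // Enqueue a separate job to do the actual cleanup. Make sure the cleanup
        // method does NOT have CleanupAfterFailureFilter applied to it.
        BackgroundJob.Enqueue(() => CleanupJobs.DeleteRecordFor(jobId));
    }
}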
You can use the OnPerforming or OnPerformed methods of IServerFilter if you want to check the attempts, or you can just wait on OnStateElection of IElectStateFilter. I don't know exactly what requirements you have, so it's up to you. Here's the code you want :)
public class JobStateFilter : JobFilterAttribute, IElectStateFilter, IServerFilter
{
    public void OnStateElection(ElectStateContext context)
    {
        // every job that has failed after its retry attempts ends up here
        var failedState = context.CandidateState as FailedState;
        if (failedState == null) return;
    }

    public void OnPerforming(PerformingContext filterContext)
    {
        // do nothing
    }

    public void OnPerformed(PerformedContext filterContext)
    {
        // you have the option to move all of this code to OnPerforming if you want.
        var api = JobStorage.Current.GetMonitoringApi();
        var job = api.JobDetails(filterContext.BackgroundJob.Id);
        foreach (var history in job.History)
        {
            // check the Reason property; you will find a string like
            // "Retry attempt 3 of 3: The method or operation is not implemented."
        }
    }
}
How to add your filter
GlobalJobFilters.Filters.Add(new JobStateFilter());
----- or
var options = new BackgroundJobServerOptions
{
    FilterProvider = new JobFilterCollection { new JobStateFilter() }
};
app.UseHangfireServer(options, storage);
I have a task that runs in a different thread and requires the session. I've done:
public GenerateDocList(LLStatistics.DocLists.DocList docs)
{
    this.docs = docs;
    context = HttpContext.Current;
}
and
public void StartTask()
{
    //this code runs in a separate thread
    HttpContext.Current = context;
    /* rest of the code */
}
Now the thread has knowledge of the session and it works for a while, but at some point in my loop HttpContext.Current.Session becomes null. Any ideas what I can do about this?
public static LLDAC.DAL.DBCTX LLDB
{
    get
    {
        LLDAC.DAL.DBCTX currentUserDBContext = HttpContext.Current.Session["LLDBContext"] as LLDAC.DAL.DBCTX;
        if (currentUserDBContext == null)
        {
            currentUserDBContext = new LLDAC.DAL.DBCTX();
            HttpContext.Current.Session.Add("LLDBContext", currentUserDBContext); //this works only for a few loop iterations
        }
        return currentUserDBContext;
    }
}
In general, this is a very fragile pattern for a multi-threaded operation. Long-running tasks (which I assume this is) are best suited to instance methods on a class rather than static methods, so that the class can maintain any dependent objects. Also, since session state is not thread safe and can span multiple requests, you are getting into some very risky business by caching your DB context in the session at all.
If you are convinced this is best done with static methods and stored in the session, you may be able to do something like this:
public static HttpSessionState MySession { get; set; }
public GenerateDocList(LLStatistics.DocLists.DocList docs)
{
    this.docs = docs;
    MySession = HttpContext.Current.Session;
}
Then:
public static LLDAC.DAL.DBCTX LLDB
{
    get
    {
        // Check the session reference before using it.
        if (MySession == null)
        {
            throw new InvalidOperationException("MySession is null");
        }

        LLDAC.DAL.DBCTX currentUserDBContext = MySession["LLDBContext"] as LLDAC.DAL.DBCTX;
        if (currentUserDBContext == null)
        {
            currentUserDBContext = new LLDAC.DAL.DBCTX();
            MySession.Add("LLDBContext", currentUserDBContext);
        }
        return currentUserDBContext;
    }
}
Note that you could still run into issues with the session since other threads could still modify the session.
A better solution would probably look something like this:
public class DocListGenerator : IDisposable
{
    public LLDAC.DAL.DBCTX LLDB { get; private set; }

    public DocListGenerator()
    {
        LLDB = new LLDAC.DAL.DBCTX();
    }

    public void GenerateList()
    {
        // Put loop here.
    }

    public void Dispose()
    {
        if (LLDB != null)
        {
            LLDB.Dispose();
        }
    }
}
Then your calling code looks like this:
public void StartTask()
{
    using (DocListGenerator generator = new DocListGenerator())
    {
        generator.GenerateList();
    }
}
If you really want to cache something, you could cache your instance like this:
HttpContext.Current.Session.Add("ListGenerator", generator);
However, I still don't think that is a particularly good idea since your context could still be disposed or otherwise altered by a different thread.
Using anything related to HttpContext.Current on anything besides the main request thread is generally going to get you into trouble in ASP.NET.
The HttpContext is actually backed by a thread-pool thread, and that thread may very well get reused for another request.
This is actually a common issue with using the new async/await keywords in ASP.NET as well.
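One way around it is to read everything the background work needs while you are still on the request thread, and hand only those plain values to the new thread, so it never has to touch HttpContext.Current at all. A rough sketch (the session key and the worker method are made up):

public void StartGeneration()
{
    // Still on the request thread: pull values out of the context/session here.
    string userName = HttpContext.Current.User.Identity.Name;
    var docs = (LLStatistics.DocLists.DocList)HttpContext.Current.Session["DocList"]; // hypothetical session key

    Task.Factory.StartNew(() =>
    {
        // Only the captured locals are used on this thread;
        // HttpContext.Current and the session are never touched here.
        GenerateReportFor(userName, docs); // hypothetical worker method
    });
}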
In order to help you, it would help to know why you're attempting this in the first place:
Is this a single server or a web farm with multiple load balanced servers?
Are you hosting it yourself, or is it the site hosted by a provider?
What is the SessionState implementation (SQL Server, State Server, In-Process, or something custom like MemCached, Redis, etc...)
What version of ASP.NET?
Why are you starting a new thread instead of just doing the processing on the request thread?
If you really can't (or shouldn't) use session state, then you could use something like a correlation ID.
Guid correlationID = Guid.NewGuid();
HttpContext.Current.Session["DocListID"] = correlationID;
DocList.GoOffAndGenerateSomeStuffOnANewThread(correlationID);

// ... when the process is done, store the results somewhere under the specified ID
// (serialize the result to SQL Server, the file system, a cache, ...)
DocList.StoreResultsSomewhereUnderID();

// ... later on
DocList.CheckForResultsUnderID(HttpContext.Current.Session["DocListID"]);
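The "store the results somewhere under the ID" part could be as simple as a static thread-safe dictionary on a single server (on a web farm you would use SQL Server or a distributed cache instead). A hedged sketch, with made-up names:

public static class DocListResultStore
{
    // Requires System.Collections.Concurrent.
    private static readonly ConcurrentDictionary<Guid, byte[]> _results =
        new ConcurrentDictionary<Guid, byte[]>();

    // Called from the background thread when generation finishes.
    public static void Store(Guid correlationId, byte[] document)
    {
        _results[correlationId] = document;
    }

    // Called later from a request thread; returns null if the work isn't done yet.
    public static byte[] TryGet(Guid correlationId)
    {
        byte[] document;
        return _results.TryGetValue(correlationId, out document) ? document : null;
    }
}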
I am working on a UI application that uses the multiple entry point approach.
I am referring to the link and trying to make a demo.
Here is the code:
public class DemoApp extends UiApplication implements RealtimeClockListener
{
    private static DemoApp dmMain;
    private static final long dm_APP_ID = 0x6ef4b845de59ecf9L;

    private static DemoApp getDemoApp()
    {
        if (dmMain == null)
        {
            RuntimeStore dmAppStore = RuntimeStore.getRuntimeStore();
            dmMain = (DemoApp) dmAppStore.get(dm_APP_ID);
        }
        return dmMain;
    }

    private static void setDemoApp(DemoApp demoAppMain)
    {
        RuntimeStore dmAppStore = RuntimeStore.getRuntimeStore();
        dmAppStore.remove(dm_APP_ID);
        dmAppStore.put(dm_APP_ID, demoAppMain);
    }

    public static void main(String[] args)
    {
        Log.d(" Application argument " + args);
        if (args.length > 0 && args[0].equals("Demo_Alternate"))
        {
            Log.d("Running Demo_Alternate #### Running Demo_Alternate #### Running Demo_Alternate");
            dmMain = new DemoApp();
            dmMain.enterEventDispatcher();
            setDemoApp(dmMain);
        }
        else
        {
            Log.d("Running Demo #### Running Demo #### Running Demo #### Running Demo");
            getDemoApp().initializeMain();
        }
    }

    public DemoApp()
    {
        this.addRealtimeClockListener(this);
    }

    private void initializeMain()
    {
        UiApplication.getUiApplication().invokeLater(new Runnable()
        {
            public void run()
            {
                try
                {
                    pushScreen(new DemoMainScreen());
                }
                catch (Exception e)
                {
                    Log.e(e.toString());
                }
            }
        });
    }

    public void clockUpdated()
    {
        showMessage("DemoAppClock Updated");
        Log.d("DemoAppClock Updated #### DemoAppClock Updated #### DemoAppClock Updated");
    }

    private void showMessage(String message)
    {
        synchronized (Application.getEventLock())
        {
            Dialog dlg = new Dialog(Dialog.D_OK, message, Dialog.OK, null, Manager.FIELD_HCENTER);
            Ui.getUiEngine().pushGlobalScreen(dlg, 1, UiEngine.GLOBAL_QUEUE);
        }
    }
}
:- I have created an alternate entry point named Demo_Alternate that runs at startup.
:- If the application has separate entry points, that means separate processes (see the link).
Now my questions are:
While running the code, I am getting "Uncaught exception: no application instance".
I just want to have one application instance - I don't want separate processes.
Can we use the (Application) singleton approach for alternate entry points?
Only looked briefly at this code, but see an obvious problem here:
dmMain.enterEventDispatcher();
setDemoApp(dmMain);
enterEventDispatcher never returns, so you never put your Application instance in RuntimeStore.
I suggest you review the following KB article; you might find its approach to accessing a RuntimeStore-maintained object easier to use. Or not.
Singleton using RuntimeStore
Update
If this solution does not work, please update your original post with the corrected code.
I certainly agree with Peter that calling setDemoApp(dmMain) after enterEventDispatcher() means it doesn't get called.
That said, I think you have a more basic misunderstanding here.
Using alternate entry points will create multiple processes. See here for more.
But, you say that you don't want separate processes. Can you tell us why not?
Separate BlackBerry processes that are designed to work together can still share data, using the RuntimeStore, for example.
Maybe you could tell us more about what your "Demo" and "Demo Alternate" are supposed to do.
So I'm trying to get my head around this new 'async' stuff in .NET 4.5. I previously played a bit with async controllers and the Task Parallel Library and wound up with this piece of code:
Take this model:
public class TestOutput
{
    public string One { get; set; }
    public string Two { get; set; }
    public string Three { get; set; }

    public static string DoWork(string input)
    {
        Thread.Sleep(2000);
        return input;
    }
}
Which is used in a controller like this:
public void IndexAsync()
{
    AsyncManager.OutstandingOperations.Increment(3);

    Task.Factory.StartNew(() =>
    {
        return TestOutput.DoWork("1");
    })
    .ContinueWith(t =>
    {
        AsyncManager.OutstandingOperations.Decrement();
        AsyncManager.Parameters["one"] = t.Result;
    });

    Task.Factory.StartNew(() =>
    {
        return TestOutput.DoWork("2");
    })
    .ContinueWith(t =>
    {
        AsyncManager.OutstandingOperations.Decrement();
        AsyncManager.Parameters["two"] = t.Result;
    });

    Task.Factory.StartNew(() =>
    {
        return TestOutput.DoWork("3");
    })
    .ContinueWith(t =>
    {
        AsyncManager.OutstandingOperations.Decrement();
        AsyncManager.Parameters["three"] = t.Result;
    });
}

public ActionResult IndexCompleted(string one, string two, string three)
{
    return View(new TestOutput { One = one, Two = two, Three = three });
}
This controller renders the view in 2 seconds, thanks to the magic of the TPL.
Now I expected (rather naively) that the code above would translate into the following, using the new 'async' and 'await' features of C# 5:
public async Task<ActionResult> Index()
{
    return View(new TestOutput
    {
        One = await Task.Run(() => TestOutput.DoWork("one")),
        Two = await Task.Run(() => TestOutput.DoWork("two")),
        Three = await Task.Run(() => TestOutput.DoWork("three"))
    });
}
This controller renders the view in 6 seconds. Somewhere in the translation the code became no longer parallel. I know async and parallel are two different concepts, but somehow I thought the code would work the same. Could someone point out what is happening here and how it can be fixed?
Somewhere in the translation the code became no longer parallel.
Precisely. await will (asynchronously) wait for a single operation to complete.
Parallel asynchronous operations can be done by starting the actual Tasks but not awaiting them until later:
public async Task<ActionResult> Index()
{
    // Start all three operations.
    var tasks = new[]
    {
        Task.Run(() => TestOutput.DoWork("one")),
        Task.Run(() => TestOutput.DoWork("two")),
        Task.Run(() => TestOutput.DoWork("three"))
    };

    // Asynchronously wait for them all to complete.
    var results = await Task.WhenAll(tasks);

    // Retrieve the results.
    return View(new TestOutput
    {
        One = results[0],
        Two = results[1],
        Three = results[2]
    });
}
P.S. There's also a Task.WhenAny.
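For example, Task.WhenAny completes as soon as the first of the tasks finishes, rather than waiting for all of them (a minimal sketch, inside an async method):

var tasks = new[]
{
    Task.Run(() => TestOutput.DoWork("one")),
    Task.Run(() => TestOutput.DoWork("two")),
    Task.Run(() => TestOutput.DoWork("three"))
};

// Completes as soon as any one of the three tasks finishes.
Task<string> first = await Task.WhenAny(tasks);
string firstResult = await first; // observe its result (or exception)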
No, you stated the reason that this is different already. Parallel and Async are two different things.
The Task version works in 2 seconds because it runs the three operations at the same time (as long as you have 3+ processors).
The await is actually what it sounds like: the code will await the completion of the Task.Run before continuing to the next line of code.
So, the big difference between the TPL version and the async version is that the TPL version runs the operations in any order, because all of the tasks are independent of each other, whereas the async version runs them in the order the code is written. So, if you want parallel, use the TPL, and if you want async, use async.
The point of async is the ability to write synchronous-looking code that will not lock up a UI while a long-running action is happening. However, that is typically an action where all the processor is doing is waiting for a response. async/await makes it so that the code that called the async method will not wait for the async method to return, that is all. So, if you really wanted to emulate your first model using async/await (which I would NOT suggest), you could do something like this:
void MainMethod()
{
    RunTask1();
    RunTask2();
    RunTask3();
}

// async void is used here purely for illustration, so the calls above are fire-and-forget.
async void RunTask1()
{
    var one = await Task.Factory.StartNew(() => TestOutput.DoWork("one"));
    //do stuff with one
}

async void RunTask2()
{
    var two = await Task.Factory.StartNew(() => TestOutput.DoWork("two"));
    //do stuff with two
}

async void RunTask3()
{
    var three = await Task.Factory.StartNew(() => TestOutput.DoWork("three"));
    //do stuff with three
}
The code path will go something like this (if the tasks are long running):
1. main call to RunTask1
2. RunTask1 awaits and returns
3. main call to RunTask2
4. RunTask2 awaits and returns
5. main call to RunTask3
6. RunTask3 awaits and returns
7. main is now done
8. RunTask1/2/3 returns and continues doing something with one/two/three
9. Same as 8, except minus the one that already completed
10. Same as 8, except minus the two that already completed
****A big disclaimer about this, though: await will run synchronously if the task is already completed by the time the await is hit. This saves the runtime from having to perform its voodoo :) since it is not needed. It also makes the code flow above incorrect, as the flow is now synchronous.****
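A tiny illustration of that last point (assuming a console test harness): awaiting a task that is already completed continues synchronously on the same thread, so no switch takes place.

static async Task DemoAsync()
{
    Task<int> done = Task.FromResult(42); // already completed

    Console.WriteLine(Environment.CurrentManagedThreadId);
    int value = await done;               // completed, so this continues synchronously
    Console.WriteLine(Environment.CurrentManagedThreadId); // prints the same thread id
}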
Eric Lippert's blog post on this explains things much better than I am doing :)
http://blogs.msdn.com/b/ericlippert/archive/2010/10/29/asynchronous-programming-in-c-5-0-part-two-whence-await.aspx
Hopefully that helps clear up some of your questions about async versus the TPL. The biggest thing to take away is that async is NOT parallel.
I have a system whereby users can upload sometimes large(100-200 MB) files from within an MVC3 application. I would like to not block the UI while the file is uploading, and after some research, it looked like the new AsyncController might let me do what I'm trying to do. Problem is - every example I have seen isn't really doing the same thing, so I seem to be missing one crucial piece. After much futzing and fiddling, here's my current code:
public void CreateAsync(int CompanyId, FormCollection fc)
{
    UserProfile up = new UserRepository().GetUserProfile(User.Identity.Name);
    int companyId = CompanyId;

    // make sure we got a file..
    if (Request.Files.Count < 1)
    {
        RedirectToAction("Create");
    }

    HttpPostedFileBase hpf = Request.Files[0] as HttpPostedFileBase;
    if (hpf.ContentLength > 0)
    {
        AsyncManager.OutstandingOperations.Increment();
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += (o, e) =>
        {
            string fileName = hpf.FileName;
            AsyncManager.Parameters["recipientId"] = up.id;
            AsyncManager.Parameters["fileName"] = fileName;
        };
        worker.RunWorkerCompleted += (o, e) => { AsyncManager.OutstandingOperations.Decrement(); };
        worker.RunWorkerAsync();
    }

    RedirectToAction("Uploading");
}

public void CreateCompleted(int recipientId, string fileName)
{
    SystemMessage msg = new SystemMessage();
    msg.IsRead = false;
    msg.Message = "Your file " + fileName + " has finished uploading.";
    msg.MessageTypeId = 1;
    msg.RecipientId = recipientId;
    msg.SendDate = DateTime.Now;
    SystemMessageRepository.AddMessage(msg);
}

public ActionResult Uploading()
{
    return View();
}
Now the idea here is to have the user submit the file, call the background process which will do a bunch of things (for testing purposes is just pulling the filename for now), while directing them to the Uploading view which simply says "your file is uploading...carry on and we'll notify you when it's ready". The CreateCompleted method is handling that notification by inserting a message into the users's message queue.
So the problem is, I never get the Uploading view. Instead I get a blank Create view. I can't figure out why. Is it because the CreateCompleted method is getting called which shows the Create view? Why would it do that if it's returning void? I just want it to execute silently in the background, insert a message and stop.
So is this the right approach to take at all? My whole reason for doing it is that with some network speeds it can take 30 minutes to upload a file, and in its current version it blocks the entire application until it's complete. I'd rather not use something like a popup window if I can avoid it, since that gets into a bunch of support issues with popup-blocking scripts, etc.
Anyway - I am out of ideas. Suggestions? Help? Alternate methods I might consider?
Thanks in advance.
You are doing it all wrong here. Assume that your action name is Create.
CreateAsync will catch the request; it should be a void method and return nothing. If you have attributes, you should apply them to this method.
CreateCompleted is your method which you should treat as a standard controller action method and you should return your ActionResult inside this method.
Here is a simple example for you:
[HttpPost]
public void CreateAsync(int id)
{
    AsyncManager.OutstandingOperations.Increment();

    var task = Task<double>.Factory.StartNew(() =>
    {
        double foo = 0;
        for (var i = 0; i < 1000; i++)
        {
            foo += Math.Sqrt(i);
        }
        return foo;
    }).ContinueWith(t =>
    {
        if (!t.IsFaulted)
        {
            AsyncManager.Parameters["headers1"] = t.Result;
        }
        else if (t.IsFaulted && t.Exception != null)
        {
            AsyncManager.Parameters["error"] = t.Exception;
        }
        AsyncManager.OutstandingOperations.Decrement();
    });
}

public ActionResult CreateCompleted(double headers1, Exception error)
{
    if (error != null)
        throw error;

    //Do what you need to do here
    return RedirectToAction("Index");
}
Also keep in mind that this method will still block till the operation is completed. This is not a "fire and forget" type of async operation.
For more info, have a look:
Using an Asynchronous Controller in ASP.NET MVC
Edit
What you want here is something like the below code. Forget about all the AsyncController stuff and this is your create action post method:
[HttpPost]
public ActionResult About()
{
    Task.Factory.StartNew(() =>
    {
        System.Threading.Thread.Sleep(10000);
        if (!System.IO.Directory.Exists(Server.MapPath("~/FooBar")))
            System.IO.Directory.CreateDirectory(Server.MapPath("~/FooBar"));
        System.IO.File.Create(Server.MapPath("~/FooBar/foo.txt"));
    });

    return RedirectToAction("Index");
}
Notice that I waited 10 seconds there in order to make it realistic. After you make the post, you will see that it returns immediately without waiting. Then open up the root folder of your app and watch: you will notice that a folder and file are created after 10 seconds.
But (and this is a big one), there is no exception handling here, no logic for notifying the user, etc.
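If you do go this fire-and-forget route, the task body should at least catch its own exceptions and record the outcome somewhere the user can see it. A rough sketch, reusing the SystemMessage pieces from the question (everything else is illustrative only):

[HttpPost]
public ActionResult Create(int companyId)
{
    int recipientId = new UserRepository().GetUserProfile(User.Identity.Name).id;

    Task.Factory.StartNew(() =>
    {
        try
        {
            // ... do the long-running work here ...

            SystemMessageRepository.AddMessage(new SystemMessage
            {
                IsRead = false,
                Message = "Your file has finished uploading.",
                MessageTypeId = 1,
                RecipientId = recipientId,
                SendDate = DateTime.Now
            });
        }
        catch (Exception ex)
        {
            // Nothing awaits this task, so an unhandled exception would simply be lost.
            // Log it and let the user know something went wrong.
            SystemMessageRepository.AddMessage(new SystemMessage
            {
                IsRead = false,
                Message = "Your upload failed: " + ex.Message,
                MessageTypeId = 1,
                RecipientId = recipientId,
                SendDate = DateTime.Now
            });
        }
    });

    return RedirectToAction("Uploading");
}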
If I were you, I would look at a different approach here or make the user suffer and wait.