Timeout in JBI on Apache Camel

I'm developing a JBI component which calls another JBI component through a Camel route. The second component performs a long-running operation that may sometimes hang. I would like to configure my Camel route so that it is restricted by a timeout (let's say 5 seconds). It should somehow behave like an asynchronous call... Is it possible?

Actually I don't think that's possible due to the asynchronous nature of JBI.

Related

C# 5 .NET MVC long async task, progress report and cancel globally

I use ASP.NET MVC 5 and I have a long-running action which has to poll web services, process data and store it in a database.
For that I want to use the TPL library to start the task asynchronously.
But I wonder how to do three things:
I want to report the progress of this task. For this I'm thinking about SignalR.
I want to be able to leave the page where I start this task and still see its progress across the website (from a panel on the left, but that part is fine).
And I want to be able to cancel this task globally (from my panel on the left).
I know a fair amount about all the technologies involved, but I'm not sure about the best way to achieve this.
Can someone help me with the best solution?
The fact that you want to run long running work while the user can navigate away from the page that initiates the work means that you need to run this work "in the background". It cannot be performed as part of a regular HTTP request because the user might cancel his request at any time by navigating away or closing the browser. In fact this seems to be a key scenario for you.
Background work in ASP.NET is dangerous. You can certainly pull it off, but it is not easy to get right. Also, worker processes can exit for many reasons (app pool recycle, deployment, machine reboot, machine failure, a stack overflow or OOM exception on an unrelated thread). So make sure your long-running work tolerates being aborted mid-way. You can reduce the likelihood that this happens but never exclude the possibility.
You can make your code safe in the face of arbitrary termination by wrapping all work in a transaction. This of course only works if you don't cause non-transacted side-effects like web-service calls that change state. It is not possible to give a general answer here because achieving safety in the presence of arbitrary termination depends highly on the concrete work to be done.
Here's a possible architecture that I have used in the past:
When a job comes in you write all necessary input data to a database table and report success to the client.
You need a way to start a worker to work on that job. You could start a task immediately for that. You also need a periodic check that looks for unstarted work in case the app exits after having added the work item but before starting a task for it. Have the Windows task scheduler call a secret URL in your app once per minute that does this.
When you start working on a job you mark that job as running so that it is not accidentally picked up a second time. Work on that job, write the results and mark it as done. All in a single transaction. When your process happens to exit mid-way, the transaction is rolled back and all data involved is reset.
Write job progress to a separate table row on a separate connection and separate transaction. The browser can poll the server for progress information. You could also use SignalR but I don't have experience with that and I expect it would be hard to get it to resume progress reporting in the presence of arbitrary termination.
Cancellation would be done by setting a cancel flag in the progress information row. The app needs to poll that flag.
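For illustration, a minimal sketch of that claim-work-complete loop in C#, assuming a hypothetical Jobs table with Status, InputData and Result columns (error handling, the progress rows and the cancel-flag poll are omitted for brevity):

```csharp
using System;
using System.Data.SqlClient;

// Hypothetical worker, invoked by the periodic check (e.g. the scheduler-called URL).
// It claims one unstarted job, does the work and marks it done, all in one transaction,
// so an abort at any point simply rolls the job back to its unstarted state.
public static class JobWorker
{
    public static void ProcessOneJob(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                // Claim a single unstarted job and mark it as running.
                var claim = new SqlCommand(
                    @"UPDATE TOP (1) Jobs SET Status = 'Running'
                      OUTPUT INSERTED.Id, INSERTED.InputData
                      WHERE Status = 'Unstarted'", conn, tx);

                int jobId;
                string input;
                using (var reader = claim.ExecuteReader())
                {
                    if (!reader.Read()) return;   // nothing to do
                    jobId = reader.GetInt32(0);
                    input = reader.GetString(1);
                }

                // The real work would also write progress rows and poll the cancel flag
                // on a separate connection/transaction, as described above.
                string result = DoTheActualWork(input);

                var finish = new SqlCommand(
                    "UPDATE Jobs SET Status = 'Done', Result = @r WHERE Id = @id", conn, tx);
                finish.Parameters.AddWithValue("@r", result);
                finish.Parameters.AddWithValue("@id", jobId);
                finish.ExecuteNonQuery();

                tx.Commit();   // if the process dies before this line, the job resets
            }
        }
    }

    private static string DoTheActualWork(string input)
    {
        return "done: " + input;   // placeholder for the long-running work
    }
}
```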
Maybe you can make use of message queueing for job processing, but I'm always wary of using it. To process a message in a transacted way you need MSDTC, which is unsupported by many high-availability solutions for SQL Server.
You might think that this architecture is not very sophisticated; it makes use of polling for lots of things. Polling is a primitive technique, but it works quite well: it is reliable, well understood, and has a simple concurrency model.
If you can assume that your application never exits at inopportune times the architecture would be much simpler. But this cannot be assumed. You cannot assume that there will be no deployments during work hours and that there will be no bugs leading to crashes.
Even though using an HTTP worker to run a long task is a bad thing, I have made a small example of how to manage it with SignalR:
Inside this example you can:
Start a task
See task progress
Cancel the task
It's based on:
Twitter Bootstrap
Knockout.js
SignalR
C# 5.0 async/await with CancellationToken and IProgress
You can find the source of this example here :
https://github.com/dragouf/SignalR.Progress
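Independent of that repository, a minimal sketch of the async/await + CancellationToken + IProgress combination it is based on could look like this (all type and member names here are hypothetical):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical long-running job that reports progress and honours cancellation.
public class ReportJob
{
    public async Task RunAsync(IProgress<int> progress, CancellationToken token)
    {
        for (int step = 1; step <= 100; step++)
        {
            token.ThrowIfCancellationRequested();   // cooperative cancel from the UI panel
            await Task.Delay(200, token);           // stand-in for polling a web service
            progress.Report(step);                  // e.g. forwarded to clients via a SignalR hub
        }
    }
}

// Usage sketch: the caller owns the CancellationTokenSource so the task
// can be cancelled "globally" from anywhere that can reach it.
public static class JobRunner
{
    public static readonly CancellationTokenSource Cts = new CancellationTokenSource();

    public static Task Start(Action<int> onProgress)
    {
        var progress = new Progress<int>(onProgress);
        return new ReportJob().RunAsync(progress, Cts.Token);
    }
}
```

Cancelling "globally" then just means calling JobRunner.Cts.Cancel() from wherever the panel's cancel action ends up.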

How can I make a method run in the background in ASP.NET MVC

I have a particularly long-running method that I need to execute from my controller. The method is in its own model. I am using an async controller, and I have the method set up using the asyncFunc library to make it asynchronous. I have also tried invoking it in its own process. The problem is I want the controller to go ahead and return a view so the user can continue doing other things, as the method will notify the user via e-mail when it has completed or hit any errors.
The problem is that even though it is an asynchronous method, the controller will not move forward to return the view until the process is done (15+ minutes), and if you navigate to a different page the method stops executing.
So how can I get the method to execute as a worker and free up the controller?
Any help would be greatly appreciated.
all the best,
Chase Q, Aucoin
Use ThreadPool.QueueUserWorkItem() as a fire-and-forget approach in the ASPX page.
Do the long-running work in the WaitCallback you pass to QUWI.
When the work is complete, that WaitCallback can send an email, or whatever else it wants.
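A minimal sketch of that fire-and-forget shape (the report and mail helpers are hypothetical):

```csharp
using System;
using System.Threading;

public class ReportKicker
{
    // Called from the controller action: queue the work and return immediately
    // so the action can return its view right away.
    public static void Kick(string userEmail)
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try
            {
                RunLongReport();                          // hypothetical 15-minute job
                SendMail(userEmail, "Report finished.");  // hypothetical e-mail helper
            }
            catch (Exception ex)
            {
                SendMail(userEmail, "Report failed: " + ex.Message);
            }
        });
    }

    private static void RunLongReport() { /* the long-running work */ }
    private static void SendMail(string to, string body) { /* notification */ }
}
```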
You need to take care to handle the case where w3wp.exe is stopped during the 15-minute run. What will you do if the work is 2/3 complete? Some options are making the work restartable, or simply allowing the interrupted work to be forgotten.
Making it restartable might mean that, when w3wp.exe restarts, your ASP.NET logic makes sure to begin again any work that was interrupted. It might also mean that your ASP.NET logic sets "syncpoints" so that it knows where to restart.
If you want the restartable option, you might think about Workflow, which is specifically designed for this purpose - maintaining state of long-running workflows, restarting automatically, and so on. If you use Workflow, you can set it to run asynchronously, and you may decide you do not need QueueUserWorkItem.
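For what it's worth, a minimal sketch of kicking off a WF 4 activity asynchronously (the activity is hypothetical, and the persistence/instance-store configuration that makes it restartable is not shown):

```csharp
using System.Activities;

// Hypothetical long-running activity; with an instance store configured,
// Workflow can persist and resume its state across process restarts.
public class LongJobActivity : CodeActivity
{
    protected override void Execute(CodeActivityContext context)
    {
        // do one restartable chunk of the work here
    }
}

public static class LongJobHost
{
    public static void Start()
    {
        var app = new WorkflowApplication(new LongJobActivity());
        app.Run();   // runs asynchronously on its own thread
    }
}
```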
see also:
Moving a time taking process away from my asp.net application
the Workflow Foundation tag
This will help: http://msdn.microsoft.com/en-us/library/ms227433.aspx
It is the standard way of running a background process on the server in the .NET stack.
I don't know why, but I remain convinced that this should not be done. Executing background threads in ASP.NET smells. You will also steal threads from the ASP.NET thread pool, which is controlled by IIS. IIS can decide that something is wrong with your worker process and restart it at any time just to keep memory consumption, processing time or thread consumption low. If you need background logic, create a custom NT service and call the process on that service either via old .NET Remoting or WCF.
By the way, the approach I described is used frequently in commercial applications, and those which don't use it often self-host the whole web server.
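A minimal sketch of that hand-off over WCF (the contract name, binding and endpoint address are assumptions; the NT service would host the same contract with a ServiceHost):

```csharp
using System.ServiceModel;

// Hypothetical contract shared between the web app and the NT service.
[ServiceContract]
public interface IBackgroundWorkService
{
    [OperationContract(IsOneWay = true)]
    void StartJob(int jobId);
}

// In the web app: hand the job off to the service instead of running it in w3wp.exe.
public static class BackgroundWorkClient
{
    public static void StartJob(int jobId)
    {
        var factory = new ChannelFactory<IBackgroundWorkService>(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://localhost:8523/backgroundwork"));  // assumed endpoint
        IBackgroundWorkService channel = factory.CreateChannel();
        channel.StartJob(jobId);
        factory.Close();
    }
}
```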

How do I implement a request timeout in grails?

I'd like to be able to set a configurable (by controller/action) request timeout in Grails. The objective is to handle a rare high-load failure mode in a deterministic way. For example, I know that if a given controller/action doesn't return within 30 seconds, then something is horribly wrong and I don't want to keep the user hanging.
I'd like to handle this within the application logic if possible, as there might be reasonable recoveries or messaging depending upon the conditions of the event.
Filters don't work because the timeout might be reached anywhere in the request processing lifecycle.
I don't think this is easily achievable. You're probably limited to the capabilities of the Servlet container you're using. For example, with Tomcat you could set a connectionTimeout. Unfortunately, this may not give you the control you're asking for, since the timeout and response are more at the mercy of the container.
There's probably a way you could do it with background threads, timers, interrupts, and some black magic, but that would probably be an ill-advised thing.
A couple of mailing list discussions I found on the topic:
http://grails.1312388.n4.nabble.com/How-to-change-request-timeout-td1356007.html
Quote from within by Peter Ledbrook:
I don't know of a Grails feature for this. It supports the session timeout, but not a request timeout. Servlet containers have connection timeouts.
http://www.mail-archive.com/users@tomcat.apache.org/msg38090.html

Database resiliency

I'm designing an application that relies heavily on a database. I need my application to be resilient to short losses of connectivity to the database (the network going down for a few seconds, for example). What are the usual patterns that people use for this kind of problem? Is there something I can do in the database access layer to gracefully handle a small glitch in the network connection to the DB? (I'm using Hibernate + Oracle JDBC + DBCP pool.)
I'll assume you have hidden every database access behind a DAO or something similar.
Now create wrappers around these DAOs that try to call them and, in case of an exception, wait a second and retry. Of course this will cause the application to "hang" during a DB outage, but it will come back to life when the database becomes available.
If this is not acceptable, you'll have to move the cut closer to the UI layer. Consider the following approach.
The user causes a request.
Wrap all the request information in a message and put it on a queue.
Return to the user, telling him that his request will be processed shortly.
A worker registered on the queue processes the request, retrying when database problems occur.
Note that you are now deep in concurrency land, so you must handle things like requests referencing an entity which has already been deleted.
Read up on "eventual consistency".
Since you are using Hibernate, you'll have to deal with lazy loading. An interruption in connectivity will kill your session, so it might be best for you not to use lazy loading at all, but to work with detached objects.

How are IIS requests parallelized using COMET?

I have an ASP.NET MVC 2 Beta application where I need to block incoming requests for a specific action until I have some data available to return or just release the request after 30 seconds with no new data available.
In order to accomplish this, I'm using AutoResetEvent.WaitOne(30000);
The big issue is that IIS does not seem to accept any new requests while the thread is blocked at the WaitOne instruction. New requests hang until the thread is released.
I need to be able to parallelize the requests while still keeping the WaitOne behavior.
Async handlers are what you're looking for. If you're building a Comet solution, you may want to check out our .NET implementation of a Comet server here; it'll save you some time. If you want to roll your own, you'll definitely need to use async handlers to avoid hitting upper concurrency limits by the time you get past 60 or 70 users, but even with async handlers, you'll still have to do some fancy footwork. Basically, you're still going to hit upper limits in the thread pool unless you hand the requests off to a bounded thread pool that can manage all the incoming requests for you.
Good luck!
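To make the async-handler suggestion concrete, here is a minimal sketch using the MVC 2 AsyncController pattern (controller, action and event names are hypothetical):

```csharp
using System;
using System.Threading;
using System.Web.Mvc;

// Hypothetical controller showing the MVC 2 async pattern: the request is held
// without pinning an ASP.NET worker thread while waiting for data or the 30-second timeout.
public class UpdatesController : AsyncController
{
    // Stand-in for the event the original code waits on with WaitOne(30000).
    public static readonly AutoResetEvent DataAvailable = new AutoResetEvent(false);

    public void PollAsync()
    {
        AsyncManager.OutstandingOperations.Increment();

        // RegisterWaitForSingleObject waits on a wait thread, not a request thread,
        // so other requests keep flowing while this one is parked.
        ThreadPool.RegisterWaitForSingleObject(
            DataAvailable,
            (state, timedOut) =>
            {
                AsyncManager.Parameters["hasData"] = !timedOut;
                AsyncManager.OutstandingOperations.Decrement();
            },
            null,
            TimeSpan.FromSeconds(30),
            true);   // execute the callback only once
    }

    public ActionResult PollCompleted(bool hasData)
    {
        return Content(hasData ? "new data available" : "no new data (timed out)");
    }
}
```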
You should not be blocking incoming requests at all. If the data you need are not ready, then return an empty response, or perhaps return an error code.
For a web application, it is generally more advisable (not a hard rule) to return a message telling the user to retry later, for whatever reason you want to give.
Stalling/blocking the requests by "waiting" doesn't really help much, as the wait is non-deterministic, unless of course you have a mechanism to make it deterministic.
I do not know the nature/context/traffic pattern of your website. 30 seconds may be a number that works for you. Perhaps my points above are not really relevant; just my 2 cents.
Actually, it turns out that this behavior only happens with ASP.NET MVC 2 Beta. I had this working fine with MVC 2 Preview 2, rolled back to that version to re-test, and confirmed that the application worked fine with it.
Now, the question is: Why am I seeing this different behavior between these two MVC release versions, and what is the correct behavior I should expect to get in this scenario?
