Twilio Flow Http Request Timeout - twilio

Is there a way to specify the timeout for the HTTP Request widget? If so, how can you adjust it? It seems incredibly short if background processing is needed.
Thanks!

The HTTP Request widget within Studio has a hard timeout of 5 seconds. I'm not certain, but the SDKs might give you some configuration control that Studio does not.
One possible workaround (depending on your application) is to make this part of your flow asynchronous.
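For example (just a sketch of the idea, not an official Twilio recipe, and assuming a generic Java servlet container): have the HTTP Request widget call an endpoint that only enqueues the slow work and returns within the 5-second window, then deliver the result later through whatever callback or polling step fits your flow. The processSlowly helper below is a hypothetical placeholder.

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class StudioCallbackServlet extends HttpServlet {

    // Background pool so the Studio HTTP Request widget gets a fast reply.
    private final ExecutorService worker = Executors.newFixedThreadPool(4);

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String payload = req.getParameter("payload");

        // Hand the slow work off; doing it inline would hit the 5-second timeout.
        worker.submit(() -> processSlowly(payload));

        // Reply immediately, well inside the timeout.
        resp.setContentType("application/json");
        resp.getWriter().write("{\"status\":\"accepted\"}");
    }

    private void processSlowly(String payload) {
        // Hypothetical placeholder for the long-running work; when it finishes,
        // notify the flow however your design allows (callback URL, SMS, polling).
    }
}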

Related

spring rabbitMQ blocking handler

I am facing problems when resource limits are reached with RabbitMQ. I saw the post
Spring AMQP: Register BlockedListener to Connection
There was a suggestion for a JIRA issue; has there been any improvement in this direction?
In particular, it would be nice if I could also configure a blocking handler from the XML side.
Is there any way to check the channel status (blocked) before a send? I get into an infinitely blocked state if I send on a blocked channel, since no timeout is available.
Your question isn't clear. There is no JIRA issue, so there is no out-of-the-box support for that feature. All you need to do is apply the workaround provided by Gary.
It indeed isn't possible to configure a BlockedListener via XML configuration, but it isn't too hard to enhance the connectionFactory after it is injected into one of your beans, using the provided hook.
We would appreciate it if you raised a JIRA issue and provided feedback on how that should work from the framework's perspective.
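Roughly, that hook-based workaround looks like the sketch below (assumptions: a reasonably recent Spring AMQP where Connection exposes getDelegate(); the configure method is something you call yourself after injection, e.g. from an init-method):

import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;
import com.rabbitmq.client.BlockedListener;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.Connection;
import org.springframework.amqp.rabbit.connection.ConnectionListener;

public class BlockedAwareConfigurer {

    private final AtomicBoolean blocked = new AtomicBoolean(false);

    // Call after the connectionFactory bean has been injected
    // (e.g. from a @PostConstruct method or an XML init-method).
    public void configure(CachingConnectionFactory connectionFactory) {
        connectionFactory.addConnectionListener(new ConnectionListener() {
            @Override
            public void onCreate(Connection connection) {
                // getDelegate() returns the underlying com.rabbitmq.client.Connection;
                // it may not be available in very old Spring AMQP versions.
                connection.getDelegate().addBlockedListener(new BlockedListener() {
                    @Override
                    public void handleBlocked(String reason) throws IOException {
                        blocked.set(true);   // broker hit a resource alarm
                    }

                    @Override
                    public void handleUnblocked() throws IOException {
                        blocked.set(false);
                    }
                });
            }

            @Override
            public void onClose(Connection connection) {
                blocked.set(false);
            }
        });
    }

    // Check this before a send instead of blocking forever on a blocked channel.
    public boolean isBlocked() {
        return blocked.get();
    }
}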

How to find out if a previous request is running in asp.net MVC?

I have an Android application that sends requests to an ASP.NET website and receives the response.
An ASP.NET MVC controller receives the request, starts the Android emulator on the server, does some work, and sends the response.
The problem is that when two simultaneous requests arrive, I want to either queue the second request or find out whether a previous request is still running and, if so, wait for a specified time before starting its work (running the emulator).
The second solution is simpler, so I want to know whether there's a way to tell if a previous request is running in ASP.NET.
Thanks all.
You could use a static boolean which you set when the process starts and clear when the process stops.
Keep in mind to check and set it in a thread-safe way, e.g. by using "lock".
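The same idea, sketched here in Java just to show the shape (the C# version would be a static bool guarded by lock); the key point is that the check and the set must happen atomically:

import java.util.concurrent.atomic.AtomicBoolean;

public class EmulatorGate {

    // Single shared flag for the whole application.
    private static final AtomicBoolean running = new AtomicBoolean(false);

    public static String handleRequest() {
        // Atomic check-and-set: only one caller wins.
        if (!running.compareAndSet(false, true)) {
            return "busy - a previous request is still running";
        }
        try {
            runEmulatorAndDoWork();   // hypothetical long-running step
            return "done";
        } finally {
            running.set(false);       // always clear the flag
        }
    }

    private static void runEmulatorAndDoWork() {
        // placeholder for starting the emulator and doing the work
    }
}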

ASP.NET MVC Async Controller vs Server Push(COMET/Reverse Ajax)

I'm building an ASP.NET MVC site in which the clients (browsers) can make API calls that take up to 30 minutes (or more) to process. Obviously I couldn't use normal MVC controllers to do this, as a few such requests would block all my IIS worker threads, leaving other, faster calls blocked.
I've looked at the following two options :
ASP.NET MVC's Asynchronous controllers
The PokeIn library, which allows server push via reverse AJAX (long-held HTTP requests, for older browsers) or WebSockets (from the HTML5 specification, for newer browsers)
Both of these seem like feasible options.
Option 1 seems easiest for me to implement. With asynchronous controllers, my IIS worker threads wouldn't be blocked, allowing my other, faster API calls to go through seamlessly. However, from the async controller documentation I gather that it spawns another, non-IIS thread which would be blocked waiting for my long-running (~30 min) process to complete. I've also read that "if you block or sleep in a controller, no matter whether it is async or not, it is very bad."
In option 2, if my clients are using newer browsers that support WebSockets, this would perhaps be the most performant, as I would not need any blocking thread on the server side. When the client triggers a slow API call I'd raise an event, and on its completion (say ~30 minutes later) I'd raise another event to update all my clients' browsers with the new content.
However, with the PokeIn library, if some of my clients don't have WebSocket-capable browsers (older ones), I'm not sure whether they'd be hogging one of my IIS worker threads.
Is option 2 overkill for my requirement?
In option 1, is it bad to have my async controller wait on the slow process?
One other disadvantage of option 1 is that if the user refreshes the page before the request completes, he'd no longer get the result of the job once it completes!
Any ideas, suggestions are welcome.
Thanks
PokeIn uses the same in-memory/thread pools to push messages for WebSocket and AJAX connections, since it has an internal WebSocket server. The delivery time certainly differs between AJAX and WebSocket, but whichever method/option you pick, you will have that difference. Besides, as you probably already know, PokeIn falls back to comet AJAX when a client doesn't support WebSockets, so you don't have to deal with that yourself.
Hope this answers your question for option 2.
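Whichever option you pick, the underlying shape is usually the same: accept the request, hand the ~30-minute job to a background worker, return a job id immediately, and then either push a notification or let the client poll. A rough, framework-agnostic sketch (in Java just to show the shape; all names are made up):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SlowJobService {

    enum Status { RUNNING, DONE }

    private final ExecutorService pool = Executors.newFixedThreadPool(8);
    private final Map<String, Status> jobs = new ConcurrentHashMap<>();

    // Called from the API endpoint: returns immediately with a job id,
    // so no request/worker thread is held for the slow work.
    public String submit(Runnable slowWork) {
        String jobId = UUID.randomUUID().toString();
        jobs.put(jobId, Status.RUNNING);
        pool.submit(() -> {
            slowWork.run();                 // the ~30 minute task
            jobs.put(jobId, Status.DONE);
            // here you would push a notification (WebSocket/comet)
            // instead of, or in addition to, letting the client poll
        });
        return jobId;
    }

    // Called from a cheap status endpoint, or used to decide what to push.
    public Status status(String jobId) {
        return jobs.get(jobId);   // null for an unknown job id
    }
}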

Canceling a request when connection to client is lost

I noticed that in a standard Grails environment, a request is always executed to the end, even when the client connection is lost and the result can't be delivered anymore.
Is there a way to configure the environment in such a way that execution of a request is canceled as soon as the client connection is lost?
Update: Thanks for the answers. Yes, most of the problems I am trying to avoid can be avoided by better coding:
caching can make nearly every page fast
a token can help to avoid submitting something twice
but there are some requests that could still consume some time. Take a map service as an example: calculating a route will take a while. One solution to avoid resubmitting the request could be a "calculationInProgress" flag together with a message to the user. But then it is still possible to create a lot of sessions, and thus a lot of requests, in order to mount a DoS attack...
I am still curious: is there no way to configure the server to cancel the request? I used to develop on a system where the server behaved this way and it was great :-)
Probably there is no such way. And I'm sure Grails (and your web container) is designed to
accept incoming request
process it on server side
send response
If something happens during phase 2, you'll only find out about it during the send-response phase. You can write data to the HttpServletResponse yourself, handle the IOException, etc., but that would be a rather low-level approach, I think. And it will not help you cancel your DB operations while you're still preparing the data to send.
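If you do want to detect the broken connection at that low level, the idea is roughly the sketch below: write and flush small pieces of the response as the work progresses, and treat a write error as "client gone". How promptly the error surfaces depends on the container and on socket buffering, so this is only a sketch:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServletResponse;

public class ChunkedWorker {

    public void run(HttpServletResponse response, Iterable<String> chunksOfWork) {
        try {
            PrintWriter out = response.getWriter();
            for (String chunk : chunksOfWork) {
                String partialResult = compute(chunk);  // one slice of the slow job
                out.print(partialResult);
                out.flush();                            // force a write to the socket
                if (out.checkError()) {                 // PrintWriter swallows IOExceptions
                    return;                             // client disconnected: stop working
                }
            }
        } catch (IOException e) {
            // getWriter() itself can fail; nothing more to do for this client
        }
    }

    private String compute(String chunk) {
        return chunk; // placeholder for the expensive per-chunk work
    }
}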
Btw, it's a common pattern to use a web frontend, like nginx, that accepts the incoming request and handles all these problems with cancelled requests, slow requests (I guess that's the real problem?), etc.
According to your comment, it is reloads and multiple clicks that you are trying to avoid. The proper technique would be to use Grails' support for handling multiple form submissions:
http://grails.org/doc/2.0.x/guide/theWebLayer.html#formtokens

How do I stop 401 responses from TFS 2008

Whenever a web request is made by Visual Studio to TFS, Fiddler shows a 401 Unauthorized response. Visual Studio then tries again with a proper Authorization: Negotiate header in place, to which TFS responds with the proper data and a 200 status code.
How can I get the correct headers to be sent the first time to stop the 401?
This is how the process of Windows Integrated Authentication (NTLM) works. NTLM is a connection-based authentication mechanism and actually involves 3 calls to establish the authenticated session.
The TFS API then goes to extraordinary lengths to make sure that this handshake is done in the most efficient way possible. It will keep the authenticated connection open for a period of time to avoid the handshake where possible. It will also do the initial authentication using an HTTP payload with minimal content and only then send the real message, if the message you were going to send is over a certain length. It does a bunch of other tricks as well to optimise the connection to TFS.
Basically, I would just leave it alone as it works well.
You will see that a web browser also does this when communicating with a web site. It will always try to give away the minimum amount of detail with the first call. If this fails, it will reveal a little more about you.
This is by design and for a very good reason.
This is how it's always done: request, get the 401 back, then send the authorization. It's part of the authentication protocol for HTTP.
