A GET request is constructed to test the network; the endpoint is configurable to deliberately wait 15 minutes inside the IIS worker process before responding.
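The endpoint itself is hosted in IIS, but behaviourally it is just a handler that holds the request open before responding. A rough stand-in (Node/TypeScript here purely for illustration; the delay and port are placeholders) would be:

```typescript
import * as http from "node:http";

// Stand-in for the IIS test endpoint: hold the request open for a
// configurable delay (15 minutes by default) before responding.
const DELAY_MS = Number(process.env.DELAY_MS ?? 15 * 60 * 1000);

http
  .createServer((req, res) => {
    setTimeout(() => {
      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end(`responded after ${DELAY_MS} ms`);
    }, DELAY_MS);
  })
  .listen(8080, () => console.log("slow endpoint listening on :8080"));
```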
When the request is issued in Chrome without Fiddler, it completes successfully.
However, with Fiddler running, the request instead fails with a 504 after either 5 or 10 minutes. When replayed from Fiddler itself, it fails at 5 minutes (only a few trials, so not conclusive).
In the 10-minute cases, the Fiddler Statistics tab shows:
Request was retried after a Receive operation failed
This related SO answer covers POSTs and an earlier Fiddler version.
Why do the 504s only occur when Fiddler is running? Is it Fiddler that issues the retry, and does the browser even see the broken/reset connection?
Fiddler Version: v4.6.2.0
Requests from iOS are sent fully populated by the app but are received as null on the server. The behavior is random: sometimes the same request is received correctly, sometimes it arrives null. Other channels (Android and web) do not have this issue.
Note that the front end connects to a layer-7 load balancer that forwards requests to one of three nodes, and the issue occurs on all three nodes. Traffic on the WAF and the load balancer looks normal. We installed a new SSL certificate in February, but I don't think that is related.
We tried to reproduce the case in the dev environment with the same application and the same server (code and infrastructure), and it did not happen.
I am stuck here and unable to identify the reason for this behavior in production.
Any suggestions?
We are using Keycloak as the auth server. We have Jetty as the app server, with Keycloak configured as the auth resource server (using the Keycloak adapter).
We have Java desktop client applications that connect to Jetty for REST calls. Whenever the client app starts, it asks for login (using the KeycloakInstalled Java plugin). It gets the token and passes it in every subsequent request to Jetty. This whole flow works smoothly with no issues.
Now, when I check the Jetty HTTP logs, I see one call to the KC server for every incoming call to Jetty. This call is /auth/realms/{realm_name}/protocol/openid-connect/token
As I increase the load on the Jetty server, the above KC call starts failing with the errors below:
org.apache.http.NoHttpResponseException: <jetty_url> failed to respond
java.lang.RuntimeException: Failed to enforce policy decisions.
Because of this, every call becomes unresponsive and the system goes down.
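For what it's worth, if the adapter's policy enforcer is what drives this call (the "Failed to enforce policy decisions" error suggests it is), then I assume the per-request hit on the token endpoint is roughly equivalent to the following sketch (TypeScript just for illustration; the host, realm, client id and token are placeholders, and the exact parameters are an assumption on my part):

```typescript
// Rough equivalent of the call the adapter appears to make on every request.
// ASSUMPTION: the policy enforcer exchanges the incoming access token for
// permissions via the UMA grant at the token endpoint. Everything in angle
// brackets is a placeholder.
async function fetchPermissions(accessToken: string): Promise<void> {
  const tokenEndpoint =
    "https://<kc_host>/auth/realms/<realm_name>/protocol/openid-connect/token";

  const body = new URLSearchParams({
    grant_type: "urn:ietf:params:oauth:grant-type:uma-ticket",
    audience: "<resource_server_client_id>",
  });

  const response = await fetch(tokenEndpoint, {
    method: "POST",
    headers: {
      // Token obtained by the desktop client via KeycloakInstalled
      Authorization: `Bearer ${accessToken}`,
    },
    body,
  });

  console.log(response.status, await response.json());
}
```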
Now I have a few questions:
1) Why is the KC adapter firing the above call to KC on every request? Is there a setting so that the KC adapter can cache the result for some time?
2) I believe the above exception is an issue in the HttpClient that the KC adapter uses internally. Is there anything we can change here?
3) Whenever I fire a REST call from the browser (which internally redirects to KC and asks for login), the above KC /token call never happens. The difference between the desktop client and the browser request is that the browser sets a cookie containing JSESSIONID and session_state. Should our desktop client also use JSESSIONID instead of the token? What is the correct way? If so, how do I get the JSESSIONID? I can get the session_state from the token.
Please help.
I've been running into some issues with the Twilio and Bot Framework channel integration.
In a nutshell, a large number of incoming messages and conversations through the Twilio channel time out and the user never receives a response. Then, after a few minutes, all the piled-up responses arrive at the same time, almost as if the responder hangs and then continues. The error occurs only with the Twilio channel; the bot works perfectly when embedded in a site, when tested in the Azure portal, and when connected to Slack.
When I first connected Twilio to the bot, it ran completely fine for a few days, but now I am getting the following error on roughly 70-80% of the messages that come through that channel.
At a high level, the channel-specific error is: 'There was an error sending this message to your bot: HTTP status code GatewayTimeout'
In the app logs, the recorded error is far more detailed but still provides no insight into what specifically is causing it:
HTTP Error 500.1013 - Internal Server Error
The page cannot be displayed because an internal server error has occurred.
Most likely causes:
•IIS received the request; however, an internal error occurred during the processing of the request. The root cause of this error depends on which module handles the request and what was happening in the worker process when this error occurred.
•IIS was not able to access the web.config file for the Web site or application. This can occur if the NTFS permissions are set incorrectly.
•IIS was not able to process configuration for the Web site or application.
•The authenticated user does not have permission to use this DLL.
•The request is mapped to a managed handler but the .NET Extensibility Feature is not installed.
Things you can try:
•Ensure that the NTFS permissions for the web.config file are correct and allow access to the Web server's machine account.
•Check the event logs to see if any additional information was logged.
•Verify the permissions for the DLL.
•Install the .NET Extensibility feature if the request is mapped to a managed handler.
•Create a tracing rule to track failed requests for this HTTP status code. For more information about creating a tracing rule for failed requests, click here.
On the Twilio side, I get the following error:
Error - 11200
HTTP retrieval failure
Possible Causes
Web server returned a 4xx or 5xx HTTP response to Twilio
Misconfigured Web Server
Network disruptions between Twilio and your web server
No Content-Type header attached to response
Content-Type doesn't match actual content, e.g. an MP3 file that is being served with Content-Type: audio/x-wav, instead of Content-Type: audio/mpeg
Possible Solutions
Double check that your TwiML URL does not return a 4xx or 5xx error
Make certain that the URL does not perform a 302 redirect to an invalid URL
Confirm the URL requested is not protected by HTTP Auth
Make sure your web server allows HTTP POST requests to static resources (if the URL refers to .xml or .html files)
Verify your web server is up and responsive
Check to see that the URL host is not a private or local IP address
Verify the ping times and packet loss between your web server and www.twilio.com
Twilio sends a request to Bot Framework and gets the following info back:
Msg "Bad Gateway"
sourceComponent "14100"
ErrorCode "11200"
EmailNotification "false"
httpResponse "502"
LogLevel "ERROR"
url "https://sms.botframework.com/api/sms"
Twilio was unable to fetch content from: http://sms.botframework.com/api/sms
Error: Total timeout is triggered. Configured tt is 15000ms and we attempted 1 time(s)
Account SID: redacted
SID: redacted
Request ID: redacted
Remote Host: sms.botframework.com
Request Method: POST
Request URI: http://sms.botframework.com/api/sms
URL Fragment: true
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Some additional information:
The bot works perfectly when embedded in a website, when tested in the Azure portal, and when connected to Slack.
The error seems to occur at the points in code where we call await context.sendActivity(messageToReturnToUser) or await dialogContext.beginDialog(this.id), basically anywhere we send something back to the user (a stripped-down sketch of the handler is at the end of this post).
After a few minutes, Bot Framework sends all the piled-up messages to the end user, who gets a burst of SMS messages back to back.
The error cannot be reproduced in any other channel or in the Bot Framework Emulator.
The error does not occur with every message: some messages go through fine and get responses immediately, while others are subject to the delays.
I am using paid Twilio numbers, so no trial-related errors here!
Has anyone else had this problem? Any input or help would be appreciated!
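For context, the calls above sit inside a standard botbuilder message handler; this is a stripped-down sketch (TypeScript, names simplified, not the actual bot code):

```typescript
import { ActivityHandler, TurnContext } from "botbuilder";

// Minimal shape of the handler in question; dialog wiring is omitted and
// "messageToReturnToUser" stands in for whatever the bot actually computes.
export class SmsBot extends ActivityHandler {
  constructor() {
    super();

    this.onMessage(async (context: TurnContext, next) => {
      const messageToReturnToUser = `You said: ${context.activity.text}`;

      // This is the kind of call where the GatewayTimeout surfaces when the
      // activity arrives via the Twilio channel.
      await context.sendActivity(messageToReturnToUser);

      await next();
    });
  }
}
```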
This issue has been mitigated on the Azure/BotFramework side. If you are still having issues, please let me know.
In my application, the browser's HTTP requests are queued.
When an HTTP request is sent to the server, the client should first be notified by the server that the request has been accepted (say with HTTP status 202, or just a message "In Progress"), so that the client-side queue can send the second request to the server.
Once the first request has finished executing, the client should be notified again by the server that the request succeeded (say with HTTP status 200).
Using promises didn't help, because rendering twice was not possible: once for the actual request/response, and again when the thread completes the work.
I know that one request with multiple responses is not possible, but is there a way to render the text at least twice for a single request?
One solution is to do it as a multi-step process.
Suppose we are using RabbitMQ as our message queue. Follow the steps below (a rough sketch of these steps in code follows the list):
1) The client queue sends a request to the server to process some resource.
2) The server accepts the request, immediately returns 202 / "in progress" to the client, and at the same time publishes a message to RabbitMQ so the actual work is processed asynchronously.
3) A consumer picks up that message, completes the work, and pushes a "200 / success" message to, say, a success queue together with an identifier for the client's request (e.g. customer ID, URN number). Alternatively, instead of pushing to a queue, write a status record to a database and have the client make another call to check whether the status has been updated to the expected value.
4) The client can now check the status of its request by looking at the queue or the database.
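A minimal sketch of steps 2-4 (TypeScript with Express and amqplib purely as examples; the routes, queue name and in-memory status map are illustrative stand-ins for your real service and database):

```typescript
import express from "express";
import * as amqp from "amqplib";
import { randomUUID } from "node:crypto";

// Illustrative status store; a real service would use a database.
const statuses = new Map<string, "in-progress" | "done">();

async function main(): Promise<void> {
  const connection = await amqp.connect("amqp://localhost");
  const channel = await connection.createChannel();
  await channel.assertQueue("process-resource");

  const app = express();
  app.use(express.json());

  // Step 2: accept the request, answer 202 immediately, queue the real work.
  app.post("/resource", (req, res) => {
    const requestId = randomUUID();
    statuses.set(requestId, "in-progress");
    channel.sendToQueue(
      "process-resource",
      Buffer.from(JSON.stringify({ requestId, payload: req.body }))
    );
    res.status(202).json({ requestId, status: "in-progress" });
  });

  // Step 3: a consumer does the work and records completion.
  await channel.consume("process-resource", (msg) => {
    if (!msg) return;
    const { requestId } = JSON.parse(msg.content.toString());
    // ... do the actual processing here ...
    statuses.set(requestId, "done");
    channel.ack(msg);
  });

  // Step 4: the client checks the status of its request.
  app.get("/resource/:requestId/status", (req, res) => {
    const status = statuses.get(req.params.requestId);
    if (!status) return res.status(404).end();
    res.status(status === "done" ? 200 : 202).json({ status });
  });

  app.listen(3000);
}

main().catch(console.error);
```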
You can also use AJAX requests from the client to poll whether the server-side process has completed, as in the polling sketch below.
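A rough client-side polling loop against the hypothetical endpoints from the sketch above (again, just an illustration):

```typescript
// Client side: send the request, render the 202 "In Progress" acknowledgement,
// then poll the (hypothetical) status endpoint until the work reports success.
async function submitAndWait(payload: unknown): Promise<void> {
  const accepted = await fetch("/resource", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  const { requestId } = await accepted.json();
  console.log("accepted with status", accepted.status); // 202 -> show "In Progress"

  // Poll every 2 seconds, up to 60 attempts.
  for (let attempt = 0; attempt < 60; attempt++) {
    await new Promise((resolve) => setTimeout(resolve, 2000));
    const check = await fetch(`/resource/${requestId}/status`);
    if (check.status === 200) {
      console.log("done"); // show the final "success" state
      return;
    }
  }
  throw new Error("timed out waiting for the request to complete");
}
```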
Hope it helps.
I am running a Ruby on Rails app with the Unicorn server on Heroku.
Scenario: a client sends an HTTP POST request with a large request body.
My understanding:
The Heroku router establishes an HTTP connection with the client and forwards the request to the dyno.
The 30-second counter starts.
The dyno starts reading the request body from the client over the connection.
If the client is slow and takes more than 30 seconds to transfer the request body, Heroku issues an HTTP 503 error and closes the connection.
Is my understanding right? Or is it the case that Heroku only starts the timeout counter after the dyno has read the request body?
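One way to check this empirically would be a client that deliberately trickles the request body out over more than 30 seconds and watches what comes back (a Node sketch, purely illustrative; the host and path are placeholders):

```typescript
import * as http from "node:http";

// Hypothetical endpoint on the Heroku app; replace with a real route.
const req = http.request(
  {
    host: "your-app.herokuapp.com",
    path: "/upload",
    method: "POST",
    headers: { "Content-Type": "application/octet-stream" },
  },
  (res) => {
    // A 503/timeout here would suggest the 30-second window covers the body
    // transfer; a normal response would suggest the counter starts later.
    console.log("status:", res.statusCode);
    res.resume();
  }
);

// Trickle the body out over ~40 seconds, well past the 30-second window.
let chunksSent = 0;
const timer = setInterval(() => {
  req.write(Buffer.alloc(1024, "x"));
  if (++chunksSent === 40) {
    clearInterval(timer);
    req.end();
  }
}, 1000);

req.on("error", (err) => {
  clearInterval(timer);
  console.error("request failed:", err.message);
});
```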
According to Heroku's docs:
HTTP requests have an initial 30 second window in which the web process must return response data (either the completed response or some amount of response data to indicate that the process is active). Processes that do not send response data within the initial 30-second window will see an H12 error in their logs.
I think it's designed to prevent dynos from being tied up for too long.
My understanding is that the timer starts as soon as you send a request to the server: once the request is routed, it counts down until you start getting data back.
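To illustrate the docs' point about "some amount of response data": a handler that flushes an early chunk inside the 30-second window stays alive even if the full response takes longer, while one that stays silent past 30 seconds gets an H12. A sketch (Node, not Rails, purely for illustration):

```typescript
import * as http from "node:http";

// A handler that sends partial response data right away, then finishes the
// response after 45 seconds of simulated work. Because some bytes were sent
// inside the initial 30-second window, the router keeps the request alive;
// staying completely silent for those 45 seconds would instead produce H12.
http
  .createServer((req, res) => {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.write("working...\n"); // partial data sent immediately

    setTimeout(() => res.end("done\n"), 45_000);
  })
  .listen(Number(process.env.PORT ?? 3000));
```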