I have a Rails 3.2.11 app deployed on Heroku that has been fairly stable over time. In the last 24 hours, Pingdom has been reporting timeouts, but I can't find any "H1X"-related errors in the logs at the same time.
I am occasionally able to reproduce the timeouts in Google Chrome, where I get this message after about 30 seconds of requesting any page:
Chrome browser error
No data received
Unable to load the webpage because the server sent no data.
Here are some suggestions:
Reload this webpage later.
Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.
The app will then begin serving requests normally until it happens again.
I know this is not enough info, but I can't find anything useful yet in New Relic or by scanning the logs that correlates with when the error occurred.
In one instance, I was reproducing the error in the browser while viewing the Heroku logs, and when the timeout occurred there was no evidence of the request in the logs. It's as if the failed requests never make it to the app.
Related
My Rails app hosted on Heroku is experiencing a spike of failed requests and I can't figure out why. In particular, I'm receiving many emails saying:
The percentage of failed requests for [app] has exceeded your threshold setting of 5.0%
Moreover, I can't seem to connect the times at which I receive these emails from Heroku with any system errors in my Heroku monitoring tools. For example, I received an email at 10:47 p.m. yesterday saying that failed requests had exceeded 5%, but there's no corresponding error in Heroku (at the point where it should appear on the graph below).
After receiving the email described above, I'll receive another email a few minutes later indicating that failed requests are back below the threshold. This now happens 1-2 times per day.
I need to capture these failed requests so that I can diagnose and fix what's happening. I'm open to any solution, even a paid product, but these errors are so rare that tailing my logs isn't feasible.
Thanks in advance. Any advice is appreciated!
We suddenly started experiencing a problem that seems to be related to iOS 14, as we haven't seen these errors on prior versions.
At app start, we make quite a lot of network requests to different web services. This adds up to 158 GET, POST, and PUT requests by the time a user is fully logged in, at which point the app uses 260 MB of memory. When a user switches to a different account, the login process starts again and another 158 requests are sent. If the user then logs in with yet another account, the login procedure runs once more, but this time network requests randomly start getting canceled with error messages like this:
Error Domain=NSPOSIXErrorDomain Code=28 "No space left on device" UserInfo={_NSURLErrorFailingURLSessionTaskErrorKey=LocalDataTask <EA2DAE7D-F7AD-4979-8215-E716163FA725>.<1>, _kCFStreamErrorDomainKey=1, _NSURLErrorRelatedURLSessionTaskErrorKey=(
"LocalDataTask <EA2DAE7D-F7AD-4979-8215-E716163FA725>.<1>"), _kCFStreamErrorCodeKey=28}
So within a timeframe of about two minutes and approximately 400-500 HTTP requests, the network layer starts canceling requests due to a lack of memory, even though the app could use up to 3 GB.
The app's networking logic hasn't changed much between before and after we started seeing these errors, and we use only a single SessionManager instance. It seems as if the network stack starts drowning under the number of requests and therefore cancels them. Perhaps iOS 14 became stricter in this regard? Has anyone else experienced a similar issue?
We use AFNetworking in our basic network layer.
Any help is much appreciated.
After some investigation, it turned out to be a library we use for debugging purposes. This library, DBDebugToolkit, comes with a network logger feature which we had enabled by default. Under high traffic, the network logger quickly increased memory usage until our requests got canceled. It is now off by default and can be switched on from our debugging menu.
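For anyone hitting something similar, here is a minimal sketch of the pattern we settled on, written against a hypothetical DebugToolkitWrapper; the setNetworkLoggingEnabled call is only a stand-in for whatever toggle your debug library actually exposes (check the DBDebugToolkit documentation for the real call). The point is simply that the network logger stays off unless it is explicitly enabled from the debug menu:

import Foundation

// Hypothetical wrapper; substitute the real DBDebugToolkit setup and
// network-logger toggle where the comments indicate.
enum DebugToolkitWrapper {
    static let networkLoggingKey = "debug.networkLoggingEnabled"

    // Call once at app start, e.g. from
    // application(_:didFinishLaunchingWithOptions:).
    static func setup() {
        #if DEBUG
        // Wire up the debugging toolkit only in debug builds.
        let enabled = UserDefaults.standard.bool(forKey: networkLoggingKey)
        setNetworkLoggingEnabled(enabled)   // stays off unless explicitly enabled
        #endif
    }

    // Bound to a switch in the in-app debug menu.
    static func setNetworkLoggingEnabled(_ enabled: Bool) {
        UserDefaults.standard.set(enabled, forKey: networkLoggingKey)
        #if DEBUG
        // Forward the flag to the toolkit's network logger here.
        print("Network logging is now \(enabled ? "on" : "off")")
        #endif
    }
}

With this in place, release builds never start the logger, and debug builds only pay the memory cost when someone actually flips the switch.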
I have a TFDConnection running in a background task every 10 seconds; it checks the connection to a remote PostgreSQL database. When I'm debugging, if the server goes down, the connection error is raised constantly, which disrupts my debugging session. How can I receive this error silently in Delphi?
I'm getting 'Incomplete response received from application' when testing my Rails application. It disappears when I refresh the page.
I checked my Apache error logs and found this line:
[ W 2018-08-06 07:55:32.1636 126806/T8 age/Cor/Con/InternalUtils.cpp:96 ]: [Client 1-4] Sending 502 response: application did not send a complete response
Has anyone faced the same issue?
This issue has some history. The best you can do is add some debugging to your application.
This happens when your application exits prematurely. To understand what this means, consider that Passenger works by sitting between the client and the app. Passenger acts like a reverse proxy, so it forwards the request to your app, then processes the response that the app sends.
client <-----> Passenger <-----> app
If, after Passenger has sent the request, the app crashes or otherwise exits before sending a response, then you will see "application did not send a complete response".
So the question is actually: why does the application exit? Unfortunately I do not know, and neither does Passenger. Passenger only starts your app and expects your app to respond to requests as normal. Maybe there is a bug in the app, or the app encountered some sort of fatal error. Normally the app will print an error message when that happens, but Passenger did not encounter any such messages, or it would have printed them.
So the best thing I can recommend you to do is that you insert debugging statements inside your app and find out what makes it exit.
I am developing an iOS app that makes an API request to my server hosted on Heroku.
On a slow internet connection, the API request (an HTTP GET) sometimes results in a timeout. The response time is usually around 2000 ms when it doesn't time out.
By "sometimes", I mean about one in 10 requests times out (I do not get any meaningful error code).
I also tested this with two devices. While one device has been waiting for the server to respond for longer than 2000 ms, I use the other device to call the API, and the server responds normally; the first device still ends up timing out.
I am not quite sure what is to blame here: my internet connection, or my API server on Heroku? I also tested this in Postman and got the same results.
PS. I am based out of Bangkok. The ISP with which I experience the most timeouts is True Broadband.
Any and all advice is appreciated.
Thanks in advance
PPS. In response to comments warning that the question is too broad, let me ask it this way: if our API calls randomly time out, how can we tell whether the cause is a slow internet connection, our own server, or something else?
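One way to narrow this down from the client side is to collect URLSessionTaskMetrics for each request and look at where the time goes. This is only a sketch, assuming the app can use URLSession directly (the herokuapp.com URL below is a placeholder, not your real endpoint): DNS and connect time point at the network or ISP, while a long gap between requestEndDate and responseStartDate points at the server.

import Foundation

// Logs per-request timing so you can see whether time is lost resolving and
// connecting (network side) or waiting for the server's first byte (server side).
final class TimingDelegate: NSObject, URLSessionTaskDelegate {
    func urlSession(_ session: URLSession, task: URLSessionTask,
                    didFinishCollecting metrics: URLSessionTaskMetrics) {
        for t in metrics.transactionMetrics {
            let dns     = interval(t.domainLookupStartDate, t.domainLookupEndDate)
            let connect = interval(t.connectStartDate, t.connectEndDate)
            let waiting = interval(t.requestEndDate, t.responseStartDate)  // server think time
            let total   = interval(t.fetchStartDate, t.responseEndDate)
            print("dns=\(dns)s connect=\(connect)s waiting=\(waiting)s total=\(total)s")
        }
    }

    private func interval(_ start: Date?, _ end: Date?) -> TimeInterval {
        guard let start = start, let end = end else { return 0 }
        return end.timeIntervalSince(start)
    }
}

let config = URLSessionConfiguration.default
config.timeoutIntervalForRequest = 10            // fail fast instead of hanging
let session = URLSession(configuration: config,
                         delegate: TimingDelegate(),
                         delegateQueue: nil)

// Placeholder endpoint; replace with your own API URL.
let url = URL(string: "https://example-api.herokuapp.com/health")!
let task = session.dataTask(with: url) { _, response, error in
    if let error = error as NSError?,
       error.domain == NSURLErrorDomain, error.code == NSURLErrorTimedOut {
        print("timed out")                        // compare with the metrics above
    } else if let http = response as? HTTPURLResponse {
        print("status \(http.statusCode)")
    }
}
task.resume()

If the timeouts line up with long DNS or connect phases (or the request never seems to leave the device), the connection or ISP is the likely culprit; if requests connect quickly but the wait for the first response byte blows past your timeout, look at the Heroku side, for example dyno queueing or a slow database query.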