I've got an application that's been working for 9 months with no issue, but as of this morning (with no programming changes on my part), I'm timing out with SQL Server Error 10060.
Checking the server via SQL Server Management Studio, I see some latency, and I've gotten the same message there too, though less frequently.
Looking at the portal.azure page, I'm assured that everything is OK.
What should one do when this happens? My ISP looks fine; I've got the same speed as usual.
I have a weird issue that just started popping up for our customers. The portal they've been using for years has started freezing on some of the pages the user navigates to. I tried restarting the IIS server, the site within it, and the application pool under which the site is running. No difference.
In Chrome Dev Tools I can see that it is always one of these three calls that takes time to complete:
When it happens, one of those three calls will report that the request is not finished, like this:
When the call eventually completes, I can see that the Content Download took 3.8 minutes. Not sure whether it is relevant or not, but it is always 3.8 minutes:
Has anyone else encountered a similar situation? Any suggestions on how to figure out what has suddenly started triggering this type of behaviour?
TIA,
Ed
Edit: The resource that fails to load after 3.8 minutes always generates a net::ERR_CONNECTION_RESET error:
Edit2: Thanks to all of you trying to help. A little update: I was able to isolate the problem to an issue with the server not serving some of the files, either *.css or *.js. The setup is two identical servers placed behind a load balancer. Apparently, the load balancer software was recently updated, and right after that we started having these issues. I am working closely with the IT department of our client, trying to figure out what impact the newer version has that seems to have triggered all this drama.
For the last couple of days I've been trying to debug a network error from d00m. I'm starting to run out of ideas/leads, and my hope is that other SO users have valuable experience that might be useful. I hope to be able to provide all relevant information, but I'm not personally in control of the server environments.
The whole thing started with users noticing a couple of "network errors" in our app. The errors seemed to occur randomly, without any noticeable pattern related to internet connectivity, iOS version or backend updates. The two errors that occur behind the scenes are:
Error Domain=NSURLErrorDomain Code=-1001 "The request timed out."
and more frequently:
Error Domain=kCFErrorDomainCFNetwork Code=-1005 "The network connection was lost."
After debugging it for a couple of days, I've managed to reproduce these errors (occurring at random) by firing approx. 10 random (GET and POST) requests towards our backend with a random sleep timer between each request (set at 1-20 seconds). However, it only occurs in periods. What I've experienced the last couple of days is that when a "period of error" starts, I get one of the two errors roughly once for every one or two runs of the code (meaning an error rate of 1/10 or 1/20 requests). This error rate continues for a couple of hours, then the errors disappear for a couple of hours, and then it starts all over.
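For reference, the reproduction harness is roughly the following (a minimal sketch only; the endpoints are placeholders and the bare URLSession calls stand in for our real client code):

    import Foundation

    // Sketch of the reproduction run described above: ~10 requests with a
    // random 1-20 second pause between them. Endpoints are hypothetical.
    let session = URLSession(configuration: .default)
    let endpoints = [
        URL(string: "https://api.example.com/items")!,
        URL(string: "https://api.example.com/orders")!
    ]

    for run in 1...10 {
        var request = URLRequest(url: endpoints.randomElement()!)
        request.httpMethod = Bool.random() ? "GET" : "POST"

        let done = DispatchSemaphore(value: 0)
        session.dataTask(with: request) { _, _, error in
            if let error = error as NSError? {
                // Watch for -1001 (request timed out) and -1005 (connection lost).
                print("request \(run) failed with code \(error.code): \(error.localizedDescription)")
            }
            done.signal()
        }.resume()
        done.wait()

        // Random pause between requests, as described above.
        Thread.sleep(forTimeInterval: Double.random(in: 1...20))
    }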
Some quick facts about the setup:
Happens on device and simulator
Happens on iOS 8.4 and iOS 7.1 - although v. 8.4 is the main one I use for testing.
We use NSURLSession for our network requests. We also have AFNetworking included (updated to latest version), but we only use the Security part for SSL Pinning. Even with SSL pinning totally turned off, the error still occurs.
Some findings I've written down during the last couple of days:
It seems to only happen on our production environments, which have a somewhat different configuration than our staging environments. This led me to think that it might be related to the keep-alive bug as discussed here and here. However, our ops department set up a new staging environment sending the same keep-alive header as the production environments, but this did not make the error occur on the staging environment.
The Android version of the app was unable to reproduce the error using the same set of requests. Further, we've not received any customer reports of "network errors" in the Android app.
My gut feeling says that it's related to the server environment and the HTTP implementation in iOS. However, I'm unable to track down a convincing pattern that proves anything. I've made the same setup using a simple Rails script, and when the next "error period" occurs, I'll be ready to try and reproduce it outside of iOS land. I'll update the question when this happens.
I'm not looking for solutions involving resetting Wi-Fi settings, shutting down the simulator or similar, as I do not see these as feasible solutions in a production environment. I've also considered applying the retry-loop fix mentioned in the GitHub issue (sketched below for reference), but I see this as a last resort.
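For completeness, that retry-loop fix typically amounts to something like this (a hedged sketch; the function name is mine, not from our app or the linked issue):

    import Foundation

    // Retry a request a limited number of times when it fails with
    // NSURLErrorNetworkConnectionLost (-1005). Purely illustrative.
    func send(_ request: URLRequest, retriesLeft: Int = 2,
              completion: @escaping (Data?, URLResponse?, Error?) -> Void) {
        URLSession.shared.dataTask(with: request) { data, response, error in
            if let error = error as NSError?,
               error.code == NSURLErrorNetworkConnectionLost,
               retriesLeft > 0 {
                send(request, retriesLeft: retriesLeft - 1, completion: completion)
            } else {
                completion(data, response, error)
            }
        }.resume()
    }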
Please let me know if you need any more information.
In my experience, those sorts of problems usually point to massive packet loss, particularly over a cell network, where minor variations in multipath interference and other issues can make the difference between reliably passing traffic and not.
Another possibility that comes to mind is poor-quality NAT implementations, in the unlikely event that your server's timeout interval is long enough to cause the NAT to give up on the TCP connection.
Either way, the only way to know for sure what's happening is to take a packet trace. To do that, connect a Mac to the Internet via a wired connection, enable network sharing over Wi-Fi, and connect the iOS device to that Wi-Fi network. Then run Wireshark and tell it to monitor the bridge interface. Instructions here:
http://www.howtogeek.com/104278/how-to-use-wireshark-to-capture-filter-and-inspect-packets/
From there, you should be able to see exactly what is being sent and when. That will probably go a long way towards understanding why it is failing.
OK, I lost a lot of time investigating a similar issue.
-1005 can be caused by a known iOS bug, and there are a couple of fixes. For example, add the header "Connection" with the value "close".
More info
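In URLSession terms, that workaround looks roughly like this (a minimal sketch; the endpoint is just a placeholder):

    import Foundation

    // Sending "Connection: close" asks the server not to keep the socket
    // alive, which is the workaround suggested for the -1005 keep-alive bug.
    let configuration = URLSessionConfiguration.default
    configuration.httpAdditionalHeaders = ["Connection": "close"]

    let session = URLSession(configuration: configuration)
    let url = URL(string: "https://api.example.com/ping")!   // placeholder endpoint

    session.dataTask(with: url) { _, _, error in
        if let error = error as NSError?, error.code == NSURLErrorNetworkConnectionLost {
            print("Still seeing -1005:", error.localizedDescription)
        }
    }.resume()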
-1001 is a different story. In my case the problem was a strange (bad?) firewall on the server. It was banning the device when there were many (not even that many) requests in a short period of time.
I believe you can do an easy test to check if you are facing a similar issue:
Send a lot of requests in a loop (how many depends on the firewall settings; let's say 50 in 1 second). See the sketch after these steps.
Close/kill the app (this will close the connections to the server).
(Optional) Wait a while (let's say 60 seconds).
Start the app again and try to send a request.
If you now get a timeout for all subsequent requests, you probably have the same issue and you should talk with the server guys.
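A rough sketch of the first step, assuming a plain URLSession client and a hypothetical endpoint (run it from inside the app so the firewall sees the device's traffic):

    import Foundation

    // Fire ~50 requests as fast as possible to provoke a rate-limiting firewall.
    // "https://api.example.com/health" is a placeholder; any cheap GET will do.
    let session = URLSession(configuration: .default)
    let url = URL(string: "https://api.example.com/health")!

    for i in 1...50 {
        session.dataTask(with: url) { _, response, error in
            let status = (response as? HTTPURLResponse)?.statusCode ?? -1
            print("request \(i): status \(status), error: \(String(describing: error))")
        }.resume()
    }
    // Then kill the app, optionally wait ~60 seconds, relaunch, and send a single
    // request. If that one times out, the firewall has likely banned the device.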
PS: if you don't have access to the server, you can tell the user to restart Wi-Fi on the device to break out of that timeout loop. It could be a last resort in some cases.
We have a website created in ASP.NET MVC 4, running on IIS on Windows Server 2012 and using MSSQL 2012 as data storage. Connections are made using Entity Framework 6... Very standard stuff.
We are not a high-volume website (max 3,000 users around the world, so they hit it in different time zones).
The issue is that sometimes, without warning, the site becomes unresponsive (the browser does not render it and times out). Nothing special, but here are the strange parts:
The server itself works fine if you connect to it via Remote Desktop (Terminal Services).
Restarting IIS does not help, and there are no error logs.
SQL Server has around 100 connections from the website, all sleeping (but killing these processes does not make the site recover).
SQL Server at the time shows half of them as waiting tasks, but it is still responsive when executing SQL from SSMS or even remotely from Excel (remote reporting).
Looking at SQL Profiler, the website is still sending SQL requests despite being down, but they are all requests like this: if db_id('dbname') is not null else select... (not something specifically written in the website).
The really strange one: if we restart SQL Server, the website becomes responsive again.
I know this is not a lot to go on, but we are very puzzled and don't really know how to proceed. Nothing indicates an error in any kind of log (website, IIS, SQL Server, or Windows). I can deduce that the website must think SQL cannot give it what it needs because the connection pool or something similar is used up. But why it is only freed up by a complete SQL Server restart, and not just by killing the processes, really puzzles me, as does why the connection pool buildup happens in the first place, since all SQL is handled through Entity Framework.
Any advice on how to debug further is most welcome
I am migrating an existing site from Win2003/.NET 4.0 to Win2012 as a lift and shift (no change in code).
The problem is that connections are not released from the connection pool, and it errors out after quickly reaching 100. When I check SQL Server (also 2012) using sp_who2, I can see the connections sleeping and not getting released.
The code properly closes connections, uses the Enterprise Library Data Access blocks, and works fine in the old environment. Any clues on where the issue is?
After much research, I found out that the issue was in a section of the code where data readers were not being disposed of correctly in certain scenarios.
What is still not clear is why it works without the connections leaking in the Win2003/IIS 6 combination. What was ruled out is any external factor that would contribute to such behavior in the 2012 Server environment.
I've been working with TFS for months without a problem; now suddenly I can't check in "big" files (2 KB files are OK, but 50 KB files or multiple files are not). TFS is hosted on a server on the same network.
When I try to check in, it gives me an error like: "Check In: Operation not performed : The underlying connection was closed: An unexpected error occurred on a receive.
Please refer to the Output window for more information". The "more information" in the Output window is just the same error.
The Event Viewer on the server shows nothing, and I've been searching Google for the past couple of hours and have turned up nothing yet.
The error message The underlying connection was closed is an indicator that something between the client and the server is dropping the connection unexpectedly.
Some things to investigate/try:
Is the application pool on the server restarting? Look at the Application Event Log on the AT server. Look for ASP.NET and W3SVC warnings/errors that indicate the app pool has restarted.
How is the client connecting to the server? Is there a HTTP proxy in the middle? Is the server behind a load balancer or firewall device? What is the idle timeout set to? Is it honoring HTTP Keep-Alive settings?
Is it failing for all clients? Can you check in the same file on the TFS server itself?
If none of those seem to lead you in the right direction, you'll need to set up a Fiddler or NetMon trace on your client and/or your server.
We had a similar problem in TFS 2008, where the lock table was getting big and causing some problems. Take a look at tbl_lock in your TfsVersionControl database. This should be a very small number of rows.
In our experience when this number approached or exceeded 1,000 rows, we started seeing significant issues with check-ins.
Our resolution to this was to turn on merging for the binary files that we were storing.