I'm creating an iOS app using Swift.
Recently, I've encountered a weird bug.
I'm trying to check whether a URL is valid, so I create a request with the URL and check the response. I do this with dataTaskWithRequest on NSURLSession.
The weird bug is that if the URL is alibaba, the response comes back only after a long time (sometimes more than 20 seconds).
Why does that happen?
As far as I can tell, it happens only with this specific URL.
Here is some code, although it's probably not necessary:
let request = NSMutableURLRequest(URL: validatedUrl)
request.HTTPMethod = "HEAD"
let session = NSURLSession.sharedSession()
let task = session.dataTaskWithRequest(request) { data, response, error in
    // The response here returns after a very long time
    let url = request.URL!.absoluteString
    // ... check the response status here ...
}
task.resume()
I would appreciate some help, guys!
You're retrieving the contents of a URL over the Internet. The speed at which this happens is arbitrary. It depends on the speed of the DNS server that looks up the hostname, the speed of the web server that responds to the request, the speed of the user's Internet connection, and the speed of every network in between.
You can safely assume that it will either succeed or time out within three minutes. Twenty seconds isn't even slightly unusual over a cellular network.
You should probably rethink what you're doing with this URL and why you're doing it, or at least try to figure out a way to avoid keeping the user waiting while you fetch the URL.
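If you do still need a lightweight validity check, one mitigation (a sketch of my own, not something from the question's code) is to give that one request a short timeoutInterval and treat a timeout as inconclusive rather than as proof of an invalid URL:

let request = NSMutableURLRequest(URL: validatedUrl)
request.HTTPMethod = "HEAD"
request.timeoutInterval = 10 // seconds; fail fast instead of waiting minutes
let task = NSURLSession.sharedSession().dataTaskWithRequest(request) { _, response, error in
    if let httpResponse = response as? NSHTTPURLResponse {
        print("Reachable, status \(httpResponse.statusCode)")
    } else {
        // Timed out or failed: "could not validate", not necessarily "invalid"
        print("Inconclusive: \(error?.localizedDescription)")
    }
}
task.resume()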
Related
I want to set different timeouts for different requests. My request routine looks like:
var request = URLRequest(url: url,
                         cachePolicy: .reloadIgnoringLocalCacheData,
                         timeoutInterval: timeout)
// setting headers and body...
sessionTask = localURLSession.dataTask(with: request)
sessionTask?.resume()
where localURLSession is defined as a public var:
public var localURLSession: Foundation.URLSession {
return Foundation.URLSession(configuration: localConfig, delegate: self, delegateQueue: nil)
}
public var localConfig: URLSessionConfiguration {
let res = URLSessionConfiguration.default
res.timeoutIntervalForRequest = Self.ordinaryRequestsTimeout // 20 seconds
return res
}
Then I have two problems:

1. When I make two simultaneous requests with 100% loss in the Network Link Conditioner (the first with a 20-second timeout and the second with 40 seconds), both requests fail after 8 seconds. I don't understand why.
2. When I make one request for the first time with 100% loss in the Network Link Conditioner, it fails at the timeout as expected, but retrying the request fails after 1 second. I want to wait out the full timeout every time.
In all likelihood, for the 8-second failure, the DNS request is timing out, so you aren't connecting at all.
For the 1-second failure, the OS has probably concluded that the host is unreachable, and won't even try again until the network changes or it successfully makes at least one request to some host somewhere (negative DNS caching).
That said, without a packet trace, I can't be certain of either of those statements.
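If the goal is for a retry to wait out its full timeout instead of failing fast, one thing worth experimenting with (my suggestion, not part of the answer above, and it requires iOS 11 or later) is waitsForConnectivity, which tells the session to wait rather than error out immediately:

// Sketch: wait for connectivity instead of failing immediately
let config = URLSessionConfiguration.default
config.waitsForConnectivity = true
config.timeoutIntervalForResource = 40 // overall ceiling for the whole transfer, in seconds
let session = URLSession(configuration: config, delegate: self, delegateQueue: nil)

Whether this changes anything under the Network Link Conditioner's 100% loss profile would still need a packet trace to confirm.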
I'm trying to fetch some weather data (Location forecast). The request returns a couple of KB. I would only like to request the first part, or terminate the request when I get the line
<temperature id="TTT" unit="celsius" value="xxxx"/>
Is it possible to make a URLRequest in Swift on iOS that requests only a certain number of bytes/characters? Or one that reads in chunks and is terminated once the temperature element is received? I could set up a server acting as a proxy and cut down the data sent to my app, but it would be nice if I could avoid that.
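One approach worth sketching (my own illustration, not from the thread): if the server honors HTTP Range requests, you can ask for just the first chunk of the body. The url constant below stands in for the forecast endpoint:

// Sketch: request only the first 1 KB; the server must support Range
var request = URLRequest(url: url)
request.setValue("bytes=0-1023", forHTTPHeaderField: "Range")

let task = URLSession.shared.dataTask(with: request) { data, response, error in
    // A 206 Partial Content status means the server honored the range
    if let body = data.flatMap({ String(data: $0, encoding: .utf8) }),
       let line = body.components(separatedBy: "\n").first(where: { $0.contains("temperature id=\"TTT\"") }) {
        print(line) // the temperature element, if it fell within the first KB
    }
}
task.resume()

If the server ignores Range, the fallback is a URLSessionDataDelegate: accumulate data in urlSession(_:dataTask:didReceive:) and call dataTask.cancel() once the temperature line has arrived.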
I am using Alamofire to make basic requests to an API endpoint. I noticed that the more often I run these tests, the longer the Alamofire requests seem to take.
I can reproduce this behaviour with the code sample below, which fires a bunch of requests and prints the duration of each to the console. The last request is about 0.5 seconds slower than the first one. The amount of slowdown seems to be related to the amount of JSON the API returns (our API returns much more data, and the slowdown is much more significant).
Am I hitting some kind of caching mechanism here?
import Alamofire

let testURL = "https://httpbin.org/get"
for _ in 0..<100 {
    let startDate = NSDate()
    Alamofire.request(.GET, testURL)
        .responseJSON { response in
            print("Duration of request: \(NSDate().timeIntervalSinceDate(startDate))")
        }
}
The problem here is not Alamofire but how you are measuring the latency. You are queueing 100 requests at once, so each one is individually quick, but when the last request actually runs depends on most of the previous requests having finished.
You should use the request's timeline object to obtain the latency instead, e.g. request.timeline.totalDuration.
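For example (a sketch against the Alamofire 3-era API the question uses; the timeline is also exposed on the response):

Alamofire.request(.GET, testURL)
    .responseJSON { response in
        // Timeline starts when the request actually fires, not when it was queued
        print("Total duration: \(response.timeline.totalDuration)")
    }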
While scraping I've found that some URLs fail. After checking that the URL looked OK in the browser, and seeing in Wireshark that the remote server was answering with a 200, I finally found that the URL:
http://www.segundamano.es/electronica-barcelona-particulares/galaxy-note-3-mas.htm
was failing with
Net::HTTP::Persistent::Error: too many bad responses after 0 requests on 42319240, last used 1414078471.6468294 seconds ago
Even weirder: if you remove a character from the last part, it works. If you add the character in another place, it fails again.
Update 1
The "code"
agent = Mechanize.new
page = agent.get("http://www.segundamano.es/electronica-barcelona-particulares/galaxy-note-3.htm")
Net::HTTP::Persistent::Error: too many bad responses after 0 requests on 41150840, last used 1414079640.353221 seconds ago
This is a network error which normally occurs when you make too many requests to a given source from the same IP, so the page takes too long to load. You could try adding custom timeouts to your connection agent, keeping the connection alive, and ignoring bad chunking (potentially bad):
agent = Mechanize.new
agent.keep_alive = true
agent.ignore_bad_chunking = true
agent.open_timeout = 25
agent.read_timeout = 25
page = agent.get("http://www.segundamano.es/electronica-barcelona-particulares/galaxy-note-3.htm")
But that does not guarantee that the connection will be successful; it just increases the chances.
It's hard to say why you get the error on one URL and not on another. When you remove the 3 you request a different page, one that might be easier for the server to process? My point being: there is nothing wrong with your Mechanize setup; the problem is with the response you are getting back.
I agree with Severin: the problem was on the other side. As I can't change anything on the server, I tried different libraries to fetch the data. It was weird that some of them worked and others didn't. Trying different setups for Mechanize, in the end I found a good one:
agent = Mechanize.new { |agent|
  agent.gzip_enabled = false # request uncompressed responses
}
Using NSURLSession's default caching, how do I invalidate the cache for a particular URL?
I note NSURLCache's removeCachedResponseForRequest: method, but that takes an NSURLRequest object, which I no longer have for the original request. Do I need to store the requests as I create them so I can pass them back into removeCachedResponseForRequest:, or can I just create a new one with the appropriate URL, which would serve as equivalent for this purpose even though it doesn't have the same header fields and other properties as the original?
If you want to go further, you could overwrite the cached response for the URL request you want to force to reload, by doing the following:
let newResponse = NSHTTPURLResponse(URL: urlrequest.URL!, statusCode: 200, HTTPVersion: "1.1", headerFields: ["Cache-Control":"max-age=0"])
let cachedResponse = NSCachedURLResponse(response: newResponse!, data: NSData())
NSURLCache.sharedURLCache().storeCachedResponse(cachedResponse, forRequest: urlrequest)
As the Cache-Control header of the stored response has a max-age of 0 (forced), that cached response will never be returned when you make the request.
Your answer works fine for forcing a single request, but if you want two versions of the request, one forcing a reload and another relying on the cached response, then removing the cached entry once you have forced a request is what you want.
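Removing the entry is a one-liner (a sketch of my own, reusing the urlrequest from the snippet above):

// Drop the stored response so later requests hit the cache normally again
NSURLCache.sharedURLCache().removeCachedResponseForRequest(urlrequest)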
The solution turns out not to be invalidating the cache for an existing URL, but to set:
request.cachePolicy = NSURLRequestReloadIgnoringLocalCacheData;
when you make the next request for the resource you know to be invalid. There are options to ignore the local cache only, or to request that upstream proxies ignore their caches too. See the NSURLRequest/NSMutableURLRequest documentation for details.
Here's what has been working for me:
request.cachePolicy = NSURLRequestCachePolicy.ReloadIgnoringCacheData
Here are all the options regarding cache policy, so you may find one that better suits your needs (this uses Swift 2.2 and Xcode 7.3):
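For reference, the cases below are reconstructed from the NSURLRequestCachePolicy documentation for that SDK; the comments are mine:

enum NSURLRequestCachePolicy : UInt {
    case UseProtocolCachePolicy = 0                // default: follow the protocol's caching rules
    case ReloadIgnoringLocalCacheData = 1          // always fetch from the origin server
    case ReturnCacheDataElseLoad = 2               // use the cache however stale; load only if missing
    case ReturnCacheDataDontLoad = 3               // offline mode: use the cache or fail, never load
    case ReloadIgnoringLocalAndRemoteCacheData = 4 // also ask proxies to ignore their caches (unimplemented)
    case ReloadRevalidatingCacheData = 5           // revalidate with the origin before using the cache (unimplemented)
}
// ReloadIgnoringCacheData is a synonym for ReloadIgnoringLocalCacheData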