I'm watching the Stanford CS193p lecture video. The instructor used both downloadTaskWithRequest: and downloadTaskWithURL: when downloading some photos from Flickr. I'm a bit confused.
I looked up the former in the documentation, which describes the NSURLRequest parameter as "An NSURLRequest object that provides the URL, cache policy, request type, body data or body stream, and so on".
I have no idea what "body data" or "body stream" means. It would be fantastic if anyone could help a bit on that, but more important is the problem below.
In my (admittedly limited) experience, either method seems to work just fine.
I'm intrigued to know what the difference between the two is, if any, and on which occasions I should pick one over the other.
If you use the NSURLRequest version, all the details you mentioned can be explicitly set by you. If you use the NSURL version then the default values will be used instead. The default values will cover the majority of cases, but not everything - it really depends on what you're doing.
The body data / body stream (where a stream is a source of data) is some piece of information that needs to be sent to the server for it to understand and process the request. By default, no data is sent. Often you will use query parameters in the URL instead of body data, but again, it depends on what you're doing as to which API you need to leverage.
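To make that concrete, here's a minimal sketch of the same download started both ways. The URL is a placeholder (not a real Flickr endpoint), and the commented-out lines just illustrate what "body data" is for:

#import <Foundation/Foundation.h>

void startDownloads(void) {
    NSURLSession *session = [NSURLSession sharedSession];
    NSURL *url = [NSURL URLWithString:@"https://example.com/photo.jpg"];

    // Convenience form: a request is built for you with the default cache
    // policy, default timeout, and a GET method.
    NSURLSessionDownloadTask *simpleTask =
        [session downloadTaskWithURL:url
                   completionHandler:^(NSURL *location, NSURLResponse *response, NSError *error) {
            // location is a temporary file URL; move the file somewhere permanent here.
        }];
    [simpleTask resume];

    // Explicit form: the same download, but every detail of the request is yours to set.
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
    request.cachePolicy = NSURLRequestReloadIgnoringLocalCacheData; // skip any cached copy
    request.timeoutInterval = 30.0;
    // request.HTTPMethod = @"POST";
    // request.HTTPBody = someData; // "body data": bytes sent to the server with the request

    NSURLSessionDownloadTask *configuredTask =
        [session downloadTaskWithRequest:request
                       completionHandler:^(NSURL *location, NSURLResponse *response, NSError *error) {
            // Same handling as above.
        }];
    [configuredTask resume];
}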
We set the timeout interval for a request via NSMutableURLRequest's timeoutInterval. As Apple's documentation describes, it specifies the idle interval between packets, not a limit on the whole request. When we analysed our request logs, some requests timed out well after the number of seconds we set as timeoutInterval. We need to time out requests accurately.
From reading the documentation and blog posts, the timeoutIntervalForRequest property in NSURLSessionConfiguration appears to be the same as timeoutInterval, but the timeoutIntervalForResource property seems to fit our requirement.
However, Mattt says on objc.io that timeoutIntervalForResource "should only really be used for background transfers". Can it be used for normal requests, such as querying user info? Is it appropriate in this situation?
Thanks very much.
It can be used, but it rarely makes sense to do so.
The expected user experience from an iOS app is that when the user asks to download or view some web-based resource, the fetch should continue, retrying/resuming as needed, until the user explicitly cancels it.
That said, if you're talking about fetching something that isn't requested by the user, or if you are fetching additional optional data that you can live without, adding a resource timeout is probably fine. I'm not sure why you would bother to cancel it, though. After all, if you've already spent the network bandwidth to download half of the data, it probably makes sense to let it finish, or else that time is wasted.
Instead, it is usually better to time out any UI that is blocked by the fetch, if applicable, but continue fetching the request and cache it. That way, the next time the user does something that would cause a similar fetch to occur, you already have the data.
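A rough sketch of that pattern, assuming NSURLSession (the onUITimeout block is a hypothetical hook for whatever unblocks your UI):

#import <Foundation/Foundation.h>

// Time out the UI, not the transfer: after uiDeadline seconds, unblock the
// interface, but leave the task running so the response can still arrive
// and be cached for next time.
void fetchWithUITimeout(NSURL *url, NSTimeInterval uiDeadline, dispatch_block_t onUITimeout) {
    NSURLSessionDataTask *task =
        [[NSURLSession sharedSession] dataTaskWithURL:url
                                    completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
            // Handle/cache the data whenever it finally arrives.
        }];
    [task resume];

    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(uiDeadline * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        if (task.state == NSURLSessionTaskStateRunning) {
            onUITimeout(); // note: we do NOT cancel the task
        }
    });
}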
The only exception I can think of would be fetching video fragments or something similar, where if it takes too long, you need to abort the transfer and switch to a different, lower-quality stream. But in most apps, that should be handled by the HLS support in iOS itself, so you don't have to manage that.
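For reference, if you do decide to set a resource timeout, here's roughly where the two knobs from the question live:

#import <Foundation/Foundation.h>

NSURLSession *makeSession(void) {
    NSURLSessionConfiguration *config = [NSURLSessionConfiguration defaultSessionConfiguration];

    // Idle timeout: resets whenever data arrives (same semantics as
    // NSMutableURLRequest's timeoutInterval).
    config.timeoutIntervalForRequest = 30.0;

    // Hard ceiling on the whole transfer, retries included; aimed mainly
    // at background transfers, but settable on any configuration.
    config.timeoutIntervalForResource = 60.0;

    return [NSURLSession sessionWithConfiguration:config];
}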
Bear with me on this one please.
When setting the responseType of a WinJS.xhr request, I can set it to, among other things, 'ms-stream' or 'blob'. I was hoping to leverage the stream concept when downloading a file, in such a way that I don't have to keep the whole response in memory (video files can be huge).
However, all I can do with the 'ms-stream' object is read it with an MSStreamReader. This would be great if I could tell it to consume 1024 bytes from the stream and loop until the stream is exhausted. However, from reading the docs (I haven't tried this, so correct me if I'm wrong), it appears I can only read from the stream once (e.g. via the readAsBlob method) and I can't set the start position. This means I need to read the whole response into memory as a blob, which I could achieve by setting responseType to 'blob' in the first place. So what is the point of MSStream anyway?
Well, it turns out that the msDetachStream method gives access to the underlying stream and doesn't interrupt the download process. I initially thought that any data not yet downloaded was lost when calling it, since the docs mention that the MSStream object is closed.
I wrote a blog post a while back to help answer questions about MSStream and other oddball object types that you encounter in WinRT and the host for JavaScript apps. See http://www.kraigbrockschmidt.com/2013/03/22/msstream-blob-objects-html5/. Yes, you can use MSStreamReader for some work (it's a synchronous API), but you can also pass an MSStream to URL.createObjectURL to assign it to an img.src and so forth.
With MSStream, here's some of what I wrote: "MSStream is technically an extension of this HTML5 File API that provides interop with WinRT. When you get MSStream (or Blob) objects from some HTML5 API (like an XmlHttpRequest with responseType of “ms-stream,” as you’d use when downloading a file or video, or from the canvas’ msToBlob method), you can pass those results to various WinRT APIs that accept IInputStream or IRandomAccessStream as input. To use the canvas example, the msRandomAccessStream in a blob from msToBlob can be fed into APIs in Windows.Graphics.Imaging for transform or transcoding. A video stream can be similarly worked with using the APIs in Windows.Media.Transcoding. You might also just want to write the contents of a stream to a StorageFile (that isn’t necessarily on the file system) or copy them to a buffer for encryption."
So MSStreamReader isn't the end-all. The real use of MSStream is to pass the object into WinRT APIs that accept the aforementioned interface types, which opens many possibilities.
Admittedly, this is an under-documented area, which is exactly why I wrote my series of posts under the title, Q&A on Files, Streams, Buffers, and Blobs (the initial post is on http://www.kraigbrockschmidt.com/2013/03/18/why-doesnt-storagefile-close-method/).
I'm new to AFNetworking and I'm interested in using it to handle a few dozen JSON requests (for example, using a web service's API that responds with JSON) for my application, but I'm having some trouble understanding how I should do this.
Could anyone offer some insight on how I'd go about accomplishing this? Like I said, I'm new to the library, so an explanation with code would be greatly appreciated.
For a more specific example of what I'm trying to do: here's the Clear Read API I'm using, where you pass the article URL as a query parameter and get back a JSON response (the API extracts the article from a URL, removing the other bloat).
Example URL: http://api.thequeue.org/v1/clear?url=http://blogs.balsamiq.com/product/2012/02/27/uxstackexchange/&format=json
I'll be taking a few dozen URLs and running them all through that service and wish to save the results.
I was previously doing this with NSURLConnection in a for loop, firing off several dozen NSURLConnections, which left my data quite messed up by the end, with timeouts and whatnot from so many going at once.
I understand that it would be better to do only a few at a time, and AFNetworking seems perfect for this kind of problem, but I'm really just confused how I'd use it/subclass it or whatever.
I'd recommend starting with their Getting Started guide.
There's not much to it, really: build an AFJSONRequestOperation for each call to the API you want to make, and in the success callback, handle the deserialized JSON appropriately. If you have a bunch of calls to make, use AFHTTPClient to a) simplify some of the repetitive work of building those operations, and b) use the client's operation queue to batch them all up. You can then throttle the number of requests in flight at once with the queue's setMaxConcurrentOperationCount: method.
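A rough sketch of that approach, assuming AFNetworking 1.x (the success/failure bodies are placeholders):

#import "AFHTTPClient.h"
#import "AFJSONRequestOperation.h"

void fetchArticles(NSArray *articleURLs) {
    AFHTTPClient *client = [[AFHTTPClient alloc] initWithBaseURL:
        [NSURL URLWithString:@"http://api.thequeue.org/v1/"]];
    [client.operationQueue setMaxConcurrentOperationCount:4]; // only 4 in flight at once

    for (NSString *articleURL in articleURLs) {
        NSURLRequest *request =
            [client requestWithMethod:@"GET"
                                 path:@"clear"
                           parameters:@{ @"url": articleURL, @"format": @"json" }];

        AFJSONRequestOperation *operation =
            [AFJSONRequestOperation JSONRequestOperationWithRequest:request
                success:^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON) {
                    // Save the deserialized result for this article here.
                }
                failure:^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error, id JSON) {
                    // Log or retry; a failure here no longer disturbs the other requests.
                }];

        [client enqueueHTTPRequestOperation:operation];
    }
}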
I don't see any answer to a similar question about limiting the size of a string read from a URL. A user can, by mistake, point to a URL serving several gigabytes of data, which will obviously bring down my iPhone application. I want to limit the size to 100KB; is there any way to do that using the standard method?
In short, no, because the method you're referring to is pretty much a convenience method. If you want to check the size of the response before downloading it, you might as well use ASIHTTPRequest or even just NSURLConnection.
Specifically with ASIHTTPRequest, you'll get a delegate callback when the HTTP headers are received, so you'll know the size. Then, if the size is 100KB+, you can just cancel the request.
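A sketch of the same idea with plain NSURLConnection, in case you'd rather not take on ASIHTTPRequest: inspect the headers as soon as they arrive and cancel if the body is too big.

#import <Foundation/Foundation.h>

static const long long kMaxBytes = 100 * 1024; // the 100KB limit from the question

@interface SizeLimitedFetcher : NSObject <NSURLConnectionDataDelegate>
@property (nonatomic, strong) NSMutableData *buffer;
@end

@implementation SizeLimitedFetcher

- (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response {
    // expectedContentLength comes from the Content-Length header; it can be
    // NSURLResponseUnknownLength (-1) if the server didn't send one.
    if (response.expectedContentLength > kMaxBytes) {
        [connection cancel];
        return;
    }
    self.buffer = [NSMutableData data];
}

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    [self.buffer appendData:data];
    // Defensive check for servers that omit (or lie about) Content-Length.
    if (self.buffer.length > kMaxBytes) {
        [connection cancel];
    }
}

@end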
So, we have a very large and complex website that requires a lot of state information to be placed in the URL. Most of the time, this is just peachy and the app works well. However, there are (an increasing number of) instances where the URL length gets reaaaaallllly long. This causes huge problems in IE because of the URL length restriction.
I'm wondering, what strategies/methods have people used to reduce the length of their URLs? Specifically, I'd just need to reduce certain parameters in the URL, maybe not the entire thing.
In the past, we've pushed some of this state data into the session... however, this decreases addressability in our application (which is really important). So, any strategy that can maintain addressability would be favored.
Thanks!
Edit: To answer some questions and clarify a little, most of our parameters aren't an issue... however some of them are dynamically generated with the possibility of being very long. These parameters can contain anything legal in a URL (meaning they aren't just numbers or just letters, could be anything). Case sensitivity may or may not matter.
Also, ideally we could convert these to POST, however due to the immense architectural changes required for that, I don't think that is really possible.
If you don't want to store that data in the session scope, you can:
Send the data as a POST parameter (in a hidden field), so the data is sent in the HTTP request body instead of the URL
Store the data in a database and pass a key (that gives you access to the corresponding database record) back and forth, which opens up scalability and possibly security issues. I suppose this approach is similar to using the session scope.
most of our parameters aren't an issue... however some of them are dynamically generated with the possibility of being very long
I don't see a way to get around this if you want to keep full state info in the URL, without resorting to storing data in the session or permanently on the server side.
You might save a few bytes using some compression algorithm, but it will make the URLs unreadable, most algorithms are not very efficient on small strings, and compressing does not produce predictable results.
The only other ideas that come to mind are:
Shortening parameter names (query => q, page=> p...) might save a few bytes
If the parameter order is very static, using mod_rewrite-style directory structures (/url/param1/param2/param3) may save a few bytes because you don't need to use parameter names
Keep whatever data is repetitive and can be "shortened" back into numeric IDs or shorter identifiers (like place names of company branches, product names, ...) in an internal, global, permanent lookup table (London => 1, Paris => 2, ...)
Other than that, I think storing data on the server side, identified by a random key as @Guido already suggests, is the only real way. The upside is that you have no size limit at all: a URL like
example.com/?key=A23H7230sJFC
can "contain" as much information on server side as you want.
The downside, of course, is that for these URLs to work reliably, you'll have to keep the data on your server indefinitely. It's like having your own little URL-shortening service... Whether that is an attractive option will depend on the overall situation.
I think that's pretty much it!
One option, which is good when they really are navigable parameters, is to work these parameters into the first section of the URL, e.g.
http://example.site.com/ViewPerson.xx?PersonID=123
=>
http://example.site.com/View/Person/123/
If the data in the URL is automatically generated, can't you just generate it again when needed?
With little information it is hard to think of a solution, but I'd start by researching what RESTful architectures do in terms of using hypermedia (i.e. links) to keep state. REST in Practice (http://tinyurl.com/287r6wk) is a very good book on this very topic.
Not sure what application you are using. I have had the same problem and I use a couple of solutions (ASP.NET):
Use Server.Transfer and HttpContext (PreviousPage in .NET 2+) to get access to a public property of the source page which holds the data.
Use Server.Transfer along with a hidden field in the source page.
Use compression on the query string.