AFNetworking set image without extension - ios

There is a method in AFNetworking that can set an image conveniently:
- (void)setImageWithURL:(NSURL *)url
placeholderImage:(UIImage *)placeholderImage
but if the image URL has no extension (like http://static.qyer.com/album/user/330/21/QkpVQBsHaA/670), there are problems: sometimes the image is displayed correctly, and sometimes it is not.
I found this method:
[AFImageRequestOperation addAcceptableContentTypes:<#(NSSet *)contentTypes#>];
How should I set the contentTypes?

If you curl the URL provided, you can see the problem:
curl -i -X HEAD http://static.qyer.com/album/user/330/21/QkpVQBsHaA/670
HTTP/1.0 200 OK
Server: nginx/1.0.11
Date: Fri, 29 Mar 2013 02:03:24 GMT
Content-Type: application/octer-stream
Last-Modified: Tue, 19 Mar 2013 09:40:23 GMT
ETag: "53430075-9814c-4d843e4fc6fc0"
Accept-Ranges: bytes
Content-Length: 622924
Powered-By-ChinaCache: MISS from 060531Q354
Powered-By-ChinaCache: MISS from 060532235y
Connection: close
Content-Type: application/octer-stream (which is, strangely, a misspelling of application/octet-stream) is not a valid image MIME type. If you have any control over the server, I would strongly recommend fixing it to send real MIME types, for the sake of everyone accessing the CDN.
Otherwise, I would recommend you add */* to the list of acceptable content types. This should accept anything thrown at it. You can also manually specify any content types you might expect the CDN to serve, including application/octer-stream.
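For example, here is a minimal sketch, assuming AFNetworking 1.x, where addAcceptableContentTypes: is a class method available on AFImageRequestOperation (it is the same method quoted in the question); you would run it once, early, e.g. in application:didFinishLaunchingWithOptions::
// Register extra acceptable MIME types before any image requests are made.
// "application/octer-stream" matches this CDN's (misspelled) header exactly;
// "*/*" is the catch-all suggested above.
[AFImageRequestOperation addAcceptableContentTypes:
    [NSSet setWithObjects:@"application/octer-stream", @"application/octet-stream", @"*/*", nil]];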

Related

iOS 9 Alamofire never loads image

I'm using AlamofireImage to load images into an ImageView in a table view cell (in a separate xib file).
The problem is that the image never shows up. I think the code is right, and the URL is valid too.
Here is the code (very simple):
let placeholderImage = UIImage(named: "imgNoPhoto1")
if let urlImage = NSURL(string: urlString) {
    photoImage.af_setImageWithURL(urlImage, placeholderImage: placeholderImage)
}
Any ideas? Could it be that the cell is not being reloaded? I have tested it on iOS 8 and 9.
Hope you can help me!
Thank you
Depending on whether you're cancelling the request in prepareForReuse or not, you may be running into a bug that I just fixed in AlamofireImage #55. I'll be pushing out a new release with this fix in the next couple of days. If this is actually what you're running into, you could comment out the cancellation logic for now, and that should fix your problem until we release the fix.
If this is not the issue you are running into, then I'd follow the advice of everyone else and make sure you can download the image using cURL.
Update #1
Okay, I found out what your issue is. The server is not returning a valid content type, which causes AlamofireImage's validation to fail, so it never tries to decode the data into an image. You can see this by running the following command in Terminal:
curl -H "User-Agent: iOS" -s -D - http://files.encuentra24.com/normalsq/sv/58/08/68/sv/58/08/68/5808689_5e7d99.jpg -o /dev/null
This runs curl against the URL you provided. It doesn't download the image data; it just prints out the response headers. I also found that you need to pass a User-Agent header, otherwise you'll always get a 403. Here's what the curl command prints out:
cnoon:~$ curl -H "User-Agent: iOS Example/com.alamofire.iOS-Example (1; OS Version 9.1 (Build 13B137))" -s -D - http://files.encuentra24.com/normalsq/sv/58/08/68/sv/58/08/68/5808689_5e7d99.jpg -o /dev/null
HTTP/1.1 200 OK
Cache-Control: max-age=2592000, public
Content-Type: image/jpg
Date: Fri, 11 Dec 2015 16:16:38 GMT
Expires: Sun, 10 Jan 2016 16:16:38 GMT
Pragma: no-cache
Server: nginx/1.7.12
Set-Cookie: sessioninfo=uv491lgtjqvkmt267l1nmlbm24; path=/
Set-Cookie: esid=deleted; expires=Thu, 11-Dec-2014 16:16:37 GMT; path=/
Vary: Accept-Encoding
Content-Length: 23757
Now the REALLY important part of this output is the Content-Type: image/jpg. That's not actually a valid Content-Type header. The valid one is image/jpeg. Therefore, AlamofireImage by default won't validate this response and won't decode the image.
Solution
Thankfully, we already have support for this built into AlamofireImage. You can add a custom content type to the Request response serializers as follows:
Alamofire.Request.addAcceptableImageContentTypes(["image/jpg"])
This will register the image/jpg content type as an acceptable content type with the response serialization system. After registering, any content type matching image/jpg will be decoded. For more info about this, please refer to AlamofireImage #58.

Downloading captions always returns a 403

When I call the captions.download endpoint with an ID that we retrieve from the captions.list endpoint, it always returns a 403. For example:
https://www.youtube.com/watch?v=1HRwpwOj4aA
I call captions.list with:
GET https://www.googleapis.com/youtube/v3/captions?part=id&videoId=1HRwpwOj4aA&key={YOUR_API_KEY}
This is the response:
cache-control: private, max-age=0, must-revalidate, no-transform
content-encoding: gzip
content-length: 236
content-type: application/json; charset=UTF-8
date: Sat, 23 May 2015 17:55:57 GMT
etag: "dhbhlDw5j8dK10GxeV_UG6RSReM/Rztb3ln4Zb6O07vb7_KSZi2y1NM"
expires: Sat, 23 May 2015 17:55:57 GMT
server: GSE
vary: Origin, X-Origin
{
  "kind": "youtube#captionListResponse",
  "etag": "\"dhbhlDw5j8dK10GxeV_UG6RSReM/Rztb3ln4Zb6O07vb7_KSZi2y1NM\"",
  "items": [
    {
      "kind": "youtube#caption",
      "etag": "\"dhbhlDw5j8dK10GxeV_UG6RSReM/pwH-4wtyQJz0U3l57fA8uKm4e1I\"",
      "id": "kHlUsiuNS4TjB25loauZNXGrjK91I1tEdNyOpTRNA78="
    }
  ]
}
When I use the above id to call captions.download:
GET https://www.googleapis.com/youtube/v3/captions/kHlUsiuNS4TjB25loauZNXGrjK91I1tEdNyOpTRNA78%3D?key={YOUR_API_KEY}
This is the response:
403 Forbidden
cache-control: private, max-age=0
content-encoding: gzip
content-length: 29
content-type: text/html; charset=UTF-8
date: Sat, 23 May 2015 17:59:05 GMT
expires: Sat, 23 May 2015 17:59:05 GMT
server: GSE
vary: Origin, X-Origin
Forbidden
Any ideas what could be happening here?
From the YouTube API docs:
403 Forbidden: The permissions associated with the request are not
sufficient to download the caption track. The request might not be
properly authorized, or the video owner might not have enabled
third-party contributions for this caption.
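Note that captions.download requires OAuth 2.0 authorization; an API key alone, as in the request above, is not sufficient. As an illustrative sketch (the bearer token is a placeholder), an authorized request would look like:
GET https://www.googleapis.com/youtube/v3/captions/kHlUsiuNS4TjB25loauZNXGrjK91I1tEdNyOpTRNA78%3D
Authorization: Bearer {ACCESS_TOKEN}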
Instead of the captions.download API, which sometimes returns a 403 (if the video does not have third-party contributions enabled for its captions), you can use youtube.com/api/timedtext.
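For example (an illustrative query; lang and v are the commonly used parameters of this undocumented endpoint, and the video ID is the dog-training video mentioned below):
curl "https://www.youtube.com/api/timedtext?lang=en&v=jBN2_YuTclU"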
What you wrote above about "only works for videos your Google account owns" is not my experience. I just successfully ran captions.download on a video (about dog training) which I definitely do not own (I do not even have a dog). However, I have tested the exact same code on the video mentioned here on Stack Overflow and got a 403 error.
So no, it doesn't always return a 403; sometimes it returns a 200! Try it with the dog video mentioned above:
python captions.py --videoid="jBN2_YuTclU" --action="download" --captionid='8S2GjnNfitU5HHoLyTeLxq_W1dP29YRFC8E8vFBUtws='
with the code you probably already have here.
It needs your client_secrets.json (downloaded from the Google credentials page) and the file youtube-v3-api-captions.json, which you can get from here. The code launches a browser where you log in for OAuth2 authorisation.
Still, there must be a reason why it works for some videos and not others. #Abhishek might have it above; the wrong comment has been upvoted there. Nothing in the output of captions.list is obviously different between a video that allows caption downloads and one that does not, so the output does not explain why one works and the other doesn't. If anyone can supply the {'key':'value'} pair in the YouTube API that controls this, it would be helpful.
Status 403 Forbidden means that nobody has the right to access that URL. You shouldn't receive that message if you have the wrong API key, for example; that should give Status 401 Unauthorised. I'd check the URL carefully.

Firefox stored cached incomplete response

I just found a partial response being cached as complete on one of our customers' machines, which rendered the whole website unusable. And I have absolutely no idea what could possibly have gone wrong there.
So what could have possibly gone wrong in the following setup?
On the server side, we have an ASP.NET application running. One IHttpHandler handles requests for JavaScript files. It basically minifies the files as they are requested and writes the result to the response stream. It also logs the length of the string being written:
String javascript = /* Javascript is retrieved here */;
HttpResponse response = context.Response;
response.ContentEncoding = Encoding.UTF8;
response.ContentType = "application/javascript";
HttpCachePolicy cache = response.Cache;
cache.SetCacheability(HttpCacheability.Public);
cache.SetMaxAge(TimeSpan.FromDays(300));
cache.SetETag(ETag);
cache.SetExpires(DateTime.Now.AddDays(300));
cache.SetLastModified(LastModified);
cache.SetRevalidation(HttpCacheRevalidation.None);
response.Headers.Add("Vary", "Accept-Encoding");
Log.Info("{0} characters sent", javascript.Length);
response.Write(javascript);
response.Flush();
response.End();
The content is then normally sent using gzip-encoding with chunked transfer-encoding. Seems simple enough to me.
Unfortunately, I just had a remote session with a user where only about 1/3 of the file was in the cache, which of course broke the file (15k instead of 44k). In the cache, the content encoding was also set to gzip; all communication took place via HTTPS.
After opening the source file on the user's machine, I just hit Ctrl-F5 and the full content was displayed immediately.
What could have possibly gone wrong?
In case it matters, please find the cache-entry from Firefox below:
Cache entry information
key: <resource-url>
fetch count: 49
last fetched: 2015-04-28 15:31:35
last modified: 2015-04-27 15:29:13
expires: 2016-02-09 14:27:05
Data size: 15998 B
Security: This is a secure document.
security-info: (...)
request-method: GET
request-Accept-Encoding: gzip, deflate
response-head: HTTP/1.1 200 OK
Cache-Control: public, max-age=25920000
Content-Type: application/javascript; charset=utf-8
Content-Encoding: gzip
Expires: Tue, 09 Feb 2016 14:27:12 GMT
Last-Modified: Tue, 02 Jan 2001 11:00:00 GMT
Etag: W/"0"
Vary: Accept-Encoding
Server: Microsoft-IIS/8.0
X-AspNet-Version: 4.0.30319
Date: Wed, 15 Apr 2015 13:27:12 GMT
necko:classified: 1
Your client's browser is most likely caching the JavaScript files, which would mean the src of your scripts isn't changing.
For instance, if you were to request myScripts:
<script src="/myScripts.js">
then the first time, the client would request that file; on any further requests, the browser would read from its cache.
You need to append some sort of unique value, such as a timestamp, to the end of your script URLs. Even if the browser has cached the file, the new value acts like a new file name.
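For example (the query value here is illustrative; any token that changes whenever the file changes will do):
<script src="/myScripts.js?v=20150428"></script>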
The client receives the new scripts after pressing Ctrl+F5 because this is a shortcut to bypass the browser's cache.
MVC has a really nice way of doing this, which involves appending a unique code that changes every time the application or its app pool is restarted. Check out MVC Bundling and Minification.
Hope this helps!

When does Rails respond with 'transfer-encoding' vs. 'content-length'?

I'm building an API on Rails 4.1.7/Nginx that responds to requests from an iOS app. We're seeing some weird caching on the client, and we think it has something to do with a small difference in the response that Rails is sending back. My questions...
1) I want to understand why, for the exact same request (with only the Authorization header value changed), Rails sometimes sends back transfer-encoding: chunked and sometimes Content-Length: <number>. I thought it might have something to do with the response size, but in the example responses whose headers I've pasted below, the data returned in the body is EXACTLY the same.
2) Is there a way to force it to use Content-Length? We think that will fix the caching issues in our iOS app.
Response #1
HTTP/1.1 200 OK
Cache-Control: max-age=0, private, must-revalidate
Content-Type: application/json; charset=utf-8
Date: Wed, 18 Mar 2015 00:59:31 GMT
ETag: "86f277ea63295460d4f3bed9a073eaa2"
Server: nginx/1.6.2
Status: 200 OK
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: dd36f139-1986-4da6-9645-4438d41e74b0
X-Runtime: 0.123865
X-XSS-Protection: 1; mode=block
transfer-encoding: chunked
Connection: keep-alive
Response #2
HTTP/1.1 200 OK
Cache-Control: max-age=0, private, must-revalidate
Content-Type: application/json; charset=utf-8
Date: Wed, 18 Mar 2015 00:59:36 GMT
ETag: "86f277ea63295460d4f3bed9a073eaa2"
Server: nginx/1.6.2
Status: 200 OK
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 0cfd7705-157b-41b5-aa36-739bc6f8302e
X-Runtime: 0.092672
X-XSS-Protection: 1; mode=block
Content-Length: 2234
Connection: keep-alive
Both responses are valid according to HTTP 1.1, so you need to fix your client code so that it can handle both. It is a bad idea to try to fix the server so that it behaves in a way that does not trigger a bug in the client.
The next version of nginx may behave differently, and your users may even have proxies that change the transfer encoding, perhaps only when they roam and use a different provider.
If you want to do some fingerprinting on the headers, the ETag header may help you, as the ETag should stay constant when the content of the response is unchanged, regardless of the transfer encoding. A sketch of that idea follows.
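A minimal client-side sketch in Objective-C (the cachedETag property is illustrative, not from the question), keying the cache on the ETag rather than on how the body was framed:
// Compare the ETag instead of relying on Content-Length, which the
// server may or may not send for the same resource.
NSHTTPURLResponse *http = (NSHTTPURLResponse *)response;
NSString *etag = http.allHeaderFields[@"ETag"];
if (etag != nil && [etag isEqualToString:self.cachedETag]) {
    // Same entity as last time; reuse the cached body.
} else {
    self.cachedETag = etag;
    // Replace the cached body with the newly received data.
}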
A server typically sends chunks when it serves a dynamic page, because it then does not need to buffer the whole page and wait until all of it has been generated.
A server often sends the response in one go if it already has it in a buffer, for example because the page is cached or the content comes from a file and is not too big. Sending in one go is more efficient on the wire; on the other hand, keeping an extra copy of the data to buffer the output needs more memory. So the server may even decide between the two according to the available memory.

Loading an image using SDWebImage with no file extension

So I'm using SDWebImage to load images asynchronously in my iOS UITableView. To do this I call:
[cell.itemImageView setImageWithURL:imageUrl placeholderImage:[UIImage imageNamed:@"placeholder.png"]];
Where imageUrl might be:
imageUrl = [NSURL URLWithString:@"http://example.org/image.png"];
This all works fine; however, the server API I'm using will occasionally return an image without a file extension. In this case SDWebImage appears not to attempt to load the image. Is there any way I can force it to download the image? I cannot just append .png to the URL, as this causes permission issues where the image is hosted.
EDIT: I have just run this on an example image currently hosted (the GUID is the filename):
curl -i -X HEAD http://example.org/images/98b67f6a-671c-482c-8e3b-0ade8bfa01be
And it returns this:
HTTP/1.1 200 OK
x-amz-id-2: UGcGcyUuUfWBD2YrhmfoRq8oiXIwEkBJ9x4TdimLAcPc9Yim26tRRgjN/PVBak+S
x-amz-request-id: 513EDF3EB400DE6E
Date: Mon, 12 Aug 2013 16:03:08 GMT
Last-Modified: Thu, 13 Jun 2013 01:28:50 GMT
ETag: "96ca8a122a94c97eee83ef685c7e2e7b"
Accept-Ranges: bytes
Content-Type: image/jpg
Content-Length: 17631
Server: AmazonS3
Not really an answer, but to resolve this I ended up changing the server to save images with their extensions.
The content type returned by the server is currently image/jpg; it should be image/jpeg instead.
