Why aren't my Nimbus didImageLoad methods being called? (iOS)

I'm new to Nimbus. Right now my app is trying to retrieve 4 images via this code:
for (int i = minFoto; i <= maxFoto; i++) {
    NINetworkImageView *networkImageView = [self networkImageView];
    NSString *resourceURL = [NSString stringWithFormat:@"%@registration/rest/users/account_get_foto/%@?fotoId=%d", baseURL, ssid, i];
    NSLog(@"%@", resourceURL); // passing resourceURL directly as the format string would be a bug
    [networkImageView setPathToNetworkImage:resourceURL
                             forDisplaySize:CGSizeMake(50, 50)
                                contentMode:networkImageView.contentMode];
}
I know my loop is working because I see all four NSLogs come out correctly. However, I am only getting the first image. networkImageViewDidStartLoad is only being called once, and neither didLoadImage nor networkImageViewDidFailLoad is being called. I find it odd that didLoadImage is never called. Never. I know I have the data because I'm using Charles Proxy (great app BTW, well worth the $50), and it shows the image data in the response packets.
So I commented this out of my delegate:
[[Nimbus networkOperationQueue] setMaxConcurrentOperationCount:1];
And as you might expect, I'm getting 4 calls to networkImageViewDidStartLoad, and still zero to didLoadImage or networkImageViewDidFailLoad.
Here are my request headers (from Charles Proxy):
GET /registration/rest/users/account_get_foto/fdbc2222-7b75-4ff4-b111-623e951e5b00?fotoId=134 HTTP/1.1
Host: -------------:8080
User-Agent: Ferret/1.0 CFNetwork/548.0.3 Darwin/11.2.0
Accept: */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Connection: keep-alive
and here are the response headers, showing a "200 OK":
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
X-Powered-By: Servlet/3.0; JBossAS-6
Content-Type: image/*
Content-Length: 461109
Date: Tue, 31 Jan 2012 21:12:33 GMT
‰PNG (PNG data deleted...)
I'm a little puzzled now. My server is clearly returning the image data, but my app just isn't getting it. Any ideas?

Well, I found it. Might as well put the answer in for future googlers.
My networkImageView had gone out of scope and was ARC'd. Funny, I thought I had saved it away, but that code was partially commented out!
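For anyone hitting the same thing: the fix is simply to keep a strong reference to each view for as long as the download is in flight. A minimal sketch, assuming a strong NSMutableArray property (the name imageViews is mine, not from the original code):

@property (nonatomic, strong) NSMutableArray *imageViews; // keeps the views alive under ARC

// initialize once, e.g. in viewDidLoad:
self.imageViews = [NSMutableArray array];

// ... inside the loop, right after creating each view:
NINetworkImageView *networkImageView = [self networkImageView];
[self.imageViews addObject:networkImageView]; // without this, ARC releases the view before its delegate methods fire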

Related

Server App on Heroku is not accessible

Hi, we have a Ruby on Rails server application on Heroku, but when I send a POST request to it, I always get a 400 Bad Request response. I have searched other 400 errors, but none are related to our issue. The HTTP response that we receive looks like this:
HTTP/1.1 400 Bad Request
Server: Cowboy
Date: Fri, 14 Aug 2015 21:55:25 GMT
Content-Length: 0
The POST request that I am sending looks like this:
POST http://ourapp.herokuapp.com/api/v1/requests HTTP/1.0
Accept-Language: en-us
Accept: text/plain
Content-Type: application/x-www-form-urlencoded
Content-Length: 38
Connection: Close
request=600&key=&newKey=danasecretkey&
If I create an HTML form to send the data, there is no issue. It's when I try to send the same request from our file server that I get the errors. I tried using a preflight request with all of the correct request headers, but received the same 400 Bad Request error.
Does anyone have any suggestions as to what I might be doing wrong?
Well, just guessing from what you've said:
request=600&key=&newKey=danasecretkey&
It's likely that you have something like params.require(:key) in your controller, and your request is missing that parameter. Rails will respond with a 400 status when a require'd parameter is missing.
What fixed it was switching from HTTP/1.0 to HTTP/1.1, adding the Host header, and changing the URI. The logs didn't tell us anything, and the params were OK. The problem was not fully grasping the HTTP header requirements.
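For illustration only (the final request isn't shown in the thread, and the exact URI change isn't given, so the path below is just carried over from the original), the corrected request would look roughly like this, with a relative path in the request line, HTTP/1.1, and a Host header:
POST /api/v1/requests HTTP/1.1
Host: ourapp.herokuapp.com
Accept-Language: en-us
Accept: text/plain
Content-Type: application/x-www-form-urlencoded
Content-Length: 38
Connection: Close
request=600&key=&newKey=danasecretkey&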

When does Rails respond with 'transfer-encoding' vs. 'content-length'?

I'm building an API on Rails 4.1.7/Nginx that responds to requests from an iOS app. We're seeing some weird caching on the client, and we think it has something to do with a small difference in the response that Rails is sending back. My questions...
1) I want to understand why, for the exact same request (with only the Authorization header value changed), Rails sometimes sends back transfer-encoding: chunked and sometimes Content-Length: <number>. I thought it might have something to do with the response size, but in the example responses whose headers I've pasted below, the data returned in the body is EXACTLY the same.
2) Is there a way to force it to use Content-Length? We think that will fix the caching issues in our iOS app.
Response #1
HTTP/1.1 200 OK
Cache-Control: max-age=0, private, must-revalidate
Content-Type: application/json; charset=utf-8
Date: Wed, 18 Mar 2015 00:59:31 GMT
ETag: "86f277ea63295460d4f3bed9a073eaa2"
Server: nginx/1.6.2
Status: 200 OK
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: dd36f139-1986-4da6-9645-4438d41e74b0
X-Runtime: 0.123865
X-XSS-Protection: 1; mode=block
transfer-encoding: chunked
Connection: keep-alive
Response #2
HTTP/1.1 200 OK
Cache-Control: max-age=0, private, must-revalidate
Content-Type: application/json; charset=utf-8
Date: Wed, 18 Mar 2015 00:59:36 GMT
ETag: "86f277ea63295460d4f3bed9a073eaa2"
Server: nginx/1.6.2
Status: 200 OK
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 0cfd7705-157b-41b5-aa36-739bc6f8302e
X-Runtime: 0.092672
X-XSS-Protection: 1; mode=block
Content-Length: 2234
Connection: keep-alive
Both responses are valid according to HTTP/1.1, so you need to fix your client code so that it can handle both. It is a bad idea to try to fix the server so that it behaves in a way that does not trigger a bug in the client.
The next version of nginx may behave differently, and your users may even be behind proxies that change the transfer encoding, perhaps only while roaming on a different provider.
If you want to do some fingerprinting on the headers, the ETag header may help you: the ETag should stay constant as long as the content of the response is unchanged, regardless of the transfer encoding.
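For example, a client-side check might look something like this. This is a hedged sketch using NSURLSession; the original question doesn't show any client code, so the URL and the surrounding structure are assumptions:

// Fingerprint on the ETag header instead of Content-Length; the ETag stays
// constant across chunked and non-chunked transfers of the same body.
NSURL *url = [NSURL URLWithString:@"https://api.example.com/endpoint"]; // hypothetical endpoint
NSURLSessionDataTask *task = [[NSURLSession sharedSession] dataTaskWithURL:url
    completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        NSHTTPURLResponse *http = (NSHTTPURLResponse *)response;
        NSString *etag = http.allHeaderFields[@"ETag"]; // lookup may be case-sensitive on older iOS versions
        // Compare etag with the previously stored value to decide whether
        // the cached body is still current.
    }];
[task resume];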
A server typically sends a dynamically generated page in chunks, because it then does not need to allocate a buffer for the whole page and wait until all of it has been generated.
A server often sends the response in one go when it already has it in a buffer, for example because it is cached or the content comes from a file that is not too big. Sending in one go is more efficient on the wire; on the other hand, buffering an extra copy of the output needs more memory. So the server may even decide between the two according to the available memory.

SSE (Server-sent events) not working

I'm testing SSE in my Rails app (server: Puma) in Chrome, but the events are never triggered:
setTimeout(function() {
  console.log("log1");
  var source = new EventSource('/websites/21/backlinks/realtime_push');
  source.addEventListener('pagination', function(e) {
    console.log("log2");
    var data = JSON.parse(e.data);
    $('#pagination').html(data.html);
  });
}, 1);
only "log1" is written to console.
In developer tools I see XHR requests every time server pushes something (each second) but the response is empty - not sure if developer tools just don't show it or something else is wrong.
curl http://localhost:3000/websites/21/backlinks/realtime_push
returns:
event: pagination
data: {"html":"pagination"}
event: pagination
data: {"html":"pagination"}
so the data should be sent back...
What could be the problem here?
UPDATE: the problem is this monkey patch for problems with render_to_string from the question here: ActionController::Live with SSE not working properly
"this only appears to fix the problem... although it will cause the controller to actually send the data, the receiving end in JavaScript for some reason still won't get notified of events"
It's strange because in both cases, when I use render_to_string and when I don't, I get the same headers with curl:
HTTP/1.1 200 OK
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-UA-Compatible: chrome=1
Content-Type: text/event-stream
Cache-Control: no-cache
X-Request-Id: 4de7c8a6-a54f-45ef-9013-0447f85438c2
X-Runtime: 0.033030
Transfer-Encoding: chunked
But in one case it works on the JavaScript side and in the other it doesn't :/

Neo4j query returns BadInputException when executed through Spring Data Neo4j

I have asked the same question in the Spring Data forum, but I think it is related to the Neo4j API. The query works fine when run within the webadmin.
The request and response are shown below. Update: it works fine when the values are hard-coded and passed.
POST /db/data/cypher HTTP/1.1
Accept: application/json;stream=true
X-Stream: true
Content-Type: application/json
User-Agent: neo4j-rest-graphdb/0
Host: localhost:7000
Connection: keep-alive
Transfer-Encoding: chunked
b1
{"query":"START n=node:LAT_LONG('withinDistance:[{0},{1}, {2}]') match n<-[:address]-(location)<-[:CONTAINS]-(pol) return pol","params": {"2":50.0,"1":-74.598347,"0":39.274423}}
0
The exception that is received is:
HTTP/1.1 400 Bad Request
Content-Encoding: UTF-8
Content-Type: application/json; stream=true
Transfer-Encoding: chunked
Server: Jetty(6.1.25)
D70
{"exception":"BadInputException","fullname":"org.neo4j.server.rest.repr.BadInputException","stacktrace":["org.neo4j.server.rest.repr.RepresentationExceptionHandlingIterable.exceptionOnHasNext(RepresentationExceptionHandlingIterable.java:50)","org.neo4j.helpers.collection.ExceptionHandlingIterable$1.hasNext(ExceptionHandlingIterable.java:60)","org.neo4j.helpers.collection.IteratorWrapper.hasNext(IteratorWrapper.java:42)","org.neo4j.server.rest.repr.ListRepresentation.serialize(ListRepresentation.java:58)","org.neo4j.server.rest.repr.Serializer.serialize(Serializer.java:75)","org.neo4j.server.rest.repr.MappingSerializer.putList(MappingSerializer.java:61)","org.neo4j.server.rest.repr.CypherResultRepresentation.serialize(CypherResultRepresentation.java:57)","org.neo4j.server.rest.repr.MappingRepresentation.serialize(MappingRepresentation.java:42)","org.neo4j.server.rest.repr.OutputFormat$1.write(OutputFormat.java:144)"],"cause":{"exception":"NullPointerException","fullname":"java.lang.NullPointerException","stacktrace":["org.neo4j.gis.spatial.indexprovider.LayerNodeIndex.query(LayerNodeIndex.java:246)","org.neo4j.gis.spatial.indexprovider.LayerNodeIndex.query(LayerNodeIndex.java:289)","org.neo4j.cypher.internal.spi.gdsimpl.GDSBackedQueryContext$$anon$1.indexQuery(GDSBackedQueryContext.scala:87)","org.neo4j.cypher.internal.executionplan.builders.IndexQueryBuilder$$anonfun$getNodeGetter$2.apply(IndexQueryBuilder.scala:83)","org.neo4j.cypher.internal.executionplan.builders.IndexQueryBuilder$$anonfun$getNodeGetter$2.apply(IndexQueryBuilder.scala:81)","org.neo4j.cypher.internal.pipes.matching.MonoDirectionalTraversalMatcher.findMatchingPaths(MonodirectionalTraversalMatcher.scala:45)","org.neo4j.cypher.internal.pipes.TraversalMatchPipe$$anonfun$internalCreateResults$1.apply(TraversalMatchPipe.scala:38)","org.neo4j.cypher.internal.pipes.TraversalMatchPipe$$anonfun$internalCreateResults$1.apply(TraversalMatchPipe.scala:35)","scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)","scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)","org.neo4j.cypher.internal.ClosingIterator$$anonfun$hasNext$1.apply$mcZ$sp(ClosingIterator.scala:36)","org.neo4j.cypher.internal.ClosingIterator$$anonfun$hasNext$1.apply(ClosingIterator.scala:35)","org.neo4j.cypher.internal.ClosingIterator$$anonfun$hasNext$1.apply(ClosingIterator.scala:35)","org.neo4j.cypher.internal.ClosingIterator.failIfThrows(ClosingIterator.scala:86)","org.neo4j.cypher.internal.ClosingIterator.hasNext(ClosingIterator.scala:35)","org.neo4j.cypher.PipeExecutionResult.hasNext(PipeExecutionResult.scala:133)","scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)","scala.collection.convert.Wrappers$IteratorWrapper.hasNext(Wrappers.scala:29)","org.neo4j.helpers.collection.ExceptionHandlingIterable$1.hasNext(ExceptionHandlingIterable.java:58)","org.neo4j.helpers.collection.IteratorWrapper.hasNext(IteratorWrapper.java:42)","org.neo4j.server.rest.repr.ListRepresentation.serialize(ListRepresentation.java:58)","org.neo4j.server.rest.repr.Serializer.serialize(Serializer.java:75)","org.neo4j.server.rest.repr.MappingSerializer.putList(MappingSerializer.java:61)","org.neo4j.server.rest.repr.CypherResultRepresentation.serialize(CypherResultRepresentation.java:57)","org.neo4j.server.rest.repr.MappingRepresentation.serialize(MappingRepresentation.java:42)","org.neo4j.server.rest.repr.OutputFormat$1.write(OutputFormat.java:144)"]}}
0
The link to my question in the Spring forum is BadInputException-for-a-neo4j-Custom-query
You cannot use parameters inside the string of an index query such as 'withinDistance:[{0},{1},{2}]'; you have to provide the whole index query as a single parameter.
{"query":"START n=node:LAT_LONG({indexQuery})
match n<-[:address]-(location)<-[:CONTAINS]-(pol)
return pol",
"params": {"indexQuery": "withinDistance:[39.27442339.274423, -74.598347, 50.0 ]"} }

AFNetworking set image without extension

There is a method in AFNetworking that sets an image conveniently:
- (void)setImageWithURL:(NSURL *)url
placeholderImage:(UIImage *)placeholderImage
but if the image URL has no extension (like http://static.qyer.com/album/user/330/21/QkpVQBsHaA/670), there are problems: sometimes the image is displayed correctly and sometimes it is not.
I found a method
[AFImageRequestOperation addAcceptableContentTypes:<#(NSSet *)contentTypes#>];
How should I set the contentTypes?
If you curl the URL provided, you can see the problem:
curl -i -X HEAD http://static.qyer.com/album/user/330/21/QkpVQBsHaA/670
HTTP/1.0 200 OK
Server: nginx/1.0.11
Date: Fri, 29 Mar 2013 02:03:24 GMT
Content-Type: application/octer-stream
Last-Modified: Tue, 19 Mar 2013 09:40:23 GMT
ETag: "53430075-9814c-4d843e4fc6fc0"
Accept-Ranges: bytes
Content-Length: 622924
Powered-By-ChinaCache: MISS from 060531Q354
Powered-By-ChinaCache: MISS from 060532235y
Connection: close
Content-Type: application/octer-stream (which is, strangely, a misspelling of application/octet-stream) is not a valid image MIME type. If you have any control over the server, I would strongly recommend you fix it to send real MIME types, for the sake of everyone accessing the CDN.
Otherwise, I would recommend you add */* to the list of acceptable content types. This should accept anything thrown at it. You can also manually specify any content types you might expect the CDN to serve, including application/octer-stream.
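To spell that out, a minimal sketch using the method from the question (call it once, before any image requests are issued; the exact set of types is up to you):

// Accept anything the CDN throws at it:
[AFImageRequestOperation addAcceptableContentTypes:[NSSet setWithObject:@"*/*"]];

// Or list the specific types you expect, including the misspelled one:
[AFImageRequestOperation addAcceptableContentTypes:
    [NSSet setWithObjects:@"image/jpeg", @"image/png",
                          @"application/octet-stream", @"application/octer-stream", nil]];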
