I'm trying to fetch some weather data (Location forecast). This request returns a couple of kB, but I would only like to request the first part, or terminate the request when I get the line
<temperature id="TTT" unit="celsius" value="xxxx"/>
Is it possible to make a URLRequest in iOS Swift that requests a certain number of bytes/chars, or that receives the response in chunks and is terminated once the temperature is received? I could set up a server acting as a proxy to cut down the data exchanged with my app, but it would be nice if I could avoid this.
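One workaround (a minimal sketch in current Swift; the forecast URL is a placeholder): stream the response with a URLSession data delegate and cancel the task as soon as the buffered text contains the temperature line, so only the first network chunks are transferred. If the server honors HTTP range requests, setting a Range header on the request (request.setValue("bytes=0-2047", forHTTPHeaderField: "Range")) is an alternative, but many servers ignore it for dynamic content.
import Foundation

final class TemperatureFetcher: NSObject, URLSessionDataDelegate {
    private var buffer = Data()
    private lazy var session = URLSession(configuration: .default,
                                          delegate: self, delegateQueue: nil)

    func fetch() {
        let url = URL(string: "https://example.com/locationforecast")! // placeholder
        session.dataTask(with: url).resume()
    }

    func urlSession(_ session: URLSession, dataTask: URLSessionDataTask,
                    didReceive data: Data) {
        buffer.append(data)
        if let text = String(data: buffer, encoding: .utf8),
           text.contains("<temperature id=\"TTT\"") {
            dataTask.cancel() // the line we need has arrived; stop the transfer
            // ... parse the value attribute out of `text` here ...
        }
    }
}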
I'm not sure when this started happening (I believe fairly recently). This is a breaking change if you rely on the steps documented here:
https://msdn.microsoft.com/en-us/office/office365/howto/sync-calendar-view
The issue is that the Office 365 & Outlook.com CalendarView API no longer seems to return an @odata.nextLink when there is more data to be fetched, if you specify "odata.track-changes" in the "Prefer" header of your request.
Here is a curl request to repro the issue... be sure to make the request authenticated as a user with at least 50 events during the time frame specified (to trigger paging).
curl -H "Authorization: Bearer <OMITTED>" -H "Accept: application/json; odata.metadata=none" -H "Prefer: odata.track-changes" "https://outlook.office.com/api/v2.0/me/calendarview?startdatetime=2016-06-16T00:00:00Z&enddatetime=2017-06-23T00:00:00Z"
When I make this request, the resulting response has 10 entries (even though there are at least 50 events), and the response does not have an @odata.nextLink. It does, however, have an @odata.deltaLink.
Is anyone else experiencing this issue?
As far as I know it has always worked this way. The initial sync returns a deltaLink instead of a nextLink. You have to treat that initial sync request specially and go ahead and issue the next request using the deltaToken.
1. Initial sync request: the very first sync request sets up the sync state.
2. Initial sync response:
Check for "Preference-Applied: odata.track-changes" in the response headers to confirm that the sync attempt succeeded and that the resource supports synchronization.
If the sync attempt succeeded, the initial response always contains an @odata.deltaLink with a deltaToken value. If the response contains any data, save the deltaToken value for the second request.
If the initial response wasn't successful, or doesn't return any data (indicating there are no events in the specified calendar view), this round of sync ends.
3. Subsequent sync request: use the deltaToken or skipToken value from the previous response to issue the next request (see the second and third sync requests in the linked documentation as examples).
4. Subsequent sync response:
If the response returns any data and there is more data to sync in that time range, it will include an @odata.nextLink with a skipToken value. Save the skipToken for the next sync request.
Go back to step 3: follow the nextLink, if any, applying the corresponding skipToken value in the next sync request, and follow any subsequent nextLink until you have synchronized all the data in the time range for that calendar.
5. Final sync response: when all events in the calendar view are synchronized, the final response in this round will include an @odata.deltaLink and a deltaToken again. Save the deltaToken value for the next round of synchronization.
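In loop form, the logic looks roughly like this (a hedged Swift sketch; SyncPage and getJSON are hypothetical stand-ins for your HTTP and JSON plumbing):
import Foundation

struct SyncPage {
    let values: [[String: Any]]  // the "value" array of events
    let nextLink: String?        // "@odata.nextLink" (carries the skipToken)
    let deltaLink: String?       // "@odata.deltaLink" (carries the deltaToken)
}

// Hypothetical helper: issues an authenticated GET with the
// "Prefer: odata.track-changes" header and parses the JSON body.
func getJSON(_ url: String) -> SyncPage { fatalError("stub") }

// One round of synchronization; returns the deltaLink to save for the next round.
func syncRound(startingAt initialURL: String) -> String? {
    var url = initialURL
    var isFirstResponse = true
    while true {
        let page = getJSON(url)
        // ... store page.values ...
        if let next = page.nextLink {
            url = next                 // more data in this round: follow the skipToken
        } else if let delta = page.deltaLink, isFirstResponse, !page.values.isEmpty {
            url = delta                // initial sync returns a deltaLink: follow it once
        } else {
            return page.deltaLink      // round complete
        }
        isFirstResponse = false
    }
}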
I'm creating an iOS app using Swift.
Recently, I've encountered a weird bug.
I'm trying to check whether a URL is valid, so I'm creating a request with the URL and checking the response. I do this with dataTaskWithRequest of NSURLSession.
The weird bug is that if the URL is alibaba, the response returns after a long time (sometimes more than 20 seconds).
Why does that happen?
As far as I can tell, it happens only with this specific URL.
Here is some code, although it's probably not necessary:
let request = NSMutableURLRequest(URL: validatedUrl)
request.HTTPMethod = "HEAD"
let session = NSURLSession.sharedSession()
let task = session.dataTaskWithRequest(request) { data, response, error in
    // The response here returns after a very long time
    let url = request.URL!.absoluteString
}
task.resume()
I would appreciate some help guys!
You're retrieving the contents of a URL over the Internet. The speed at which this happens is arbitrary. It depends on the speed of the DNS server that looks up the hostname, the speed of the web server that responds to the request, the speed of the user's Internet connection, and the speed of every network in between.
You can safely assume that it will either succeed or time out within three minutes. Twenty seconds isn't even slightly unusual over a cellular network.
You should probably rethink what you're doing with this URL and why you're doing it, or at least try to figure out a way to avoid keeping the user waiting while you fetch the URL.
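If the goal is just to avoid keeping the user waiting, one mitigation (a sketch in current Swift syntax; the 5-second value is illustrative) is a short per-request timeout so a slow host fails fast:
import Foundation

let config = URLSessionConfiguration.ephemeral
config.timeoutIntervalForRequest = 5 // seconds; illustrative value
let session = URLSession(configuration: config)

var request = URLRequest(url: URL(string: "https://www.alibaba.com")!)
request.httpMethod = "HEAD"
session.dataTask(with: request) { _, response, error in
    if let error = error {
        print("failed:", error)            // a slow host surfaces as URLError.timedOut
    } else if let http = response as? HTTPURLResponse {
        print("status:", http.statusCode)  // reachable: treat the URL as valid
    }
}.resume()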
I'm very new to the Erlang world and I'm trying to write a client for the Twitter Streaming API. I'm using httpc:request to make a POST request and I constantly get a 401 error; I'm obviously doing something wrong with how I'm sending the request... What I have looks like this:
fetch_data() ->
    Method = post,
    URL = "https://stream.twitter.com/1.1/statuses/filter.json",
    Headers = "Authorization: OAuth oauth_consumer_key=\"XXX\", oauth_nonce=\"XXX\", oauth_signature=\"XXX%3D\", oauth_signature_method=\"HMAC-SHA1\", oauth_timestamp=\"XXX\", oauth_token=\"XXX-XXXXX\", oauth_version=\"1.0\"",
    ContentType = "application/json",
    Body = "{\"track\":\"keyword\"}",
    HTTPOptions = [],
    Options = [],
    R = httpc:request(Method, {URL, Headers, ContentType, Body}, HTTPOptions, Options),
    R.
At this point I'm confident there's no issue with the signature as the same signature works just fine when trying to access the API with curl. I'm guessing there's some issue with how I'm making the request.
The response I'm getting with the request made the way demonstrated above is:
{ok,{{"HTTP/1.1",401,"Unauthorized"},
[{"cache-control","must-revalidate,no-cache,no-store"},
{"connection","close"},
{"www-authenticate","Basic realm=\"Firehose\""},
{"content-length","1243"},
{"content-type","text/html"}],
"<html>\n<head>\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\"/>\n<title>Error 401 Unauthorized</title>\n</head>\n<body>\n<h2>HTTP ERROR: 401</h2>\n<p>Problem accessing '/1.1/statuses/filter.json'. Reason:\n<pre> Unauthorized</pre>\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n</body>\n</html>\n"}}
When trying with curl I'm using this:
curl --request 'POST' 'https://stream.twitter.com/1.1/statuses/filter.json' --data 'track=keyword' --header 'Authorization: OAuth oauth_consumer_key="XXX", oauth_nonce="XXX", oauth_signature="XXX%3D", oauth_signature_method="HMAC-SHA1", oauth_timestamp="XXX", oauth_token="XXX-XXXX", oauth_version="1.0"' --verbose
and I'm getting the events just fine.
Any help on this would be greatly appreciated; I'm new to Erlang and I've been pulling my hair out on this one for quite a while.
There are several issues with your code:
In Erlang you are encoding the parameters as a JSON body, while with curl you are encoding them as form data (application/x-www-form-urlencoded). The Twitter API expects the latter. In fact, you get a 401 because the OAuth signature does not match: you included the track=keyword parameter in the computation, while Twitter's server computes the signature without the JSON body, as it should per the OAuth RFC.
You are using httpc with default options. This will not work with the streaming API, as the stream never ends: you need to process results as they arrive. For this, pass the {sync, false} option to httpc (see the sketch after this list). See also the stream and receiver options.
Finally, while httpc can work initially to access the Twitter streaming API, it adds little value compared to the code you need to develop around it to stream from the Twitter API. Depending on your needs, you might want to replace it with a simple client built directly on ssl, especially considering that ssl can decode HTTP packets (what is left for you is the HTTP chunked encoding).
For example, if your keywords are rare, you might get a timeout from httpc. Besides, without httpc it might be easier to update the list of keywords, or your code, with no downtime.
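For the record, a corrected httpc call along the lines of points 1 and 2 (OAuthHeader is a placeholder for your prebuilt OAuth header value; note that httpc expects headers as a list of {Field, Value} tuples, not a flat string):
fetch_data(OAuthHeader) ->
    {ok, _} = application:ensure_all_started(inets),
    {ok, _} = application:ensure_all_started(ssl),
    URL = "https://stream.twitter.com/1.1/statuses/filter.json",
    Headers = [{"Authorization", OAuthHeader}],
    ContentType = "application/x-www-form-urlencoded",
    Body = "track=keyword",
    {ok, RequestId} = httpc:request(post, {URL, Headers, ContentType, Body},
                                    [], [{sync, false}, {stream, self}]),
    stream_loop(RequestId).

stream_loop(RequestId) ->
    receive
        {http, {RequestId, stream_start, _Headers}} ->
            stream_loop(RequestId);
        {http, {RequestId, stream, BinBodyPart}} ->
            %% A part of the stream; decode tweets from BinBodyPart here.
            io:format("got ~p bytes~n", [byte_size(BinBodyPart)]),
            stream_loop(RequestId);
        {http, {RequestId, stream_end, _Headers}} ->
            ok
    end.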
A streaming client directly based on ssl could be implemented as a gen_server (or a simple process, if you do not follow OTP principles), or even better a gen_fsm to implement reconnection strategies. You could proceed as follows (a full sketch follows this list):
Connect using ssl:connect/3,4 specifying that you want the socket to decode the HTTP packets with {packet, http_bin} and you want the socket to be configured in passive mode {active, false}.
Send the HTTP request packet (preferably as an iolist, with binaries) with ssl:send/2,3. It should span several lines separated by CRLF (\r\n): first the request line (GET /1.1/statuses/filter.json?... HTTP/1.1), then the headers, including the OAuth headers. Make sure you include Host: stream.twitter.com as well. End with an empty line.
Receive the HTTP response. You can implement this with a loop (since the socket is in passive mode), calling ssl:recv/2,3 until you get http_eoh (end of headers). Note down whether the server will send you data chunked or not by looking at the Transfer-Encoding response header.
Configure the socket in active mode with ssl:setopts/2 and specify you want packets as raw and data in binary format. In fact, if data is chunked, you could continue to use the socket in passive mode. You could also get data line by line or get data as strings. This is a matter of taste: raw is the safest bet, line by line requires that you check the buffer size to prevent truncation of a long JSON-encoded tweet.
Receive data from Twitter as messages sent to your process, either with receive (simple process) or in the handle_info handler (if you implemented this with a gen_server). If data is chunked, you will first receive the chunk size, then the tweets, and finally the end of the chunk (cf. RFC 2616). Be prepared for tweets that span several chunks (i.e., maintain some kind of buffer). It is best to do the minimum decoding in this process and send tweets to another process, possibly in binary format.
You should also handle errors and socket being closed by Twitter. Make sure you follow Twitter's guidelines for reconnection.
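Putting these steps together, a minimal sketch (using the POST form body from the curl example rather than a query string; OAuthHeader is a placeholder, and error handling and chunk decoding are omitted):
fetch_stream(OAuthHeader) ->
    {ok, _} = application:ensure_all_started(ssl),
    %% Step 1: TLS connection, HTTP packet decoding, passive mode.
    {ok, Sock} = ssl:connect("stream.twitter.com", 443,
                             [binary, {packet, http_bin}, {active, false}]),
    %% Step 2: send the request as an iolist, lines separated by CRLF.
    Body = "track=keyword",
    Req = ["POST /1.1/statuses/filter.json HTTP/1.1\r\n",
           "Host: stream.twitter.com\r\n",
           "Authorization: ", OAuthHeader, "\r\n",
           "Content-Type: application/x-www-form-urlencoded\r\n",
           "Content-Length: ", integer_to_list(length(Body)), "\r\n",
           "\r\n", Body],
    ok = ssl:send(Sock, Req),
    %% Step 3: read the status line and headers until end of headers.
    ok = skip_headers(Sock),
    %% Step 4: switch to raw binary packets in active mode.
    ok = ssl:setopts(Sock, [{packet, raw}, {active, true}]),
    %% Step 5: receive the (chunk-encoded) stream as messages.
    receive_loop(Sock).

skip_headers(Sock) ->
    case ssl:recv(Sock, 0) of
        {ok, http_eoh}  -> ok;
        {ok, _Packet}   -> skip_headers(Sock);   %% status line or a header
        {error, Reason} -> {error, Reason}
    end.

receive_loop(Sock) ->
    receive
        {ssl, Sock, Data} ->
            %% Real code must decode the chunk framing and buffer
            %% tweets that span several chunks.
            io:format("received ~p bytes~n", [byte_size(Data)]),
            receive_loop(Sock);
        {ssl_closed, Sock} ->
            closed   %% reconnect per Twitter's guidelines
    end.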
I have written a web service using Erlang and MochiWeb. The web service returns a lot of results and takes some time to finish the computation.
I'd like to return each result as soon as the program finds it, instead of returning them all at the end.
Edit:
I found that I can use a chunked response to stream results, but it seems I can't find a way to close the connection. So, any idea on how to close a MochiWeb request?
To stream data of yet unknown size with HTTP 1.1, you can use HTTP chunked transfer encoding. In this encoding, each chunk of data is prepended by its size in hexadecimal. The last chunk is a zero-length chunk: its size is coded as 0 and it carries no data.
If the client doesn't support HTTP 1.1, the server can send data as plain binary chunks and close the connection at the end of the stream.
In MochiWeb it all works as follows:
The HTTP response should be started with Response = Request:respond({Code, ResponseHeaders, chunked}). (By the way, look at the code comments.)
Then chunks can be sent to the client with Response:write_chunk(Data). To indicate the end of the stream to the client, a chunk of zero length should be sent: Response:write_chunk(<<>>).
When the handling of the current request is over, MochiWeb decides whether the connection should be closed or can be reused as an HTTP persistent connection.
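A minimal sketch of such a handler (produce_results/0 is a hypothetical function returning a list of iodata results):
stream(Req) ->
    %% Start a chunked response; MochiWeb sends the headers now.
    Response = Req:respond({200, [{"Content-Type", "application/json"}], chunked}),
    lists:foreach(fun(Result) ->
                          Response:write_chunk(Result)
                  end,
                  produce_results()),
    %% Zero-length chunk: tells the client the stream is over.
    Response:write_chunk(<<>>).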
I'm implementing a REST API using ASP.NET MVC, and a little stumbling block has come up in the form of the Expect: 100-continue request header for requests with a post body.
RFC 2616 states that:
Upon receiving a request which includes an Expect request-header field with the "100-continue" expectation, an origin server MUST either respond with 100 (Continue) status and continue to read from the input stream, or respond with a final status code. The origin server MUST NOT wait for the request body before sending the 100 (Continue) response. If it responds with a final status code, it MAY close the transport connection or it MAY continue to read and discard the rest of the request. It MUST NOT perform the requested method if it returns a final status code.
This sounds to me like I need to make two responses to the request, i.e. it needs to immediately send an HTTP 100 Continue response, then continue reading from the original request stream (i.e. HttpContext.Request.InputStream) without ending the request, and then finally send the resultant status code (for the sake of argument, let's say it's a 204 No Content result).
So, questions are:
Am I reading the specification right, that I need to make two responses to a request?
How can this be done in ASP.NET MVC?
w.r.t. (2) I have tried using the following code before proceeding to read the input stream...
HttpContext.Response.StatusCode = 100;
HttpContext.Response.Flush();
HttpContext.Response.Clear();
...but when I try to set the final 204 status code I get the error:
System.Web.HttpException: Server cannot set status after HTTP headers have been sent.
The .NET Framework by default always sends the Expect: 100-continue header for every HTTP 1.1 POST. This behavior can be controlled programmatically per request via the System.Net.ServicePoint.Expect100Continue property, like so:
HttpWebRequest httpReq = GetHttpWebRequestForPost();
httpReq.ServicePoint.Expect100Continue = false;
It can also be globally controlled programmatically:
System.Net.ServicePointManager.Expect100Continue = false;
...or globally through configuration:
<system.net>
  <settings>
    <servicePointManager expect100Continue="false"/>
  </settings>
</system.net>
Thank you Lance Olson and Phil Haack for this info.
100-continue should be handled by IIS. Is there a reason why you want to do this explicitly?
IIS handles the 100.
That said, no, it's not two responses. In HTTP, when the Expect: 100-continue header comes in as part of the message headers, the client should wait until it receives the 100 Continue response before sending the content.
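A sketch of the exchange on the wire (the path, content type, and length are illustrative):
POST /api/values HTTP/1.1          (client sends the headers, then waits)
Host: example.com
Content-Type: application/json
Content-Length: 321
Expect: 100-continue

HTTP/1.1 100 Continue              (server's interim response)

...321 bytes of request body...    (only now does the client send the body)

HTTP/1.1 204 No Content            (server's final status)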
Because of the way ASP.NET is architected, you have little control over the output stream. Any data that gets written to the stream is automatically put into a 200 response with chunked encoding whenever you flush, whether you're in buffered mode or not.
Sadly, all this stuff is hidden away in internal methods all over the place, and the result is that if you rely on ASP.NET, as MVC does, you're pretty much unable to bypass it.
Wait till you try to access the input stream in a non-buffered way. A whole load of pain.
Seb