Inconsistent ETags in YouTube API

I'm looking at building a caching layer on top of the YouTube API, making use of the standard HTTP ETag functionality as described here: https://developers.google.com/youtube/v3/getting-started#etags
I've done some direct testing of this against the API and in most cases it seems to be working - I can get 304 responses etc.
However, I'm seeing a few places where the API returns different ETags even though the response has not changed.
In these cases the ETags seem to cycle between a set of values instead of being a single consistent value.
When I pick one of the ETags and use it to send a conditional GET, I sometimes get a 304 back (when it matches) and sometimes get a 200 with a full response (when the server used one of the other values), even though the actual response data is the same.
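For reference, this is roughly how I'm testing the conditional GETs (a minimal sketch using Python and the requests library; the API key and channel ID are placeholders):

import requests

API_KEY = "YOUR_API_KEY"        # placeholder
CHANNEL_ID = "SOME_CHANNEL_ID"  # placeholder

url = "https://www.googleapis.com/youtube/v3/channels"
params = {"part": "brandingSettings", "id": CHANNEL_ID, "key": API_KEY}

# First request: capture the ETag the API returns.
first = requests.get(url, params=params)
etag = first.headers.get("ETag")

# Conditional GET: send that ETag back in If-None-Match.
second = requests.get(url, params=params, headers={"If-None-Match": etag})

if second.status_code == 304:
    print("Not modified - cached copy is still valid")
else:
    # Full 200 response, even though the payload is equivalent apart from
    # the rotated ETag / array order described below.
    print("Full response, new ETag:", second.headers.get("ETag"))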
I've found this behaviour in at least two places:
1) youtube/v3/channels part=brandingSettings
In the response here the brandingSettings part has a "hints" value which is an array of 3 elements.
The order of the elements in this array is random and varies on each request, and it seems to affect the ETag, meaning I can get 6 different ETag values (one per permutation of the 3 items) for the same data.
Either the array order should be fixed or the ETag generation algorithm should account for this?
2) youtube/v3/channels part=contentDetails
The ETag for the response here seems to vary between 3 different values, despite there being no other differences in the data. In particular, the "etag" value within "items" remains constant.
Is this a bug in the YouTube API ETag implementation? Surely this behaviour will effectively break any caching layer trying to reduce data retrieval from the YouTube API?

Related

MS Graph API: Change page size after expanding children using query params

I'm using the MS Graph API to expand children for their name and downloadURL. This is working very well:
/path/?$expand=children($select=name,content.downloadUrl)
I want to increase the page size from the default 200 to 999 (or whatever max size it will allow). Reading the MS Graph docs, I learned that I can use $top=(int) to change the max page size.
I've tried this:
/path/?$expand=children($top=999&$select=name,content.downloadUrl)
And this:
/path/?$expand=children($select=name,content.downloadUrl;top=999)
But neither of these solutions works. I also tried replacing top=999 with something smaller like top=3, but that doesn't work either and it always returns 200 children. It's as if the "top" isn't applied at all.
Any help for this? Thanks!
You cannot control the page size in $expand. Expand should be used for situations where you want a sample set of the underlying data rather than the complete data set. It's generally best to think of it as a quick way to get the first page of data.
More importantly, you really don't want a REST API to give you "whatever the max size". HTTP may be super flexible but it is not optimal for moving large payloads and, as a result, performance will be horrible.
For optimal performance, you should try to keep your page sizes around 100 records (smaller is better) and process each page of data as you receive it.
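If the goal is simply to retrieve all of an item's children with explicit paging, one option is to query the children collection directly and follow @odata.nextLink from page to page. A rough sketch in Python, assuming an OAuth access token and using the drive root as a stand-in for your item path:

import requests

ACCESS_TOKEN = "..."  # placeholder for a valid Graph token
url = ("https://graph.microsoft.com/v1.0/me/drive/root/children"
       "?$select=name,content.downloadUrl&$top=200")
headers = {"Authorization": "Bearer " + ACCESS_TOKEN}

items = []
while url:
    page = requests.get(url, headers=headers).json()
    items.extend(page.get("value", []))
    # Graph includes @odata.nextLink whenever another page is available.
    url = page.get("@odata.nextLink")

print(len(items), "children fetched")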

Why does the Twitter API search endpoint only show a max of 15 results when sorting by popular tweets?

When using the search endpoint, I am only getting a maximum of 15 results, no matter how popular the search query is. Setting count to 100 does not make a difference, although it does when sorting by most recent. Does anybody else experience this? Is it a possible bug, or is it on purpose?
Here's an example call:
https://api.twitter.com/1.1/search/tweets.json?q=pluto&result_type=popular&count=100
Docs: https://dev.twitter.com/rest/public/search
I actually have the same problem. What I can tell you is that if your request has more than 15 results, you can "repeat" the request by checking the last block, "search_metadata", in the JSON response. It gives you the next request to make directly under "next_results". If there are no more results, that part will not appear in the response.
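In code, the paging described above might look something like this (a sketch using Python's requests and requests_oauthlib; the OAuth credentials are placeholders):

import requests
from requests_oauthlib import OAuth1

BASE = "https://api.twitter.com/1.1/search/tweets.json"
auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
              "ACCESS_TOKEN", "ACCESS_SECRET")   # placeholders

tweets = []
next_page = "?q=pluto&result_type=popular&count=100"
while next_page:
    data = requests.get(BASE + next_page, auth=auth).json()
    tweets.extend(data.get("statuses", []))
    # search_metadata carries a ready-made query string under "next_results"
    # when more results exist; when it is missing, there is nothing left.
    next_page = data.get("search_metadata", {}).get("next_results")

print(len(tweets), "tweets collected")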

When caching a response, what would be a reasonable value for the maxage header when the response does not change?

I have a YQL query that returns data that I know for sure will not ever change.
In order to avoid rate limits, I was thinking of adding a maxage header to the YQL response.
Now I'm wondering what a reasonable value would be (in the case where I know for certain that the response will never ever change): a year? 10 years? More?
Are there any specifics as to the way YQL treats the maxage header?
Nice article on maxAge and how to use it: http://www.yqlblog.net/blog/2010/03/12/avoiding-rate-limits-and-getting-banned-in-yql-and-pipes-caching-is-your-friend/ . This should answer most of your questions about max age.
For your second question, if the response will never ever change, why even make an API call in the first place? You could eliminate the network latency altogether and keep the response in a conf/property file on your server itself.
I'm not quite sure I understood what you meant about how YQL treats the header, but I'll try to answer to the best of my knowledge. From the link I shared earlier, here are a few relevant lines:
Secondly you can just ask YQL to cache the response to a statement for longer – just append the _maxage query parameter to your call and the result will be stored in cache for that length of time (but not shorter than it would have been originally):
http://query.yahooapis.com/v1/public/yql?q=select * from weather.forecast where location=90210&_maxage=3600
This is really useful when you’re using output from a table that’s not caching enough or an XML source without having to do any open table work
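In code, assuming _maxage really is just another query parameter as the post describes, a call with a one-year cache might look like this (sketch using Python's requests):

import requests

params = {
    "q": "select * from weather.forecast where location=90210",
    "_maxage": 31536000,  # one year, in seconds
}
resp = requests.get("http://query.yahooapis.com/v1/public/yql", params=params)
print(resp.status_code, resp.text[:200])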
Hope this helps.

National Weather Service (NOAA) REST API returns nil for parameters of forecast

I am using the NWS REST API as my weather service for an app I am making. I was initially reluctant to use NWS because of its bad documentation, but I couldn't resist as it is offered completely free.
Now that I am trying to use it, I am running into some difficulty. When making a request for multiple days, the minimum temperature comes back nil for several days.
(EDIT: As I have been testing the API more, I have found that it is not always the minimum temperatures that are nil. It can be a max temp or a precipitation value; it seems completely random. If you would like to make test calls using their web interface, you can do so here: http://graphical.weather.gov/xml/sample_products/browser_interface/ndfdBrowserByDay.htm
and here: http://graphical.weather.gov/xml/sample_products/browser_interface/ndfdXML.htm)
Here is an example of a request where the minimum temperatures are empty: http://graphical.weather.gov/xml/sample_products/browser_interface/ndfdBrowserClientByDay.php?listLatLon=40.863235,-73.714780&format=24%20hourly&numDays=7
Surprisingly, on their website, the minimum temperatures are available:
http://forecast.weather.gov/MapClick.php?textField1=40.83&textField2=-73.70
In the API response, you'll see that the minimum temperatures section contains about 5 (sometimes fewer; it is inconsistent) blank fields that say <value xsi:nil="true"/>
If anybody can help me it would be greatly appreciated; using the NWS API can be a little overwhelming at times.
Thanks,
The nil values, from what I can understand of the documentation, here and here, simply indicate that the data is unavailable.
Without making assumptions about NOAA's data architecture, it's conceivable that the information available via the API may differ from what their website displays.
Missing values are represented by an empty element and xsi:nil="true" (R2.2.1).
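In practice that means a client just has to treat those elements as missing. A minimal sketch of pulling the daily minimum temperatures out of the DWML response and mapping nil entries to None (Python, using the exact request URL from the question):

import requests
import xml.etree.ElementTree as ET

URL = ("http://graphical.weather.gov/xml/sample_products/browser_interface/"
       "ndfdBrowserClientByDay.php?listLatLon=40.863235,-73.714780"
       "&format=24%20hourly&numDays=7")
NIL = "{http://www.w3.org/2001/XMLSchema-instance}nil"

root = ET.fromstring(requests.get(URL).content)
for temps in root.iter("temperature"):
    if temps.get("type") == "minimum":
        values = [None if v.get(NIL) == "true" else int(v.text)
                  for v in temps.findall("value")]
        print("minimum temps:", values)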
Nil values being returned seems to involve the time period. Notice the difference between the time-layout keys (see section 5.3.2) in these requests:
k-p24h-n7-1
k-p24h-n6-1
The data times are different.
<layout-key> element
The key is derived using the following convention:
“k” stands for key.
“p24h” implies a data period length of 24 hours.
“n7” means that the number of data times is 7.
“1” is a sequential number used to keep the layout keys unique.
Here, startDate is the factor. Leaving it off includes more time and might account for some requested data not yet being available.
Per documentation:
The beginning day for which you want NDFD data. If the string is empty, the start date is assumed to be the earliest available day in the database. This input is only needed if one wants to shorten the time window data is to be retrieved for (less than entire 7 days worth), e.g. if user wants data for days 2-5.
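So one hypothetical way to narrow the window, as the documentation above describes, is to pass startDate (and a shorter numDays) so that only days the NDFD already has data for are requested. The parameter names below follow the ByDay interface linked above; treat the exact values as an assumption:

import datetime

start = datetime.date.today() + datetime.timedelta(days=1)  # skip the partial current day
url = ("http://graphical.weather.gov/xml/sample_products/browser_interface/"
       "ndfdBrowserClientByDay.php?lat=40.863235&lon=-73.714780"
       "&format=24%20hourly&numDays=5"
       "&startDate=" + start.isoformat())
print(url)  # fetch and parse it as in the earlier sketch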
I'm not experiencing the randomness you mention. The folks on NOAA's Yahoo! Groups forum might be able to tell you more.

Amazon SimpleDB: Response messages don't agree with the request parameters

I'm making a simple high scores database for an iPhone game using Amazon's SimpleDB and am running into some strange issues where SimpleDB's response messages don't seem to line up with the requests I'm sending or even the state of the data on the server.
The expected sequence of events for submitting high scores in the app is as follows (a sketch of this loop follows the list):
1) A PutAttributes request is created that tries to overwrite the current score with the new value, but only if it is greater than the last known value of the score.
2) If the expected value doesn't match the value on the server, SimpleDB's response message lets the app know what the actual value is, and a new request is created using it as the new expected value.
3) This process continues until either the response states that everything was OK or the score on the server comes back as higher than the score we're trying to submit (i.e. if somebody with a higher score submitted while this back and forth was going on).
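As a rough illustration only, the compare-and-retry logic looks like this (conditional_put_score is a hypothetical stand-in for the signed PutAttributes call built from the Attribute.1.* / Expected.1.* parameters shown further down; the stub below just simulates the server so the loop can be run on its own):

_server_score = 100  # pretend value currently stored in SimpleDB

def conditional_put_score(value, expected):
    # Simulated conditional put: succeeds only if expected matches the store.
    global _server_score
    if _server_score != expected:
        return False, _server_score   # mimic the conditional-check-failed reply
    _server_score = value
    return True, value

def submit_high_score(new_score, expected=0):
    while True:
        ok, actual = conditional_put_score(value=new_score, expected=expected)
        if ok:
            return new_score          # our write went through
        if actual >= new_score:
            return actual             # a higher score got there first
        expected = actual             # retry using the value the server reported

print(submit_high_score(200))         # -> 200 after one retry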
(In case it's relevant I'm using the ASIHTTPRequest class to handle the requests and I've explicitly turned off caching by setting each request's cache policy to ASIIgnoreCachePolicy when I create them.)
However, what's actually happening is a bit strange...
The first response comes back with the expected result. For example, the app submits a score of 200 and expects the score on the server to be 0 but it's actually 100. SimpleDB responds that the conditional check failed and lets the app know the actual value on the server (100).
The app sends a request with an updated expected value, but SimpleDB responds with exactly the same response as the first time, even though the expected value was changed (e.g. the response says the actual value is 100 and the expected value we passed in was 0, even though we had just changed it to 100).
The app sends a third request with the exact same score/expected values as the second request (e.g. 100 for both) and SimpleDB reports that the condition failed again because the actual value is 200.
So it looks like the second attempt actually worked even though SimpleDB reported a failure and gave an incorrect account of the parameters I had passed in. This odd behavior is also very consistent - every time I try to update a score with an expected value that doesn't match the one on the server the exact same sequence occurs.
I've been scratching my head at this for a while now and I'm flat out of ideas so if anyone with more SimpleDB experience than me could shed some light on this I'd be mighty grateful.
Below is a sample sequence of requests and responses in case that does a better job of describing the situation than my tortured explanation above (these values taken from actual requests and responses but I've edited out the non-relevant parts of the requests).
Request 1
(The score on the server is 100 at this point)
Attribute.1.Name=Score
Attribute.1.Replace=true
Attribute.1.Value=200
Expected.1.Name=Score
Expected.1.Value=000
Consistent=true
Response 1
Conditional check failed. Attribute (Score) value is (100) but was expected (000)
Request 2
(The app updates the expected value, but based on the response SimpleDB seems to ignore the change)
Attribute.1.Name=Score
Attribute.1.Replace=true
Attribute.1.Value=200
Expected.1.Name=Score
Expected.1.Value=100
Consistent=true
Response 2
Conditional check failed. Attribute (Score) value is (100) but was expected (000)
Request 3
(This time SimpleDB gets the expected value right but also reports that the score has been updated even though all previous responses indicated otherwise)
Attribute.1.Name=Score
Attribute.1.Replace=true
Attribute.1.Value=200
Expected.1.Name=Score
Expected.1.Value=100
Consistent=true
Response 3
Conditional check failed. Attribute (Score) value is (200) but was expected (100)
Update (10/21/10)
I checked to make sure that the requestIDs that are being returned from the server are all unique and indeed they are.
Try passing ConsistentRead=true in your requests.
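ConsistentRead applies to read operations (GetAttributes and Select) rather than to the conditional PutAttributes itself, so if the app double-checks the stored score before or after a put, that read would carry the flag. A hypothetical parameter set (domain and item names are placeholders, signing parameters omitted):

params = {
    "Action": "GetAttributes",
    "DomainName": "HighScores",   # placeholder domain
    "ItemName": "player-123",     # placeholder item
    "ConsistentRead": "true",
    # ...plus the usual Version, AWSAccessKeyId, Timestamp and Signature
}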
