I am trying to overcome an intermittent 409 (Conflict) error that occurs while uploading or updating metadata of a file in a SharePoint document library using the Microsoft Graph SDK. To retry failed calls, the SDK provides the WithMaxRetry() and WithShouldRetry() options. WithMaxRetry() works for error code 429, and I am assuming that the ShouldRetry delegate offers a way to implement our own retry logic. Based on this assumption, I have the code below:
_graphServiceClientFactory.GetClient().Drives[driveId].Root.ItemWithPath(path).ListItem.Fields.Request()
    .WithShouldRetry((delay, attempt, httpResponse) =>
        attempt <= 5 &&
        httpResponse.StatusCode == HttpStatusCode.Conflict)
    .UpdateAsync(new FieldValueSet { AdditionalData = dataDictionary });
In my tests the ShouldRetry delegate was never called/evaluated, on failures or otherwise, and there is no documentation on the usage of WithShouldRetry(). It would be helpful to get input on how WithShouldRetry() is meant to be used.
It appears that WithShouldRetry() is faulty. I reported this issue on GitHub (the Microsoft Graph SDK repo), and they have marked it as a bug.
As a workaround, one could use Polly for the retries, as shown below:
var result = await Policy.Handle<ServiceException>(ex =>
        ex.StatusCode == HttpStatusCode.Conflict ||
        ex.StatusCode == HttpStatusCode.Locked ||
        ex.StatusCode == HttpStatusCode.ServiceUnavailable ||
        ex.StatusCode == HttpStatusCode.GatewayTimeout ||
        ex.StatusCode == HttpStatusCode.TooManyRequests)
    .Or<HttpRequestException>()
    .WaitAndRetryAsync(3, retryCount => TimeSpan.FromSeconds(Math.Pow(2, retryCount) / 2))
    .ExecuteAsync<FieldValueSet>(async () =>
        await GetDriveItemWithPath(itemPath, driveId).ListItem.Fields.Request()
            .WithMaxRetry(0)
            .UpdateAsync(new FieldValueSet { AdditionalData = dataDictionary }));
By default, the Graph SDK performs 3 retries for throttled and gateway-timeout errors. In the above code those native retries have been disabled by calling WithMaxRetry(0), and the SDK's internal retry conditions are folded into the Polly policy instead.
Note: this Polly implementation should be a temporary solution; I believe it is best to return to WithShouldRetry() once the reported bug is resolved.
We need to build our own reporting database for our YouTube channel to measure channel and video performance.
To support this, we implemented an ETL job that extracts the data with the YouTube Analytics API, using the Python code below:
import requests

def GetAnalyticsData(extractDate, accessToken, channelId):
    # The ids parameter must be URL-encoded: 'channel==<channelId>'
    channelId = 'channel%3D%3D{0}'.format(channelId)
    headers = {'Authorization': 'Bearer {}'.format(accessToken),
               'accept': 'application/json'}
    url = 'https://youtubeanalytics.googleapis.com/v2/reports?dimensions={dimensions}&endDate={enddate}&ids={ids}&maxResults={maxresults}&metrics={metrics}&startDate={startdate}&alt={alt}&sort={sort}'.format(
        dimensions='video',
        ids=channelId,
        enddate=extractDate,
        startdate=extractDate,
        metrics='views%2Ccomments%2Clikes%2Cdislikes%2Cshares%2CestimatedMinutesWatched%2CsubscribersGained%2CsubscribersLost%2CannotationClicks%2CannotationClickThroughRate%2CaverageViewDuration%2CaverageViewPercentage%2CannotationCloseRate%2CannotationImpressions%2CannotationClickableImpressions%2CannotationClosableImpressions%2CannotationCloses',
        maxresults=200,
        alt='json',
        sort='-views'
    )
    return requests.get(url, headers=headers)
We hit this API every day to get all the video metrics, sorted by views in descending order.
This solved our need only partially: the call returns at most 200 videos, and if we specify maxResults greater than 200, it returns a 400 error code.
The challenge is: how do we get all videos for a given date and channel?
Thanks in advance.
Regards,
Guna
I am no expert on the YouTube Analytics API, but it seems that you are looking for startIndex:
startIndex
integer
The 1-based index of the first entity to retrieve. (The default value is 1.) Use this parameter as a pagination mechanism along with the max-results parameter.
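For example, the pagination loop could look like the sketch below. This is untested and assumes that the v2 reports endpoint honours startIndex alongside maxResults as documented above, and that GetAnalyticsData is extended with a hypothetical startIndex argument that is appended to the request URL.

def get_all_videos(extractDate, accessToken, channelId, page_size=200):
    # page_size matches the observed per-request cap of 200 rows
    all_rows = []
    start_index = 1  # startIndex is 1-based
    while True:
        # Assumption: GetAnalyticsData now takes startIndex and adds
        # '&startIndex={0}'.format(startIndex) to the URL it builds
        response = GetAnalyticsData(extractDate, accessToken, channelId,
                                    startIndex=start_index)
        rows = response.json().get('rows', [])
        all_rows.extend(rows)
        if len(rows) < page_size:
            break  # a short or empty page means we reached the end
        start_index += page_size
    return all_rows

Each response's rows are accumulated until a page comes back with fewer than 200 entries, which signals the last page.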
I am using the tweepy streaming API to get tweets containing a particular hashtag. The problem I am facing is that I am unable to extract the full text of a tweet from the Streaming API: only 140 characters are available, and after that the text gets truncated.
Here is the code:
import tweepy
from datetime import datetime
from time import sleep

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)

def analyze_status(text):
    # Treat anything that starts with 'RT' as a retweet
    return 'RT' in text[0:3]

class MyStreamListener(tweepy.StreamListener):

    def on_status(self, status):
        if not analyze_status(status.text):
            with open('fetched_tweets.txt', 'a', encoding='utf-8') as tf:
                tf.write(status.text + '\n\n')
            print(status.text)

    def on_error(self, status):
        print("Error Code : " + str(status))

def test_rate_limit(api, wait=True, buffer=.1):
    """
    Tests whether the rate limit of the last request has been reached.
    :param api: The `tweepy` api instance.
    :param wait: A flag indicating whether to wait for the rate limit reset
        if the rate limit has been reached.
    :param buffer: A buffer time in seconds that is added on to the waiting
        time as an extra safety margin.
    :return: True if it is ok to proceed with the next request. False otherwise.
    """
    # Get the number of remaining requests
    remaining = int(api.last_response.getheader('x-rate-limit-remaining'))
    # Check if we have reached the limit
    if remaining == 0:
        limit = int(api.last_response.getheader('x-rate-limit-limit'))
        reset = int(api.last_response.getheader('x-rate-limit-reset'))
        # Parse the UTC time
        reset = datetime.fromtimestamp(reset)
        # Let the user know we have reached the rate limit
        print("0 of {} requests remaining until {}.".format(limit, reset))
        if wait:
            # Determine the delay and sleep
            delay = (reset - datetime.now()).total_seconds() + buffer
            print("Sleeping for {}s...".format(delay))
            sleep(delay)
            # We have waited for the rate limit reset. OK to proceed.
            return True
        else:
            # We have reached the rate limit. The user needs to handle
            # the rate limit manually.
            return False
    # We have not reached the rate limit
    return True

myStreamListener = MyStreamListener()
myStream = tweepy.Stream(auth=api.auth, listener=myStreamListener,
                         tweet_mode='extended')
myStream.filter(track=['#bitcoin'], is_async=True)  # 'async=' in tweepy < 3.7
Does anyone have a solution?
tweet_mode=extended will have no effect in this code, since the Streaming API does not support that parameter. If a Tweet contains longer text, the JSON response will contain an additional object called extended_tweet, which in turn contains a field called full_text.
In that case, you'll want something like print(status.extended_tweet['full_text']) to extract the longer text.
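As a minimal sketch of that lookup (assuming tweepy 3.x, where extended_tweet is exposed as a plain dict attribute only when the tweet is actually long), a listener can fall back to the truncated text:

import tweepy

class FullTextListener(tweepy.StreamListener):
    def on_status(self, status):
        # extended_tweet only exists on tweets longer than 140 characters
        if hasattr(status, 'extended_tweet'):
            print(status.extended_tweet['full_text'])
        else:
            print(status.text)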
There is a Boolean available in the Twitter stream: status.truncated is True when the message contains more than 140 characters, and only then is the extended_tweet object available:
if not status.truncated:
    text = status.text
else:
    text = status.extended_tweet['full_text']
This works only when you are streaming tweets. When you are collecting older tweets using the API method you can use something like this:
tweets = api.user_timeline(screen_name='whoever', count=5, tweet_mode='extended')
for tweet in tweets:
    print(tweet.full_text)
This full_text field contains the text of all tweets, truncated or not.
You have to enable extended tweet mode like so:
s = tweepy.Stream(auth, l, tweet_mode='extended')
Then you can print the extended tweet, but remember that, due to the Twitter API, you have to make sure the extended tweet exists, otherwise it'll throw an error:
from tweepy.streaming import StreamListener

class listener(StreamListener):

    def on_status(self, status):
        try:
            print(status.extended_tweet['full_text'])
        except AttributeError:
            # No extended_tweet attribute: fall back to the 140-char text
            print(status.text)
        return True

    def on_error(self, status_code):
        if status_code == 420:
            return False

l = listener()
Worked for me.
Building upon @AndyPiper's answer, you can check whether the tweet is there either with a try/except:
def get_tweet_text(tweet):
    try:
        return tweet.extended_tweet['full_text']
    except AttributeError:
        return tweet.text
Or check against the inner JSON:
def get_tweet_text(tweet):
    if 'extended_tweet' in tweet._json:
        return tweet.extended_tweet['full_text']
    else:
        return tweet.text
Note that extended_tweet is a dictionary object, so "tweet.extended_tweet.full_text" doesn't actually work and will throw an error.
In addition to the previous answer: in my case it worked only as status.extended_tweet['full_text'], because status.extended_tweet is nothing but a dictionary.
This is what worked for me:
status = tweet
if 'extended_tweet' in status._json:
    status_json = status._json['extended_tweet']['full_text']
elif 'retweeted_status' in status._json and 'extended_tweet' in status._json['retweeted_status']:
    status_json = status._json['retweeted_status']['extended_tweet']['full_text']
elif 'retweeted_status' in status._json:
    status_json = status._json['retweeted_status']['full_text']
else:
    status_json = status._json['full_text']
print(status_json)
https://github.com/tweepy/tweepy/issues/935 - implemented from here; I needed to change what they suggest, but the idea stays the same.
I use the following function:
def full_text_tweet(id_):
    status = api.get_status(id_, tweet_mode="extended")
    try:
        # For retweets, the full text lives on the original status
        return status.retweeted_status.full_text
    except AttributeError:
        return status.full_text
and then call it in my list:
tweets_list = []
# Loop through all pulled tweets
for tweet in tweets:
    # Store the id and the full text of the tweet object
    tweet_list = [str(tweet.id), str(full_text_tweet(tweet.id))]
    tweets_list.append(tweet_list)
Try this; it is the simplest and fastest way:
def on_status(self, status):
    if hasattr(status, "retweeted_status"):  # Check if Retweet
        try:
            print(status.retweeted_status.extended_tweet["full_text"])
        except AttributeError:
            print(status.retweeted_status.text)
    else:
        try:
            print(status.extended_tweet["full_text"])
        except AttributeError:
            print(status.text)
See Twitter's documentation on extended tweets for how this is achieved.
I am using Linq2Twitter in my ASP.NET Web Forms application to return recent user tweets:
var tweets = await
    (from tweet in ctx.Status
     where tweet.Type == StatusType.User
        && tweet.ScreenName == screenName
        && tweet.ExcludeReplies == true
        && tweet.IncludeMyRetweet == false
        && tweet.Count == 10
        && tweet.RetweetCount < 1
     select tweet)
    .Take(count)
    .ToListAsync();
This seems to work well and I get the expected JSON return, but...
When I try to construct a link to the original tweet...
"https://twitter.com/" + ScreenName + "/status/" + data.StatusId
I get a "Sorry, page does not exist" error.
Upon investigation it appears that the returned StatusId is incorrect. For example, the returned StatusId is:
500244784682774500
When the actual tweet refers to:
500244784682774528
In other words, in this case, the StatusId seems to be 28 adrift.
Can anyone throw any light on what is happening/what I am doing wrong?
Thanks.
After some debugging I found that the ID returned to the LinqToTwitter application was correct; the problem occurred either in the JSON converter or in JavaScript itself, which is unable to handle the unsigned 64-bit id value.
The solution was to create a simple view model from the returned results (using an extension method on the LinqToTwitter.Status object) and pass that to the client instead of the whole data graph.
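As a minimal sketch of that idea (TweetViewModel and ToViewModel are hypothetical names, not LinqToTwitter API; StatusID and Text are the Status properties assumed here): expose the id as a string, because JavaScript numbers are IEEE 754 doubles and cannot represent integers above 2^53 exactly, which is why 500244784682774528 round-trips as ...500.

// Hypothetical view model: carry the id as a string so the JSON value
// is never parsed into a lossy JavaScript number.
public class TweetViewModel
{
    public string StatusId { get; set; }
    public string Text { get; set; }
}

public static class StatusExtensions
{
    public static TweetViewModel ToViewModel(this LinqToTwitter.Status tweet)
    {
        return new TweetViewModel
        {
            // ulong -> string conversion preserves every digit
            StatusId = tweet.StatusID.ToString(),
            Text = tweet.Text
        };
    }
}

The tweet link can then be built as "https://twitter.com/" + ScreenName + "/status/" + model.StatusId with no precision loss.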
Original message (translated from Japanese):
With the YouTube API video search (http://gdata.youtube.com/feeds/api/videos?v=2), the same results are sometimes returned even when different start-index values are specified.
feeds/api/videos?v=2&max-results=15&start-index=881
feeds/api/videos?v=2&max-results=15&start-index=981
Running the two API calls above returns the same results. Could you tell me the reason?
It should return an error instead, since the maximum allowed number of videos to return is now 500:
https://code.google.com/p/gdata-issues/issues/detail?id=4282#c6
If you request a start-index below 501, as in the URLs below, it will return different results:
http://gdata.youtube.com/feeds/api/videos?v=2&max-results=15&start-index=481
http://gdata.youtube.com/feeds/api/videos?v=2&max-results=15&start-index=381