YouTube GeolocationSearch: Daily Limit for Unauthenticated Use Exceeded

I downloaded several samples from https://developers.google.com/youtube/v3/docs/videos/list
Search.java and UpdateVideo.java run successfully, which means I have set up youtube.apikey, client_secret, and client_id correctly; otherwise those two programs would fail to run. But when I run GeolocationSearch.java, it always shows me:
There was a service error: 403 : Daily Limit for Unauthenticated Use
Exceeded. Continued use requires signup.
The error is thrown when I run
VideoListResponse listResponse = listVideosRequest.execute();
The line before it,
YouTube.Videos.List listVideosRequest = youtube.videos().list("snippet, recordingDetails").setId(videoId);
runs successfully, and I can get the video ID.
I don't know why the execute() call gives me this error.

I am sure this is the right answer: the request has to carry the API key explicitly (via setKey), otherwise it is sent unauthenticated and hits the daily limit error above.
YouTube.Videos.List listVideosRequest = youtube.videos().list("snippet").setId(videoId);
listVideosRequest.setKey(apiKey);
VideoListResponse listResponse = listVideosRequest.execute();
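For comparison, here is a rough Python sketch of the same idea using google-api-python-client. This is an assumption for illustration only, not part of the original Java samples; the key and video ID are placeholders. The point is simply that attaching the key means the list request is no longer treated as unauthenticated.
# Hypothetical Python equivalent: pass the API key so the request is authenticated.
from googleapiclient.discovery import build

api_key = "YOUR_API_KEY"  # placeholder
youtube = build("youtube", "v3", developerKey=api_key)

response = youtube.videos().list(
    part="snippet,recordingDetails",
    id="VIDEO_ID"  # placeholder
).execute()
print(response)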

Related

Slack Conversations API conversations.kick returning "channel_not_found" for a public channel

I am writing a Slack integration that can boot certain users out of public channels when certain conditions are met. I have added several OAuth scopes to the bot token, including the following:
channels:history
channels:manage
channels:read
chat:write
chat:write.public
groups:write
im:write
mpim:write
users:read
I am writing my bot in Python using the slack-bolt library and asyncio. However, when I try to invoke this code:
await app.client.conversations_kick(channel=channel_id, user=user_id)
I get the following error:
slack_sdk.errors.SlackApiError: The request to the Slack API failed. (url: https://www.slack.com/api/conversations.kick)
The server responded with: {'ok': False, 'error': 'channel_not_found'}
I know for a fact that both the channel_id and user_id arguments I'm passing in are valid. The channel ID I'm using is the string C01PAE3DB0A. I know it is valid because I can use the very same value for channel_id in the following API call:
response = await app.client.conversations_info(channel=channel_id)
And when I call conversations_info like that, I get all of the information about my channel. (The same is true for calling users_info with the user_id - it returns successfully.) So why is it that when I pass my valid channel_id parameter to conversations_kick, I consistently receive this channel_not_found error? What am I missing?
So I got in touch directly with Slack support about this and they confirmed that there is a bug on their end. Specifically, the bug is that I should have received a restricted_action error response instead of a channel_not_found response. Apparently this is a known issue that is on their backlog.
The reason the API call would (try to) return this restricted_action error is simply that there is a workspace setting which, by default, prevents non-admins from kicking people out of public channels. Furthermore, this setting can only be changed by the workspace owner - one tier above admins.
But assuming you are the owner of the Slack workspace, you simply have to log into the Settings & Permissions page and change the setting labeled "People who can remove members from public channels" from "Workspace admins and owners only (default)" to "Everyone, except guests."
Once I made that change, my API calls started succeeding.
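Until Slack fixes the misleading error code, it can help to treat channel_not_found from conversations.kick as a possible permissions problem rather than a bad ID. A minimal sketch, assuming the same slack-bolt/asyncio setup as the question (the helper name is hypothetical):
from slack_sdk.errors import SlackApiError

async def kick_user(app, channel_id, user_id):
    # Try the kick; until the bug is fixed, channel_not_found may really mean
    # restricted_action (the workspace setting blocks non-admin kicks).
    try:
        await app.client.conversations_kick(channel=channel_id, user=user_id)
    except SlackApiError as e:
        error = e.response["error"]
        if error in ("channel_not_found", "restricted_action"):
            print("Kick blocked - check the 'People who can remove members from public channels' setting")
        else:
            raise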

YouTube reporting API reports are all blank

I have created a set of YouTube Reporting API jobs for a YouTube channel. The jobs were created and run every day as scheduled. However, when I go to download the reports they are all blank.
This is how I authenticate with the API:
# Assumed imports: OAuth2Credentials lives in oauth2client.client, build in googleapiclient.discovery.
import os

from googleapiclient.discovery import build
from oauth2client import client

def authenticate_from_credentials(API_SERVICE_NAME, API_VERSION):
    youtube_client_id = os.environ['youtube_client_id']
    youtube_client_secret = os.environ['youtube_client_secret']
    youtube_refresh_token = os.environ['youtube_refresh_token']

    credentials = client.OAuth2Credentials(
        access_token=None,
        client_id=youtube_client_id,
        client_secret=youtube_client_secret,
        refresh_token=youtube_refresh_token,
        token_expiry=None,
        token_uri='https://oauth2.googleapis.com/token',
        user_agent=None,
        revoke_uri=None
    )

    youtube_reporting = build(API_SERVICE_NAME, API_VERSION, credentials=credentials)
    return youtube_reporting
This is the method I have been using to create the jobs:
# Call the YouTube Reporting API's jobs.create method to create a job.
def create_reporting_job(youtube_reporting, report_type_id, name):
    # Provide keyword arguments that have values as request parameters.
    reporting_job = youtube_reporting.jobs().create(
        body=dict(
            reportTypeId=report_type_id,
            name=name
        ),
    ).execute()

    print('Reporting job "%s" created for reporting type "%s" at "%s"'
          % (reporting_job['name'], reporting_job['reportTypeId'],
             reporting_job['createTime']))
I authenticate like this:
youtube_reporting = authenticate_from_credentials('youtubereporting', 'v1')
And I will create a job like this:
create_reporting_job(youtube_reporting, "channel_combined_a2", "Channel Combined a2")
I am not sure what the problem is here. The channel does have content and subscribers, so the reports shouldn't be empty. I think there could be an issue with the credentials, or perhaps the wrong channel is associated with the report, since the developer's Google account is different from the content owner's. But I checked the channels associated with the OAuth credentials I am using, and it was the right channel.
Why might my reports be empty and how can I fix this?
I hit the same issue. The problem is that you need to wait several hours for the reports to be generated on the backend, at which point re-querying for reports will show results.
There is a subtle mention about this delay on https://developers.google.com/youtube/reporting/v1/reports under Step 3:
The API response to the jobs.create method contains a Job resource,
which specifies the ID that uniquely identifies the job. You can
start retrieving the report within 48 hours of the time that the job
is created, and the first available report will be for the day that
you scheduled the job.
This was quite confusing.
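Once that window has passed, something like the sketch below can confirm whether a job has produced anything yet. It reuses the youtube_reporting client from the question; the helper name is hypothetical, and jobs().reports().list is the Reporting API method for listing a job's generated reports.
# Hypothetical helper: list the reports a job has generated so far.
def list_reports_for_job(youtube_reporting, job_id):
    results = youtube_reporting.jobs().reports().list(jobId=job_id).execute()
    reports = results.get('reports', [])
    if not reports:
        print('No reports yet - the first report can take up to 48 hours.')
    for report in reports:
        # Each generated report carries a downloadUrl you can fetch.
        print(report['startTime'], report['endTime'], report['downloadUrl'])
    return reports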

How to get a forever token using OAuth1?

I sell products online through a website I wrote. To manage my fulfilment flow, when a purchase is made I want my app to automatically create a card on a Trello board.
I've managed to do everything okay except that after a few minutes the token that I was using expires, even though I thought I had created a token that would never expire.
I can't manually authenticate every time an order comes in.
Here's the code I've written to generate tokens (OAuth1).
Step 1 (one time): Get a manually authorized resource owner key, resource owner secret, and verifier.
import requests
from requests_oauthlib import OAuth1Session
oauth = OAuth1Session(CLIENT_KEY, client_secret=CLIENT_SECRET)
fetch_response = oauth.fetch_request_token(REQUEST_TOKEN_URL)
resource_owner_key = fetch_response.get('oauth_token')
resource_owner_secret = fetch_response.get('oauth_token_secret')
print(f'resource_owner_key: {resource_owner_key}')
print(f'resource_owner_secret: {resource_owner_secret}')
auth_url = oauth.authorization_url(AUTHORIZE_TOKEN_URL, scope='read,write', expiration='never') # expiration never
print(auth_url)
# Now manually authenticate in browser using this URL. Record resource owner key, secret and verifier
Step 2 (every time): Use resource owner key, resource owner secret, and verifier to generate a token.
oauth = OAuth1Session(CLIENT_KEY,
                      client_secret=CLIENT_SECRET,
                      resource_owner_key=RESOURCE_OWNER_KEY,
                      resource_owner_secret=RESOURCE_OWNER_SECRET,
                      verifier=VERIFIER)
oauth_tokens = oauth.fetch_access_token(ACCESS_TOKEN_URL)
token = oauth_tokens.get('oauth_token')
Step 3: Use token in POST request to make card.
This all works fine for a few minutes, then on trying to use it again I get the error:
requests_oauthlib.oauth1_session.TokenRequestDenied: Token request failed with code 500, response was 'token not found'.
I thought that token was supposed to last forever? I can still see under my account details on Trello:
read and write access on all your boards
read and write access on all your teams
Approved: today at 6:30 AM
Never Expires
Set a long expiration time on the token, e.g. an expiration in 2099 or something like that.
Solved - I was doing everything right, except that Step 2 should only be done once instead of every time. I thought I had to generate a new token for each new request, but the token generated at the 'token = ' line is actually good to save off and use forever.
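For completeness, Step 3 then just reuses that saved token on every order. A minimal sketch with the requests library (the list ID and card name are placeholders; Trello's card-creation endpoint accepts the key and token as query parameters):
import requests

# Sketch of Step 3: reuse the token saved once from Step 2 on every new order.
response = requests.post(
    'https://api.trello.com/1/cards',
    params={
        'key': CLIENT_KEY,           # same client key as in Step 1
        'token': SAVED_TOKEN,        # the token saved off from Step 2
        'idList': 'YOUR_LIST_ID',    # placeholder: target list on the board
        'name': 'New order #1234',   # placeholder: card title
    },
)
response.raise_for_status()
print(response.json()['id'])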

Twitter v1.1: 400 Bad request

I have problems with the new Twitter API: v1.0 works without problems, but if I change the URL to v1.1 I always get a "400 Bad request" error (seen with Firebug).
Example:
https://api.twitter.com/1/statuses/user_timeline.json?screen_name=twitterapi
This is working like a charm; everything works as expected.
Simply changing the URL to .../1.1/... gives me a Bad Request error, without even a JSON error response or any content at all.
https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=twitterapi
Note: It can't be rate limiting, because this was the first time I ever accessed that URL.
https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=twitterapi redirects me to https://api.twitter.com/1/statuses/user_timeline.json?screen_name=twitterapi
Looks like 1.1 is the same thing as 1
UPD: Looks like this is a rate limit (the 1.1 link worked for me 2 hours ago). Even if you hit the API page for the first time, some of your apps (desktop or mobile) could be using API methods.
UPD2: In 1.1, 400 Bad Request means you are not authorized (https://dev.twitter.com/docs/error-codes-responses, https://dev.twitter.com/docs/auth/oauth#user-context). So you need to get a user context.
You need to authenticate and authorize using OAuth before using the v1.1 APIs.
Here is something that works with Python and tweepy - it gets statuses from a user's timeline:
import tweepy

def twitter_fetch(screen_name="BBCNews", maxnumtweets=10):
    'Fetch tweets from #BBCNews'
    # API described at https://dev.twitter.com/docs/api/1.1/get/statuses/user_timeline
    consumer_token = ''   # substitute values from the Twitter website
    consumer_secret = ''
    access_token = ''
    access_secret = ''

    auth = tweepy.OAuthHandler(consumer_token, consumer_secret)
    auth.set_access_token(access_token, access_secret)
    api = tweepy.API(auth)
    #print(api.me().name)
    #api.update_status('Hello - tweepy + oauth!')

    for status in tweepy.Cursor(api.user_timeline, id=screen_name).items(2):
        print(status.text + '\n')

if __name__ == '__main__':
    twitter_fetch('BBCNews', 10)
For me the cause was the size of the media that was attached to the tweet. If it was <1.2MB it went through OK, but if it was over, I would get a 400 error every time.
Strange, considering Twitter says the limit is 3MB: https://twittercommunity.com/t/getting-media-parameter-is-invalid-after-successfully-uploading-media/58354

Clicks on OpenGraph links in timeline fail to authenticate with "Error validating verification code"

I'm trying to authenticate users coming to my site from a click on an Open Graph action in Facebook.
I've generated the action successfully with the following code. It posts to my timeline successfully. So far, so good.
require 'cgi'
client_id ="CLIENT_ID"
client_secret = "CLIENT_SECRET"
recipe = "http://foodonthetable.shadr.us/recipes/1007-spanish-chicken-and-rice"
access_token = "LONGACCESSTOKEN"
# PUBLISH
result = `curl -F access_token=#{access_token} -F recipe=#{recipe} https://graph.facebook.com/me/foodonthetable_mrh:plan_to_cook`
puts result
When I click on the link in the timeline, I'm taken to the following URL. This is where I start having issues. Basically, I want to recognize the user coming in through this link and log them in (or create a new user). I have authenticated referrals turned on, and I'm receiving the code parameter as expected.
http://foodonthetable.shadr.us/recipes/1007-spanish-chicken-and-rice?fb_action_ids=384156608318085&fb_action_types=foodonthetable_mrh%3Aplan_to_cook&fb_source=timeline_og&action_object_map=%7B%22384156608318085%22%3A10151004406923956%7D&code=AQAPExNw7MHkxcLTEi5L24iD79pVa-WYxyhBA_bhdWLCM0PCGDuPjh1WqmAyd3_O3_LSjYzPawrinHNP3nv9BCMB_XuTnr8De8xQ2AwXqCaeHUzZUPm2MPyQ_eodOC9-YtjkvXm_PzRX3JG58khalT3AJjVuZvKHn5hWGDSohXLRbHGxW-vOg_Whm3mt_WUkdd0#_=_
However, when I try to exchange this code for an access token, I'm not able to. I've tried using a couple of different versions of the redirect_uri, thinking that might be the problem (based on other posts), but nothing seems to work.
redirect_uri = "http://foodonthetable.shadr.us/recipes/1007-spanish-chicken-and-rice"
redirect_uri = "http://foodonthetable.shadr.us/facebook_connect/connect"
code = "AQAPExNw7MHkxcLTEi5L24iD79pVa-WYxyhBA_bhdWLCM0PCGDuPjh1WqmAyd3_O3_LSjYzPawrinHNP3nv9BCMB_XuTnr8De8xQ2AwXqCaeHUzZUPm2MPyQ_eodOC9-YtjkvXm_PzRX3JG58khalT3AJjVuZvKHn5hWGDSohXLRbHGxW-vOg_Whm3mt_WUkdd0"
result = `curl -F client_id=#{client_id} -F redirect_uri=#{redirect_uri} -F client_secret=#{client_secret} -F code=#{code} https://graph.facebook.com/oauth/access_token`
puts result
No matter what I do here, I always get the following error:
{"error":{"message":"Error validating verification code.","type":"OAuthException","code":100}}
Any help would be appreciated.
This does fix it. The redirect_uri you pass to the token exchange has to exactly match the URL the code was delivered to - including the fb_* query parameters - minus the code parameter itself, so rebuilding it from the incoming request works:
redirect_uri = request.url.split('&code=').first
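The same idea as a standalone sketch (Python with requests here purely for illustration, not the Ruby from the question; the IDs and incoming URL are placeholders):
import requests

# Illustration: exchange the code using the exact URL that received it,
# stripped of the trailing code parameter, as redirect_uri.
client_id = "CLIENT_ID"          # placeholder
client_secret = "CLIENT_SECRET"  # placeholder
incoming_url = "http://foodonthetable.shadr.us/recipes/1007-spanish-chicken-and-rice?fb_action_ids=...&code=AQAP..."  # placeholder

redirect_uri, code = incoming_url.split('&code=')

resp = requests.post('https://graph.facebook.com/oauth/access_token', data={
    'client_id': client_id,
    'client_secret': client_secret,
    'redirect_uri': redirect_uri,
    'code': code,
})
print(resp.text)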
