Files uploaded using OAuth 2.0 service account do not appear - oauth-2.0

I am trying to use OAuth 2.0 server-to-server authentication (using a service account) to upload a file to Google Drive. I have used their sample code as a reference; the resulting script is something like this:
import httplib2
import pprint
import sys

from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials
from apiclient.http import MediaFileUpload

def main(argv):
    # Load the key in PKCS 12 format that you downloaded from the Google API
    # Console when you created your Service account.
    f = open('key.p12', 'rb')
    key = f.read()
    f.close()

    # Check https://developers.google.com/drive/scopes for all available scopes
    OAUTH_SCOPE = 'https://www.googleapis.com/auth/drive'

    # Path to the file to upload
    FILENAME = 'testfile.txt'

    # Create an httplib2.Http object to handle our HTTP requests and authorize
    # it with the Credentials. Note that the first parameter,
    # service_account_name, is the email address created for the Service
    # account. It must be the email address associated with the key that was
    # created.
    credentials = SignedJwtAssertionCredentials(
        'xxxxx-xxxxxxx@developer.gserviceaccount.com',
        key,
        OAUTH_SCOPE)
    http = httplib2.Http()
    http = credentials.authorize(http)

    drive_service = build('drive', 'v2', http=http)

    # Insert a file
    media_body = MediaFileUpload(FILENAME, mimetype='text/plain', resumable=True)
    body = {
        'title': 'My document',
        'description': 'A test document',
        'mimeType': 'text/plain'
    }

    fil = drive_service.files().insert(body=body, media_body=media_body).execute()
    pprint.pprint(fil)

if __name__ == '__main__':
    main(sys.argv)
The script seems to run fine (no errors, and pprint shows output that looks reasonable). However, the Google Drive page for the account does not show the uploaded file. When I try to open one of the links from the pprint output to view the file, I get a "You need permission" message from Google Drive, which is odd, since I am logged in to the very account in which I created the service account.

The file is owned by the service account, not by your Google account; service accounts have their own 5 GB of Google Drive space.
You'll need to either share the file with your user account or have the service account impersonate your user account (assuming you're in a Google Apps domain) so that the file is created and owned by your user account.
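For the first option, a minimal sketch using the drive_service and fil objects from the question (the email address is a placeholder you would replace with your own account):

# Share the freshly inserted file with your own account so it shows up
# in that account's "Shared with me" view. The address below is a
# placeholder, not a real account.
new_permission = {
    'type': 'user',
    'role': 'writer',
    'value': 'you@example.com',
}
drive_service.permissions().insert(
    fileId=fil['id'], body=new_permission).execute()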

Related

Does not have storage.buckets.list access to the Google Cloud project?

I wrote a Ruby script that uploads an audio file to Google Cloud Storage:
require "google/cloud/storage"

def upload_to_Google_cloud(audio_file)
  project_id = "<project_id>"
  key_file = "<json_file>"
  storage = Google::Cloud::Storage.new project: project_id, keyfile: key_file
  bucket_name = storage.buckets.first.name
  puts bucket_name
  bucket = storage.bucket bucket_name
  local_file_path = "/path/#{audio_file}"
  file = bucket.create_file local_file_path, "#{audio_file}.flac"
  return "Uploaded #{file.name}"
end
However, every time I run the script (ruby video_dictation.rb), it fails with: xxxxxxxxx does not have storage.buckets.list access to the Google Cloud project. (Google::Cloud::PermissionDeniedError)
Any help or suggestions? Thanks!
It should be a permissions issue.
1. Create a service account. It looks like "my-storage-bucket@yourprojectname.iam.gserviceaccount.com".
2. Go to IAM & Admin -> Permissions.
3. Assign the "Storage Object Admin" role to that service account.
4. Try your code again. If it works, scope the permissions down afterwards based on your needs.
5. Remember to download the JSON key file for that particular service account.
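If you prefer the command line, the role binding from step 3 can also be granted with gcloud (the project and service-account names below are the placeholders from step 1):

gcloud projects add-iam-policy-binding yourprojectname \
    --member="serviceAccount:my-storage-bucket@yourprojectname.iam.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"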

boto3 list all accounts in an organization

I have a requirement to list all the accounts in an organization and then write all the credentials to my ~/.aws/credentials file. For this I am using boto3 in the following way:
import boto3

client = boto3.client('organizations')
response = client.list_accounts(
    NextToken='string',
    MaxResults=123
)
print(response)
This fails with the following error:
botocore.exceptions.ClientError: An error occurred (ExpiredTokenException) when calling the ListAccounts operation: The security token included in the request is expired
The question is: which token is it looking at? And if I want information about all accounts, what credentials should I be using in the credentials file or the config file?
You can use boto3 paginators and pages.
Get an organizations client by using an AWS configuration profile for the master account:
session = boto3.session.Session(profile_name=master_acct)
client = session.client('sts')
org = session.client('organizations')
Then use the org object to get a paginator.
paginator = org.get_paginator('list_accounts')
page_iterator = paginator.paginate()
Then iterate through every page of accounts.
for page in page_iterator:
    for acct in page['Accounts']:
        print(acct)  # print the account
I'm not sure what you mean by "getting credentials". You can't get someone else's credentials. What you can do is list users and, if you want, list their access keys. That would require you to assume a role in each of the member accounts.
From within the above section, you are already inside a for-loop of each member account. You could do something like this:
id = acct['Id']
role_info = {
    'RoleArn': f'arn:aws:iam::{id}:role/OrganizationAccountAccessRole',
    'RoleSessionName': id
}
credentials = client.assume_role(**role_info)
member_session = boto3.session.Session(
    aws_access_key_id=credentials['Credentials']['AccessKeyId'],
    aws_secret_access_key=credentials['Credentials']['SecretAccessKey'],
    aws_session_token=credentials['Credentials']['SessionToken'],
    region_name='us-east-1'
)
However, please note that the role named OrganizationAccountAccessRole actually needs to be present in every account, and your user in the master account needs the privileges to assume this role.
Once the prerequisites are set up, you can iterate through every account and, in each one, use member_session to access boto3 resources in that account.
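Putting the pieces together, here is a minimal end-to-end sketch that lists the IAM users of every member account. The "master" profile name is a placeholder, and as noted above the OrganizationAccountAccessRole must exist in each account:

import boto3

# Session against the master (management) account; the profile name is
# an assumption, adjust it to your ~/.aws/config.
session = boto3.session.Session(profile_name='master')
sts = session.client('sts')
org = session.client('organizations')

for page in org.get_paginator('list_accounts').paginate():
    for acct in page['Accounts']:
        acct_id = acct['Id']
        # Assume the cross-account role in the member account.
        creds = sts.assume_role(
            RoleArn=f'arn:aws:iam::{acct_id}:role/OrganizationAccountAccessRole',
            RoleSessionName=acct_id,
        )['Credentials']
        member_session = boto3.session.Session(
            aws_access_key_id=creds['AccessKeyId'],
            aws_secret_access_key=creds['SecretAccessKey'],
            aws_session_token=creds['SessionToken'],
            region_name='us-east-1',
        )
        # IAM is a global service; the region only matters for regional APIs.
        iam = member_session.client('iam')
        for user_page in iam.get_paginator('list_users').paginate():
            for user in user_page['Users']:
                print(acct_id, user['UserName'])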

Cannot Read BigQuery Table Sourced from Google Sheet (OAuth / Scope Error)

import pandas as pd
from google.cloud import bigquery
import google.auth

# Create credentials with Drive & BigQuery API scopes.
# Both APIs must be enabled for your project before running this code.
credentials, project = google.auth.default(scopes=[
    'https://www.googleapis.com/auth/drive',
    'https://www.googleapis.com/auth/spreadsheets',
    'https://www.googleapis.com/auth/bigquery',
])
client = bigquery.Client(credentials=credentials, project=project)

# Configure the external data source and query job.
external_config = bigquery.ExternalConfig('GOOGLE_SHEETS')

# Use a shareable link or grant viewing access to the email address you
# used to authenticate with BigQuery (this example Sheet is public).
sheet_url = (
    'https://docs.google.com/spreadsheets'
    '/d/1uknEkew2C3nh1JQgrNKjj3Lc45hvYI2EjVCcFRligl4/edit?usp=sharing')
external_config.source_uris = [sheet_url]
external_config.schema = [
    bigquery.SchemaField('name', 'STRING'),
    bigquery.SchemaField('post_abbr', 'STRING')
]
external_config.options.skip_leading_rows = 1  # optionally skip header row
table_id = 'BambooHRActiveRoster'

job_config = bigquery.QueryJobConfig()
job_config.table_definitions = {table_id: external_config}

# Get the top 10 rows.
sql = 'SELECT * FROM workforce.BambooHRActiveRoster LIMIT 10'
query_job = client.query(sql, job_config=job_config)  # API request

top10 = list(query_job)  # Waits for the query to finish
print('There are {} states with names starting with W.'.format(
    len(top10)))
The error I get is:
BadRequest: 400 Error while reading table: workforce.BambooHRActiveRoster, error message: Failed to read the spreadsheet. Errors: No OAuth token with Google Drive scope was found.
I can pull in data from a BigQuery table created from a CSV upload, but when the BigQuery table is created from a linked Google Sheet, I continue to receive this error.
I have tried to replicate the sample in Google's documentation (creating and querying a temporary table):
https://cloud.google.com/bigquery/external-data-drive
You are authenticating as yourself, which is generally fine for BigQuery if you have the correct permissions. Querying tables linked to Google Sheets, however, often requires a service account. Create one (or have your BI/IT team create one), share the underlying Google Sheet with the service account, and finally modify your Python script to use the service account credentials rather than your own.
The quick way around this is to use the BigQuery interface, run a SELECT * on the Sheets-linked table, save the results to a new table, and query that new table directly from your Python script (see the sketch below). This works well for a one-time upload and analysis. If the data in the Sheet changes regularly and you need to routinely query it, this is not a long-term solution.
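For completeness, a minimal sketch of that materialization step done programmatically rather than in the UI. The destination table name is hypothetical, and the credentials still need the Drive scope for this one query:

import google.auth
from google.cloud import bigquery

# Drive-scoped credentials are required while the Sheets-linked table
# is being read; afterwards the native copy needs only BigQuery access.
credentials, project = google.auth.default(scopes=[
    'https://www.googleapis.com/auth/drive',
    'https://www.googleapis.com/auth/bigquery',
])
client = bigquery.Client(credentials=credentials, project=project)

job_config = bigquery.QueryJobConfig()
job_config.destination = bigquery.TableReference.from_string(
    '{}.workforce.BambooHRActiveRoster_native'.format(project))
client.query('SELECT * FROM workforce.BambooHRActiveRoster',
             job_config=job_config).result()  # waits for completion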
I solved the problem by adding the Drive scope to the credentials used by the client:
from google.cloud import bigquery
import google.auth

credentials, project = google.auth.default(scopes=[
    'https://www.googleapis.com/auth/drive',
    'https://www.googleapis.com/auth/bigquery',
])
CLIENT = bigquery.Client(project='project', credentials=credentials)
https://cloud.google.com/bigquery/external-data-drive
import pandas as pd
from google.oauth2 import service_account
from google.cloud import bigquery

SCOPES = ['https://www.googleapis.com/auth/drive',
          'https://www.googleapis.com/auth/bigquery']
SERVICE_ACCOUNT_FILE = 'mykey.json'
project = 'myproject'  # your project id

credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES)
delegated_credentials = credentials.with_subject(
    'myserviceaccountt@domain.iam.gserviceaccount.com')
client = bigquery.Client(credentials=delegated_credentials, project=project)

sql = 'SELECT * FROM `myModel`'
DF = client.query(sql).to_dataframe()
You can also try updating your application default credentials from the command line:
gcloud auth application-default login --scopes=https://www.googleapis.com/auth/userinfo.email,https://www.googleapis.com/auth/drive,https://www.googleapis.com/auth/cloud-platform

How to get content owner info for particular video via YouTube Partner API?

Is there a way to get content owner information (attribution) for a particular video via the YouTube Partner API?
For example, this video: http://www.youtube.com/watch?v=I67cgXr6L6o&feature=c4-overview&list=UUGnjeahCJW1AF34HBmQTJ-Q is attributed to VEVO.
Is there a way to get that info via the API?
It's possible now:
https://developers.google.com/youtube/partner/docs/v1/ownership/
[EDIT]
I've used this sample to create the snippet below and successfully got the assets' content owner.
You will need to set up your Python environment and the client_secrets.json file correctly:
#!/usr/bin/python

import httplib2
import os
import sys
import requests
import json
import time

from apiclient.discovery import build
from oauth2client.file import Storage
from oauth2client.client import flow_from_clientsecrets
from oauth2client.tools import argparser, run_flow

# The CLIENT_SECRETS_FILE variable specifies the name of a file that contains
# the OAuth 2.0 information for this application, including its client_id and
# client_secret. You can acquire an OAuth 2.0 client ID and client secret from
# the Google Developers Console at
# https://console.developers.google.com/.
# Please ensure that you have enabled the YouTube Data API for your project.
# For more information about using OAuth2 to access the YouTube Data API, see:
# https://developers.google.com/youtube/v3/guides/authentication
# For more information about the client_secrets.json file format, see:
# https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
CLIENT_SECRETS_FILE = "client_secrets.json"

# This variable defines a message to display if the CLIENT_SECRETS_FILE is
# missing.
MISSING_CLIENT_SECRETS_MESSAGE = """
WARNING: Please configure OAuth 2.0

To make this sample run you will need to populate the client_secrets.json file
found at:
%s
with information from the Developers Console
https://console.developers.google.com/

For more information about the client_secrets.json file format, please visit:
https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
""" % os.path.abspath(os.path.join(os.path.dirname(__file__), CLIENT_SECRETS_FILE))

YOUTUBE_SCOPES = (
    # This OAuth 2.0 access scope allows for read-only access to the
    # authenticated user's account, but not other types of account access.
    "https://www.googleapis.com/auth/youtube.readonly",
    # This OAuth 2.0 scope grants access to YouTube Content ID API
    # functionality.
    "https://www.googleapis.com/auth/youtubepartner")
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
YOUTUBE_PARTNER_API_SERVICE_NAME = "youtubePartner"
YOUTUBE_PARTNER_API_VERSION = "v1"
KEY = "USE YOUR KEY"

# Authorize the request and store authorization credentials.
def get_authenticated_services(args):
    flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE,
                                   scope=" ".join(YOUTUBE_SCOPES),
                                   message=MISSING_CLIENT_SECRETS_MESSAGE)
    storage = Storage("%s-oauth2.json" % sys.argv[0])
    credentials = storage.get()
    if credentials is None or credentials.invalid:
        credentials = run_flow(flow, storage, args)
    youtube = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
                    http=credentials.authorize(httplib2.Http()))
    youtube_partner = build(YOUTUBE_PARTNER_API_SERVICE_NAME,
                            YOUTUBE_PARTNER_API_VERSION,
                            http=credentials.authorize(httplib2.Http()))
    return (youtube, youtube_partner)

def get_content_owner_id(youtube_partner):
    content_owners_list_response = youtube_partner.contentOwners().list(
        fetchMine=True).execute()
    return content_owners_list_response["items"][0]["id"]

def claim_search(youtube_partner, content_owner_id, _videoId):
    video_info = get_video_info(_videoId)
    print '\n---------- CLAIM SEARCH LIST - VIDEO (', video_info['title'], ') ID: (', _videoId, ')'
    claims = youtube_partner.claimSearch().list(
        videoId=_videoId,
        onBehalfOfContentOwner=content_owner_id).execute()
    print ' --- CLAIMS: ', claims
    if claims['pageInfo']['totalResults'] < 1:
        print " --- DOESN'T HAVE CLAIMS"
        return
    assetId = claims['items'][0]['assetId']
    print ' --- ASSET ID: ', assetId
    ownershipHistory = youtube_partner.ownershipHistory().list(
        assetId=assetId,
        onBehalfOfContentOwner=content_owner_id).execute()
    print ' --- OWNERSHIP HISTORY: ', ownershipHistory
    contentOwnerId = ownershipHistory['items'][0]['origination']['owner']
    print ' --- CONTENT OWNER ID: ', contentOwnerId
    contentOwners = youtube_partner.contentOwners().get(
        contentOwnerId=contentOwnerId,
        onBehalfOfContentOwner=content_owner_id).execute()
    print ' --- CONTENT OWNERS: ', contentOwners

def get_video_info(videoId):
    r = requests.get("https://www.googleapis.com/youtube/v3/videos?id=" + videoId
                     + "&part=id,snippet,contentDetails,status,statistics,topicDetails,recordingDetails&key=" + KEY)
    content = json.loads(r.content)
    return content['items'][0]['snippet']

if __name__ == "__main__":
    args = argparser.parse_args()
    (youtube, youtube_partner) = get_authenticated_services(args)
    # Video ids, e.g. from "https://www.youtube.com/watch?v=lgSLz5FeXUg"
    videos = ['I67cgXr6L6o', 'lgSLz5FeXUg']
    content_owner_id = get_content_owner_id(youtube_partner)
    for vid in videos:
        claim_search(youtube_partner, content_owner_id, vid)
        time.sleep(0.5)
(A censored screenshot of the resulting output was attached as proof.)
There is no way to check other people's ownership of, or management rights on, a video or channel. YouTube deliberately prevents that.
While ownershipHistory works, I found that GET https://www.googleapis.com/youtube/partner/v1/assets?id=[multiple asset ids] also works when it is called with the fetchOwnership=effective option:
https://developers.google.com/youtube/partner/docs/v1/assets/list
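A minimal sketch of that same call through the Python client, reusing the youtube_partner service and content_owner_id from the snippet above (the asset ids here are placeholders):

# Fetch several assets at once and ask for their effective ownership.
# The ids below are placeholders; the API accepts a comma-separated list.
assets = youtube_partner.assets().list(
    id='A123456789012345,B123456789012345',
    fetchOwnership='effective',
    onBehalfOfContentOwner=content_owner_id).execute()
for asset in assets.get('items', []):
    print asset['id'], asset.get('ownershipEffective')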

How to authenticate to flickr with Flickraw gem

I want to upload a photo, but I need to authenticate with Flickr in order to do so. I am using the flickraw gem but don't understand the instructions below:
require 'flickraw'

FlickRaw.api_key = "... Your API key ..."
FlickRaw.shared_secret = "... Your shared secret ..."

token = flickr.get_request_token(:perms => 'delete')
auth_url = token['oauth_authorize_url']

puts "Open this url in your process to complete the authentication process : #{auth_url}"
puts "Copy here the number given when you complete the process."
verify = gets.strip

begin
  flickr.get_access_token(token['oauth_token'], token['oauth_token_secret'], verify)
  login = flickr.test.login
  puts "You are now authenticated as #{login.username}"
rescue FlickRaw::FailedResponse => e
  puts "Authentication failed : #{e.msg}"
end
Can someone explain to me what this code is doing and how I should use it?
First, start the HTTP service:
rails server
On the console, you will see:
Open this url in your process to complete the authentication process : http://xxxx.xxxx.xxxx.xxxx........
Copy that URL and open it in your browser. After logging in, you will get a number like:
xxx-xxx-xxx
Just copy it into your console.
1. Create a new Flickr app and get the API key and shared secret from it.
2. flickr.get_request_token creates an OAuth request token from Flickr. You might want to set permissions to :write instead of :delete if you want to upload (see the sketch after this list).
3. auth_url is where you have to redirect to. That URL also contains the OAuth request token that you just created.
4. Once you are on the auth_url page (for this you have to log in to your Yahoo! account), you can authorize your app to access your Flickr account. This gives you a verification id.
5. Use that verification id to get the OAuth access tokens via the method call flickr.get_access_token.
6. Once you have the OAuth access tokens, you can make any API queries on Flickr that your :perms allow.
The entire process is described in detail here: http://www.flickr.com/services/api/auth.oauth.html
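Once authenticated with :write (or :delete) perms, the upload itself is a single flickraw call. A minimal sketch, assuming a photo.jpg in the working directory (filename and title are placeholders):

# Upload a local file; flickraw returns the new photo's id.
photo_id = flickr.upload_photo 'photo.jpg', :title => 'My photo'
puts "Uploaded photo #{photo_id}"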
I submitted a pull request, but here is an updated form of the documentation that should make this clearer:
== Simple

# Place near the top of your controller, i.e. underneath FlickrController < ApplicationController
require 'flickraw'

# Create an initializer file, i.e. Flickr.rb, and place it in the config/initializers folder
FlickRaw.api_key = "... Your API key ..."
FlickRaw.shared_secret = "... Your shared secret ..."

# Examples of how the methods work
list = flickr.photos.getRecent
id = list[0].id
...
