Are Google API token.json files single use? - google-sheets

I'm making a simple app to access Google Sheets files that I have saved on Google Drive. I set up a project in Google, created the OAuth credentials, and ran the Python quickstart code to generate the token.json file.
Yesterday, after doing that, I ran this portion of the quickstart code and it worked perfectly, returning the rows from the sample spreadsheet:
### Add step to pull in previous staff comments, join on MRN
from __future__ import print_function

from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials

### This only runs when not connected to NetExtender. Should work when ported to Citrix, but running locally for testing is going to be difficult
### Gsheets
SCOPES = ['https://www.googleapis.com/auth/drive', 'https://www.googleapis.com/auth/spreadsheets']

# The ID and range of a sample spreadsheet.
SAMPLE_SPREADSHEET_ID = '1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms'
SAMPLE_RANGE_NAME = 'Class Data!A2:E'

creds = Credentials.from_authorized_user_file('token.json', SCOPES)
service = build('sheets', 'v4', credentials=creds)

# Call the Sheets API
sheet = service.spreadsheets()
result = sheet.values().get(spreadsheetId=SAMPLE_SPREADSHEET_ID,
                            range=SAMPLE_RANGE_NAME).execute()
values = result.get('values', [])

if not values:
    print('No data found.')
else:
    print('Name, Major:')
    for row in values:
        # Print columns A and E, which correspond to indices 0 and 4.
        print('%s, %s' % (row[0], row[4]))
However, today when I run that code, it doesn't work anymore. I get an error:
('invalid_scope: Bad Request', {'error': 'invalid_scope', 'error_description':'Bad Request'})
Are the token files single-use, and would I need to generate a new one every time I want to run this? (That's the only reason I can imagine it would work fine yesterday but not today.) If that is the issue, is there a way to program this so that I don't need to re-authenticate with Google and create a new token file every time I want to run it?
Thanks!
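For reference, the Sheets quickstart that this snippet is based on does not load token.json directly; it wraps the load in a refresh step so an expired access token is renewed from the stored refresh token and written back out. A minimal sketch of that pattern, assuming the client secrets file is named credentials.json (the quickstart default, not something taken from the post above):

import os.path

from googleapiclient.discovery import build
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ['https://www.googleapis.com/auth/drive',
          'https://www.googleapis.com/auth/spreadsheets']

creds = None
# Reuse the saved token if one exists.
if os.path.exists('token.json'):
    creds = Credentials.from_authorized_user_file('token.json', SCOPES)

# If the saved credentials are missing or no longer valid, refresh them or
# re-run the local OAuth flow, then persist the result for the next run.
if not creds or not creds.valid:
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())
    else:
        flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
        creds = flow.run_local_server(port=0)
    with open('token.json', 'w') as token:
        token.write(creds.to_json())

service = build('sheets', 'v4', credentials=creds)

An invalid_scope error can also appear when the SCOPES list no longer matches the scopes the token was issued for, in which case deleting token.json and running the flow once more regenerates it with the current scopes.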

Related

How to monitor a Python application (without Django or another framework) in Instana

I have a simple Python application without any framework or API calls.
How will I monitor the Python application on Instana in Kubernetes?
I want a code snippet to add to the Python application that traces the application
and displays it in Instana.
"How will I monitor the Python application on Instana in Kubernetes?"
There is a publicly available guide that should help you set up the Kubernetes agent.
"I have a simple Python application without any framework or API calls."
Well, Instana is for distributed tracing, meaning distributed components calling each other's APIs, predominantly through known frameworks (with registered spans).
Nevertheless, you could make use of SDKSpan; here is a super simple example:
import os
os.environ["INSTANA_TEST"] = "true"

import instana
import opentracing.ext.tags as ext
from instana.singletons import get_tracer
from instana.util.traceutils import get_active_tracer


def foo():
    tracer = get_active_tracer()
    with tracer.start_active_span(
            operation_name="foo_op",
            child_of=tracer.active_span
    ) as foo_scope:
        foo_scope.span.set_tag(ext.SPAN_KIND, "exit")
        result = 20 + 1
        foo_scope.span.set_tag("result", result)
        return result


def main():
    tracer = get_tracer()
    with tracer.start_active_span(operation_name="main_op") as main_scope:
        main_scope.span.set_tag(ext.SPAN_KIND, "entry")
        answer = foo() + 21
        main_scope.span.set_tag("answer", answer)


if __name__ == '__main__':
    main()
    spans = get_tracer().recorder.queued_spans()
    print('\nRecorded Spans and their IDs:',
          *[(index,
             span.s,
             span.data['sdk']['name'],
             dict(span.data['sdk']['custom']['tags']),
             ) for index, span in enumerate(spans)],
          sep='\n')
This should work in any environment, even without an agent, and should give you output like this:
Recorded Spans and their IDs:
(0, 'ab3af60079f3ca57', 'foo_op', {'span.kind': 'exit', 'result': 21})
(1, '53b67f7298684cb7', 'main_op', {'span.kind': 'entry', 'answer': 42})
Of course, in production you wouldn't want to print the recorded spans but send them to a properly configured agent, so you should remove the INSTANA_TEST setting.

How to create dynamic dataframe from AWS Glue 3 catalog in local environment?

I have performed some AWS Glue version 3.0 job testing using Docker containers, as detailed here.
The following code outputs two lists, one per connection, with the names of the tables in a database:
import boto3

db_name_s3 = "s3_connection_db"
db_name_mysql = "glue_catalog_mysql_connection_db"


def retrieve_tables(database_name):
    session = boto3.session.Session()
    glue_client = session.client("glue")
    response_get_tables = glue_client.get_tables(DatabaseName=database_name)
    return response_get_tables


s3_tables_list = [table_dict["Name"] for table_dict in retrieve_tables(db_name_s3)["TableList"]]
mysql_tables_list = [table_dict["Name"] for table_dict in retrieve_tables(db_name_mysql)["TableList"]]

print(f"These are the tables from {db_name_s3} db: {s3_tables_list}\n")
print(f"These are the tables from {db_name_mysql} db {mysql_tables_list}")
Now, I try to create a dynamic dataframe with the from_catalog method in this way:
import sys

from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame

# Create the GlueContext used by create_dynamic_frame
sc = SparkContext.getOrCreate()
glueContext = GlueContext(sc)

source_activities = glueContext.create_dynamic_frame.from_catalog(
    database=db_name,
    table_name=table_name
)
When database="s3_connection_db", everything works fine; however, when I set database="glue_catalog_mysql_connection_db", I get the following error:
Py4JJavaError: An error occurred while calling o45.getDynamicFrame.
: java.lang.ClassNotFoundException: com.mysql.cj.jdbc.Driver
I understand the issue is related to the fact that I am trying to fetch data from a MySQL table, but I am not sure how to solve this. By the way, the job runs fine on the Glue console.
I would really appreciate some help, thanks!
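The ClassNotFoundException usually means the Spark session inside the local container has no MySQL JDBC driver on its classpath, something the managed Glue environment normally provides. A sketch of one way to test that theory locally, assuming a downloaded MySQL Connector/J JAR (the JAR path and table name below are placeholders, not values from the post):

from pyspark import SparkConf
from pyspark.context import SparkContext
from awsglue.context import GlueContext

# Placeholder path to a locally downloaded MySQL Connector/J JAR.
MYSQL_JAR = "/home/glue_user/jars/mysql-connector-java-8.0.28.jar"

# Put the driver JAR on the driver and executor classpaths before the
# SparkContext is created.
conf = (SparkConf()
        .set("spark.jars", MYSQL_JAR)
        .set("spark.driver.extraClassPath", MYSQL_JAR)
        .set("spark.executor.extraClassPath", MYSQL_JAR))

sc = SparkContext.getOrCreate(conf=conf)
glueContext = GlueContext(sc)

source_activities = glueContext.create_dynamic_frame.from_catalog(
    database="glue_catalog_mysql_connection_db",
    table_name="some_mysql_table"  # placeholder catalog table name
)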

IBM Cloud Functions - Python Actions

I'm learning how to use serverless functions. I'm trying to connect a Watson Assistant through webhooks using a Python action that processes a small dataset, and I'm still struggling to succeed at it.
I've done my coding in a Jupyter environment, pulling a raw CSV dataset from GitHub and using pandas to handle it. The issue is that when I invoke the action in IBM Functions, it works only about 10% of the time. I debugged in the Jupyter and Visual Studio environments and the code seems to be OK, but once I move it to the IBM Functions environment it doesn't perform.
import sys
import csv
import json
import pandas as pd

location = ('Germany')  # Passing country parameter for testing purpose
data = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_daily_reports/03-24-2020.csv')


def main(args):
    location = args.get("location")
    for index, row in data.iterrows():
        currentLoc = row['Country/Region']
        if currentLoc == location:
            covid_statistics = {
                "Province/State": row['Province/State'],
                "Country/Region": row['Country/Region'],
                "Confirmed": row['Confirmed'],
                "Deaths": row['Deaths'],
                "Recovered": row['Recovered']
            }
            return {"message": covid_statistics}
        else:
            return {"message": "Data not available"}

Server Code for Shiny-Server for a Specific App is not showing any of the Dataframes

So the ui.R file is working perfectly; however, I suspect server.R may be causing the issue here. The intended behavior is for data frames to display above the embedded HTML charts on each of my pages, but the data frames are not generated. The goal is to use the googlesheets package to read a Google Sheet and then turn it into a data frame exposed in R Shiny.
I have tried placing the data frame function and definition above and below, both within ui.R and server.R, but I am not getting any return on any of the output.
This is for a Shiny-Server hosted on Ubuntu 16.04 Server.
#
# This is the server logic of a Shiny web application. You can run the
# application by clicking 'Run App' above.
#
# Find out more about building applications with Shiny here:
#
# http://shiny.rstudio.com/
#

library(shiny)
library(shinydashboard)
library(googlesheets)
library(googleCharts)
library(googleAuthR)
library(stats)
library(searchConsoleR)
library(googleAnalyticsR)
library(httr)
library(dplyr)
library(plyr)
library(mosaic)
library(DT)
library(httpuv)
library(htmltools)

# Google Sheets for Synced Keys with Data Master
# ===============================================
handover <- gs_key("1Wu8gJ#$%%#$%%###$##$#%###$%##%-VVHcB8c")
for_gs_sheet <- gs_read(handover)
str(for_gs_sheet)

# Define server logic required to draw a histogram
shinyServer(function(input, output) {

  google_app <- oauth_app(
    "google",
    key = "3901########################m",
    secret = "b#########################z"
  )

  # oauth2.0_token(google_app)

  ## ---------- Google Authentication ---------- ##
  gs_auth(token = NULL, new_user = FALSE,
          key = getOption("################.com"),
          secret = getOption("##############Ka5mz"),
          cache = getOption("googlesheets.httr_oauth_cache"), verbose = TRUE)

  for_gs_sheet <- gs_read(handover)
  str(for_gs_sheet)

  output$mytable = DT::renderDataTable({
    df <- gs_read(handover)
  })

})
The actual results should show output related to the DT package. However, the data table is not being processed and/or is not made visible when called in the server output.
This stems from a service token issue.
The best way is to just create a service token and session that maintains an open connection and refreshes the token.
I fixed this issue by baking the token directly into the app via JSON and having the app call the JSON file within the directory where the Shiny app was stored, under the /srv/ directory. You can download a copy of the service account information and store it in the working directory of the app:
root@miradashboard1:/srv/shiny-server/Apps/CSM# ls
miradashboard-f89f243d0221.json server.R ui.R
Then make sure you call the service token within the server.R and ui.R.
service_token <- gar_auth_service(json_file="/srv/shiny-server/Apps/CSM/miradashboard-f89f243d0221.json")

Exported Dataflow Template Parameters Unknown

I've exported a Cloud Dataflow template from Dataprep as outlined here:
https://cloud.google.com/dataprep/docs/html/Export-Basics_57344556
In Dataprep, the flow pulls in text files via wildcard from Google Cloud Storage, transforms the data, and appends it to an existing BigQuery table. All works as intended.
However, when trying to start a Dataflow job from the exported template, I can't seem to get the startup parameters right. The error messages aren't overly specific but it's clear that for one thing, I'm not getting the locations (input and output) right.
The only Google-provided template for this use case (found at https://cloud.google.com/dataflow/docs/guides/templates/provided-templates#cloud-storage-text-to-bigquery) doesn't apply as it uses a UDF and also runs in Batch mode, overwriting any existing BigQuery table rather than append.
Inspecting the original Dataflow job details from Dataprep shows a number of parameters (found in the metadata file) but I haven't been able to get those to work within my code. Here's an example of one such failed configuration:
import time

from google.cloud import storage
from googleapiclient.discovery import build
from oauth2client.client import GoogleCredentials


def dummy(event, context):
    pass


def process_data(event, context):
    credentials = GoogleCredentials.get_application_default()
    service = build('dataflow', 'v1b3', credentials=credentials)

    data = event
    gsclient = storage.Client()
    file_name = data['name']
    time_stamp = time.time()

    GCSPATH = "gs://[path to template]"
    BODY = {
        "jobName": "GCS2BigQuery_{tstamp}".format(tstamp=time_stamp),
        "parameters": {
            "inputLocations": '{{\"location1\":\"[my bucket]/{filename}\"}}'.format(filename=file_name),
            "outputLocations": '{{\"location1\":\"[project]:[dataset].[table]\", [... other locations]"}}',
            "customGcsTempLocation": "gs://[my bucket]/dataflow"
        },
        "environment": {
            "zone": "us-east1-b"
        }
    }
    print(BODY["parameters"])

    request = service.projects().templates().launch(projectId=PROJECT, gcsPath=GCSPATH, body=BODY)
    response = request.execute()
    print(response)
The above example indicates an invalid field ("location1"), which I pulled from a completed Dataflow job. I know I need to specify the GCS location, the template location, and the BigQuery table, but I haven't found the correct syntax anywhere. As mentioned above, I found the field names and sample values in the job's generated metadata file.
I realize that this specific use case may not ring any bells but in general if anyone has had success determining and using the correct startup parameters for a Dataflow job exported from Dataprep, I'd be most grateful to learn more about that. Thx.
I think you need to review this document; it explains exactly the syntax required for passing the various available pipeline options, including the location parameters needed... 1
Specifically, in your code snippet the following does not follow the correct syntax:
""inputLocations" : '{{\"location1\":\"[my bucket]/{filename}\"}}'.format(filename=file_name)"
In addition to document 1, you should also review the available pipeline options and their correct syntax 2.
Please use the links... They are the official documentation links from Google. These links will never go stale or be removed; they are actively monitored and maintained by a dedicated team.
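To illustrate the syntax point the answer is making, the inputLocations and outputLocations template parameters have to be valid JSON strings; a minimal sketch of building them with json.dumps, keeping the parameter names from the question's metadata file and using placeholder project, bucket, dataset, and table names (none of these values come from the original post):

import json
import time

# Placeholders; substitute your own project, bucket, dataset, and table.
PROJECT = "my-project"
BUCKET = "my-bucket"
file_name = "incoming/file.csv"

BODY = {
    "jobName": "GCS2BigQuery_{tstamp}".format(tstamp=time.time()),
    "parameters": {
        # json.dumps produces properly escaped JSON strings for the template launcher.
        "inputLocations": json.dumps({"location1": "gs://{}/{}".format(BUCKET, file_name)}),
        "outputLocations": json.dumps({"location1": "{}:my_dataset.my_table".format(PROJECT)}),
        "customGcsTempLocation": "gs://{}/dataflow".format(BUCKET),
    },
    "environment": {"zone": "us-east1-b"},
}

print(BODY["parameters"])

The body can then be passed to the same service.projects().templates().launch(...) call shown in the question.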
