Cloud Run service log summary is not textPayload - google-cloud-run

I am trying to log from within a Cloud Run service, but every log summary shows up as POST 0 B null ms curl/7.68.0 http://example.a.run.app/, which is not informative; I have to expand each entry to see what it actually says.
My code is the following, and I am using Bazel to build the container:
import logging
import os

from flask import Flask, make_response
import google.cloud.logging

app = Flask(__name__)

client = google.cloud.logging.Client()
client.setup_logging()

@app.route('/', methods=['POST'])
def hello_world():
    logging.warning('warning message')
    logging.info('info message')
    logging.error('error message')
    return make_response('OK', 200)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
Other SO posts I am following to get logging working:
Log to Cloud Logging with correct severity from Cloud Run Job and package used in the job
Google Cloud Functions Python Logging issue
EDIT: Logs show up fine on "Logs Explorer" but not on "Logs" under "Cloud Run"

There is a bug filed in the issue tracker for displaying textPayload (when it is present) instead of a URL in the initial collapsed view of an entry in the legacy log viewers. It is still open and has seen no activity for a long time; the last update suggested:
right-click on the textPayload field, you can select "Add field to
summary line" and this will be put in the collapsed summary line.
You may consider adding your concern to that issue tracker, or raising a new issue referencing that thread, so further progress can be tracked there.
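If you need the message itself to show on the summary line in the Cloud Run "Logs" tab, one workaround (a minimal sketch, not an official fix) is to emit structured logs as single-line JSON on stdout; Cloud Run's logging agent parses the message and severity fields into the corresponding log entry fields. The helper name log_structured below is a hypothetical example:

import json

def log_structured(message, severity='INFO', **extra):
    # Cloud Run parses one-line JSON written to stdout and maps
    # 'message' and 'severity' onto the resulting log entry.
    entry = {'message': message, 'severity': severity}
    entry.update(extra)
    print(json.dumps(entry), flush=True)

log_structured('hello from hello_world', severity='WARNING', component='demo')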

Related

Embedding IPython REPL in Docker

Following this wonderful article, I'm trying to use the IPython REPL to debug my Flask app. The idea is that you run import IPython; IPython.embed() at the point where you want to take a look around your program's state.
I'm developing my app in a Docker container to make it easier to run with other services. I tried inserting this line into a views.py function like so:
@page.route('/', methods=['GET', 'POST'])
def index():
    form = SearchForm()
    if form.validate_on_submit():
        results = request.form.get('search')
        import IPython; IPython.embed()
        return render_template('page/index.html', form=form, results=results)
    else:
        flash(form.errors)
        return render_template('page/index.html', form=form)
When a valid POST request is made through the form, I see the following output from Docker:
website_1 | IPython 8.4.0 -- An enhanced Interactive Python. Type '?' for help.
website_1 | In [1]: Do you really want to exit ([y]/n)?
Then I see gunicorn logging the POST and GET requests. It would seem Docker automatically shuts down IPython and continues on to render_template.
I'm wondering if there is any way to get this to work as an actual breakpoint, as described in the article. I'd love to be able to take a look around my code this way. Thanks in advance for any advice.
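A likely cause (an assumption from the symptoms, not something stated above): the gunicorn worker has no interactive stdin or TTY, so IPython sees EOF and exits immediately. A common remedy with docker-compose is to keep stdin open and allocate a TTY for the service (the service name website is taken from the log prefix above), then attach to the container before triggering the request:

# docker-compose.yml (excerpt)
services:
  website:
    stdin_open: true   # equivalent of docker run -i
    tty: true          # equivalent of docker run -t

With that in place, run docker attach <container-name> in a separate terminal; the IPython prompt should then be usable when the breakpoint is hit.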

Are Google API token.json files single use?

I'm making a simple app to access Google Sheets files that I have saved on Google Drive. I set up a project on Google, created the OAuth credentials, and ran the Python quickstart code to generate the token.json file.
Yesterday, after doing that, I ran this portion of the quickstart code and it ran perfectly and returned the rows from the sample spreadsheet:
### Add step to pull in previous staff comments, join on MRN
from __future__ import print_function

from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials

### This only runs when not connected to NetExtender. Should work when ported to Citrix, but running locally for testing is going to be difficult.
### Gsheets
SCOPES = ['https://www.googleapis.com/auth/drive',
          'https://www.googleapis.com/auth/spreadsheets']

# The ID and range of a sample spreadsheet.
SAMPLE_SPREADSHEET_ID = '1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms'
SAMPLE_RANGE_NAME = 'Class Data!A2:E'

creds = Credentials.from_authorized_user_file('token.json', SCOPES)
service = build('sheets', 'v4', credentials=creds)

# Call the Sheets API
sheet = service.spreadsheets()
result = sheet.values().get(spreadsheetId=SAMPLE_SPREADSHEET_ID,
                            range=SAMPLE_RANGE_NAME).execute()
values = result.get('values', [])

if not values:
    print('No data found.')
else:
    print('Name, Major:')
    for row in values:
        # Print columns A and E, which correspond to indices 0 and 4.
        print('%s, %s' % (row[0], row[4]))
However, today when I run that code, it doesn't work anymore. I get an error:
('invalid_scope: Bad Request', {'error': 'invalid_scope', 'error_description':'Bad Request'})
Are the token files single-use, meaning I would need to generate a new one every time I want to run this? (That's the only reason I can imagine it would work fine yesterday but not today.) If that is the issue, is there a way to program this so that I don't need to re-authenticate with Google and create a new token file on every run?
Thanks!
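For what it's worth, token.json files are not single-use: the access token they contain expires (typically after about an hour), but the file also stores a refresh token that the client library can use to get a new access token silently. A minimal sketch of the standard quickstart pattern, where credentials.json is the OAuth client file downloaded from the Google Cloud console:

import os.path

from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ['https://www.googleapis.com/auth/drive',
          'https://www.googleapis.com/auth/spreadsheets']

creds = None
if os.path.exists('token.json'):
    creds = Credentials.from_authorized_user_file('token.json', SCOPES)
if not creds or not creds.valid:
    if creds and creds.expired and creds.refresh_token:
        # Exchange the stored refresh token for a fresh access token.
        creds.refresh(Request())
    else:
        # Fall back to the interactive browser consent flow.
        flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
        creds = flow.run_local_server(port=0)
    # Persist the (possibly refreshed) credentials for the next run.
    with open('token.json', 'w') as token:
        token.write(creds.to_json())

As for the invalid_scope error specifically, one common cause (a guess here, not a certainty) is a token.json issued under a different SCOPES list than the one now being requested; deleting token.json and re-authorizing once usually clears that.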

Failures in init.groovy.d scripts: null values returned

I'm trying to get Jenkins set up, with configuration, within a Docker environment. Per a variety of sources, it appears the suggested method is to insert scripts into JENKINS_HOME/init.groovy.d. I've taken scripts from places like the Jenkins wiki and made slight changes. They're only partially working. Here is one of them:
import java.util.logging.ConsoleHandler
import java.util.logging.FileHandler
import java.util.logging.SimpleFormatter
import java.util.logging.LogManager
import jenkins.model.Jenkins
// Log into a file
println("extralogging.groovy")
def RunLogger = LogManager.getLogManager().getLogger("hudson.model.Run")
def logsDir = new File("/var/log/jenkins")
if (!logsDir.exists()) { logsDir.mkdirs() }
FileHandler handler = new FileHandler(logsDir.absolutePath+"/jenkins-%g.log", 1024 * 1024, 10, true);
handler.setFormatter(new SimpleFormatter());
RunLogger.addHandler(handler)
This script fails on the last line, RunLogger.addHandler(handler).
2019-12-20 19:25:18.231+0000 [id=30] WARNING j.util.groovy.GroovyHookScript#execute: Failed to run script file:/var/lib/jenkins/init.groovy.d/02-extralogging.groovy
java.lang.NullPointerException: Cannot invoke method addHandler() on null object
I've had a number of other scripts return null objects from lookups similar to this one:
def RunLogger = LogManager.getLogManager().getLogger("hudson.model.Run")
My goal is to be able to develop (locally) a Jenkins implementation and then hand it to our sysops guys. Later, as I add pipelines and what not, I'd like to be able to also work on them in a local Jenkins configuration and then hand something for import into production Jenkins.
I'm not sure how to get at the relevant API documentation so I can chase this down myself. Maybe I need to stop doing it this way, and instead grab the files that get modified when I make these changes via the GUI and stuff those files into the right place.
Suggestions?
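One likely explanation, offered as an assumption based on java.util.logging semantics rather than anything confirmed above: LogManager.getLogger() only returns loggers that have already been created and registered, and returns null otherwise, whereas Logger.getLogger() creates and registers the logger on demand. A minimal sketch of that change:

import java.util.logging.Logger

// Unlike LogManager.getLogManager().getLogger(name), Logger.getLogger(name)
// creates the logger if it does not exist yet, so it never returns null.
def RunLogger = Logger.getLogger("hudson.model.Run")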

Exported Dataflow Template Parameters Unknown

I've exported a Cloud Dataflow template from Dataprep as outlined here:
https://cloud.google.com/dataprep/docs/html/Export-Basics_57344556
In Dataprep, the flow pulls in text files via wildcard from Google Cloud Storage, transforms the data, and appends it to an existing BigQuery table. All works as intended.
However, when trying to start a Dataflow job from the exported template, I can't seem to get the startup parameters right. The error messages aren't overly specific but it's clear that for one thing, I'm not getting the locations (input and output) right.
The only Google-provided template for this use case (found at https://cloud.google.com/dataflow/docs/guides/templates/provided-templates#cloud-storage-text-to-bigquery) doesn't apply, as it uses a UDF and also runs in batch mode, overwriting any existing BigQuery table rather than appending.
Inspecting the original Dataflow job details from Dataprep shows a number of parameters (found in the metadata file) but I haven't been able to get those to work within my code. Here's an example of one such failed configuration:
import time

from google.cloud import storage
from googleapiclient.discovery import build
from oauth2client.client import GoogleCredentials

def dummy(event, context):
    pass

def process_data(event, context):
    credentials = GoogleCredentials.get_application_default()
    service = build('dataflow', 'v1b3', credentials=credentials)
    data = event
    gsclient = storage.Client()
    file_name = data['name']
    time_stamp = time.time()
    GCSPATH = "gs://[path to template]"
    BODY = {
        "jobName": "GCS2BigQuery_{tstamp}".format(tstamp=time_stamp),
        "parameters": {
            "inputLocations": '{{\"location1\":\"[my bucket]/{filename}\"}}'.format(filename=file_name),
            "outputLocations": '{{\"location1\":\"[project]:[dataset].[table]\", [... other locations]"}}',
            "customGcsTempLocation": "gs://[my bucket]/dataflow"
        },
        "environment": {
            "zone": "us-east1-b"
        }
    }
    print(BODY["parameters"])
    request = service.projects().templates().launch(projectId=PROJECT, gcsPath=GCSPATH, body=BODY)
    response = request.execute()
    print(response)
The above example produces an "invalid field" error for "location1", which I pulled from a completed Dataflow job. I know I need to specify the GCS location, the template location, and the BigQuery table, but I haven't found the correct syntax anywhere. As mentioned above, I found the field names and sample values in the job's generated metadata file.
I realize that this specific use case may not ring any bells but in general if anyone has had success determining and using the correct startup parameters for a Dataflow job exported from Dataprep, I'd be most grateful to learn more about that. Thx.
I think you need to review this document; it explains exactly the syntax required for passing the various pipeline options available, including the location parameters you need [1].
Specifically, in your code snippet the following does not follow the correct syntax:
"inputLocations" : '{{\"location1\":\"[my bucket]/{filename}\"}}'.format(filename=file_name)
In addition to that document [1], you should also review the available pipeline options and their correct syntax [2].
Please use the links; they are official Google documentation links, actively monitored and maintained by a dedicated team.
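To make the syntax issue concrete: the inputLocations and outputLocations values must be strings containing valid JSON, and hand-escaping braces and quotes is easy to get wrong. A small sketch that builds them with json.dumps instead; the bucket, project, dataset, and table names are placeholders, and the exact parameter names should be taken from the metadata file exported alongside your template:

import json

file_name = 'incoming/data.txt'  # hypothetical object name from the GCS event

parameters = {
    # json.dumps produces syntactically valid JSON, escaping included.
    'inputLocations': json.dumps({'location1': 'gs://my-bucket/' + file_name}),
    'outputLocations': json.dumps({'location1': 'my-project:my_dataset.my_table'}),
    'customGcsTempLocation': 'gs://my-bucket/dataflow',
}

print(parameters['inputLocations'])  # {"location1": "gs://my-bucket/incoming/data.txt"}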

How to run Firefox from the native app of an extension?

I'm trying to modify a simple extension example to run Firefox, but I get a message prompt:
Firefox is already running, but is not responding. To open a new window, you must first close the existing Firefox process, or restart your system.
#!/usr/bin/env python3
import sys
import json
import struct
import subprocess

try:
    # Python 3.x version
    # Read a message from stdin and decode it.
    def getMessage():
        rawLength = sys.stdin.buffer.read(4)
        if len(rawLength) == 0:
            sys.exit(0)
        messageLength = struct.unpack('@I', rawLength)[0]
        message = sys.stdin.buffer.read(messageLength).decode('utf-8')
        return json.loads(message)

    # Encode a message for transmission, given its content.
    def encodeMessage(messageContent):
        encodedContent = json.dumps(messageContent).encode('utf-8')
        encodedLength = struct.pack('@I', len(encodedContent))
        return {'length': encodedLength, 'content': encodedContent}

    # Send an encoded message to stdout.
    def sendMessage(encodedMessage):
        sys.stdout.buffer.write(encodedMessage['length'])
        sys.stdout.buffer.write(encodedMessage['content'])
        sys.stdout.buffer.flush()

    while True:
        receivedMessage = getMessage()
        if receivedMessage == "ping":
            run_result = subprocess.run('firefox -P firefox_word',
                                        shell=True, stdout=subprocess.PIPE)
            sendMessage(encodeMessage("pong3"))
except AttributeError:
    pass
My goal is to open a local HTML file from my extension, or from the native app of my extension.
I had a similar issue a while ago, also when I was experimenting with WebExtensions examples. I think the problem is with your Firefox profile. The solution that worked for me was to create a new profile, then (after a day or so) reopen the previous profile. It has been fine since then. I do not know any more details about the problem.
The Mozilla page "Firefox is already running but is not responding" error message - How to fix it describes this solution as well as a number of others (which I tried, but did not have success with).
You can start the Firefox Profile Manager as per the following instructions (see here for complete details):
If Firefox is open, close Firefox:
Press Windows Key + R on the keyboard. A Run dialog will open.
In the Run dialog box, type firefox.exe -P and click OK. The Firefox Profile Manager (Choose User Profile) window should open.
Create a new profile, click 'Start Firefox'
To open your previous profile, launch Profile Manager again and select your default profile
In my case I need to work in the same profile. My current solution is a shell script that runs as a daemon reading from a FIFO; the native app of my extension writes to that FIFO whenever Firefox needs to be started.
Note that you need to run that daemon outside the native app of the extension.
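To make that workaround concrete, here is a minimal sketch of both sides of the FIFO handoff in Python; the FIFO path /tmp/firefox_cmd and the profile name are hypothetical details, not from the original post (create the FIFO first with mkfifo /tmp/firefox_cmd):

import subprocess

FIFO_PATH = '/tmp/firefox_cmd'  # hypothetical; created beforehand with mkfifo

# Inside the native app: hand the command to the outside daemon.
def request_firefox(url):
    # Opening a FIFO for writing blocks until a reader has it open.
    with open(FIFO_PATH, 'w') as fifo:
        fifo.write('firefox -P firefox_word ' + url + '\n')

# Outside the native app (e.g. in an ordinary shell session): the daemon loop.
def daemon_loop():
    while True:
        with open(FIFO_PATH) as fifo:  # blocks until a writer appears
            for line in fifo:
                cmd = line.strip()
                if cmd:
                    subprocess.Popen(cmd, shell=True)

The daemon must run outside the native app, as noted above; a Firefox launched from within the native messaging host otherwise conflicts with the already-running instance.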
