How to authenticate to Cloud Storage from a Docker app on Cloud Run

I have a Node.js app in a Docker container that I'm trying to deploy to Google Cloud Run.
I want my app to be able to read/write files from my GCS buckets that live under the same project, and I haven't been able to find much information around it.
This is what I've tried so far:
1. Hoping it works out of the box
A.k.a. initializing without credentials, like in App Engine.
const { Storage } = require('@google-cloud/storage');
// ...later in an async function
const storage = new Storage();
// This line throws the exception below
const [file] = await storage.bucket('mybucket')
  .file('myfile.txt')
  .download()
The last line throws this exception
{ Error: Could not refresh access token: Unsuccessful response status code. Request failed with status code 500"
at Gaxios._request (/server/node_modules/gaxios/build/src/gaxios.js:85:23)
2. Hoping it works out of the box after granting the Storage Admin IAM role to my Cloud Run service account.
Nope. No difference with previous.
3. Copying my credentials file as a cloudbuild.yaml step:
...
- name: 'gcr.io/cloud-builders/gsutil'
args: ['cp', 'gs://top-secret-bucket/gcloud-prod-credentials.json', '/www/gcloud-prod-credentials.json']
...
It copies the file just fine, but then the file is not visible to my app. I'm still not sure where exactly it was copied to, but listing the /www directory from my app shows no trace of it.
4. Copy my credentials file as a Docker step
Wait, but for that I need to authenticate gsutil, and for that I need the credentials.
So...
What options do I have without uploading my credentials file to version control?

This is how I managed to make it work:
The code for initializing the client library was correct. No changes here from the original question. You don't need to load any credentials if the GCS bucket belongs to the same project as your Cloud Run service.
I learned that the service account [project-number]-compute@developer.gserviceaccount.com (aka the "Compute Engine default service account") is the one used by default for running the Cloud Run service unless you specify a different one.
I went to the Service Accounts page and made sure that the mentioned service account was enabled (mine wasn't, this was what I was missing).
Then I went to the IAM permissions page, edited the permissions for the mentioned service account, and added the Storage Object Admin role.
More information in this article: https://cloud.google.com/run/docs/securing/service-identity
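For reference, that grant can also be done from the command line. A minimal sketch, assuming gcloud is pointed at the right project; MY_PROJECT and PROJECT_NUMBER are placeholders:
# Grant the Compute Engine default service account object access on GCS
gcloud projects add-iam-policy-binding MY_PROJECT \
  --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"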

I believe the correct way is to change to a custom service account that has the desired permissions. You can do this under the 'Security' tab when deploying a new revision.
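A rough sketch of that approach with the gcloud CLI, using hypothetical names for the service account, service and image:
# Create a dedicated service account, grant it access to GCS,
# and deploy the Cloud Run service with it
gcloud iam service-accounts create my-run-sa
gcloud projects add-iam-policy-binding MY_PROJECT \
  --member="serviceAccount:my-run-sa@MY_PROJECT.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
gcloud run deploy my-service \
  --image gcr.io/MY_PROJECT/my-image \
  --service-account my-run-sa@MY_PROJECT.iam.gserviceaccount.com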

Related

403 error in Chrome when attempting to authenticate Cloud Run developer

Background:
I've got a project in Cloud Run with two services, both mapped to custom domains. The production site is mysite.com and the development site is dev.mysite.com. I deployed the development site with the --no-allow-unauthenticated flag to prevent public viewing. I still want developers to be able to view the site in a browser, though. Based on what I've read, the "solution" Google currently has isn't great: you have to run gcloud auth print-identity-token to obtain your Bearer token, then use the ModHeader browser extension to modify the request header. The token changes constantly, and leaving ModHeader enabled to rewrite the header breaks authentication on other pages, so it's a big PITA, but it mostly works.
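For what it's worth, the same identity token can be used outside the browser for a quick check; a sketch using the dev URL from above:
# Request the protected dev site with an identity token from gcloud
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" https://dev.mysite.com/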
Question:
What doesn't work is having the development site load images from the Google Cloud Storage bucket. Every resource that should be pulled from the bucket results in a 403 error, but the page otherwise loads fine. I'm the project owner (i.e. my email address is the "owner") and have admin rights on everything, including the bucket in question. The bucket's Access Control is set to "Fine-grained: Object-level ACLs". When I deploy the project with --allow-unauthenticated, the images are accessible. Why isn't the bucket honoring my token?
Update:
I'm not 100% sure, but I think the issue might be related to the fact that ModHeader applies its rules to ALL open tabs. I tried another header-modification extension named Requestly, which allows rules to be targeted to specific URLs, and now my development site is loading images as expected.

Google Sheets Add-on error: authorisation is required to perform that action

I have an add-on that worked for years inside my domain/company until Google decided to change stuff.
I republished it and now I can run it but nobody else in the company can.
The error they receive is:
"Authorisation is required to perform that action".
I cannot pinpoint exactly where the error is because the GCP log only tells me the function, not the line, but it seems that most of the time the error appears when showing a sidebar.
I do not use any kind of API, simply GAS, but "just in case" I added these scopes in the OAuth consent screen: .../auth/script.container.ui and .../auth/spreadsheets.
In Google Workspace Marketplace SDK OAuth Scopes I've just left the default.
Also, I tried adding this in appsscript.json (at the top level):
"oauthScopes": [
  "https://www.googleapis.com/auth/script.container.ui",
  "https://www.googleapis.com/auth/script.external_request",
  "https://www.googleapis.com/auth/script.scriptapp",
  "https://www.googleapis.com/auth/spreadsheets",
  "https://www.googleapis.com/auth/userinfo.email"
]
What else can I try ?
Update: as requested in the comments, here's the offending code:
// clientside
google.script.run
  .withSuccessHandler()
  .withFailureHandler(failureHandler) // failureHandler gets called
  .aServerFunc()

// serverside
function aServerFunc() {
  Logger.log('REACHED') // NO 'REACHED' APPEARS IN CLOUD LOGS!
  var docProp = PropertiesService.getDocumentProperties();
  return docProp.getProperty(key)
}
So I guess the problem is that nobody but me can run google.script.run in an add-on!
Update 2:
I've removed the PropertiesService calls so it's just a blank function on the server. So it's clear that nobody but me can run google.script.run.
Update 3:
As requested in the comments, here are the steps I followed to publish the add-on:
I created a Google Cloud project, then configured the OAuth consent screen (with the same scopes as appsscript.json - see the list above), then in the Google Workspace Marketplace SDK I set the script ID, the deployment number and the same scopes, and published.
It turns out the add-on was just fine!
It's just this 4-year-old bug that Google refuses to fix:
If the user is logged in with multiple accounts, the default one will be used.
If the default account is outside the domain and the add-on is restricted to a domain, the add-on will fail to authorise.

How do I add Analytics to my existing wso2is? (WSO2 Identity Server)

I have deployed wso2is on my k8s cluster using the Dockerfile mentioned in https://github.com/wso2/docker-is/blob/5.7.x/dockerfiles/ubuntu/is-analytics/base/Dockerfile, and it's working fine.
Now the requirement has changed to include login stats (successful/unsuccessful/failed attempts, etc.), and I've discovered that the Analytics support is what I need. But I'm not quite sure how to add this module to my Dockerfile.
Can someone list the steps to install wso2is with analytics?
I have downloaded the wso2is-analytics-5.7.0 zip, but I'm not sure what else in the Dockerfile (from the link mentioned above) needs to change other than:
"ARG WSO2_SERVER=wso2is-analytics"
Edit: going once again through the wso2is docs at https://docs.wso2.com/display/IS570/Prerequisites+to+Publish+Statistics
Step 03: Configure Event Publishers - is this all optional if wso2is is already deployed?
I ask because it says: "In a fresh WSO2 IS pack, you can view all the event publishers related to WSO2 IS Analytics in the <IS_HOME>/repository/deployment/server/eventpublishers directory."
Expected result:
A working wso2is with an analytics dashboard to track login success/failure attempts.
Thanks for your support, I appreciate it!
Maurya (novice on wso2is)
The latest IS Analytics (5.7.0) has different profiles; here you need the following profiles:
Worker - consumes events from IS and processes them. (These are consumed from the event publishers on the IS side. In the documentation, Steps 3 and 4 are optional; they are documented for further understanding and are not needed for an initial deployment.)
[1] https://hub.docker.com/r/wso2/wso2is-analytics-worker
Dashboard - this is used to view the statistics.
[2] https://hub.docker.com/r/wso2/wso2is-analytics-dashboard
[3] https://docs.wso2.com/display/IS570/Accessing+the+Analytics+Dashboard
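If it helps, a minimal sketch of running those two profiles from the images above; the 5.7.0 tag and the dashboard port mapping are assumptions, so check the image documentation first:
# Run the analytics worker (consumes the events published by IS)
docker run -d --name is-analytics-worker wso2/wso2is-analytics-worker:5.7.0
# Run the analytics dashboard (serves the statistics UI, assumed here to listen on HTTPS port 9643)
docker run -d --name is-analytics-dashboard -p 9643:9643 wso2/wso2is-analytics-dashboard:5.7.0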

Can't access Azure Storage using Lucene.Net and Azure App Service

We have search implemented using Lucene.Net, with the indexes stored in an Azure Storage folder. A few days ago we moved our web application from an Azure Cloud Service to an Azure App Service.
If we run this locally it works as expected, and it also works in the Cloud Service, but when we published our web application to the Azure App Service we got the exception below:
System.UnauthorizedAccessException: Access to the path 'D:\AzureDirectory' is denied.
We tried updating the AzureDirectory and Azure Storage packages, but it didn't help.
Any Idea?
Thanks,
The solution was to change Lucene.Net.Store.Azure.AzureDirectory's CacheDirectory path to D:/Home/AzureDirectory:
new AzureDirectory(cloudStorageAccount, containerName, FSDirectory.Open(new DirectoryInfo("D:/Home/AzureDirectory")))
As you mentioned, I had no D:\ access.
As David Makogon mentioned, in an Azure Web App we have no access to create or access the D:\AzureDirectory folder. You can get more info from the Azure Web App sandbox documentation. The following is a snippet from that document:
File System Restrictions/Considerations
Applications are highly restricted in terms of their access of the file system.
Home directory access (d:\home)
Every Azure Web App has a home directory stored/backed by Azure Storage. This network share is where applications store their content. This directory is available for the sandbox with read/write access.
According to the exception you mentioned, it seems that some code wants to access the folder D:\AzureDirectory, but it does not exist in the Azure Web App. We could also remote debug our Web App in Azure to find the related code; for more details, please refer to Remote debugging web apps.
You don't have d:\ access. In Web Apps, your app lives under d:\home (more accurately d:\home\site).
Also - fyi this isn't "Azure Storage" - that term refers to blob storage.
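A quick way to confirm what the app can actually reach is the Kudu console at https://<yourapp>.scm.azurewebsites.net/DebugConsole; for example:
REM List the writable home share exposed to the App Service sandbox
dir D:\home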

Network Service account does not accept local paths

I am creating a program that runs as a service and creates database backups (using pg_dump.exe) at certain points during the day. This program needs to be able to write the backup files to local drives AND mapped network drives.
At first, I was unable to write to network drives, but solved the problem by having the service log on as an administrator account. However, my boss wants the program to run without users having to key in a username and password for the account.
I tried to get around this by using the Network Service account (which does not need a password and always has the same name). Now my program will write to network drives, but not local drives! I tried using the regular C:\<directory name>\ path syntax as well as \\<computer name>\C$\<directory name>\ syntax and also \\<ip address>\C$\<directory name>\, none of which work.
Is there any way to get the Network Service account to access local drives?
Just give the account permission to access those files/directories and it should work. For accessing local files, you need to tweak the ACLs on the files and directories. For access via a network share, you have to change the file ACLs as well as the permissions on the network share.
File ACLs can be modified in the Explorer UI or from the command line using the standard icacls.exe. E.g. this command line will give the directory and all files underneath it Read, Write and Delete permissions for Network Service:
icacls c:\MyDirectory /T /grant "NT AUTHORITY\Network Service":(R,W,D)
File share permissions are easier to modify from the UI, using the fsmgmt.msc tool.
You will need to figure out the minimal set of permissions that needs to be applied. If you don't worry about security at all, you can grant full permissions, but that is almost always overkill and opens you up more if for any reason the service is compromised.
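If you'd rather script the share permissions too, PowerShell's SmbShare module can do it; a sketch, assuming a hypothetical share named BackupShare:
# Grant Network Service change (read/write/delete) rights on the share
Grant-SmbShareAccess -Name "BackupShare" -AccountName "NT AUTHORITY\NETWORK SERVICE" -AccessRight Change -Force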
I worked around this problem by creating a new user at install time which I add to the Administrators group. This allows the service to write to local and network drives, without ever needing password/username info during the setup.
