Permissions required to run inside Compute Engine - google-cloud-powershell

I have a Google Cloud PowerShell script for managing snapshots that works flawlessly from my workstation but will not run inside the VM, failing with permission errors like this:
PS> TerminatingError(Get-GceDisk): "Google.Apis.Requests.RequestError
Insufficient Permission [403] Errors [ Message[Insufficient Permission] Location[ - ] Reason[insufficientPermissions] Domain[global] ] "
Get-GceDisk : Google.Apis.Requests.RequestError
Insufficient Permission [403] Errors [ Message[Insufficient Permission] Location[ - ] Reason[insufficientPermissions] Domain[global] ]
We have tried granting the service account the same permissions I have, without success.
We have tried running the script from the VM with my own Google account, without success.
I think this may have something to do with the Cloud API access scopes, but I am having difficulty researching this online.
Can someone point me in the right direction?

Cloud API access scopes were indeed the solution, but changing them requires a new VM. The easiest way to do this was to drill into a recent snapshot and click 'New instance' from there. When creating the new instance, select the appropriate level of Cloud API access, and then it all works fine.
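For anyone hitting the same error: a quick way to confirm which access scopes a VM actually has is to query the metadata server from inside the instance. Below is a minimal sketch (not from the original answer) written in TypeScript for Node 18+; the endpoint and the Metadata-Flavor header are the standard GCE metadata conventions, and the same HTTP call works from any client, including PowerShell.

// check-scopes.ts - list the Cloud API access scopes of this GCE instance.
// Must run on the VM itself; the metadata server is only reachable from there.
async function listInstanceScopes(): Promise<void> {
  const url =
    'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes';
  const res = await fetch(url, { headers: { 'Metadata-Flavor': 'Google' } });
  if (!res.ok) {
    throw new Error(`Metadata server returned ${res.status}`);
  }
  // The response is one scope URL per line,
  // e.g. https://www.googleapis.com/auth/devstorage.read_only
  console.log(await res.text());
}

listInstanceScopes().catch(console.error);

If https://www.googleapis.com/auth/compute (or the broad cloud-platform scope) is not in that list, calls like Get-GceDisk will keep returning 403 regardless of the IAM roles granted to the service account.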

Related

Google Sheets Add-on error: authorisation is required to perform that action

I have an add-on that worked for years inside my domain/company until Google decided to change stuff.
I republished it and now I can run it but nobody else in the company can.
The error they receive is:
"Authorisation is required to perform that action".
I cannot pinpoint exactly where the error is because the GCP log only tells me the function, not the line, but most of the time the error appears when showing a sidebar.
I do not use any kind of API, just GAS, but "just in case" I added these scopes on the OAuth consent screen: .../auth/script.container.ui and .../auth/spreadsheets.
In the Google Workspace Marketplace SDK OAuth Scopes I've left the defaults.
I also tried adding this to appsscript.json (at top level):
"oauthScopes": [
"https://www.googleapis.com/auth/script.container.ui",
"https://www.googleapis.com/auth/script.external_request",
"https://www.googleapis.com/auth/script.scriptapp",
"https://www.googleapis.com/auth/spreadsheets",
"https://www.googleapis.com/auth/userinfo.email"
]
What else can I try?
Update: as requested in the comments, here's the offending code:
// clientside
google.script.run
  .withSuccessHandler()
  .withFailureHandler(failureHandler) // failureHandler gets called
  .aServerFunc();

// serverside
function aServerFunc() {
  Logger.log('REACHED'); // 'REACHED' never appears in the Cloud logs!
  var docProp = PropertiesService.getDocumentProperties();
  return docProp.getProperty(key);
}
So I guess the problem is that nobody but me can run google.script.run in the add-on!
Update 2:
I've removed the PropertiesService calls so it's just a blank function on the server. So it's clear that nobody but me can run google.script.run.
Update 3:
As requested in the comments, here are the steps I followed to publish the add-on:
I created a Google Cloud project, configured the OAuth consent screen (with the same scopes as appsscript.json - see the list above), then in the Google Workspace Marketplace SDK set the script ID, the deployment number and the same scopes, and published.
It turns out the add-on was just fine!
It's this four-year-old bug that Google refuses to fix:
If the user is logged in with multiple accounts, the default one will be used.
If the default account is outside the domain and the add-on is restricted to a domain, the add-on will fail to authorise.
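Not part of the original answer, but a small mitigation sketch while the bug persists: since the server call is rejected before any server code runs, the only place to help the user is the sidebar's failure handler. Something along these lines (the 'status' element is hypothetical; adapt it to your own sidebar HTML):

// Sidebar client-side sketch: surface a useful hint when google.script.run is rejected.
function failureHandler(err) {
  console.error(err);
  const status = document.getElementById('status');
  if (status) {
    status.textContent =
      'Could not reach the server. If you are signed in to several Google accounts, ' +
      'make sure your company account is the default one (or use a separate browser profile).';
  }
}

This does not fix the underlying multi-login issue; it just turns an opaque authorisation failure into an actionable message for the user.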

How to authenticate to Cloud Storage from a Docker app on Cloud Run

I have a Node.js app in a Docker container that I'm trying to deploy to Google Cloud Run.
I want my app to be able to read/write files from my GCS buckets that live under the same project, and I haven't been able to find much information around it.
This is what I've tried so far:
1. Hoping it works out of the box
A.k.a. initializing without credentials, like in App Engine.
const { Storage } = require('@google-cloud/storage');
// ...later in an async function
const storage = new Storage();
// This line throws the exception below
const [file] = await storage.bucket('mybucket')
  .file('myfile.txt')
  .download()
The last line throws this exception
{ Error: Could not refresh access token: Unsuccessful response status code. Request failed with status code 500"
at Gaxios._request (/server/node_modules/gaxios/build/src/gaxios.js:85:23)
2. Hoping it works out of the box after granting the Storage Admin IAM role to my Cloud Run service account.
Nope. No difference from the previous attempt.
3. Copying my credentials file as a cloudbuild.yaml step:
...
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', 'gs://top-secret-bucket/gcloud-prod-credentials.json', '/www/gcloud-prod-credentials.json']
...
It copies the file just fine, but then the file is not visible from my app. I'm still not sure where exactly it was copied to, but listing the /www directory from my app shows no trace of it.
4. Copying my credentials file as a Docker step
Wait, but for that I need to authenticate gsutil, and for that I need the credentials.
So...
What options do I have without uploading my credentials file to version control?
This is how I managed to make it work:
The code for initializing the client library was correct. No changes here from the original question. You don't need to load any credentials if the GCS bucket belongs to the same project as your Cloud Run service.
I learned that the service account [project-number]-compute@developer.gserviceaccount.com (aka the "Compute Engine default service account") is the one used by default for running the Cloud Run service unless you specify a different one.
I went to the Service Accounts page and made sure that this service account was enabled (mine wasn't; this was what I was missing).
Then I went to the IAM page, edited the permissions for that service account and added the Storage Object Admin role.
More information in this article: https://cloud.google.com/run/docs/securing/service-identity
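For reference, a minimal sketch of the resulting setup (bucket and file names are placeholders): once the revision's service account has a Storage role, Application Default Credentials are picked up automatically inside Cloud Run, so the client needs no key file at all.

// storage-example.ts - read a GCS object from Cloud Run using the revision's
// service account via Application Default Credentials (no key file).
import { Storage } from '@google-cloud/storage';

const storage = new Storage(); // picks up the runtime service account automatically

export async function readMyFile(): Promise<string> {
  const [contents] = await storage.bucket('mybucket').file('myfile.txt').download();
  return contents.toString('utf8');
}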
I believe the correct way is to change to a custom service account that has the desired permissions. You can do this under the 'Security' tab when deploying a new revision.
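If you are not sure which identity a revision is actually running as, one way to check (a sketch, assuming Node 18+; Cloud Run exposes the same metadata endpoints as Compute Engine) is to ask the metadata server from inside the container:

// whoami.ts - print the service account email this Cloud Run revision runs as.
const url =
  'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email';

fetch(url, { headers: { 'Metadata-Flavor': 'Google' } })
  .then((res) => res.text())
  .then((email) => console.log(`Running as: ${email}`))
  .catch(console.error);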

How do I add Analytics to my existing wso2is? (WSO2 Identity Server)

I have deployed wso2is on my k8s cluster using the Dockerfile at https://github.com/wso2/docker-is/blob/5.7.x/dockerfiles/ubuntu/is-analytics/base/Dockerfile, and it's working fine.
Now the requirement has changed: we need login stats (successful/unsuccessful/failed attempts etc.), and I've discovered that Analytics support is the way to get them. But I am not quite sure how to add this module to my Dockerfile.
Can someone list the steps to install wso2is with analytics?
I have downloaded the wso2is-analytics-5.7.0 zip, but I am not sure what else in the Dockerfile (from the link above) needs to change other than:
"ARG WSO2_SERVER=wso2is-analytics"
Edit: going through the wso2is docs again (https://docs.wso2.com/display/IS570/Prerequisites+to+Publish+Statistics), is Step 03 (Configure Event Publishers) entirely optional if wso2is is already deployed?
I ask because the docs say: "In a fresh WSO2 IS pack, you can view all the event publishers related to WSO2 IS Analytics in the /repository/deployment/server/eventpublishers directory."
Expected result:
A working wso2is with an analytics dashboard to track login success/failure attempts.
Thanks for your support, I appreciate it!
Maurya (novice on wso2is)
The latest IS Analytics (5.7.0) has different profiles; here you need the following two:
Worker - consumes events from IS and processes them. (The events come from the event publishers on the IS side; in the documentation, Steps 3 and 4 are optional - they are documented for further understanding and are not needed for the initial deployment.)
[1] https://hub.docker.com/r/wso2/wso2is-analytics-worker
Dashboard - used to view the statistics.
[2] https://hub.docker.com/r/wso2/wso2is-analytics-dashboard
[3] https://docs.wso2.com/display/IS570/Accessing+the+Analytics+Dashboard

Cruise control to Visual Studio Online, TF30063: You are not authorized

I am trying to configure CruiseControl.NET 1.8.5 to connect to Visual Studio Online to get the source, build, and run tests; however, whatever I do I always get the error 'TF30063: You are not authorized'.
I have copied over tf.exe and required DLLs to the build machine as per one of the answers here (I would rather not install VS2013 on the build machine if possible, this is just a proof of concept at this point).
I have enabled alternative credentials as per here
This is part of the cc.net config - everything in square brackets has been double checked:
<sourcecontrol type="vsts">
  <server>https://[account-name].visualstudio.com/DefaultCollection</server>
  <project>$/[proj-name]</project>
  <username>[alt-credential-username]</username>
  <password>[alt-credential-password]</password>
  <executable>[path-to-tf.exe]</executable>
  <workspace>[workspace-name]</workspace>
  <autoGetSource>true</autoGetSource>
  <cleanCopy>true</cleanCopy>
  <force>true</force>
  <deleteWorkspace>true</deleteWorkspace>
</sourcecontrol>
I have tried my normal credentials, the alternative credentials, and even ##LIVEID##[alt-credential-username] as mentioned here
I always get:
ThoughtWorks.CruiseControl.Core.CruiseControlException: TF30063: You are not authorized to access https://[account-name].visualstudio.com/DefaultCollection
Note that I am able to connect manually from the build machine, e.g. running the following authenticates successfully and displays the expected directories under source control:
tf.exe dir /folders /server:https://[account-name].visualstudio.com/DefaultCollection "$/[proj-name]" /login:[alt-credential-username],[alt-credential-password]
(this works from both admin and non-admin cmd)
I also tried to run this to get a 'service account', however it crashes for me on Windows 7 x64.
I have seen this; however, as I can connect manually using cmd/tf.exe, I am assuming that tf.exe now does support alternative credentials...
I will probably try Team Explorer Everywhere 2013 tomorrow, although I'd rather avoid it: being Java based isn't the issue in itself, it's just another dependency / step in the setup.
Any tips/suggestions appreciated, as I'm out of ideas at this point!
UPDATE:
Using Fiddler as suggested shows the following:
tf.exe get from the command line is sending two requests:
1)
https://icb.visualstudio.com/DefaultCollection/VersionControl/v5.0/repository.asmx
soap body:
ReconcileLocalWorkspace [ xmlns=http://schemas.microsoft.com/TeamFoundation/2005/06/VersionControl/ClientServices/03 ]
with values for workspaceName, ownerName, pendingChangeSignature and maxClientPathLength
2)
https://icb.visualstudio.com/DefaultCollection/VersionControl/v5.0/repository.asmx
soap body:
Get [ xmlns=http://schemas.microsoft.com/TeamFoundation/2005/06/VersionControl/ClientServices/03 ]
with values for workspaceName, ownerName, requests, maxResults and maxClientPathLength
However CC is sending this request twice:
https://icb.visualstudio.com/DefaultCollection/Services/v1.0/Registration.asmx
soap body:
GetRegistrationEntries [ xmlns=http://schemas.microsoft.com/TeamFoundation/2005/06/Services/Registration/03 ]
with toolId=vstfs
Short answer: run the CruiseControl.NET service as the same user you used to create the workspace (not the Local System account).
Details:
I ended up getting the CruiseControl.NET source and attaching a debugger to see exactly which tf command was generating the error.
The process info that was failing:
Executable = "...tf.exe"
args = {dir /folders /server:https://[account-name].visualstudio.com/DefaultCollection "$/[proj-name]" /login:[alt-credential-username],[alt-credential-password]}
workingDirectory = "[valid-working-dir]"
Running this command manually worked fine - which led me to check the cc service properties, which were set to log on as Local System Account. I updated this to log on as the same (Windows) user that I used to create the workspace and this fixed the problem.
I can only assume that tf.exe is doing something clever with accounts/credentials, or maybe it's just that workspaces are tied to particular Windows logons?
Thanks to Edward for suggesting I fiddle with Fiddler.

Network Service account does not accept local paths

I am creating a program that runs as a service and creates database backups (using pg_dump.exe) at certain points during the day. This program needs to be able to write the backup files to local drives AND mapped network drives.
At first, I was unable to write to network drives, but solved the problem by having the service log on as an administrator account. However, my boss wants the program to run without users having to key in a username and password for the account.
I tried to get around this by using the Network Service account (which does not need a password and always has the same name). Now my program will write to network drives, but not local drives! I tried using the regular C:\<directory name>\ path syntax as well as \\<computer name>\C$\<directory name>\ syntax and also \\<ip address>\C$\<directory name>\, none of which work.
Is there any way to get the Network Service account to access local drives?
Just give the account permission to access those files/directories and it should work. For accessing local files, you need to tweak the ACLs on the files and directories. For accessing via a network share, you have to change both the file ACLs and the permissions on the network share.
File ACLs can be modified in the Explorer UI, or from the command line using the standard icacls.exe. E.g. this command line gives the directory and all files underneath it Read, Write and Delete permissions for Network Service:
icacls c:\MyDirectory /T /grant "NT AUTHORITY\Network Service":(R,W,D)
File share permissions are easier to modify from the UI, using the fsmgmt.msc tool.
You will need to figure out the minimal set of permissions that needs to be applied. If you don't worry about security at all, you can grant full permissions, but that is almost always overkill and leaves you more exposed if the service is ever compromised.
I worked around this problem by creating a new user at install time which I add to the Administrators group. This allows the service to write to local and network drives, without ever needing password/username info during the setup.
