I am trying to read my Google Cloud default credentials with Berglas, and it fails with:
failed to create berglas client: failed to create kms client: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I am passing the right path, and I have tried many paths, but none of them work:
$HOME/.config/gcloud:/root/.config/gcloud
I'm unfamiliar with Berglas (please include references) but the error is clear. Google's client libraries attempt to find credentials automatically. The documentation describes the process by which credentials are sought.
Since the credentials aren't being found, you're evidently not running on a Google Cloud compute service (where credentials are found automatically). Have you set the GOOGLE_APPLICATION_CREDENTIALS environment variable, and does it point to a valid service account key file?
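For example (the key-file path here is just a placeholder):
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/my-service-account.json"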
The Berglas README suggests using the following command to register your user credentials as Application Default Credentials. You may not have completed this step:
gcloud auth application-default login
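If Berglas runs inside a container (the volume mapping in your question looks like a Docker mount), the credentials also need to be visible inside that container. A rough sketch, with the image name as a placeholder and the ADC file created by the gcloud command above:
docker run \
  -v "$HOME/.config/gcloud:/root/.config/gcloud" \
  -e GOOGLE_APPLICATION_CREDENTIALS=/root/.config/gcloud/application_default_credentials.json \
  my-berglas-image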
I started an Azurite container with Docker on a local VM and then tried to copy data to it with azcopy and the az CLI, as below:
export AZURE_STORAGE_ACCOUNT="devstoreaccount1"
export AZURE_STORAGE_ACCESS_KEY="Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
azcopy copy /local/data/ http://localvm:10000/devstoreaccount1/data/test --from-to LocalBlob
INFO: Scanning...
failed to perform copy command due to error: Login Credentials missing. No SAS token or OAuth token is present and the resource is not public
I want to authenticate with the Account key and Account name and preferably be able to copy using azcopy.
I scoured GitHub and Stack Overflow and found only one issue, https://github.com/Azure/azure-storage-azcopy/issues/867, and there is nothing there regarding auth. It looks like I am missing something obvious. Your help will be much appreciated.
The versions used were:
azure-cli 2.11.1
azcopy version 10.7.0
I was able to get away with using the az CLI instead of azcopy.
export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azuritedockerhost:10000/devstoreaccount1;"
az storage blob upload -f local-file -c container-name -n dir/blob-name
Hope this helps someone. It would also be really nice to be able to use azcopy, so if anybody finds out how, it would be greatly appreciated.
The Microsoft documentation 'Get started with AzCopy' indicates the following under the 'Run AzCopy' heading:
As an owner of your Azure Storage account, you aren't automatically assigned permissions to access data. Before you can do anything meaningful with AzCopy, you need to decide how you'll provide authorization credentials to the storage service.
Under the next heading 'Authorize AzCopy', the documentation states:
You can provide authorization credentials by using Azure Active Directory
(AD), or by using a Shared Access Signature (SAS) token.
Even though you're accessing a local storage emulator (Azurite), AzCopy still wants an OAuth token or a SAS token. See this link to generate SAS tokens for local or online storage.
A SAS token must be appended to the destination parameter of the azcopy copy command (see the example below). I use the Azure Active Directory (OAuth token) authorization option so that I can run multiple azcopy commands without appending a SAS token to every command.
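For example, with a SAS appended to the destination from your question (the <SAS-token> part is whatever token you generated):
azcopy copy /local/data/ "http://localvm:10000/devstoreaccount1/data/test?<SAS-token>" --from-to LocalBlob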
To resolve the AzCopy error you're getting "Failed to perform copy command due to error: Login Credentials missing. No SAS token or OAuth token is present and the resource is not public", enter the following into a command prompt or Windows PowerShell:
azcopy login --tenant-id=<your-tenant-directory-id-from-azure-portal>
and then follow the steps this command returns. Here's a reference to azcopy login. From the heading 'Authorize without a secret store' in that reference:
The azcopy login command retrieves an OAuth token and then places that
token into a secret store on your system.
From the 'Authorize a user identity' heading:
After you've successfully signed in, you can close the browser window
and begin using AzCopy.
Use azcopy logout from a command prompt when you no longer want AzCopy to use the stored credentials.
Here are the steps for the login process, as well as where to find the tenant ID needed to get the AzCopy login going.
Get tenant ID from the Azure portal.
In a command prompt enter the azcopy login command along with the --tenant-id parameter.
Follow the steps indicated in the command prompt: "...use a web browser to open the page https://microsoft.com/devicelogin and enter the code...".
"A sign-in window will appear. In that window, sign into your Azure account by using your Azure account credentials."
"After you've successfully signed in, you can close the browser window and begin using AzCopy."
You can run your original azcopy copy /local/data/ http://localvm:10000/devstoreaccount1/data/test --from-to LocalBlob without the need for the export entries in your question.
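Putting it together, the sequence is roughly:
azcopy login --tenant-id=<your-tenant-directory-id-from-azure-portal>
azcopy copy /local/data/ http://localvm:10000/devstoreaccount1/data/test --from-to LocalBlob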
AzCopy deliberately avoids support for account key authentication, because an account key has full admin privileges: https://github.com/Azure/azure-storage-azcopy/issues/186
The only workaround I have found so far is to generate a SAS (for the container) in Azure Storage Explorer, and then use the SAS URL with AzCopy.
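If you prefer to script it rather than use Storage Explorer, something along these lines should work against Azurite (the expiry date is an arbitrary example; the account key is the well-known Azurite key from the question, and 'data' is the container from the destination URL):
az storage container generate-sas \
  --name data \
  --permissions racwl \
  --expiry 2030-01-01T00:00:00Z \
  --connection-string "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://localvm:10000/devstoreaccount1;" \
  --output tsv
# append the returned token to the destination URL
azcopy copy /local/data/ "http://localvm:10000/devstoreaccount1/data/test?<sas-token-from-above>" --from-to LocalBlob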
Setting up authentication for Docker | Artifact Registry Documentation suggests that gcloud is more secure than using a JSON file with credentials. I disagree. In fact I'll argue the exact opposite is true. What am I misunderstanding?
Setting up authentication for Docker | Artifact Registry Documentation says:
gcloud as credential helper (Recommended)
Configure your Artifact Registry credentials for use with Docker directly in gcloud. Use this method when possible for secure, short-lived access to your project resources. This option only supports Docker versions 18.03 or above.
followed by:
JSON key file
A user-managed key-pair that you can use as a credential for a service account. Because the credential is long-lived, it is the least secure option of all the available authentication methods
The JSON key file contains a private key and other goodies, giving a hacker long-lived access. The keys to the kingdom. But only to the Artifact Registry in this instance, because the service account the JSON file belongs to has only those specific rights.
Now gcloud has two auth options:
gcloud auth activate-service-account ACCOUNT --key-file=KEYFILE
gcloud auth login
Let's start with gcloud and a service account: here gcloud stores KEYFILE unencrypted in ~/.config/gcloud/credentials.db. Using the JSON file directly boils down to docker login -u _json_key --password-stdin https://some.server < KEYFILE, which stores the KEYFILE contents in ~/.docker/config.json. So using gcloud with a service account or just using the JSON file directly should be equivalent, security-wise: both store the same KEYFILE unencrypted in a file.
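In other words, the two flows look roughly like this (registry host, account and file names are placeholders; I'm using the Artifact Registry Docker domain as an example):
# Option 1: gcloud as the Docker credential helper, key imported into gcloud
gcloud auth activate-service-account my-sa@my-project.iam.gserviceaccount.com --key-file=KEYFILE
gcloud auth configure-docker us-central1-docker.pkg.dev
# Option 2: hand the key file to Docker directly
docker login -u _json_key --password-stdin https://us-central1-docker.pkg.dev < KEYFILE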
gcloud auth login requires logging in with a browser, where I give consent to gcloud accessing my user account in its entirety. It is not limited to the Artifact Registry the way the service account is. Looking with sqlite3 ~/.config/gcloud/credentials.db .dump I can see that it stores an access_token but also a refresh_token. If a hacker has access to ~/.config/gcloud/credentials.db with access and refresh tokens, doesn't he own the system just as much as if he had access to the JSON file? Actually, this is worse, because my user account is not limited to just accessing the Artifact Registry - now the attacker has access to everything my user has access to.
So all in all: gcloud auth login is at best security-wise equivalent to using the JSON file. But because the access is not limited to the Artifact Registry, it is in fact worse.
Do you disagree?
I use Google Cloud Registry, which adds "auths" and "credHelpers" keys to my ~/.docker/config.json.
The problem I have is that when I'm offline, or just building locally, it tries to connect to each hostname, which either fails (when offline) or is really slow (when online).
How can I tell docker-compose to not use these credentials/hosts when building?
My workaround now is to delete the properties from the ~/.docker/config.json, and then gcloud auth configure-docker each time, but I'd rather not have to keep authenticating to push when I do want to use GCR.
docker.api.build._set_auth_headers: Looking for auth config
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'https://asia.gcr.io'
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'https://eu.gcr.io'
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'https://gcr.io'
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'https://marketplace.gcr.io'
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'https://staging-k8s.gcr.io'
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'https://us.gcr.io'
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'gcr.io'
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'us.gcr.io'
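For reference, the manual workaround currently looks roughly like this (jq is just my shorthand for editing the file; the entries removed are the "auths"/"credHelpers" keys mentioned above):
# strip the GCR auth entries so local/offline builds stop probing those hosts
jq 'del(.auths, .credHelpers)' ~/.docker/config.json > /tmp/config.json && mv /tmp/config.json ~/.docker/config.json
# re-add them when I want to push to GCR again
gcloud auth configure-docker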
I read everything in this post and multiple others, but nothing is working... I cannot use Google credentials to access my Git repo in Google Cloud Platform.
I have Jenkins running in a Docker Container in Google Cloud Platform. I have Source Code in a Google Cloud Repository that I want to use for a Build.
On the Google Cloud Platform side I created a service account and gave it the following roles:
Project Worker
Source-Repository Admin
Storageobject creator
ComputeEngine creator
I created the JSON File and downloaded it.
On the Jenkins side I installed the Google OAuth Credentials plugin and the Google Container Registry Auth plugin.
I added new credentials of type "Google Service Account from private key" and attached the JSON file.
So, when I now create a new job (Freestyle or Pipeline, it does not matter), I see the following:
I see the credentials I created, but only for the "Google Container Registry". As soon as I add the repository URL "https://source.developers.google.com/p....", the drop-down is cleared and everything is gone.
I also took a look at the credentials.xml and the job file to see if I could rewrite something there myself. The Google credentials do not have a credentialId like the others...
<com.google.jenkins.plugins.credentials.oauth.GoogleRobotPrivateKeyCredentials plugin="google-oauth-plugin#0.6">
  <module/>
  <projectId>testprojekt</projectId>
  <serviceAccountConfig class="com.google.jenkins.plugins.credentials.oauth.JsonServiceAccountConfig">
    <jsonKeyFile>/var/jenkins_home/gauth/key8529180263669390055.json</jsonKeyFile>
  </serviceAccountConfig>
</com.google.jenkins.plugins.credentials.oauth.GoogleRobotPrivateKeyCredentials>
I'm currently out of ideas... would be happy for any hint.
Thank you!
We can run "gcloud auth list" to get our credentialed account, and now I want to do the same thing in my Python code, that is, check the credentialed account via an API in Python. But I didn't find it... Any suggestions?
More information is:
I want to check my account name before I create credentials
from oauth2client.client import GoogleCredentials

# ACCOUNT_FILE is the path to a service account JSON key file; the two calls are alternatives
CREDENTIALS = GoogleCredentials.from_stream(ACCOUNT_FILE)
CREDENTIALS = GoogleCredentials.get_application_default()
gcloud stores credentials obtained via
gcloud auth login
gcloud auth activate-service-account
in its internal local database. There is no API besides the gcloud auth list command to query them. Note that this is different from (and usually a subset of) the list of credentials in GCP.
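If all you need is the active account name in a script, one option is simply to shell out to gcloud and let it do the filtering, for example:
gcloud auth list --filter=status:ACTIVE --format="value(account)"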
Credentials used by gcloud are meant to be separate from what you use in your Python code.
Perhaps you want to use https://cloud.google.com/sdk/gcloud/reference/iam/service-accounts/keys/list; there is also an API for that: https://cloud.google.com/iam/docs/creating-managing-service-accounts.
For application default credentials you would download a JSON key file using the developer console https://console.cloud.google.com/iam-admin/serviceaccounts/project?project=YOUR_PROJECT or use the gcloud iam service-accounts keys create command.
There is also the gcloud auth application-default login command, which will create an application default credentials file in a well-known location, but you should not use it for anything serious except perhaps development/testing. Note that credentials obtained via this command do not show up in gcloud auth list.
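On Linux/macOS that well-known location is typically $HOME/.config/gcloud/application_default_credentials.json (the path differs on Windows).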