I'd like to authenticate to the gcloud CLI tool from a GitHub Codespaces devcontainer. I can set up a GitHub Codespaces secret to expose GOOGLE_APPLICATION_CREDENTIALS, and I can assign it the actual service account value (the JSON content).
But the gcloud CLI expects it to be a path to a file. What is considered best practice to deal with that?
Thanks,
I’m not sure if there are best practices for these patterns yet, but I’ve found success with creating a Codespace secret that contains the text of a file and writing it to a file in the Codespace using a lifecycle script such as postCreateCommand.
{
  "postCreateCommand": "echo -e \"$SERVICE_ACCOUNT_CREDENTIALS\" > /path/to/your/file.json"
}
Note: I usually don't call this command directly from within devcontainer.json; I create a separate bash script and execute that instead.
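For reference, here's a rough sketch of what that separate script might look like (the secret name SERVICE_ACCOUNT_CREDENTIALS and the file path are just examples):

#!/usr/bin/env bash
# .devcontainer/post-create.sh -- example helper called from postCreateCommand
set -euo pipefail

# Write the JSON stored in the Codespaces secret to a file the gcloud CLI can read
CREDS_FILE="$HOME/.config/gcloud/service-account.json"
mkdir -p "$(dirname "$CREDS_FILE")"
printf '%s' "$SERVICE_ACCOUNT_CREDENTIALS" > "$CREDS_FILE"
chmod 600 "$CREDS_FILE"

# Activate the service account for gcloud; note that exporting
# GOOGLE_APPLICATION_CREDENTIALS here only affects this script, so also set the
# variable in devcontainer.json (e.g. via remoteEnv) if other shells need it
gcloud auth activate-service-account --key-file="$CREDS_FILE"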
From there you should be able to interact with the file normally. I’ve successfully used this pattern for SSH keys, kubeconfigs, and AWS credentials.
Related
I'm exploring how best to use GitHub Codespaces for my organization. Our dev environment consists of a Docker dev environment that we run on local machines. It relies on pulling other private repos we maintain via the local machine's ssh-agent. I'd ideally like to keep things as consistent as possible and have our Codespaces solution use the same Docker dev environment from within the codespace.
There's a naive solution of just building a new codespace with no devcontainer.json and going through all the setup for a dev environment each time you create a new one... but I'd like to avoid this. Ideally, I keep the same dev experience and am able to get the codespace to prebuild by building the docker image and somehow getting access to our other private repos.
An extremely hacky-feeling solution that works for automated building is creating an ssh key and storing it as a user codespace secret, then setting up the ssh-agent with that ssh key as part of the postCreateCommand. My understanding is that this would not work with the onCreateCommand because "it will not typically have access to user-scoped assets or secrets." To reiterate, this works for automated building, but not pre-building.
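For illustration, a rough sketch of that post-create setup (the secret name PRIVATE_SSH_KEY and the key path are hypothetical):

#!/usr/bin/env bash
# Hypothetical post-create script: materialize an SSH key from a Codespaces user secret
set -euo pipefail

mkdir -p "$HOME/.ssh"
printf '%s\n' "$PRIVATE_SSH_KEY" > "$HOME/.ssh/id_ed25519"
chmod 600 "$HOME/.ssh/id_ed25519"
ssh-keyscan github.com >> "$HOME/.ssh/known_hosts"

# Start an agent and add the key so git can clone the other private repos over SSH
eval "$(ssh-agent -s)"
ssh-add "$HOME/.ssh/id_ed25519"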
From this GitHub issue it looks like cloning via ssh is a complete no-go with prebuilds because ssh will need a user-defined ssh key, which isn't available from the onCreateCommand. The only potential workaround I can see for this is having an organization-wide read-only ssh key... which seems potentially even sketchier than having user-created ssh keys as user secrets.
The other possibility I can think of is switching to https for the git clones. This would require adding access to the other repos, which is no big deal. BUT I can't quite see how to get access from within the docker image. When I tried this, I was getting errors because I was asked for a username and password when I ran a git clone from within docker... even though git clone worked fine in the base codespace. Is there a way to forward whatever tokens GitHub uses for access to other repos into the docker build process? Is there a way to have user-generated tokens get passed into the docker build process and use that for access instead?
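One direction that might work for the HTTPS route (an untested sketch; the token name and the Dockerfile lines below are assumptions, not a documented GitHub feature) is to pass a token that has access to the other repos into the image build and rewrite clone URLs:

# Hypothetical: forward a token (e.g. a PAT stored as a Codespaces secret) into the build.
# Note that plain build args end up in the image history; BuildKit secrets are safer.
docker build --build-arg GITHUB_TOKEN="$MY_REPO_TOKEN" -t dev-env .

# Inside the Dockerfile you would then declare the argument and rewrite clone URLs, e.g.:
#   ARG GITHUB_TOKEN
#   RUN git config --global \
#         url."https://x-access-token:${GITHUB_TOKEN}@github.com/".insteadOf "git@github.com:"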
Thoughts and roasts welcome.
I created a Docker container, put my project inside it, and then ran sls deploy, and it worked even without setting up credentials. How is that possible? Is the Serverless Framework getting the credentials from memory or somewhere else?
During serverless deploy, it can get credential configuration from the ~/.aws/credentials file. There is an explanation of this in the documentation: https://www.serverless.com/framework/docs/providers/aws/guide/credentials/
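As a rough illustration (the values below are placeholders), the Framework picks up credentials from environment variables or from that shared credentials file, which it can also write for you:

# Option A: environment variables, picked up automatically by the AWS SDK
export AWS_ACCESS_KEY_ID=AKIA_PLACEHOLDER
export AWS_SECRET_ACCESS_KEY=SECRET_PLACEHOLDER

# Option B: have the Framework write ~/.aws/credentials (a [default] profile)
serverless config credentials --provider aws --key AKIA_PLACEHOLDER --secret SECRET_PLACEHOLDER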
I am trying to understand what assigning a service account to a Cloud Run service actually does, in order to improve the security of the containers. I have multiple processes running within a Cloud Run service, and not all of them need to access the project's resources.
A more specific question I have in mind is:
Would I be able to create multiple users and run some processes as a user that does not have access to the service account, or does every user have access to the service account?
I ran a small experiment on a VM instance (I guess this will be a similar case to Cloud Run) where I created a new user, and after creation it wasn't authorized to use the service account of the instance. However, I am not sure whether there is a way to authorize it, which would make my method insecure.
Thank you.
EDIT
To perform the test I created a new OS user and ran "gcloud auth list" from the new user account. However, I should have made a curl request, and then I would have been able to retrieve the credentials, as pointed out by an answer below.
Your question is not very clear, but I will try to provide a few pointers.
When you run a service on Cloud Run, you have two choices for defining its identity:
Either the Compute Engine default service account is used (the default, if you specify nothing),
Or the service account that you specify at deployment time.
This service account applies to the whole Cloud Run service (and you can have up to 1000 different services per project).
Now, when you run your container, the service account is not really loaded into the container (it's the same with Compute Engine), but there is an API available for requesting the authentication data of this service account. It's called the metadata server.
It's not restricted to particular users (I don't know how you performed your test on Compute Engine!); a simple curl is enough to get the data.
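For example, from inside the container (or from any OS user on a Compute Engine VM), something like this returns an access token for the attached service account:

# Query the metadata server for an access token of the default service account
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"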
This metadata server is what the client libraries use, for example, when you rely on the "default credentials". The gcloud SDK also uses it.
I hope you have a better view now. If not, add details in your question or in the comments.
The keyword that's missing from guillaume’s answer is "permissions".
Specifically, if you don't assign a service account, Cloud Run will use the Compute Engine default service account.
This default account has the Editor role on your project (in other words, it can do nearly anything in your GCP project, short of creating new service accounts and granting them access, and maybe deleting the GCP project). If you use the default service account and your container is compromised, you're probably in trouble. ⚠️
However, if you specify a new --service-account, by default it has no permissions. You have to grant it the roles or permissions (e.g. GCS Object Reader, Pub/Sub Publisher...) that your application needs.
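A sketch of what that can look like with gcloud (the project, service, and role names below are placeholders):

# Create a dedicated service account; it starts with no permissions
gcloud iam service-accounts create my-run-sa --display-name="Cloud Run runtime SA"

# Grant only what the application needs, e.g. read access to GCS objects
gcloud projects add-iam-policy-binding MY_PROJECT \
  --member="serviceAccount:my-run-sa@MY_PROJECT.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"

# Deploy the service with that identity instead of the default compute SA
gcloud run deploy my-service --image=gcr.io/MY_PROJECT/my-image --region=us-central1 \
  --service-account=my-run-sa@MY_PROJECT.iam.gserviceaccount.com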
Just to add to the previous answers: if you are using something like Cloud Build, here is how you can implement it.
steps:
  - name: gcr.io/cloud-builders/gcloud
    args:
      - '-c'
      - "gcloud secrets versions access latest --secret=[SECRET_NAME] --format='get(payload.data)' | tr '_-' '/+' | base64 -d > Dockerfile"
    entrypoint: bash
  - name: gcr.io/cloud-builders/gcloud
    args:
      - '-c'
      - gcloud run deploy [SERVICE_NAME] --source . --region=[REGION_NAME] --service-account=[SERVICE_ACCOUNT]@[PROJECT_ID].iam.gserviceaccount.com --max-instances=[SPECIFY_REQUIRED_VALUE]
    entrypoint: /bin/bash
options:
  logging: CLOUD_LOGGING_ONLY
I am using this in a personal project, but I will explain what is happening here. The first step pulls data from Secret Manager, where I am storing a Dockerfile with the secret environment variables. This is optional; if you are not storing any API keys or secrets, you can skip it, but if you have a different folder structure (i.e. one that isn't flat), you may need to adjust the paths accordingly.
The second step deploys to Cloud Run from the source code. The documentation for that can be found here:
https://cloud.google.com/run/docs/deploying-source-code
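If it helps, a config like the one above would typically be submitted with something along these lines (assuming it's saved as cloudbuild.yaml):

# Run the pipeline from the source directory
gcloud builds submit --config=cloudbuild.yaml .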
I have a PowerShell script that uploads audit logs to an S3 repository. The script works fine when I run it while logged in, but I need to define a scheduled task, and the task needs to run as the SYSTEM user. Can someone recommend a way that I can provide the SYSTEM user with the AWS credentials so that they are not stored on the machine in clear text?
I just found what I think is the answer: if I run the script through the Task Scheduler once with the 'Set-AWSCredentials' command, it creates the encrypted key info in C:\Windows\System32\config\systemprofile\AppData\Local\AWSToolkit\RegisteredAccounts.json. I was then able to remove the 'Set-AWSCredentials' command, and the script seems to run successfully.
I don't believe this is possible. Your authentication info will have to be stored in the clear on the client machine one way or another, and Windows doesn't provide any convenient methods for protecting that information.
You might find it more convenient to manage access if you use SSH + certificates as credentials (I believe there's an SSH client for PowerShell, though I haven't tried using it for AWS work).
I am trying to push my Docker container to the Google Container Registry, following this tutorial, but when I run
gcloud docker push b.gcr.io/my-bucket/image-name
I get the error:
The push refers to a repository [b.gcr.io/my-bucket/my-image] (len: 1)
Sending image list
Error: Status 403 trying to push repository my-bucket/my-image: "Access denied."
I couldn't find any more explanation (no -D, --debug, or --verbose arguments were recognized); gcloud auth list and docker info tell me I'm connected to both services.
Anything I'm missing?
You need to make sure the VM instance has enough access rights. You can set these at the time of creating the instance, or if you have already created the instance, you can also edit it (but first, you'll need to stop the instance). There are two ways to manage this access:
Option 1
Under the Identity and API access, select Allow full access to all Cloud APIs.
Option 2 (recommended)
Under the Identity and API access, select Set access for each API and then choose Read Write for Storage.
Note that you can also change these settings even after you have already created the instance. To do this, you'll first need to stop the instance, and then edit the configuration as mentioned above.
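For completeness, the equivalent change can also be sketched with gcloud on a stopped instance (the instance and zone names are placeholders, and you may also need to pass --service-account explicitly to keep the current one):

# Stop the instance, adjust its access scopes, then start it again
gcloud compute instances stop my-instance --zone=us-central1-a
gcloud compute instances set-service-account my-instance --zone=us-central1-a --scopes=storage-rw
gcloud compute instances start my-instance --zone=us-central1-a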
Use gsutil to check the ACL to make sure you have permission to write to the bucket:
$ gsutil acl get gs://<my-bucket>
You'll need to check which group the account you are using is in ('owners', 'editors', 'viewers', etc.).
EDIT: I have experienced a very similar problem to this myself recently and, as @lampis mentions in his post, it's because the correct permission scopes were not set when I created the VM I was trying to push the image from. Unfortunately there's currently no way of changing the scopes once a VM has been created, so you have to delete the VM (making sure the disks are set to auto-delete!) and recreate the VM with the correct scopes ('compute-rw', 'storage-rw' seems sufficient). It doesn't take long though ;-).
See the --scopes section here: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
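For example, recreating the VM with those scopes might look roughly like this (the zone and machine type are arbitrary placeholders):

# Create the instance with read/write scopes for Compute and Cloud Storage
gcloud compute instances create my-dev-vm \
  --zone=us-central1-a --machine-type=n1-standard-1 \
  --scopes=compute-rw,storage-rw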
I am seeing this, but on an intermittent basis. E.g. I may get the error denied: Permission denied for "latest" from request "/v2/....", but when trying again it will work.
Is anyone else experiencing this?
For me, I had forgotten to prepend gcloud to the line (and I was wondering how docker would authenticate):
$ gcloud docker push <image>
In your terminal, run the command below:
$ sudo docker login -u oauth2accesstoken -p "$(gcloud auth print-access-token)" https://[HOSTNAME]
Where
[HOSTNAME] is your container registry location (it is either gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io). Check your tagged images to be sure by running $ sudo docker images.
If this doesn't fix it, try reviewing the VM's access scopes.
If you are using Docker 1.7.0, there was a breaking change to how they handle authentication, which affects users who are using a mix of gcloud docker and docker login.
Be sure you are using the latest version of gcloud via: gcloud components update.
So far this seems to affect gcloud docker, docker-compose and other tools that were reading/writing the Docker auth file.
Hopefully this helps.
Same problem here; the troubleshooting section from https://cloud.google.com/tools/container-registry/#access_denied wasn't very helpful. I have Docker and gcloud fully updated. Don't know what else to do.
BTW, I'm trying to push to "gcr.io".
Fixed. I was using a VM in Compute Engine as my development machine, and it looks like I didn't give it enough rights for Storage.
I had the same access denied problem, and I resolved it by creating a new image using a tag:
docker tag IMAGE_WITH_ACCESS_DENIED gcr.io/my-project/my-new-image:test
After that I could push it to the Container Registry:
gcloud docker -- push gcr.io/my-project/my-new-image:test
Today I also got this error inside Jenkins running on Google Kubernetes Engine when pushing the Docker container. The reason was a node pool version upgrade from 1.9.6-gke.1 to 1.9.7-gke.0 in GCP that I had done before. It worked again after the downgrade.
You need to log in to gcloud from the machine you are on:
gcloud auth login