As part of building my Docker container I need to pull some files from an S3 bucket, but I keep getting "fatal error: Unable to locate credentials", even though for now I am setting the credentials as ENV vars (though I would like to know of a better way to do this).
So when building the container I run:
docker build -t my-container --build-arg AWS_DEFAULT_REGION="region" --build-arg AWS_ACCESS_KEY="key" --build-arg AWS_SECRET_ACCESS_KEY="key" . --squash
And in my Dockerfile I have
ARG AWS_DEFAULT_REGION
ENV AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION
ARG AWS_ACCESS_KEY
ENV AWS_ACCESS_KEY=$AWS_ACCESS_KEY
ARG AWS_SECRET_ACCESS_KEY
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
RUN /bin/bash -l -c "aws s3 cp s3://path/to/folder/ /my/folder --recursive"
Does anyone know how I can solve this? (I know there is an option to add a config file, but that seems like an unnecessary extra step since the CLI should be able to read from ENV.)
The environment variable the AWS CLI looks for is AWS_ACCESS_KEY_ID, not AWS_ACCESS_KEY.
You can review the full list in the Amazon documentation.
The following variables are supported by the AWS CLI:
AWS_ACCESS_KEY_ID – AWS access key.
AWS_SECRET_ACCESS_KEY – AWS secret key. Access and secret key variables override credentials stored in credential and config files.
AWS_SESSION_TOKEN – session token. A session token is only required if you are using temporary security credentials.
AWS_DEFAULT_REGION – AWS region. This variable overrides the default region of the in-use profile, if set.
AWS_DEFAULT_PROFILE – name of the CLI profile to use. This can be the name of a profile stored in a credential or config file, or default to use the default profile.
AWS_CONFIG_FILE – path to a CLI config file.
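With that variable renamed, the build command and Dockerfile from the question would look roughly like this (a sketch; the region, keys and S3 path are placeholders):

docker build -t my-container \
  --build-arg AWS_DEFAULT_REGION="region" \
  --build-arg AWS_ACCESS_KEY_ID="key" \
  --build-arg AWS_SECRET_ACCESS_KEY="key" \
  --squash .

ARG AWS_DEFAULT_REGION
ENV AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION
ARG AWS_ACCESS_KEY_ID
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
RUN /bin/bash -l -c "aws s3 cp s3://path/to/folder/ /my/folder --recursive"

Keep in mind that anything passed via ARG/ENV ends up in the image metadata, so for real credentials the BuildKit --secret mount described further down this page is the safer option.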
I'm trying to set up Ory Kratos on ECS.
Their documentation says that you can run migrations with the following command...
docker run -e DSN="engine://username:password@host:port/dbname" oryd/kratos:v0.10 migrate sql -e
I'm trying to recreate this for an ECS task and the Dockerfile so far looks like this...
# syntax=docker/dockerfile:1
FROM oryd/kratos:v0.10
COPY kratos /kratos
CMD ["-c", "/kratos/kratos.yml", "migrate", "sql", "-e", "--yes"]
It uses the base oryd/kratos:v0.10 image, copies across a directory with some config and runs the migration command.
What I'm missing is a way to construct the -e DSN="engine://username:password@host:port/dbname". I'm able to supply my database secret from AWS Secrets Manager directly to the ECS task, however the secret is a JSON object stored as a string, containing the engine, username, password, host, port and dbname properties.
How can I securely construct the required DSN environment variable?
Please see the ECS documentation on injecting Secrets Manager secrets. You can inject specific values from a JSON secret as individual environment variables; search for "Example referencing a specific key within a secret" in the page linked above. So the easiest way to accomplish this, without adding a JSON parser tool to your Docker image and writing a shell script to parse the JSON inside the container, is to simply have ECS inject each specific value as a separate environment variable.
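For illustration, the secrets section of the container definition could reference individual JSON keys along these lines (a sketch; the region, account ID, secret name and key names are placeholders):

"secrets": [
  { "name": "DB_USERNAME", "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-secret:username::" },
  { "name": "DB_PASSWORD", "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-secret:password::" },
  { "name": "DB_HOST",     "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-secret:host::" }
]

The container's entrypoint (or the application itself) can then assemble the DSN from those individual variables at startup.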
I have built a containerised Python application which runs without issue locally, using a .env file and a docker-compose.yml file built with docker compose build.
I am then able to use variables within the Dockerfile like this:
ARG APP_USR
ENV APP_USR ${APP_USR}
ARG APP_PASS
ENV APP_PASS ${APP_PASS}
RUN pip install https://${APP_USR}:${APP_PASS}@github.org/*****/master.zip
I am deploying to Cloud Run via a synced Bitbucket repository, and I have defined the variables under "REVISIONS" > "SECRETS AND VARIABLES" (as described here: https://cloud.google.com/run/docs/configuring/environment-variables),
but I cannot work out how to access these variables in the Dockerfile during the build.
As I understand it, I need to create a cloudbuild.yaml file to define the variables, but I haven't been able to find a clear example of how to set this up using the environment variables defined in Cloud Run.
My understanding is that it is not possible to directly use a Cloud Run revision's environment variables in the Dockerfile, because the build is managed by Cloud Build, which doesn't know about the Cloud Run revision before the deployment.
But I was able to use Secret Manager's secrets in the Dockerfile.
Sources:
Passing secrets from Secret Manager to cloudbuild.yaml: https://cloud.google.com/build/docs/securing-builds/use-secrets
Passing an environment variable from cloudbuild.yaml to Dockerfile: https://vsupalov.com/docker-build-pass-environment-variables/
Quick summary:
In your case, for APP_USR and APP_PASS:
Grant the Secret Manager Secret Accessor (roles/secretmanager.secretAccessor) IAM role for the secret to the Cloud Build service account (see first source).
Add an availableSecrets block at the end of the cloudbuild.yaml file (outside the steps block):
availableSecrets:
  secretManager:
  - versionName: <APP_USR_SECRET_RESOURCE_ID_WITH_VERSION>
    env: 'APP_USR'
  - versionName: <APP_PASS_SECRET_RESOURCE_ID_WITH_VERSION>
    env: 'APP_PASS'
Pass the secrets to your build step (this depends on how you invoke docker build; Google's documentation uses 'bash', I use the Docker builder directly):
- id: Build
  name: gcr.io/cloud-builders/docker
  args:
    - build
    - '-f=Dockerfile'
    - '.'
    # Add these two `--build-arg` params:
    - '--build-arg'
    - 'APP_USR=$$APP_USR'
    - '--build-arg'
    - 'APP_PASS=$$APP_PASS'
  secretEnv: ['APP_USR', 'APP_PASS'] # <=== add this line
Use these secrets as standard environment variables in your Dockerfile:
ARG APP_USR
ENV APP_USR $APP_USR
ARG APP_PASS
ENV APP_PASS $APP_PASS
RUN pip install https://$APP_USR:$APP_PASS@github.org/*****/master.zip
You have several ways to achieve that.
You can, indeed, create your container with your .env file in it. But it's not good practice, because your .env can contain secrets (API keys, database passwords, ...) and because your container becomes tied to a single environment.
The other solution is to deploy your container on Cloud Run (not with docker compose, which doesn't work on Cloud Run) and add the environment variables on the revision. Use, for example, the --set-env-vars=KEY1=Value1 format to achieve that.
If you have secrets, you can store them in Secret Manager and load them as environment variables at runtime, or mount them as a volume.
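As an illustration, a deployment that sets plain environment variables and also exposes a Secret Manager secret as an environment variable could look like this (a sketch; the service, image, variable and secret names are all placeholders):

gcloud run deploy my-service \
  --image=gcr.io/my-project/my-image \
  --set-env-vars=KEY1=Value1,KEY2=Value2 \
  --set-secrets=APP_PASS=my-app-pass-secret:latest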
The last solution: if you can specify where in your file tree your container will look for the .env file (I'm not enough of a Python expert to help you with that), you can use the trick that I described in this article. It's designed exactly for configuration files; the file is stored natively in Secret Manager, which therefore protects your secrets automatically.
Is it possible to access the machine's environment variables inside a Dockerfile? I was thinking of passing the SECRET as a build ARG, like so:
docker-compose:
version: '3.5'
services:
  service:
    ...
    build:
      ...
      args:
        SECRET: ${SECRET}
    ...
dockerfile:
FROM image
ARG SECRET
RUN script-${SECRET}
Note: the container is built in Kubernetes; I cannot pass any arguments to the build command or run any commands at all.
Edit 1: It is okay to pass SECRET as an ARG because this is not sensitive data. I'm using SECRETS to access microservice data, and I can only store data using secrets. Think of it as a machine environment variable.
Edit 2: This was not a problem with Docker but with the infrastructure I was working with, which does not allow any arguments to be passed to docker build.
Secrets should be used at run time and provided by the execution environment.
Also, everything that executes during a container build is written down as layers and available later to anyone who is able to get access to the image. That's why it's hard to consume secrets during the build in a secure way.
In order to address this, Docker recently introduced a special option --secret. To make it work, you will need the following:
Set the environment variable DOCKER_BUILDKIT=1
Use the --secret argument to the docker build command
DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt...
Add a syntax comment to the very top of your Dockerfile
# syntax = docker/dockerfile:1.0-experimental
Use the --mount argument to mount the secret for every RUN directive that needs it
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
Please note that this needs Docker version 18.09 or later.
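Putting those steps together, a minimal end-to-end sketch (the file name and secret id are just examples) might look like this:

# syntax = docker/dockerfile:1.0-experimental
FROM alpine:3
# The secret file is mounted only for this RUN instruction and is never written to a layer.
RUN --mount=type=secret,id=mysecret \
    TOKEN="$(cat /run/secrets/mysecret)" && \
    echo "the token is available here (e.g. for a download) but is not baked into the image"

built with:

DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt -t myimage .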
ARG is a build-time argument. You want to keep secrets secret and not write them into the build artifacts. Keep secrets in external environment variables or in external files.
docker run -e SECRET_NAME=SECRET_VALUE
and in docker-compose:
services:
  app-name:
    environment:
      - SECRET_NAME=YOUR_VALUE
or
services:
  app-name:
    env_file:
      - secret-values.env
Kubernetes
When you run exactly the same container image in Kubernetes, you mount the secret from a Secret object.
containers:
  - name: app-name
    image: app-image-name
    env:
      - name: SECRET_NAME
        valueFrom:
          secretKeyRef:
            name: name-of-secret-object
            key: token
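For completeness, the Secret object referenced above could be created with something like this (the object name and key mirror the example; the value is a placeholder):

kubectl create secret generic name-of-secret-object --from-literal=token=SECRET_VALUE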
Yes, pass the secret data as an ARG if you need to access the secret during the container build; you have no (!?) real alternative.
ARG values are only available for the duration of the build, so you need to be able to trust the build process and trust that it is cleaned up appropriately at its conclusion; if a malicious actor were able to access the build process (or its artifacts after the fact), they could access the secret data.
It's curious that you wish to use the secret as script-${SECRET}, as I assumed the secret would be used to access an external service. Someone would be able to determine the script name from the resulting Docker image, and that would expose your secret.
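To illustrate that last point, with the classic (non-BuildKit) builder, build arguments referenced by RUN instructions are typically recoverable from the image metadata; roughly (the image name and value are made up):

docker build -t demo --build-arg SECRET=my-secret-value .
docker history --no-trunc demo   # RUN layers that used $SECRET show its value in the recorded commands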
I am trying to build a new Docker image dynamically using a Cloud Build trigger job; however, I fail to see how to safely retrieve my credentials to authenticate against GCP as a service account.
Here are the steps:
Dockerfile created with steps to build a Docker image. One of the steps downloads a file from a Google Cloud Storage bucket that I need to access as a GCP service account.
Docker image is built by using a Cloud Build trigger that is triggered after each change in the linked repository and stored in GCR.
Step one fails because:
1.) By default, for some reason, the user running the Dockerfile in GCP is not authenticated against GCP. It is not the default Google Cloud Build account; it is an anonymous user.
2.) I can authenticate as a service account BUT
a.) I don't want to store the JSON private key unencrypted locally or in the repository.
b.) If I stored it encrypted in the GCP repository, then I need to authenticate before decrypting it with KMS. But I don't have the key because it's still encrypted. So I am back to my problem.
c.) If I stored it in a GCP Storage bucket, I need to authenticate, too. So I am back to my problem.
Is there any other approach how I can execute the Cloud build trigger job and stay/get a GCP service account context?
The #1 solution from @ParthMehta is the right one.
Before calling the Docker build, add this step in your Cloud Build config to download the file from Cloud Storage using the permissions of the Cloud Build environment (the service account is the following: <PROJECT_NUMBER>@cloudbuild.gserviceaccount.com):
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'gs://mybucket/my_file', 'my_file']
The file is copied into the current directory of the Cloud Build execution, /workspace. Then add the file to your container with a simple COPY in your Dockerfile:
....
COPY ./my_file ./my_file
....
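Put together, the relevant part of the cloudbuild.yaml could look something like this (a sketch; the bucket, file and image names are placeholders):

steps:
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'gs://mybucket/my_file', 'my_file']
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-image', '.']
images:
- 'gcr.io/$PROJECT_ID/my-image'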
Generally speaking, when you are working in a GCP environment, you should never have to use a JSON key file.
You can let Cloud Build download the file from Cloud Storage for you and let Docker access the directory so it can use the file. You'll need to allow the Cloud Build service account to access your bucket.
see: https://cloud.google.com/cloud-build/docs/securing-builds/set-service-account-permissions
OR
Use gcloud auth configure-docker, and then you can impersonate a service account that has access to the bucket using --impersonate-service-account, so the Docker user has sufficient access to download the file.
see: https://cloud.google.com/sdk/gcloud/reference/auth/configure-docker
Old question, but neither answer above was satisfactory for me because I needed to pull private packages from the Artifact Registry. After a lot of trial and error I found a solution using short-lived access tokens and service account impersonation, and I'm sharing it in case anyone else has the same issue.
Specifically I'm using Cloud Build and a Docker container to transpile my Node app before deploying it. The build process needs to pull private NPM packages from the Artifact Registry, but it didn't work because it wasn't authorized.
Working Solution
First create a Service Account that has access to whatever GCP service you need. In my case I created artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com and gave it access to the Artifact Registry repository as an "Artifact Registry Reader." In your case you'd give it access to that bucket.
Edit the newly created Service Account and under permissions add your Cloud Builder Service Account (<PROJECT_ID>@cloudbuild.gserviceaccount.com) as a Principal and grant it the "Service Account Token Creator" role.
Next, your cloudbuild.yaml file should look something like this:
steps:
# Step 1: Generate an Access Token and save it
#
# Here we call `gcloud auth print-access-token` to impersonate the service account
# we created above and to output a short-lived access token to the default volume
# `/workspace/access_token`. This is accessible in subsequent steps.
#
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
  args:
    - '-c'
    - >
      gcloud auth print-access-token --impersonate-service-account
      artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com >
      /workspace/access_token
  entrypoint: sh

# Step 2: Build our Docker container
#
# We build the Docker container passing the access token we generated in Step 1 as
# the `--build-arg` `TOKEN`. It's then accessible within the Dockerfile using
# `ARG TOKEN`
#
- name: gcr.io/cloud-builders/docker
  args:
    - '-c'
    - >
      docker build -t us-docker.pkg.dev/<PROJECT>/services/frontend:latest
      --build-arg TOKEN=$(cat /workspace/access_token) -f
      ./docker/prod/Dockerfile . &&
      docker push us-docker.pkg.dev/<PROJECT>/services/frontend
  entrypoint: sh
This next step is specific to private npm packages in the Artifact Registry, but I created a partial .npmrc file (missing the :_authToken line) with the following content:
@<NAMESPACE>:registry=https://us-npm.pkg.dev/<PROJECT>/npm/
//us-npm.pkg.dev/<PROJECT>/npm/:username=oauth2accesstoken
//us-npm.pkg.dev/<PROJECT>/npm/:email=artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com
//us-npm.pkg.dev/<PROJECT>/npm/:always-auth=true
Finally my Dockerfile uses the minted token to update my .npmrc file, giving it access to pull private npm packages from the Artifact Registry.
ARG NODE_IMAGE=node:17.2-alpine
FROM ${NODE_IMAGE} as base
ENV APP_PORT=8080
ENV WORKDIR=/usr/src/app
ENV NODE_ENV=production
FROM base AS builder
# Create our WORKDIR
RUN mkdir -p ${WORKDIR}
# Set the current working directory
WORKDIR ${WORKDIR}
# Copy the files we need
COPY --chown=node:node package.json ./
COPY --chown=node:node ts*.json ./
COPY --chown=node:node .npmrc ./
COPY --chown=node:node src ./src
#######################
# MAGIC HAPPENS HERE
# Append our access token to the .npmrc file and the container will now be
# authorized to download packages from the Artifact Registry
#
# IMPORTANT! Declare the TOKEN build arg so that it's accessible
#######################
ARG TOKEN
RUN echo "//us-npm.pkg.dev/<PROJECT>/npm/:_authToken=\"$TOKEN\"" >> .npmrc
RUN npm install
RUN npm run build
EXPOSE ${APP_PORT}/tcp
CMD ["cd", "${WORKDIR}"]
ENTRYPOINT ["npm", "run", "start"]
Obviously in your case you would authenticate with the access token in a different manner with GCS, but the overall concepts should translate well to any similar situation.
I'm wondering whether there are best practices on how to inject credentials into a Docker container during a docker build.
In my Dockerfile I need to fetch resources from webservers which require basic authentication, and I'm thinking about a proper way to bring the credentials into the container without hardcoding them.
What about a .netrc file and using it with curl --netrc ...? But what about security? I do no like the idea of having credentials being saved in a source repository together with my Dockerfile.
Is there for example any way to inject credentials using parameters or environment variables?
Any ideas?
A few newer Docker features make this more elegant and secure than it was in the past. Multi-stage builds let us implement the builder pattern with one Dockerfile. This method puts our credentials into a temporary "builder" container, and then that container builds a fresh container that doesn't hold any secrets.
You have choices for how you get your credentials into your builder container. For example:
Use an environment variable: ENV creds=user:pass and curl https://$creds@host.com
Use a build-arg to pass credentials
Copy an ssh key into the container: COPY key /root/.ssh/id_rsa
Use your operating system's own secure credentials using Credential Helpers
Multi-stage Dockerfile with multiple FROMs:
## Builder
FROM alpine:latest as builder
#
# -- insert credentials here using a method above --
#
RUN apk add --no-cache git
RUN git clone https://github.com/some/website.git /html
## Webserver without credentials
FROM nginx:stable
COPY --from=builder /html /usr/share/nginx/html
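As one concrete way to fill in the "insert credentials here" placeholder, a build-arg can be used in the builder stage only; a sketch (the private repository URL and variable name are made up), together with the matching build command:

## Builder
FROM alpine:latest as builder
# GIT_CREDS is only visible in this builder stage
ARG GIT_CREDS
RUN apk add --no-cache git
RUN git clone https://${GIT_CREDS}@github.com/some/private-website.git /html

## Webserver without credentials
FROM nginx:stable
COPY --from=builder /html /usr/share/nginx/html

docker build -t website --build-arg GIT_CREDS=user:pass .

Because only the final nginx stage is shipped, the build-arg value should not appear in the published image's layers or history (it does remain in the local build cache, so treat the build host accordingly).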