I'm wondering whether there are best practices on how to inject credentials into a Docker container during a docker build.
In my Dockerfile I need to fetch resources from webservers that require basic authentication, and I'm looking for a proper way to bring the credentials into the container without hardcoding them.
What about a .netrc file and using it with curl --netrc ...? But what about security? I do not like the idea of having credentials saved in a source repository together with my Dockerfile.
Is there for example any way to inject credentials using parameters or environment variables?
Any ideas?
A few newer Docker features make this more elegant and secure than it was in the past. Multi-stage builds let us implement the builder pattern with one Dockerfile. This method puts our credentials into a temporary "builder" container, and then that container builds a fresh container that doesn't hold any secrets.
You have choices for how you get your credentials into your builder container. For example:
Use an environment variable: ENV creds=user:pass and curl https://$creds@host.com
Use a build-arg to pass credentials (see the sketch after the Dockerfile below)
Copy an ssh key into the container: COPY key /root/.ssh/id_rsa
Use your operating system's own secure credentials using Credential Helpers
Multi-stage Dockerfile with multiple FROMs:
## Builder
FROM alpine:latest as builder
#
# -- insert credentials here using a method above --
#
RUN apk add --no-cache git
RUN git clone https://github.com/some/website.git /html
## Webserver without credentials
FROM nginx:stable
COPY --from=builder /html /usr/share/nginx/html
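For example, a minimal sketch combining the build-arg option with the multi-stage Dockerfile above (GIT_CREDS is a hypothetical user:pass build argument; it is only visible in the builder stage and never ends up in the final image):

## Builder
FROM alpine:latest as builder
RUN apk add --no-cache git
# Hypothetical build-arg holding user:pass; it exists only in this stage
ARG GIT_CREDS
RUN git clone https://$GIT_CREDS@github.com/some/website.git /html

## Webserver without credentials
FROM nginx:stable
COPY --from=builder /html /usr/share/nginx/html

Built with something like docker build --build-arg GIT_CREDS=user:pass -t website . (build args are still visible in the builder stage's metadata, so this is best combined with short-lived credentials).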
Related
I have built a containerised Python application which runs without issue locally, using a .env file and a docker-compose.yml file built with docker-compose build.
I am then able to use the variables within the Dockerfile like this:
ARG APP_USR
ENV APP_USR ${APP_USR}
ARG APP_PASS
ENV APP_PASS ${APP_PASS}
RUN pip install https://${APP_USR}:${APP_PASS}@github.org/*****/master.zip
I am deploying to Cloud Run via a synced Bitbucket repository, and have defined the variables under "REVISIONS" > "SECRETS AND VARIABLES" (as described here: https://cloud.google.com/run/docs/configuring/environment-variables),
but I cannot work out how to access these variables in the Dockerfile during the build.
As I understand it, I need to create a cloudbuild.yaml file to define the variables, but I haven't been able to find a clear example of how to set this up using the environment variables defined in Cloud Run.
My understanding is that it is not possible to use a Cloud Run revision's environment variables directly in the Dockerfile, because the build is managed by Cloud Build, which knows nothing about the Cloud Run revision before the deployment.
But I was able to use Secret Manager's secrets in the Dockerfile.
Sources:
Passing secrets from Secret Manager to cloudbuild.yaml: https://cloud.google.com/build/docs/securing-builds/use-secrets
Passing an environment variable from cloudbuild.yaml to Dockerfile: https://vsupalov.com/docker-build-pass-environment-variables/
Quick summary:
In your case, for APP_USR and APP_PASS:
Grant the Secret Manager Secret Accessor (roles/secretmanager.secretAccessor) IAM role for the secret to the Cloud Build service account (see first source).
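If you do this from the command line, a possible sketch (the secret names and project number are placeholders):

gcloud secrets add-iam-policy-binding APP_USR_SECRET \
  --member="serviceAccount:<PROJECT_NUMBER>@cloudbuild.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"
gcloud secrets add-iam-policy-binding APP_PASS_SECRET \
  --member="serviceAccount:<PROJECT_NUMBER>@cloudbuild.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"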
Add an availableSecrets block at the end of the cloudbuild.yaml file (outside the steps block):
availableSecrets:
  secretManager:
    - versionName: <APP_USR_SECRET_RESOURCE_ID_WITH_VERSION>
      env: 'APP_USR'
    - versionName: <APP_PASS_SECRET_RESOURCE_ID_WITH_VERSION>
      env: 'APP_PASS'
Pass the secrets to your build step (this depends on how you invoke docker build; Google's documentation uses 'bash', I use Docker directly):
- id: Build
  name: gcr.io/cloud-builders/docker
  args:
    - build
    - '-f=Dockerfile'
    - '.'
    # Add these two `--build-arg` params:
    - '--build-arg'
    - 'APP_USR=$$APP_USR'
    - '--build-arg'
    - 'APP_PASS=$$APP_PASS'
  secretEnv: ['APP_USR', 'APP_PASS'] # <=== add this line
Use these secrets as standard environment variables in your Dockerfile:
ARG APP_USR
ENV APP_USR $APP_USR
ARG APP_PASS
ENV APP_PASS $APP_PASS
RUN pip install https://$APP_USR:$APP_PASS@github.org/*****/master.zip
You have several ways to achieve that.
You can, indeed, create your container with your .env in it. But it's not good practice, because your .env can contain secrets (API keys, database passwords, ...) and because your container is then tied to one environment.
The other solution is to deploy your container on Cloud Run (not with docker compose, which doesn't work on Cloud Run) and add the environment variables with the revision. Use, for example, the --set-env-vars=KEY1=Value1 format to achieve that.
If you have secrets, you can store them in Secret Manager and load them as environment variables at runtime, or as a volume.
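For example, a minimal deployment sketch (the service name, image, and secret name are placeholders):

gcloud run deploy my-service \
  --image=gcr.io/<PROJECT>/my-image \
  --set-env-vars=KEY1=Value1 \
  --set-secrets=APP_PASS=app-pass-secret:latest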
The last solution, if you can specify where your container gets the .env file in your file tree (I'm not expert enough in Python to help you on that): you can use the trick that I described in this article. It's perfectly designed for configuration files; the file is stored natively in Secret Manager and therefore your secrets are protected automatically.
I am trying to build a new Docker image dynamically using a Cloud Build trigger job, however I fail to see how to safely retrieve my credentials to authenticate against GCP with a service account.
Here are the steps:
Dockerfile created with steps to build a Docker image. One of the steps includes downloading a file from Google Storage (bucket) that I need to access as a GCP service account.
Docker image is built by using a Cloud Build trigger that is triggered after each change in the linked repository and stored in GCR.
Step one fails because:
1.) By default, for some reason, the user running the Dockerfile in GCP is not authenticated against GCP. It is not a default Google Cloud Build account, it is an anonymous user.
2.) I can authenticate as a service account BUT
a.) I don't want to store the JSON private key unencrypted locally or in the repository.
b.) If I stored it encrypted in the GCP repository, then I need to authenticate before decrypting it with KMS. But I don't have the key because it's still encrypted. So I am back to my problem.
c.) If I stored it in a GCP Storage bucket, I need to authenticate, too. So I am back to my problem.
Is there any other approach how I can execute the Cloud build trigger job and stay/get a GCP service account context?
The #1 solution of @ParthMehta is the right one.
Before calling the Docker build, add this step in your Cloud Build pipeline to download the file from Cloud Storage using the permissions of the Cloud Build environment (the service account is the following: <PROJECT_NUMBER>@cloudbuild.gserviceaccount.com):
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'gs://mybucket/my_file', 'my_file']
The file is copied into the current directory of the Cloud Build execution, /workspace. Then add the file to your container with a simple COPY in your Dockerfile:
....
COPY ./my_file ./my_file
....
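Putting the two pieces together, a minimal cloudbuild.yaml sketch might look like this (the bucket and image names are placeholders):

steps:
  # Download the file using the Cloud Build service account's permissions
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', 'gs://mybucket/my_file', 'my_file']
  # The file is now in /workspace, so the build context can COPY it
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-image', '.']
images:
  - 'gcr.io/$PROJECT_ID/my-image'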
Generally speaking, when you are working in a GCP environment, you should never have to use a JSON key file.
You can let Cloud Build download the file from Cloud Storage for you and let Docker access the directory so it can use the file. You'll need to allow the Cloud Build service account to access your bucket.
see: https://cloud.google.com/cloud-build/docs/securing-builds/set-service-account-permissions
OR
Use gcloud auth configure-docker, and then you can impersonate a service account that has access to the bucket using --impersonate-service-account, so the Docker user has sufficient access to download the file.
see: https://cloud.google.com/sdk/gcloud/reference/auth/configure-docker
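A rough sketch of that second option (the service account name is a placeholder; --impersonate-service-account is a global gcloud flag):

gcloud auth configure-docker
gcloud storage cp gs://mybucket/my_file . \
  --impersonate-service-account=bucket-reader@<PROJECT>.iam.gserviceaccount.com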
Old question but neither answer above was satisfactory for me because I needed to pull private packages from the Artifact Registry. After a lot of trial and error I found a solution using short-lived access tokens and service account impersonation and I'm sharing the solution in case anyone else has the same issue.
Specifically I'm using Cloud Build and a Docker container to transpile my Node app before deploying it. The build process needs to pull private NPM packages from the Artifact Registry, but didn't work because it wasn't authorized.
Working Solution
First create a Service Account that has access to whatever GCP service you need. In my case I created artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com and gave it access to the Artifact Registry repository as an "Artifact Registry Reader." In your case you'd give it access to that bucket.
Edit the newly created Service Account and under permissions add your Cloud Builder Service Account (<PROJECT_ID>@cloudbuild.gserviceaccount.com) as a Principal and grant it the "Service Account Token Creator" role.
Next, your cloudbuild.yaml file should look something like this:
steps:
  # Step 1: Generate an Access Token and save it
  #
  # Here we call `gcloud auth print-access-token` to impersonate the service account
  # we created above and to output a short-lived access token to the default volume
  # `/workspace/access_token`. This is accessible in subsequent steps.
  #
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
    args:
      - '-c'
      - >
        gcloud auth print-access-token --impersonate-service-account
        artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com >
        /workspace/access_token
    entrypoint: sh

  # Step 2: Build our Docker container
  #
  # We build the Docker container passing the access token we generated in Step 1 as
  # the `--build-arg` `TOKEN`. It's then accessible within the Dockerfile using
  # `ARG TOKEN`
  #
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - >
        docker build -t us-docker.pkg.dev/<PROJECT>/services/frontend:latest
        --build-arg TOKEN=$(cat /workspace/access_token) -f
        ./docker/prod/Dockerfile . &&
        docker push us-docker.pkg.dev/<PROJECT>/services/frontend
    entrypoint: sh
This next step is specific to private npm packages in the Artifact Registry, but I created a partial .npmrc file (missing the :_authToken line) with the following content:
@<NAMESPACE>:registry=https://us-npm.pkg.dev/<PROJECT>/npm/
//us-npm.pkg.dev/<PROJECT>/npm/:username=oauth2accesstoken
//us-npm.pkg.dev/<PROJECT>/npm/:email=artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com
//us-npm.pkg.dev/<PROJECT>/npm/:always-auth=true
Finally my Dockerfile uses the minted token to update my .npmrc file, giving it access to pull private npm packages from the Artifact Registry.
ARG NODE_IMAGE=node:17.2-alpine
FROM ${NODE_IMAGE} as base
ENV APP_PORT=8080
ENV WORKDIR=/usr/src/app
ENV NODE_ENV=production
FROM base AS builder
# Create our WORKDIR
RUN mkdir -p ${WORKDIR}
# Set the current working directory
WORKDIR ${WORKDIR}
# Copy the files we need
COPY --chown=node:node package.json ./
COPY --chown=node:node ts*.json ./
COPY --chown=node:node .npmrc ./
COPY --chown=node:node src ./src
#######################
# MAGIC HAPPENS HERE
# Append our access token to the .npmrc file and the container will now be
# authorized to download packages from the Artifact Registry
#
# IMPORTANT! Declare the TOKEN build arg so that it's accessible
#######################
ARG TOKEN
RUN echo "//us-npm.pkg.dev/<PROJECT>/npm/:_authToken=\"$TOKEN\"" >> .npmrc
RUN npm install
RUN npm run build
EXPOSE ${APP_PORT}/tcp
# (no CMD needed: WORKDIR is already set above and the ENTRYPOINT below starts the app)
ENTRYPOINT ["npm", "run", "start"]
Obviously in your case you would authenticate with the access token in a different manner with GCS, but the overall concepts should translate well to any similar situation.
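For local testing, a rough equivalent of the two Cloud Build steps (assuming your own user account is allowed to impersonate the service account) would be:

TOKEN=$(gcloud auth print-access-token \
  --impersonate-service-account artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com)
docker build -t frontend:local --build-arg TOKEN="$TOKEN" -f ./docker/prod/Dockerfile .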
As part of the process to build my Docker container I need to pull some files from an S3 bucket, but I keep getting fatal error: Unable to locate credentials, even though for now I am setting the credentials as ENV vars (though I would like to know of a better way to do this).
So when building the container I run
docker build -t my-container --build-arg AWS_DEFAULT_REGION="region" --build-arg AWS_ACCESS_KEY="key" --build-arg AWS_SECRET_ACCESS_KEY="key" . --squash
And in my Dockerfile I have
ARG AWS_DEFAULT_REGION
ENV AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION
ARG AWS_ACCESS_KEY
ENV AWS_ACCESS_KEY=$AWS_ACCESS_KEY
ARG AWS_SECRET_ACCESS_KEY
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
RUN /bin/bash -l -c "aws s3 cp s3://path/to/folder/ /my/folder --recursive"
Does anyone know how I can solve this? (I know there is an option to add a config file, but that just seems like an unnecessary extra step, as I should be able to read from ENV.)
The name of the environment variable is AWS_ACCESS_KEY_ID, not AWS_ACCESS_KEY.
You can review the full list in the Amazon docs.
The following variables are supported by the AWS CLI
AWS_ACCESS_KEY_ID – AWS access key.
AWS_SECRET_ACCESS_KEY – AWS secret key. Access and secret key variables override credentials stored in credential and config files.
AWS_SESSION_TOKEN – session token. A session token is only required if you are using temporary security credentials.
AWS_DEFAULT_REGION – AWS region. This variable overrides the default region of the in-use profile, if set.
AWS_DEFAULT_PROFILE – name of the CLI profile to use. This can be the name of a profile stored in a credential or config file, or default to use the default profile.
AWS_CONFIG_FILE – path to a CLI config file.
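So in your case the Dockerfile block would become something like this (same placeholders as in the question, only the access key variable renamed):

ARG AWS_DEFAULT_REGION
ENV AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION
ARG AWS_ACCESS_KEY_ID
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
RUN /bin/bash -l -c "aws s3 cp s3://path/to/folder/ /my/folder --recursive"

and the build command:

docker build -t my-container \
  --build-arg AWS_DEFAULT_REGION="region" \
  --build-arg AWS_ACCESS_KEY_ID="key" \
  --build-arg AWS_SECRET_ACCESS_KEY="key" . --squash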
My use case is that I have multiple express micro-services that use the same middleware and I would like to create a different repo in the format of an npm module for each middleware.
Every repo is a private repo and can have a deploy key attached (can be different keys or the same)
All of this works OK locally. However when I try to use this with my docker-compose setup it fails on the npm install step, in the build stage.
Dockerfile
FROM node:alpine
RUN npm install --production
CMD npm start
docker-compose.yml
services:
  node-api:
    build:
      context: .
      dockerfile: Dockerfile
I understand this doesn't work because I don't have the deploy key I use on my local system in the Docker context.
I've looked around for a solution and none seem very easy or non-hacky:
Copy the key in and squash (CONS: not sure how I do this in a docker-compose file): http://blog.cloud66.com/pulling-git-into-a-docker-image-without-leaving-ssh-keys-behind/
Copy the key in on the build step and add to image. (CONS: Not very secure :( )
Use the key as a build argument. (CONS: see 2)
Dockerise something like https://www.vaultproject.io/ run that up first, add the key and use that within the node containers to get the latest key. (CONS: probably lots of work, maybe other issues?)
Use Docker secrets and docker stack deploy and store the key in docker secrets (CON: docker stack deploy has no support for docker volumes yet. See here https://docs.docker.com/compose/bundles/#producing-a-bundle unsupported key 'volumes')
My question is what is the most secure possible solution that is automated (minimal manual steps for users of the file)? Time of implementation is less of a concern. I'm trying to avoid checking in any sensitive data while making it easy for other people to run this locally.
Let's experiment with this new feature: Docker multi-stage build.
You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image.
The idea is to build a temporary base image, then start the build again, taking only what you want from the previous image. It uses multiple FROM instructions in the same Dockerfile:
FROM node as base-node-modules
COPY your_secret_key /some/path
COPY package.json /somewhere
RUN npm install <which uses your key>
FROM node #yes again!
...
...
COPY --from=base-node-modules /somewhere/node_modules /some/place/node_modules
...
... # the rest of your Dockerfile
...
Docker will discard everything you don't copy over from the first FROM.
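As a quick sanity check (the image name here is a placeholder), only the copied node_modules layer comes from the builder stage, so the key never appears in the final image's history:

docker build -t node-api .
docker history node-api   # no layer references your_secret_key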
I have this image:
FROM ubuntu:14.04.3
# copy db project to container
ADD . /db_project
WORKDIR /db_project
CMD ["./gitinitall.sh"]
This will copy the content of the current dir with the db project, which contains git submodules that will be checked out and pulled from the repo. In db_project there is the shell script that I run to get the submodules. It also has the backend that uses the dbs. The image is pushed to a private repo.
I want to use this image to create a container where I need to add the config of the database for the environment where it needs to be deployed, something like:
FROM myprivatedockerrepo:5000/db_project
...
WORKDIR /db_project
COPY config/dev.config /db_project/apps/mydb_db/config/dev.config
# get everything needed for backend
RUN mix deps.get
# expose the backend port
EXPOSE backendport
# start the beckend with the proper db configured
CMD ["./startbeckend"]
But it is failing to RUN mix deps.get:
Step 14/20 : RUN mix deps.get
---> Running in ab7375d69989
warning: path "apps/mydb_db" is a directory but it has no mix.exs. Mix won't consider this directory as part of your umbrella application. Please add a "mix.exs" or set the ":apps" key in your umbrella configuration with all relevant apps names as atoms
If I add a
RUN ls apps/mydb_db
before running the mix command:
ls: cannot access apps/mydb_db: No such file or directory
So it seems that although in the image used, myprivatedockerrepo:5000/db_project, there should be
db_project/apps/mydb_db (mydb_db being created by the shell script that gets the submodules from git), it cannot find it. Maybe I do not understand the Docker layers or something?
To copy a folder you need to add a final '/'
# Dockerfile
ADD . /db_project/
See also here: https://docs.docker.com/engine/reference/builder/#add
It was my bad:
the gitinitall.sh script to get and update all submodules from git didn't work, since there were no git settings or SSH key in the image, but it didn't return a non-zero exit code, so it showed no failure.