Is it possible to access machine environment variables inside a Dockerfile? I was thinking of passing the SECRET as a build ARG, like so:
docker-compose:
version: '3.5'
services:
  service:
    ...
    build:
      ...
      args:
        SECRET: ${SECRET}
    ...
dockerfile:
FROM image
ARG SECRET
RUN script-${SECRET}
Note: the container is built in Kubernetes; I cannot pass any arguments to the build command or run any commands at all.
Edit 1: It is okay to pass SECRET as an ARG because it is not sensitive data. I'm using secrets to access microservice data, and secrets are the only place I can store this data. Think of it as a machine environment variable.
Edit 2: This was not a problem with Docker but with the infrastructure I was working with, which does not allow any arguments to be passed to the docker build.
Secrets should be used at run time and provided by the execution environment.
Also, everything that executes during a container build is written down as image layers and is available later to anyone who can get access to the image. That's why it's hard to consume secrets during the build in a secure way.
In order to address this, Docker recently introduced a special --secret option. To make it work, you will need the following:
Set environment variable DOCKER_BUILDKIT=1
Use the --secret argument to docker build command
DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt...
Add a syntax comment to the very top of your Dockerfile
# syntax = docker/dockerfile:1.0-experimental
Use the --mount argument to mount the secret for every RUN directive that needs it
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
Please note that this needs Docker version 18.09 or later.
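Putting those pieces together, a minimal sketch could look like this (the base image, secret id, and file name are only illustrative):
# syntax = docker/dockerfile:1.0-experimental
FROM alpine:3.12
# the secret is only mounted for this RUN step and is not written into any image layer
RUN --mount=type=secret,id=mysecret \
    TOKEN="$(cat /run/secrets/mysecret)" && \
    echo "token is ${#TOKEN} characters long"   # use $TOKEN here instead, e.g. to authenticate a download
built with:
DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt -t myimage .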
ARG is a build-time argument. You want to keep secrets secret and not write them into the image artifacts. Keep secrets in external environment variables or in external files.
docker run -e SECRET_NAME=SECRET_VALUE
and in docker-compose:
services:
  app-name:
    environment:
      - SECRET_NAME=YOUR_VALUE
or
services:
  app-name:
    env_file:
      - secret-values.env
Kubernetes
When you run exactly the same container image in Kubernetes, you mount the secret from a Secret object.
containers:
  - name: app-name
    image: app-image-name
    env:
      - name: SECRET_NAME
        valueFrom:
          secretKeyRef:
            name: name-of-secret-object
            key: token
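For reference, the Secret object named above could be created like this (the literal value is just a placeholder):
kubectl create secret generic name-of-secret-object --from-literal=token=PLACEHOLDER_VALUE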
Yes, pass the secret data as an ARG if you need to access the secret during the container build; you have no (!?) alternative.
ARG values are only available for the duration of the build, so you need to be able to trust the build process and trust that it is cleaned up appropriately at its conclusion; if a malicious actor were able to access the build process (or the build environment after the fact), they could access the secret data.
It's curious that you wish to use the secret as script-${SECRET}, as I assumed the secret would be used to access an external service. Someone could determine the script name from the resulting Docker image, and this would expose your secret.
Related
I have a Flask web application running as a Docker image that is deployed to a Kubernetes pod running on GKE. There are a few environment variables necessary for the application which are included in the docker-compose.yaml like so:
...
services:
  my-app:
    build:
      ...
    environment:
      VAR_1: foo
      VAR_2: bar
...
I want to keep these environment variables in the docker-compose.yaml so I can run the application locally if necessary. However, when I go to deploy this using a Kubernetes deployment, these variables are missing from the pod and it throws an error. The only way I have found to resolve this is to add the following to my deployment.yaml:
containers:
  - name: my-app
    ...
    env:
      - name: VAR_1
        value: foo
      - name: VAR_2
        value: bar
...
Is there a way to migrate the values of these environment variables directly from the Docker container image into the Kubernetes pod?
I have tried researching this in the Kubernetes and Docker documentation and by searching Google, and the only solutions I can find say to just include the environment variables in the deployment.yaml, but I'd like to retain them in the docker-compose.yaml so I can run the container locally. I couldn't find anything that explained how Docker container environment variables and Kubernetes environment variables interact.
Kompose can translate Docker Compose files to Kubernetes resources:
https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
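For example, a minimal sketch (assuming the compose file sits in the current directory):
kompose convert -f docker-compose.yaml
# writes Kubernetes manifests such as my-app-deployment.yaml next to the compose file;
# apply the generated files with: kubectl apply -f <generated-file>.yaml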
Let us assume the docker-compose file and Kubernetes work the same way:
Both take a ready-to-use image and schedule a new container or pod based on it.
By default this image accepts a set of env variables; to pass those variables in, docker-compose does it one way and Kubernetes another (a matter of syntax).
So you can use the same image with Compose and with Kubernetes, but the syntax for passing the env variables will differ.
If you want them to persist regardless of the deployment tool, you can always hardcode those env variables in the image itself, in other words in the Dockerfile you used to build the image.
I don't recommend this approach, of course, and it might not work for you if you are using pre-built official images, but below is an example of a Dockerfile with the env included.
FROM alpine:latest
# this is how you hardcode it
ENV VAR_1 foo
COPY helloworld.sh .
RUN chmod +x /helloworld.sh
CMD ["/helloworld.sh"]
If you want to move toward managing this in a much better way, you can use an .env file with your docker-compose so you can update all the variables in one place, especially when your compose file has several apps that share the same variables.
app1:
  image: ACRHOST/app1:latest
  env_file:
    - .env
And on the Kubernetes side, you can create a ConfigMap, link your pods to that ConfigMap, and then you only need to update the value of the ConfigMap.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
kubectl create configmap <map-name> <data-source>
Also note that you can set the values in your ConfigMap directly from the .env file that you use with Docker; check the link above.
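As a sketch of that flow (the ConfigMap name app-config is illustrative):
kubectl create configmap app-config --from-env-file=.env
and then reference it from the pod spec:
containers:
  - name: app-name
    image: app-image-name
    envFrom:
      - configMapRef:
          name: app-config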
The docker-compose.yml file and the Kubernetes YAML file serve similar purposes; both explain how to create a container from a Docker image. The Compose file is only read when you're running docker-compose commands, though; the configuration there isn't read when deploying to Kubernetes and doesn't make any permanent changes to the image.
If something needs to be set as an environment variable but really is independent of any particular deployment system, set it as an ENV in your image's Dockerfile.
ENV VAR_1=foo
ENV VAR_2=bar
# and don't mention either variable in either Compose or Kubernetes config
If you can't specify it this way (e.g., database host names and credentials) then you need to include it in both files as you've shown. Note that some of the configuration might be very different; a password might come from a host environment variable in Compose but from a Kubernetes Secret in Kubernetes.
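As a sketch of that split (the names here are illustrative, not from the question), the same variable might be interpolated from the host environment in Compose but read from a Secret in Kubernetes.
In docker-compose.yml:
services:
  my-app:
    environment:
      DB_PASSWORD: ${DB_PASSWORD}   # taken from the host shell when docker-compose runs
In the Kubernetes deployment:
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: my-app-secrets   # hypothetical Secret object
        key: db-password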
We have used the technique detailed here to expose host environment variables to docker build in a secure fashion.
# syntax=docker/dockerfile:1.2
FROM golang:1.18 AS builder
# move secrets out of the build process (and docker history)
RUN --mount=type=secret,id=github_token,dst=/app/secret_github_token,required=true,uid=10001 \
    export GITHUB_TOKEN=$(cat /app/secret_github_token) && \
    <nice command that uses $GITHUB_TOKEN>
And this command to build the image:
export DOCKER_BUILDKIT=1
docker build --secret id=github_token,env=GITHUB_TOKEN -t cool-image-bro .
The above works perfectly.
Now we also have a docker-compose file running in CI that needs to be modified. However, even though I confirmed that the env vars are present in that job, I do not know how to map an environment variable to the secret ID github_token.
In other words, what is the equivalent docker-compose command (up --build, or build) that can accept a mapping of an environment variable to a secret ID?
Turns out I was a bit ahead of the times: docker compose v2.5.0 brings support for build secrets.
After modifying the Dockerfile as explained above, we must then update the docker-compose file to define the secrets.
docker-compose.yml
services:
  my-cool-app:
    build:
      context: .
      secrets:
        - github_user
        - github_token
    ...

secrets:
  github_user:
    file: secrets_github_user
  github_token:
    file: secrets_github_token
But where do those files secrets_github_user and secrets_github_token come from? In your CI you also need to export the environment variables and save them to the default secrets file locations. In our project we are using Tasks, so we added these two lines.
Note that we run this task from our CI, so you could do it differently, without Tasks for example.
- printenv GITHUB_USER > /root/project/secrets_github_user
- printenv GITHUB_TOKEN > /root/project/secrets_github_token
We then update the CircleCI config and add two environment variables to our job:
.config.yml
name-of-our-job:
  environment:
    DOCKER_BUILDKIT: 1
    COMPOSE_DOCKER_CLI_BUILD: 1
You might also need a more recent Docker version; I think this was introduced in a late 19.x or early 20.x release. I have used the following and it works:
steps:
  - setup_remote_docker:
      version: 20.10.11
Now when running your docker-compose based commands, the secrets should be successfully mounted through docker-compose and available to correctly build or run your Dockerfile instructions!
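End to end, the build invocation in CI might boil down to something like this sketch (the paths and service name are illustrative):
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
# write the secret values where the compose file expects the secret files
printenv GITHUB_USER > secrets_github_user
printenv GITHUB_TOKEN > secrets_github_token
docker compose build my-cool-app   # or docker-compose build, depending on your setup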
I have built a containerised Python application which runs without issue locally, using a .env file and a docker-compose.yml file built with docker-compose build.
I am then able to use variables within the Dockerfile like this.
ARG APP_USR
ENV APP_USR ${APP_USR}
ARG APP_PASS
ENV APP_PASS ${APP_PASS}
RUN pip install https://${APP_USR}:${APP_PASS}@github.org/*****/master.zip
I am deploying to Cloud Run via a synced Bitbucket repository, and have defined the variables under "REVISIONS" > "SECRETS AND VARIABLES" (as described here: https://cloud.google.com/run/docs/configuring/environment-variables),
but I cannot work out how to access these variables in the Dockerfile during the build.
As I understand it, I need to create a cloudbuild.yaml file to define the variables, but I haven't been able to find a clear example of how to set this up using the environment variables defined in Cloud Run.
My understanding is that it is not possible to directly use a Cloud Run revision's environment variables in the Dockerfile, because the build is managed by Cloud Build, which doesn't know about the Cloud Run revision before the deployment.
But I was able to use Secret Manager's secrets in the Dockerfile.
Sources:
Passing secrets from Secret Manager to cloudbuild.yaml: https://cloud.google.com/build/docs/securing-builds/use-secrets
Passing an environment variable from cloudbuild.yaml to Dockerfile: https://vsupalov.com/docker-build-pass-environment-variables/
Quick summary:
In your case, for APP_USR and APP_PASS:
Grant the Secret Manager Secret Accessor (roles/secretmanager.secretAccessor) IAM role for the secret to the Cloud Build service account (see first source).
Add an availableSecrets block at the end of the cloudbuild.yaml file (outside the steps block):
availableSecrets:
  secretManager:
    - versionName: <APP_USR_SECRET_RESOURCE_ID_WITH_VERSION>
      env: 'APP_USR'
    - versionName: <APP_PASS_SECRET_RESOURCE_ID_WITH_VERSION>
      env: 'APP_PASS'
Pass the secrets to your build step (this depends on how you invoke docker build; Google's documentation uses bash, I use Docker directly):
- id: Build
  name: gcr.io/cloud-builders/docker
  args:
    - build
    - '-f=Dockerfile'
    - '.'
    # Add these two `--build-arg` params:
    - '--build-arg'
    - 'APP_USR=$$APP_USR'
    - '--build-arg'
    - 'APP_PASS=$$APP_PASS'
  secretEnv: ['APP_USR', 'APP_PASS'] # <=== add this line
Use these secrets as standard environment variables in your Dockerfile:
ARG APP_USR
ENV APP_USR $APP_USR
ARG APP_PASS
ENV APP_PASS $APP_PASS
RUN pip install https://$APP_USR:$APP_PASS@github.org/*****/master.zip
You have several ways to achieve that.
You can, indeed, build your container with your .env baked into it. But it's not a good practice, because the .env can contain secrets (API keys, database passwords, ...) and because it ties your container to one environment.
The other solution is to deploy your container on Cloud Run (not with docker-compose; that doesn't work on Cloud Run) and add the environment variables to the revision. Use, for example, the --set-env-vars=KEY1=Value1 format to achieve that.
If you have secrets, you can store them in Secret Manager and load them as env vars at runtime, or as a volume.
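For example, a sketch of such a deployment (the service, image, and secret names are illustrative):
gcloud run deploy my-service \
  --image gcr.io/my-project/my-app \
  --set-env-vars=VAR_1=foo \
  --set-secrets=APP_PASS=app-pass-secret:latest
Here --set-secrets exposes the Secret Manager secret to the container as the APP_PASS environment variable at run time; the PATH=SECRET:VERSION form mounts it as a file instead.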
The last solution: if you can specify where your container will look for the .env file in its file tree (I'm not enough of a Python expert to help you with that), you can use the trick I described in this article. It's perfectly suited to configuration files: the file is stored natively in Secret Manager, which protects your secrets automatically.
I want to pass environment variables that are readable by the applications spun up by docker-compose up.
What is the proper way of using docker-compose up with varying configuration settings?
I don't want to use an .env file or the environment: config, as the environment variables change frequently and it is insecure to save tokens in a file.
docker-compose run -e partly works, but it has drawbacks.
It does not map the ports defined for the services in docker-compose.yml.
Also, multiple services are defined in docker-compose.yml, and I don't want to use depends_on just because docker-compose up doesn't work.
Let's say I define service in docker-compose.yml
service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
In my serviceA.js, I simply use the environment variable:
console.log("This is ", process.env.KEY, "running in service A");
When I run docker-compose run -e KEY=DockerComposeRun service-a
I do get the environment variable KEY read by serviceA.js
This is DockerComposeRun running in service A
However, this way I can only get a single service running.
I could have used environment: in docker-compose.yml:
environment:
  - KEY=DockerComposeUp
But in my use case, each docker-compose run would have different environment variable values, meaning I would need to edit the file every time before running docker-compose.
Also, more than one service uses the same environment variable; a .env file would actually handle that better, but it is not desired.
There doesn't seem to be a way to do the same for docker-compose up.
I have tried KEY=DockerComposeUp docker-compose up,
but what I get is undefined.
Exporting the variable doesn't work for me either; those approaches all seem to be about substituting environment variables into docker-compose.yml rather than passing them to the applications in the containers.
To safely pass sensitive configuration data to your containers you can use Docker secrets. Everything passed through Secrets is encrypted.
You can create and manage secrets using the commands below:
docker secret create
docker secret inspect
docker secret ls
docker secret rm
And use them in your docker-compose file, either by referring to existing secrets (external) or by using a file:
secrets:
  my_first_secret:
    file: ./secret_data
  my_second_secret:
    external: true
You can use environment like this:
service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
  environment:
    - KEY=DockerComposeRun
Refer to: https://docs.docker.com/compose/environment-variables/
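If the value needs to change on every invocation rather than being fixed in the file, Compose can also substitute it from the shell environment when docker-compose up runs; a minimal sketch, assuming the variable is set in the calling shell:
service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
  environment:
    - KEY=${KEY}   # interpolated from the host environment
Then KEY=DockerComposeUp docker-compose up makes process.env.KEY available inside the container.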
Version 3.1 of the docker-compose.yml specification introduces support for secrets.
I tried this:
version: '3.1'

services:
  a:
    image: tutum/hello-world
  secret:
    password: the_password
  b:
    image: tutum/hello-world
$ docker-compose up returns:
Unsupported config option for services.secret: 'password'
How can we use the secrets feature in practice?
You can read the corresponding section from the official documentation.
To use secrets you need to add two things into your docker-compose.yml file. First, a top-level secrets: block that defines all of the secrets. Then, another secrets: block under each service that specifies which secrets the service should receive.
As an example, create the two types of secrets that Docker will understand: external secrets and file secrets.
1. Create an 'external' secret using docker secret create
First thing: to use secrets with Docker, the node you are on must be part of a swarm.
$ docker swarm init
Next, create an 'external' secret:
$ echo "This is an external secret" | docker secret create my_external_secret -
(Make sure to include the final dash, -. It's easy to miss.)
2. Write another secret into a file
$ echo "This is a file secret." > my_file_secret.txt
3. Create a docker-compose.yml file that uses both secrets
Now that both types of secrets are created, here is the docker-compose.yml file that will read both of those and write them to the web service:
version: '3.1'

services:
  web:
    image: nginxdemos/hello
    secrets: # secrets block only for 'web' service
      - my_external_secret
      - my_file_secret

secrets: # top level secrets block
  my_external_secret:
    external: true
  my_file_secret:
    file: my_file_secret.txt
Docker can read secrets either from its own database (e.g. secrets made with docker secret create) or from a file. The above shows both examples.
4. Deploy your test stack
Deploy the stack using:
$ docker stack deploy --compose-file=docker-compose.yml secret_test
This will create one instance of the web service, named secret_test_web.
5. Verify that the container created by the service has both secrets
Use docker exec -ti [container] /bin/sh to verify that the secrets exist.
(Note: in the below docker exec command, the m2jgac... portion will be different on your machine. Run docker ps to find your container name.)
$ docker exec -ti secret_test_web.1.m2jgacogzsiaqhgq1z0yrwekd /bin/sh
# Now inside secret_test_web; secrets are contained in /run/secrets/
root@secret_test_web:~$ cd /run/secrets/
root@secret_test_web:/run/secrets$ ls
my_external_secret  my_file_secret
root@secret_test_web:/run/secrets$ cat my_external_secret
This is an external secret
root@secret_test_web:/run/secrets$ cat my_file_secret
This is a file secret.
If all is well, the two secrets we created in steps 1 and 2 should be inside the web container that was created when we deployed our stack.
Given you have a service myapp and a secrets file secrets.yml:
Create a compose file:
version: '3.1'
services:
  myapp:
    build: .
    secrets:
      - secrets_yaml
Provision a secret using this command:
docker secret create secrets_yaml secrets.yml
Deploy your service using this command:
docker stack deploy --compose-file docker-compose.yml myappstack
Now your app can access the secret file at /run/secrets/secrets_yaml. You can either hardcode this path in your application or create a symbolic link.
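For instance, if the application expects the file at a fixed path, a symlink baked into the image can point there (a sketch; /app/config/secrets.yml is an illustrative path):
# Dockerfile: point the path the app reads at the secret Docker mounts at run time
RUN mkdir -p /app/config && ln -s /run/secrets/secrets_yaml /app/config/secrets.yml
The link target doesn't need to exist at build time; it resolves once the secret is mounted under /run/secrets/ when the service starts.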
The different question
This answer is probably to the question "how do you provision your secrets to your docker swarm cluster".
The original question "how do you manage secret values with docker compose" implies that the docker-compose file contains secret values. It doesn't.
There's a different question: "Where do you store the canonical source of the secrets.yml file". This is up to you. You can store it in your head, print on a sheet of paper, use a password manager, use a dedicated secrets application/database. Heck, you can even use a git repository if it's safely secured itself. Of course, never store it inside the system you're securing with it :)
I would recommend vault. To store a secret:
# store the contents of secrets.yml in Vault
cat secrets.yml | vault write secret/myappsecrets -
To retrieve a secret and put it into your docker swarm:
vault read -field=value secret/myappsecrets | docker secret create secrets_yaml -
Of course, you can use the Docker cluster itself as the single source of truth for your secrets, but if your Docker cluster breaks, you'd lose your secrets. So make sure to have a backup elsewhere.
The question nobody asked
The third question (that nobody asked) is how to provision secrets to developers' machines. It might be needed when there's an external service which is impossible to mock locally or a large database which is impossible to copy.
Again, docker has nothing to do with it (yet). It doesn't have access control lists which specify which developers have access to which secrets. Nor does it have any authentication mechanism.
The ideal solution appears to be this:
A developer opens some web application.
Authenticates using some single sign on mechanism.
Copies some long list of docker secret create commands and executes them in the terminal.
We have yet to see if such an application pops up.
You can also specify secrets stored locally in a file, using the file: key in the secrets object. Then you don't have to docker secret create them yourself; Compose / docker stack deploy will do it for you.
version: '3.1'

secrets:
  password:
    file: ./password

services:
  password_consumer:
    image: alpine
    secrets:
      - password
Reference: Compose file version 3 reference: Secrets
One question was raised here in the comments: why should I initialize a swarm if I only need secrets? My answer is that secrets were created for swarm mode, where you have more than one node and you want to manage and share secrets in a secure way. If you have only one node, this adds (almost) no extra security: anyone who can access the host machine running your single-node swarm can retrieve the secrets from the running containers, or directly from the host if the secret was created from a file, such as a private key.
Check this blog: https://www.docker.com/blog/docker-secrets-management/
And read the comments:
"Thank you very much for the introductory article. The steps are mentioned to view the contents of secrets in container will not work when the redis container is created on a worker node."
Is that the exact indentation of your docker-compose.yml file? I think secrets should be nested under a (i.e. one of the services), not directly under the services section.
I guess the keyword is secrets, not secret. That is at least what I understand from reading the schema.
The keyword is secrets instead of secret.
It should also be properly indented under service a.