I'm trying to update an environment variable for a Docker image
The image builds like this in Jenkins:
insideNode { // executed inside the container docker-compose is building
    withEnv(['COMPOSE_OPTIONS=-e http_proxy -e https_proxy -e no_proxy']) {
        sh """
        export http_proxy=${env.http_proxy}
        export https_proxy=${env.https_proxy}
        export no_proxy=${env.no_proxy}
        VERSION=${project.version} docker-compose --file docker-compose.yml build node-container
        """
    }
}
The insideNode closure is executed inside the Docker container I'm building, so the image uses itself to build a new version of itself. I'm trying to update those proxy variables, which, as you can see, are drawn from the environment (originally the Jenkins container's, but now the container's own).
If I update the environment variables in the overall Jenkins environment, do they flow down to the container build here? I would assume the closure uses the container's own environment, which means I can't really update them: the container needs the updated values in order to build with the updated values, so it becomes a cyclical dependency. In this scenario, how would I overwrite those env vars?
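For reference, here is a sketch of what I imagine overriding them explicitly would look like (the proxy values and version are made up), so the shell step no longer reads the values back out of the container's own environment:
withEnv(['http_proxy=http://proxy.example:3128',   // placeholder proxy values
         'https_proxy=http://proxy.example:3128',
         'no_proxy=localhost,127.0.0.1',
         'COMPOSE_OPTIONS=-e http_proxy -e https_proxy -e no_proxy']) {
    sh 'VERSION=1.2.3 docker-compose --file docker-compose.yml build node-container' // version hardcoded for illustration
}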
We have used the technique detailed here to expose host environment variables to the Docker build in a secure fashion.
# syntax=docker/dockerfile:1.2
FROM golang:1.18 AS builder
# move secrets out of the build process (and docker history)
RUN --mount=type=secret,id=github_token,dst=/app/secret_github_token,required=true,uid=10001 \
    export GITHUB_TOKEN=$(cat /app/secret_github_token) && \
    <nice command that uses $GITHUB_TOKEN>
And this command to build the image:
export DOCKER_BUILDKIT=1
docker build --secret id=github_token,env=GITHUB_TOKEN -t cool-image-bro .
The above works perfectly.
Now we also have a docker-compose file running in CI that needs to be modified. However, even though I confirmed that the environment variables are present in that job, I do not know how to assign the environment variable to the github_token named secret ID.
In other words, what is the equivalent docker-compose command (up --build, or build) that can accept a mapping of an environment variable with a secret ID?
Turns out I was a bit ahead of the times. docker compose v2.5.0 brings support for secrets.
After modifying the Dockerfile as explained above, we must then update the docker-compose file to define the secrets.
docker-compose.yml
services:
  my-cool-app:
    build:
      context: .
      secrets:
        - github_user
        - github_token
    ...
secrets:
  github_user:
    file: secrets_github_user
  github_token:
    file: secrets_github_token
But where are those files secrets_github_user and secrets_github_token coming from? In your CI you also need to export the environment variables and save them to the default secrets file location. In our project we are using Tasks, so we added these two lines.
Note that we run this task from our CI; you could do it differently, without Tasks, for example.
- printenv GITHUB_USER > /root/project/secrets_github_user
- printenv GITHUB_TOKEN > /root/project/secrets_github_token
We then update the CircleCI config and add two environment variables to our job:
.circleci/config.yml
name-of-our-job:
  environment:
    DOCKER_BUILDKIT: 1
    COMPOSE_DOCKER_CLI_BUILD: 1
You might also need a more recent Docker version; I think this was introduced in a late 19.x or early 20.x release. I have used the following, and it works:
steps:
  - setup_remote_docker:
      version: 20.10.11
Now when running your docker-compose based commands, the secrets should be successfully mounted through docker-compose and available to correctly build or run your Dockerfile instructions!
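Putting the pieces together, the CI build step ends up looking roughly like this (a sketch; it assumes the compose file and the secret files shown above live in the working directory):
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
# write the secrets from the environment to the expected file locations
printenv GITHUB_USER > secrets_github_user
printenv GITHUB_TOKEN > secrets_github_token
docker-compose build my-cool-app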
I have a Dockerfile that sets environment variables common to all environments, whether dev, test, or production. But I have to set another environment variable that applies only to my development environment. I can't set it in the Dockerfile, because that file is managed by version control, so the change would be deployed to all environments.
How can I add an environment variable to a Docker container only in my local development environment?
If the env variable can be specified when the image is used, then just supplying the variable at that point makes more sense. For instance, if you are testing the image locally using the docker CLI, you can set the variable with:
docker run -e KEY=VALUE $image
If you are using other tools to test the image, there are always other methods to set env keys.
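For example, if the image runs under docker-compose, one common method (a sketch, assuming a service named app) is a local override file, which docker-compose merges in automatically and which you can keep out of version control:
# docker-compose.override.yml (gitignored; merged automatically by docker-compose)
services:
  app:
    environment:
      - KEY=VALUE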
If it's required for you to have the variable set at build time, you can specify build args inside the Dockerfile.
An example for that would be:
FROM someimage:v1
ARG DEV_ONLY_VAR
ENV KEY=$DEV_ONLY_VAR
Using this, you can specify the build arg DEV_ONLY_VAR in the build command by writing:
docker build --build-arg DEV_ONLY_VAR=VALUE .
Note that even without the ENV KEY=$DEV_ONLY_VAR line, the build arg is available like an environment variable during build time, in subsequent RUN steps.
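A minimal sketch of that behaviour (the echo is only there for illustration):
FROM someimage:v1
ARG DEV_ONLY_VAR
# no ENV line needed: the build arg is already visible to RUN steps at build time
RUN echo "building with DEV_ONLY_VAR=$DEV_ONLY_VAR"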
More on build args here
I am using Docker Compose version 1.5 or 1.6 with the nginx image. I want to parameterize the nginx config. To do that, I want to create a variable from $(basename some path). The problem is that Docker does not accept dynamic variables like that in the ENV part of the Dockerfile. Another problem is that I also cannot map those variables at build or run time, since Docker Compose does not accept dynamic, scripted variables. How can I overcome this issue?
FROM nginx
ENV myvar=$(basename /)
With this, it is impossible to build the image.
The other way was:
ARG myvar
ENV myvar2=$myvar
But my version of Docker Compose only allows me to set:
environment: myvar=$(basename mypathinthevolume/)
That also does not seem to work.
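One common workaround, sketched here with a placeholder path, is to evaluate the command in the shell that invokes the build and pass the result in as a build arg, since Dockerfile ENV lines are never shell-evaluated:
# compute the value outside Docker, then hand it to the build
myvar=$(basename /some/path)
docker build --build-arg myvar="$myvar" .
with the Dockerfile declaring:
FROM nginx
ARG myvar
ENV myvar2=$myvar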
After reading the config section of the twelve-factor app, I decided to override my config file's default values with environment variables.
I have 3 Dockerfiles, one for an API, one for a front-end and one for a worker. I have one docker-compose.yml to run those 3 services plus a database.
Now I'm wondering: should I define the environment variables in the Dockerfiles or in docker-compose.yml? What's the difference between using one rather than the other?
See this:
You can set environment variables in a service’s containers with the 'environment' key, just like with docker run -e VARIABLE=VALUE ...
Also, you can use ENV in a Dockerfile to define an environment variable.
The difference is:
An environment variable defined in a Dockerfile is not only used during docker build; it also persists into the container. This means that even if you do not pass -e to docker run, the container will still have the environment variable set to the value defined in the Dockerfile.
An environment variable defined in docker-compose.yaml, on the other hand, is only applied when the container runs.
The following example may make this clearer:
Dockerfile:
FROM alpine
ENV http_proxy http://123
docker-compose.yaml:
app:
  environment:
    - http_proxy=http://123
If you define the environment variable in the Dockerfile, every container that uses this image will also have http_proxy set to http://123. But in practice, you may need this proxy only while building the image; the people who run the container may not need a proxy at all, or may need a different one, so they would have to unset http_proxy in an entrypoint or override it with another value in docker-compose.yaml.
If you define the environment variable in docker-compose.yaml instead, each user can choose their own http_proxy when running docker-compose up, and http_proxy will not be set at all if the user does not configure it in docker-compose.yaml.
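You can see the difference for yourself (a quick sketch; proxy-demo is just a throwaway tag):
# build the Dockerfile above and inspect the resulting container's environment
docker build -t proxy-demo .
docker run --rm proxy-demo env | grep http_proxy
# prints http_proxy=http://123 even though no -e flag was passed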
I am having some trouble with my Docker containers and environment variables.
Currently I have a docker-compose.yml with the following defined:
version: '2.1'
services:
  some-service:
    build:
      context: .
    image: image/replacedvalues
    ports:
      - 8080
    environment:
      - PROFILE=acc
      - ENVA
      - ENVB
      - TZ=Europe/Berlin
  some-service-acc:
    extends:
      service: some-service
    environment:
      - SERVICE_NAME=some-service-acc
Now when I deploy this manually (directly via the SSH command line) on Server A, it takes the environment variables from Server A and puts them in my container, so I have the host's values of ENVA and ENVB inside the container. I use the following command (after building the image, of course): docker-compose up some-service-acc.
We are currently developing a better infrastructure and want to deploy services via Jenkins. Jenkins is up and running in a Docker container on Server B.
I can deploy the service via Jenkins (Job-DSL, setting DOCKER_HOST="tcp://serverA:2375" temporarily), so it runs all docker (compose) commands on Server A from the Jenkins container on Server B. The service comes up and runs, except that it has no values for ENVA and ENVB.
Jenkins runs the following with the Job-DSL groovy script:
withEnv(["DOCKER_HOST=tcp://serverA:2375"]) {
sh "docker-compose pull some-service-acc"
sh "docker-compose -p some-service-acc up -d some-service-acc"
}
I tried setting them in my Jenkins container and on Server B itself, but neither worked. Only when I deploy manually, directly on Server A, does it work.
When I use docker inspect on the running container, I get the following output for the Env block:
"Env": [
"PROFILE=acc",
"affinity:container==JADFG09gtq340iggIN0jg53ij0gokngfs",
"TZ=Europe/Berlin",
"SERVICE_NAME=some-service-acc",
"ENVA",
"ENVB",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=C.UTF-8",
"JAVA_VERSION=8",
"JAVA_UPDATE=121",
"JAVA_BUILD=13",
"JAVA_PATH=e9e7ea248e2c4826b92b3f075a80e441",
"JAVA_HOME=/usr/lib/jvm/default-jvm",
"JAVA_OPTS="
]
Where do I need to set the environment variables so that they are passed to the container? I would prefer to store the variables on Server A, but if this is not possible, can someone explain to me how it could be done? It is not an option to hardcode the values in the compose file or anywhere else in the source, as they contain sensitive data.
If I am asking this in the wrong place, please redirect me to where I should ask.
Thanks!
You need to set the environment variables in the shell that is running the docker-compose command line. In Jenkins, that's best done inside your groovy script (Jenkins doesn't use the host environment within the build slave):
withEnv(["DOCKER_HOST=tcp://serverA:2375", "ENVA=hello", "ENVB=world"]) {
sh "docker-compose pull some-service-acc"
sh "docker-compose -p some-service-acc up -d some-service-acc"
}
Edit: from the comments, you also want to pass secrets.
To do that, there are plugins like Mask Passwords that allow you to pass variables without them showing up in the logs or job configuration. (I'm fairly certain a determined intruder could still get to the values, since Jenkins itself knows them and passes them to your script in clear text.)
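For instance, with the Credentials Binding plugin (my suggestion here, not something Mask Passwords requires; the credential IDs are hypothetical), secrets stored in Jenkins can be exposed as masked environment variables for the duration of the block:
// hypothetical credential IDs; the values are masked in the build log
withCredentials([string(credentialsId: 'enva-secret', variable: 'ENVA'),
                 string(credentialsId: 'envb-secret', variable: 'ENVB')]) {
    sh "docker-compose -p some-service-acc up -d some-service-acc"
}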
The better option, IMO, is to use a secrets management tool inside of Docker. Hashicorp has their Vault product, which implements an encrypted K/V store where values are accessed with a time-limited token, and which offers the ability to generate new passwords per request, with integration into the target system. I'd consider this the highest level of security when fully configured, but you can configure it in countless ways to suit your own needs. You'll need to write something to pull the secret and inject it into your container's environment (it's a REST protocol that you can call from your entrypoint).
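As a rough sketch of that entrypoint idea (the Vault path, secret name, and the use of jq are assumptions on my part; it targets Vault's KV v2 REST API):
#!/bin/sh
# fetch a secret over Vault's REST API with a short-lived token, export it,
# then hand off to the container's real command
export ENVA="$(curl -s -H "X-Vault-Token: $VAULT_TOKEN" \
  "$VAULT_ADDR/v1/secret/data/some-service" | jq -r '.data.data.ENVA')"
exec "$@"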
The latest option from Docker itself is secrets management that requires the new Swarm Mode. You save your secret in the swarm and add it to the containers you want as a file using an entry in the docker-compose.yml version 3 format. If you already use Swarm Mode and can start your containers with docker stack deploy instead of docker-compose, this is a fairly easy solution to implement.
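A sketch of that last option (the secret name and value are made up; it needs Swarm Mode and the version 3 compose format). The secret then shows up in the container as the file /run/secrets/enva rather than as an environment variable:
# store the secret in the swarm, then deploy the stack
echo "supersecret" | docker secret create enva -
docker stack deploy -c docker-compose.yml some-service-stack
with a docker-compose.yml along these lines:
version: '3.1'
services:
  some-service-acc:
    image: image/replacedvalues
    secrets:
      - enva
secrets:
  enva:
    external: true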