Both ARGs and ENVs are available to a Dockerfile at build time, but Docker Compose apparently only lets you specify ARGs in service.build.args. ENVs specified in service.environment are apparently not visible at build time (which also makes sense given how they are passed).
So if my build depends on ENVs (as well as ARGs) and if I build with docker-compose build, how can I provide the build-time ENVs inside my docker-compose.yaml?
There's no way to externally pass environment variables into a Dockerfile, whether via docker build or the Compose build: block. You can only specify arguments.
If you really need to set an environment variable at build time, you can pass it as a build argument and then set the environment variable in the Dockerfile:
ARG FOO
ENV FOO=${FOO}
You must rebuild the image whenever one of these values changes, which makes this technique a poor fit for deployment-specific settings like user IDs, host names, etc. It is fine, though, for values that are legitimately fixed properties of the image, such as container-side port numbers and filesystem paths.
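On the Compose side you would then supply the value through the service's build.args. A minimal sketch, assuming a service named app and reusing the FOO argument from the snippet above (the service name and value are placeholders, not from the original answer):

services:
  app:
    build:
      context: .
      args:
        FOO: some-build-time-value

Combined with the ARG/ENV pair in the Dockerfile, FOO is then visible both during the build and as an environment variable in containers started from the image.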
Related
I have a Dockerfile that sets environment variables common to all environments, whether dev, test, or production, but I also need to set an environment variable that only applies to my development environment. I can't set it in the Dockerfile because that file is under version control, so the change would be deployed to all environments.
How can I add an environment variable to a Docker container only in my local development environment?
If the env variable can be supplied when the image is used, then setting it at that point makes more sense. For instance, if you are testing the image locally with the docker CLI, you can set the variable with:
docker run -e KEY=VALUE $image
If you are using other tools to test the image, they generally offer their own ways to set environment variables.
If you really need the variable to be set at build time, you can declare build args inside the Dockerfile.
An example for that would be:
FROM someimage:v1
ARG DEV_ONLY_VAR
ENV KEY=$DEV_ONLY_VAR
Using this, you can specify the build arg DEV_ONLY_VAR in the build command by writing:
docker build --build-arg DEV_ONLY_VAR=VALUE .
Note that even without the ENV KEY=$DEV_ONLY_VAR line, the build arg is available like an environment variable during build time, in subsequent RUN steps.
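For example, a RUN step can read the build arg directly; a minimal sketch (the echo command is only an illustration, not from the original answer):

FROM someimage:v1
ARG DEV_ONLY_VAR
RUN echo "building with DEV_ONLY_VAR=$DEV_ONLY_VAR"

The value is only available while the build runs; without an ENV line it is not set as an environment variable in containers started from the final image.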
More on build args here
After reading the config section of the 12-factor app I decided to override my config file's default values with environment variables.
I have 3 Dockerfiles, one for an API, one for a front-end and one for a worker. I have one docker-compose.yml to run those 3 services plus a database.
Now I'm wondering whether I should define the environment variables in the Dockerfiles or in docker-compose.yml. What's the difference between using one rather than the other?
See this:
You can set environment variables in a service’s containers with the 'environment' key, just like with docker run -e VARIABLE=VALUE ...
Also, you can use ENV in a Dockerfile to define an environment variable.
The difference is:
An environment variable defined in the Dockerfile is not only used during docker build; it also persists into the container. This means that even if you do not pass -e to docker run, the container will still have the environment variable set as it was defined in the Dockerfile.
An environment variable defined in docker-compose.yaml, on the other hand, is only applied when the container is run.
The following example may make this clearer:
Dockerfile:
FROM alpine
ENV http_proxy=http://123
docker-compose.yaml:
app:
  environment:
    - http_proxy=http://123
If you define the environment variable in the Dockerfile, every container that uses this image will have http_proxy set to http://123. But the real situation may be that you only needed the proxy while building the image; the people who run the container may not need a proxy at all, or may need a different http_proxy, so they would have to unset http_proxy in the entrypoint or override it with another value in docker-compose.yaml.
If you define the environment variable in docker-compose.yaml, each user can choose their own http_proxy when running docker-compose up, and http_proxy will simply not be set if the user does not configure it in docker-compose.yaml.
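You can also let Compose take the value from the shell of whoever runs it, so nothing is hard-coded in the file. A minimal sketch, assuming the host shell exports http_proxy (this variant is an addition, not part of the original answer):

app:
  environment:
    - http_proxy=${http_proxy}

If the shell does not set http_proxy, Compose substitutes an empty string, which effectively leaves it unset for most tools.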
Suppose I have a docker-compose.yml file locally, and a .env file that contains secrets and variables valid for my local environment. I build the stack using docker-compose build and push it to an image registry using docker-compose push.
Does this mean that any other environment that does a docker-compose pull && docker-compose up from that repository will receive an image with the private environment variables already available inside the image (which might contain secret stuff like access tokens)?
Or in other words: are the things defined in the .env file available at image build time or at container runtime?
It depends on your Dockerfile, i.e. on how you define the image build.
If you accidentally copy the .env file into the image at the build stage (in the Dockerfile), it might get uploaded along with the image.
If you only use it via the Compose file (the env_file parameter), the variables will only be passed to containers at runtime. In that case you will need to make sure that the .env file exists wherever you decide to run the containers.
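For illustration, a runtime-only setup might look like this (the service name and image name are assumptions, not taken from the question):

services:
  api:
    image: registry.example.com/my-api:latest
    env_file:
      - .env

Here nothing from .env is baked into the image that gets pushed; the values are injected only when a container is started next to a local .env file.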
If I'm using Docker with nginx for hosting a web app, how can I use either
Variables in my docker-compose.yml file
Environment variables such as HOSTNAME=example.com.
So that when I build the container, the value is inserted into the nginx.conf file that I copy over during the build.
You can use environment variables in your compose file. According to the official docs:
Your configuration options can contain environment variables. Compose uses the variable values from the shell environment in which docker-compose is run. For example, suppose the shell contains POSTGRES_VERSION=9.3 and you supply this configuration:
db:
  image: "postgres:${POSTGRES_VERSION}"
When you run docker-compose up with this configuration, Compose looks for the POSTGRES_VERSION environment variable in the shell and substitutes its value in.
See the docs for more information. There you will find various other approaches to supplying environment variables, such as passing them through env_file.
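Applied to the nginx question above, you could combine this substitution with a build arg so that a value like HOSTNAME from your shell reaches the build. A minimal sketch, assuming a service named web (the names and the templating step are assumptions, not from the docs excerpt):

services:
  web:
    build:
      context: .
      args:
        HOSTNAME: ${HOSTNAME}

In the Dockerfile you would then declare ARG HOSTNAME and use it (for example with sed or envsubst) to render nginx.conf before copying it into place.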
I'm creating a Docker image for Atlassian JIRA.
Dockerfile can be found here: https://github.com/joelcraenhals/docker-jira/blob/master/Dockerfile
However I want to enable the HTTPS connector on the Tomcat server inside the Docker image during image creation so that the server.xml file is configured during image creation.
How can I modify a certain file in the container?
Alternative a)
I would say you are going down the wrong path here. You do not want to do this during image creation, but rather in the entrypoint.
It is very common, and best practice in Docker, to configure the service during the first container start, e.g. seed the database, generate passwords and seeds, and, as in your case, generate configuration from templates.
Usually those configuration files are controlled by ENV variables that you pass to docker run or, more commonly, set in your docker-compose.yml; in more complex environments the source of the configuration variables can be consul or etcd.
For your example, you could introduce an ENV variable USE_SSL and use sed in your entrypoint to rewrite the relevant parts of server.xml when it is set. But since you need much more, like setting the reverse proxy domain and similar things, you should go with tiller: https://github.com/markround/tiller
Create a server.xml.erb file, place the variables you want to be dynamic in it, use if conditions to exclude the HTTPS section when USE_SSL is not set, and let tiller use ENVIRONMENT as its datasource.
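If you prefer plain sed over tiller for the simple USE_SSL case, a minimal entrypoint sketch could look like the following (the marker comments in server.xml, the config path, and the exec line are assumptions for illustration, not from the original answer):

#!/bin/sh
# Uncomment a pre-marked HTTPS connector block in server.xml when USE_SSL is set.
if [ "$USE_SSL" = "true" ]; then
  sed -i 's/<!-- HTTPS_CONNECTOR_START//;s/HTTPS_CONNECTOR_END -->//' /opt/jira/conf/server.xml
fi
# Hand control over to the normal JIRA start command passed as CMD.
exec "$@"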
Alternative b)
If you really want to stay with the "on image build" concept (not recommended), you should use so-called build args: https://docs.docker.com/engine/reference/commandline/build/
Add this to your Dockerfile:
ARG USE_SSL
RUN /some_script_you_created_to_generate_server_xml.sh $USE_SSL
You still need a bash (or whatever) script, some_script_you_created_to_generate_server_xml.sh, which takes the argument and conditionally generates whatever you want. Tiller, though, will be much more convenient when things get bigger, compared to running a pile of seds/awks.
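As a rough idea of what such a script could do (purely illustrative; the template paths are assumptions, not from the original answer):

#!/bin/sh
# some_script_you_created_to_generate_server_xml.sh
# Pick a server.xml variant depending on the USE_SSL build arg passed as $1.
USE_SSL="$1"
if [ "$USE_SSL" = "yes" ]; then
  cp /opt/templates/server-ssl.xml /opt/jira/conf/server.xml
else
  cp /opt/templates/server-plain.xml /opt/jira/conf/server.xml
fi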
and then, when building the image, you could use
docker build . --build-arg USE_SSL=no -t yourtag
You need to extend this image with your custom config file. Write your own Dockerfile with the following content:
FROM <docker-jira image name>:<tag>
COPY <path to the server.xml on your computer, relative to Dockerfile dir> <path to desired location of server.xml inside the container>
After that you need to build and run your new image:
docker build . --tag <name of your image>
docker run <name of your image>
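Filled in with concrete, purely hypothetical names (assuming the upstream image is tagged docker-jira:latest and that JIRA's Tomcat config lives under /opt/jira/conf inside the image; check both against the actual image):

FROM docker-jira:latest
COPY server.xml /opt/jira/conf/server.xml

docker build . --tag my-jira-https
docker run my-jira-https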