I have tried everything I could find in the Dockerfile and docker-compose references to pass an environment variable to the Docker image, and nothing worked.
I want to make this env var available during docker build when using docker-compose.
On the Docker host I have:
export BUILD_VERSION=1.0
app.js:
console.log('BUILD_VERSION: ' + process.env.BUILD_VERSION);
Dockerfile:
FROM node
ADD app.js /
ARG BUILD_VERSION
ENV BUILD_VERSION=$BUILD_VERSION
RUN echo Build Time: $BUILD_VERSION
RUN node /app.js
CMD echo Run Time: $BUILD_VERSION
docker-compose.yml:
version: '3'
services:
app:
build:
context: .
args:
- BUILD_VERSION
If I build the image directly, the env var is passed fine:
docker build -t test --no-cache --build-arg BUILD_VERSION .
and is also available at run-time:
$ docker run --rm test
Run Time: 1.0
$ docker run --rm test node /app
BUILD_VERSION: 1.0
but not with docker-compose:
docker-compose up --build
...
Step 5/7 : RUN echo Build Time: $BUILD_VERSION
---> Running in 6115161f33bf
Build Time:
---> c691c619018a
Removing intermediate container 6115161f33bf
Step 6/7 : RUN node /app.js
---> Running in f51831cc5e1e
BUILD_VERSION:
It's only available at run-time:
$ docker run --rm test
Run Time: 1.0
$ docker run --rm test node /app
BUILD_VERSION: 1.0
I also tried using environment in docker-compose.yml as below, which again makes the variable available only at run-time, not at build-time:
version: '3'
services:
app:
build:
context: .
environment:
- BUILD_VERSION
Please advise: how can I make this work in the least convoluted way?
Your example is working for me.
Have you tried deleting the images and building again? Docker won't rebuild your image when only environment variables have changed if the layers are already in the cache.
You can delete them with:
docker-compose down --rmi all
Edit: here is how it works for me at build time:
$ cat Dockerfile
FROM alpine
ARG BUILD_VERSION
ENV BUILD_VERSION=$BUILD_VERSION
RUN echo Build Time: $BUILD_VERSION
$ cat docker-compose.yml
version: '3'
services:
app:
build:
context: .
args:
- BUILD_VERSION
Build:
$ export BUILD_VERSION=122221
$ docker-compose up --build
Creating network "a_default" with the default driver
Building app
Step 1/4 : FROM alpine
latest: Pulling from library/alpine
8e3ba11ec2a2: Pull complete
Digest: sha256:7043076348bf5040220df6ad703798fd8593a0918d06d3ce30c6c93be117e430
Status: Downloaded newer image for alpine:latest
---> 11cd0b38bc3c
Step 2/4 : ARG BUILD_VERSION
---> Running in b0a1a79967a0
Removing intermediate container b0a1a79967a0
---> 9fa331d63f6d
Step 3/4 : ENV BUILD_VERSION=$BUILD_VERSION
---> Running in a602c27689a5
Removing intermediate container a602c27689a5
---> bf2181423c93
Step 4/4 : RUN echo Build Time: $BUILD_VERSION <<<<<< (*)
---> Running in 9d828cefcfab
Build Time: 122221
Removing intermediate container 9d828cefcfab
---> 2b3afa3d348c
Successfully built 2b3afa3d348c
Successfully tagged a_app:latest
Creating a_app_1 ... done
Attaching to a_app_1
a_app_1 exited with code 0
As the other answer mentioned, you can use docker-compose build --no-cache. You can also omit the service name ("app") when you have multiple services, and docker-compose will build all of them. To handle different build versions in the same docker-compose build, use a separate env var per service, like:
$ cat docker-compose.yml
version: '3'
services:
app1:
build:
context: .
args:
- BUILD_VERSION=$APP1_BUILD_VERSION
app2:
build:
context: .
args:
- BUILD_VERSION=$APP2_BUILD_VERSION
Export:
$ export APP1_BUILD_VERSION=1.1.1
$ export APP2_BUILD_VERSION=2.2.2
Build:
$ docker-compose build
Building app1
Step 1/4 : FROM alpine
latest: Pulling from library/alpine
8e3ba11ec2a2: Pull complete
Digest: sha256:7043076348bf5040220df6ad703798fd8593a0918d06d3ce30c6c93be117e430
Status: Downloaded newer image for alpine:latest
---> 11cd0b38bc3c
Step 2/4 : ARG BUILD_VERSION
---> Running in 0b66093bc2ef
Removing intermediate container 0b66093bc2ef
---> 906130ee5da8
Step 3/4 : ENV BUILD_VERSION=$BUILD_VERSION
---> Running in 9d89b48c875d
Removing intermediate container 9d89b48c875d
---> ca2480695149
Step 4/4 : RUN echo Build Time: $BUILD_VERSION
---> Running in 52dec27874ec
Build Time: 1.1.1
Removing intermediate container 52dec27874ec
---> 1b3654924297
Successfully built 1b3654924297
Successfully tagged a_app1:latest
Building app2
Step 1/4 : FROM alpine
---> 11cd0b38bc3c
Step 2/4 : ARG BUILD_VERSION
---> Using cache
---> 906130ee5da8
Step 3/4 : ENV BUILD_VERSION=$BUILD_VERSION
---> Running in d29442339459
Removing intermediate container d29442339459
---> 8b26def5ef3a
Step 4/4 : RUN echo Build Time: $BUILD_VERSION
---> Running in 4b3de2d223e5
Build Time: 2.2.2
Removing intermediate container 4b3de2d223e5
---> 89033b10b61e
Successfully built 89033b10b61e
Successfully tagged a_app2:latest
You need to declare the argument in docker-compose.yml as shown; it will then be populated from the environment variable you pass:
version: '3'
services:
app:
build:
context: .
args:
- BUILD_VERSION
Next, export the environment variable you need to pass:
$ export BUILD_VERSION=1.0
Now build the image with:
$ docker-compose build --no-cache --build-arg BUILD_VERSION=$BUILD_VERSION app
You can pass args for the build from the docker-compose file through to docker build. It is surprising that the env vars aren't shared between run and build.
docker-compose.yml:
version: '3'
services:
app:
build:
context: .
environment:
- BUILD_VERSION
args:
- BUILD_VERSION=${BUILD_VERSION}
volumes:
...
Dockerfile:
FROM node
ADD app.js /
ARG BUILD_VERSION
ENV BUILD_VERSION=$BUILD_VERSION
RUN echo Build Time: $BUILD_VERSION
RUN node /app.js
CMD echo Run Time: $BUILD_VERSION
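As a side note, instead of export-ing BUILD_VERSION in the shell, docker-compose can also pick the value up from a .env file placed next to docker-compose.yml when substituting ${BUILD_VERSION}; a minimal sketch (the value is just an example):

```
# .env — read automatically by docker-compose for variable substitution
BUILD_VERSION=1.0
```

This keeps the version pinned in the project instead of in each developer's shell.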
Related
My project structure looks as follows.
app/
docker-compose.yml
test_some_pytest.py # this has some pytest code
tests.Dockerfile
My tests.Dockerfile looks as follows.
FROM python:3.4-alpine
RUN python --version
RUN pip --version
COPY . /APP
WORKDIR /APP
RUN pip install pytest
RUN ["pytest"]
and docker-compose.yml as follows.
services:
tests:
build:
context: .
dockerfile: tests.Dockerfile
When I run docker-compose up --build tests, pytest does run, but apparently somewhere else; it shows the following output.
.
.
.
Removing intermediate container 96f9a8ba43d2
---> 82c89715d4c0
Step 7/7 : RUN ["pytest"]
---> Running in c30ee497e5f5
============================= test session starts ==============================
platform linux -- Python 3.4.10, pytest-4.6.11, py-1.10.0, pluggy-0.13.1
rootdir: /python-test-calculator
collected 0 items
========================= no tests ran in 0.00 seconds =========================
The command 'pytest' returned a non-zero code: 5
ERROR: Service 'tests' failed to build : Build failed
If I use your tests.Dockerfile exactly as written, the following docker-compose.yaml:
version: "3"
services:
tests:
build:
context: .
dockerfile: tests.Dockerfile
And the following test_some_pytest.py:
def test_something():
assert True
It successfully runs pytest when I run docker-compose build:
$ docker-compose build
Building tests
[...]
Step 7/7 : RUN ["pytest"]
---> Running in 8d8a1f44913f
============================= test session starts ==============================
platform linux -- Python 3.4.10, pytest-4.6.11, py-1.10.0, pluggy-0.13.1
rootdir: /APP
collected 1 item
test_some_pytest.py . [100%]
=========================== 1 passed in 0.01 seconds ===========================
Removing intermediate container 8d8a1f44913f
---> 055afd5b1f8d
Successfully built 055afd5b1f8d
Successfully tagged docker_tests:latest
You can see from the above output that pytest discovered and successfully ran 1 test.
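One detail worth knowing when you see "collected 0 items" instead: by default pytest only collects files named test_*.py or *_test.py (and test_* functions inside them), resolved under the rootdir it prints. A small shell sketch of that filename rule (the would_collect function is mine, not pytest's):

```shell
# Mirror pytest's default file-discovery globs with shell case matching.
would_collect() {
  case "$1" in
    test_*.py|*_test.py) echo "collected" ;;
    *)                   echo "ignored" ;;
  esac
}

would_collect test_some_pytest.py   # prints: collected
would_collect some_pytest.py        # prints: ignored
```

In the question's failing output, the rootdir was /python-test-calculator rather than /APP, which suggests the build ran against a different context than expected.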
This is a cut-down example of a problem I'm having with a bigger Dockerfile.
Here's a Dockerfile:
FROM alpine:latest AS base
COPY docker-compose.yml /tmp/docker-compose.yml
RUN touch /tmp/foo
Here's a docker-compose.yml:
version: '3.5'
services:
web:
build:
context: .
What I expect is that docker build will be able to reuse the cached layers that docker-compose build produces. Here is what I see when I run docker-compose build web:
$ docker-compose build web
Building web
Step 1/3 : FROM alpine:latest AS base
---> f70734b6a266
Step 2/3 : COPY docker-compose.yml /tmp/docker-compose.yml
---> 764c54eb3dd4
Step 3/3 : RUN touch /tmp/foo
---> Running in 77bdf96af899
Removing intermediate container 77bdf96af899
---> 7d8197f7004f
Successfully built 7d8197f7004f
Successfully tagged docker-compose-caching_web:latest
If I re-run docker-compose build web, I get:
...
Step 2/3 : COPY docker-compose.yml /tmp/docker-compose.yml
---> Using cache
---> 764c54eb3dd4
...
So it's clearly able to cache the layer with the file in it. However, when I run docker build ., here's the output I see:
$ docker build .
Sending build context to Docker daemon 3.072kB
Step 1/3 : FROM alpine:latest AS base
---> f70734b6a266
Step 2/3 : COPY docker-compose.yml /tmp/docker-compose.yml
---> e8679333ba0d
Step 3/3 : RUN touch /tmp/foo
---> Running in af26cc65312d
Removing intermediate container af26cc65312d
---> 186c8341ee96
Successfully built 186c8341ee96
Note that step 2 didn't come from the cache. Why not? Or, more importantly, how can I ensure that it does, without using --cache-from?
The problem this causes is that after this step in my bigger Dockerfile that I'm not showing, there's a honking great RUN command that takes an age to run. How can I get docker build and docker-compose build to share cache layers?
(Docker Desktop v 2.3.0.2 (45183) on OS X 10.14.6 for those playing along at home)
With Docker-compose 1.25+ (Dec. 2019), try and use:
COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build
That is what is needed to make Compose build with the docker CLI instead of its own internal builder.
See also "Faster builds in Docker Compose 1.25.1 thanks to BuildKit Support".
But be aware of docker-compose issue 7336 when using it with DOCKER_BUILDKIT=1 (in addition to COMPOSE_DOCKER_CLI_BUILD=1).
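Concretely, both switches are plain environment variables, so you can export them once per shell session; a sketch (the docker invocations are commented out since they need a running daemon):

```shell
# Route Compose builds through the docker CLI (COMPOSE_DOCKER_CLI_BUILD)
# and switch that CLI to BuildKit (DOCKER_BUILDKIT), so docker build and
# docker-compose build share one builder and one layer cache.
export COMPOSE_DOCKER_CLI_BUILD=1
export DOCKER_BUILDKIT=1
# docker-compose build web
# docker build .
```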
Looks like a known issue. For reasons I don't entirely understand, the hashes generated by docker-compose build differ from those generated by docker build.
https://github.com/docker/compose/issues/883
I have enabled user namespace mapping in Docker. Building an image with docker build works, but when I build the same image through docker-compose it fails with the message below. What can be the reason?
db@vagrant:~/docker$ docker-compose up --build
Building db
Step 1/3 : FROM alpine:latest
---> e7d92cdc71fe
Step 2/3 : WORKDIR /app
---> Using cache
---> 1491149423a1
Step 3/3 : COPY 1.txt .
ERROR: Service 'db' failed to build: failed to copy files: failed to copy file: Container ID 65536 cannot be mapped to a host ID
My user ID is generated by some setup scripts, which results in a UID larger than 65535.
db@vagrant:~/docker$ id
uid=65536(db) gid=1000(db) groups=1000(db),27(sudo),998(docker)
Docker configuration for namespace mapping
db@vagrant:~/docker$ cat /etc/docker/daemon.json
{
"userns-remap": "db"
}
db@vagrant:~/docker$ cat /etc/subuid /etc/subgid
db:100000:65536
db:100000:65536
Dockerfile contents (1.txt is an empty file)
db@vagrant:~/docker$ cat Dockerfile
FROM alpine:latest
WORKDIR /app
COPY 1.txt .
docker-compose.yml file contents
db@vagrant:~/docker$ cat docker-compose.yml
version: "2"
services:
db:
build:
context: .
dockerfile: Dockerfile
image: sirishkumar/test
Output of docker build command
db@vagrant:~/docker$ docker build -t sirishkumar/test .
Sending build context to Docker daemon 3.584kB
Step 1/3 : FROM alpine:latest
latest: Pulling from library/alpine
c9b1b535fdd9: Pull complete
Digest: sha256:ab00606a42621fb68f2ed6ad3c88be54397f981a7b70a79db3d1172b11c4367d
Status: Downloaded newer image for alpine:latest
---> e7d92cdc71fe
Step 2/3 : WORKDIR /app
---> Running in 55f092b96268
Removing intermediate container 55f092b96268
---> 8af079e6a478
Step 3/3 : COPY 1.txt .
---> b3c14a691102
Successfully built b3c14a691102
Successfully tagged sirishkumar/test:latest
Output of docker-compose
db@vagrant:~/docker$ docker-compose up --build
Creating network "docker_default" with the default driver
Building db
Step 1/3 : FROM alpine:latest
latest: Pulling from library/alpine
c9b1b535fdd9: Pull complete
Digest: sha256:ab00606a42621fb68f2ed6ad3c88be54397f981a7b70a79db3d1172b11c4367d
Status: Downloaded newer image for alpine:latest
---> e7d92cdc71fe
Step 2/3 : WORKDIR /app
---> Running in fe39955aed1a
Removing intermediate container fe39955aed1a
---> fb23b8888f4a
Step 3/3 : COPY 1.txt .
ERROR: Service 'db' failed to build: failed to copy files: failed to copy file: Container ID 65536 cannot be mapped to a host ID
You have a range of 65,536 user IDs to map into your docker user namespace:
db@vagrant:~/docker$ cat /etc/subuid /etc/subgid
db:100000:65536
db:100000:65536
And then you're telling docker to copy a file into the container owned by an ID outside of that range (user IDs start at 0):
db@vagrant:~/docker$ id
uid=65536(db) gid=1000(db) groups=1000(db),27(sudo),998(docker)
You need to change your user ID on the host so it falls within the mappable range (less than 65536).
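The arithmetic behind the error, as a sketch: with db:100000:65536, container IDs 0 through 65535 map to host IDs 100000 through 165535, so ID 65536 falls one past the end of the range:

```shell
# subuid mapping: container IDs [0, length-1] map to host IDs [start, start+length-1]
start=100000; length=65536; container_id=65536

if [ "$container_id" -lt "$length" ]; then
  echo "container ID $container_id maps to host ID $((start + container_id))"
else
  echo "container ID $container_id cannot be mapped (valid IDs: 0..$((length - 1)))"
fi
```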
I'm working behind a corporate proxy. I'm on Windows 10 machine but I'm also running VMware. Inside VMware I'm running a Fedora 28 image. Because I'm behind a corporate proxy I have to run an NTLM proxy server on Windows 10 so that Fedora can connect to the internet.
I have everything set up and I can access the internet and pull docker images perfectly. I'm following this tutorial here on docker-compose.
This is my directory structure:
docker-compose-tut
├── commander
│ └── Dockerfile
└── docker-compose.yml
Dockerfile
FROM node:latest
ENV http_proxy=http://prx:3128
ENV https_proxy=http://prx:3128
ENV ftp_proxy=http://prx:3128
ENV no_proxy=localhost,127.0.0.1
RUN curl -L https://github.com/joeferner/redis-commander/tarball/v0.4.5 | tar zx
RUN npm install -g redis-commander
ENTRYPOINT ["redis-commander"]
CMD ["--redis-host", "redis"]
EXPOSE 8081
docker-compose.yml
backend:
image: redis:latest
restart: always
frontend:
build: commander
links:
- backend:redis
environment:
- ENV_VAR1 = some_value
ports:
- 8081:8081
environment:
- VAR1=value
restart: always
When I run the command docker-compose up -d I get the following output
[root@localhost docker-compose-tut]$ docker-compose up -d
Building frontend
Step 1/10 : FROM node:latest
---> b064644cf368
Step 2/10 : ENV http_proxy http://prx:3128
---> Using cache
---> f70ae2e24003
Step 3/10 : ENV https_proxy http://prx:3128
---> Using cache
---> 12a4e65a3874
Step 4/10 : ENV ftp_proxy http://prx:3128
---> Using cache
---> 77abdce2f8d7
Step 5/10 : ENV no_proxy localhost,127.0.0.1
---> Using cache
---> 467c4f25e4f7
Step 6/10 : RUN curl -L https://github.com/joeferner/redis-commander/tarball/v0.4.5 | tar zx
---> Using cache
---> e3f8b2d8ad64
Step 7/10 : RUN npm install -g redis-commander
---> Running in 3189b0fa1086
npm WARN deprecated ejs#0.8.8: Critical security bugs fixed in 2.5.5
When I enter the command docker-compose ps, this is literally the output:
Name Command State Ports
------------------------------
When I enter the command docker-compose config --services, the output is the following:
backend
frontend
If I open the browser and go to http://localhost:8081, the browser says it can't establish a connection to http://localhost:8081.
What am I doing wrong?
I'm familiar with ARG, which allows arguments to be passed into a Dockerfile, like so:
Dockerfile:
FROM ubuntu:latest
ARG foo
RUN echo $foo
$ docker build --build-arg foo=foo .
Sending build context to Docker daemon 2.048 kB
Step 1/3 : FROM ubuntu:latest
---> 00fd29ccc6f1
Step 2/3 : ARG foo
---> Running in 8f6ddda3254d
---> 9c658744762b
Removing intermediate container 8f6ddda3254d
Step 3/3 : RUN echo $foo
---> Running in 37bcbf3c5052
foo
---> 0e162e793204
Removing intermediate container 37bcbf3c5052
Successfully built 0e162e793204
However, what I want is to forward an environment variable from the host into the Dockerfile, without the need for the user to specify the --build-arg. So, for example, I want them to be able to execute this:
$ export foo=foo
$ docker build .
And get the same result.
Is this possible?
The easiest way to do this is to use docker-compose to build, with a docker-compose file like the following:
my-awesome-service:
build:
context: .
dockerfile: Dockerfile
args:
- FOO=${FOO}
Then your user can run docker-compose build and the FOO variable will be forwarded into the Dockerfile. See: https://docs.docker.com/compose/compose-file/#args
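If you'd rather not bring in Compose just for this, a thin wrapper script is another option; a sketch (build.sh and the "dev" fallback are my assumptions, not part of the question):

```shell
#!/bin/sh
# build.sh (hypothetical): forward FOO from the caller's environment
# into the image build. ${FOO:-dev} expands to $FOO when set, else "dev".
FOO_ARG="${FOO:-dev}"
echo "building with FOO=$FOO_ARG"
# docker build --build-arg FOO="$FOO_ARG" .
```

Users then run ./build.sh after export FOO=foo, and the fallback keeps the build working when FOO is unset.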