Why is it possible that `docker-compose build` writes to an attached volume?

As far as I know, the commands in a Dockerfile affect only the built image, not the container it later runs in. But in this scenario the build appears to write files to the attached volume, which should be impossible, because the volume is not attached while the image is being built:
Dockerfile:
FROM alpine
RUN mkdir /data
RUN date > /data/timestamp
RUN echo "Content in image:" && cat /data/timestamp
docker-compose.yml:
version: "3"
services:
my-service:
build: .
image: my-image
entrypoint: cat /data/timestamp
volumes:
- my-volume:/data
other-service:
image: alpine
entrypoint: cat /data/timestamp
volumes:
- my-volume:/data
volumes:
my-volume:
Output:
$ docker-compose build --no-cache; docker-compose up
other-service uses an image, skipping
Building my-service
Step 1/4 : FROM alpine
---> e66264b98777
Step 2/4 : RUN mkdir /data
---> Running in 969f72e0e71c
Removing intermediate container 969f72e0e71c
---> 21277c5b67b6
Step 3/4 : RUN date > /data/timestamp
---> Running in 09b2e14d742a
Removing intermediate container 09b2e14d742a
---> ba94d6c58c1f
Step 4/4 : RUN echo "Directory content in image:" && cat /data/timestamp
---> Running in 985e8e48bd80
Content in image:
Fri Jul 15 11:20:14 UTC 2022
Removing intermediate container 985e8e48bd80
---> adcbeac42123
Successfully built adcbeac42123
Successfully tagged my-image:latest
Creating volume "docker-test_my-volume" with default driver
Creating docker-test_other-service_1 ... done
Creating docker-test_my-service_1 ... done
Attaching to docker-test_other-service_1, docker-test_my-service_1
my-service_1 | Fri Jul 15 11:20:14 UTC 2022
other-service_1 | Fri Jul 15 11:20:14 UTC 2022
docker-test_other-service_1 exited with code 0
docker-test_my-service_1 exited with code 0
The other-service should not be able to read the contents of /data/timestamp, because it uses a different image (alpine) and the file exists only in my-image, not in the volume. How is it possible that the file ends up on the volume? Nothing seems to change if I use VOLUME /data instead of RUN mkdir /data in the Dockerfile either; what should I expect from that instruction?
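For readers puzzling over the same output: a minimal sketch of the mechanism that appears to be at play here, using plain docker rather than compose (demo-volume is a made-up name). When an empty named volume is first mounted over a path, Docker copies the image's existing content at that path into the volume:
docker volume create demo-volume
# first mount of the empty volume seeds it from the image's /data
docker run --rm -v demo-volume:/data my-image
# a different image now sees the seeded file through the same volume
docker run --rm -v demo-volume:/data alpine cat /data/timestamp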

Related

Docker build and docker-compose build with user namespace mapping

I have enabled user namespace mapping in Docker. Building an image with docker build works, but when I use docker-compose for the same image it fails with the message below. What could be the reason?
db#vagrant:~/docker$ docker-compose up --build
Building db
Step 1/3 : FROM alpine:latest
---> e7d92cdc71fe
Step 2/3 : WORKDIR /app
---> Using cache
---> 1491149423a1
Step 3/3 : COPY 1.txt .
ERROR: Service 'db' failed to build: failed to copy files: failed to copy file: Container ID 65536 cannot be mapped to a host ID
My user id is generated by some setup scripts which results in UID with larger than 65535 value.
db#vagrant:~/docker$ id
uid=65536(db) gid=1000(db) groups=1000(db),27(sudo),998(docker)
Docker configuration for namespace mapping
db#vagrant:~/docker$ cat /etc/docker/daemon.json
{
"userns-remap": "db"
}
db#vagrant:~/docker$ cat /etc/subuid /etc/subgid
db:100000:65536
db:100000:65536
Dockerfile contents (1.txt is an empty file)
db#vagrant:~/docker$ cat Dockerfile
FROM alpine:latest
WORKDIR /app
COPY 1.txt .
docker-compose.yml file contents
db#vagrant:~/docker$ cat docker-compose.yml
version: "2"
services:
db:
build:
context: .
dockerfile: Dockerfile
image: sirishkumar/test
Output of docker build command
db#vagrant:~/docker$ docker build -t sirishkumar/test .
Sending build context to Docker daemon 3.584kB
Step 1/3 : FROM alpine:latest
latest: Pulling from library/alpine
c9b1b535fdd9: Pull complete
Digest: sha256:ab00606a42621fb68f2ed6ad3c88be54397f981a7b70a79db3d1172b11c4367d
Status: Downloaded newer image for alpine:latest
---> e7d92cdc71fe
Step 2/3 : WORKDIR /app
---> Running in 55f092b96268
Removing intermediate container 55f092b96268
---> 8af079e6a478
Step 3/3 : COPY 1.txt .
---> b3c14a691102
Successfully built b3c14a691102
Successfully tagged sirishkumar/test:latest
Output of docker-compose
db#vagrant:~/docker$ docker-compose up --build
Creating network "docker_default" with the default driver
Building db
Step 1/3 : FROM alpine:latest
latest: Pulling from library/alpine
c9b1b535fdd9: Pull complete
Digest: sha256:ab00606a42621fb68f2ed6ad3c88be54397f981a7b70a79db3d1172b11c4367d
Status: Downloaded newer image for alpine:latest
---> e7d92cdc71fe
Step 2/3 : WORKDIR /app
---> Running in fe39955aed1a
Removing intermediate container fe39955aed1a
---> fb23b8888f4a
Step 3/3 : COPY 1.txt .
ERROR: Service 'db' failed to build: failed to copy files: failed to copy file: Container ID 65536 cannot be mapped to a host ID
You have a range of 65,536 user IDs to map into your Docker user namespace:
db#vagrant:~/docker$ cat /etc/subuid /etc/subgid
db:100000:65536
db:100000:65536
And then you're telling Docker to copy a file into the container owned by an ID outside of that range (user IDs start at 0, so the range covers 0 through 65535):
db#vagrant:~/docker$ id
uid=65536(db) gid=1000(db) groups=1000(db),27(sudo),998(docker)
You need to set your user ID on the host to be within the host's mapped range (less than 65536).
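As a hedged alternative sketch (not the fix proposed above): widening the subordinate ID ranges should also allow container ID 65536 to be mapped, assuming you can edit these files and restart the Docker daemon afterwards:
$ cat /etc/subuid /etc/subgid
db:100000:65537
db:100000:65537
# a count of 65537 maps container IDs 0..65536 onto host IDs 100000..165536
$ sudo systemctl restart docker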

App not installed on docker image for dotnetcore visual studio template

I am starting out with Docker, so I created a basic .NET Core console app to use as a pathfinder.
Then I added docker-compose support targeting Windows containers.
I can build and run the image from Visual Studio, and even debug the app.
But when I try to run the same app from the Docker CLI, it seems the app was not published to the c:\app folder.
The app writes "Hello World" to STDOUT.
Here is the Dockerfile:
#Depending on the operating system of the host machines(s) that will build or run the containers, the image specified in the FROM statement may need to be changed.
#For more information, please see https://aka.ms/containercompat
FROM mcr.microsoft.com/dotnet/core/runtime:3.0-nanoserver-1903 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-nanoserver-1903 AS build
WORKDIR /src
COPY ["dotnetCore3.csproj", "./"]
RUN dotnet restore "dotnetCore3.csproj"
COPY . .
WORKDIR "/src/"
RUN dotnet build "dotnetCore3.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "dotnetCore3.csproj" -c Release -o /app/publish
docker-compose.yml:
version: '3.4'
services:
  dotnetcore3:
    image: *****/myregistry/dotnetcore3
    build:
      context: .
      dockerfile: Dockerfile
Running:
docker run *****/myregistry/dotnetcore3:dev
runs the shell inside the container instead of running the app.
Using the shell, I can see there is nothing in the c:\app folder.
Here is the full log from the Container Tools window:
========== Preparing Containers ==========
Getting Docker containers ready...
docker-compose -f "C:\Users\MS20004\source\repos\dotnetCore3\docker-compose.yml" -f "C:\Users\MS20004\source\repos\dotnetCore3\docker-compose.override.yml" -f "C:\Users\MS20004\source\repos\dotnetCore3\obj\Docker\docker-compose.vs.debug.g.yml" -p dockercompose15898560444299855188 --no-ansi config
networks:
  default:
    external:
      name: nat
services:
  dotnetcore3:
    build:
      context: C:\Users\*****\source\repos\dotnetCore3
      dockerfile: Dockerfile
      labels:
        com.microsoft.created-by: visual-studio
        com.microsoft.visual-studio.project-name: dotnetCore3
      target: base
    entrypoint: cmd /c "set DISABLE_PERFORMANCE_DEBUGGER=1 & C:\\remote_debugger\\x64\\msvsmon.exe
      /noauth /anyuser /silent /nostatus /noclrwarn /nosecuritywarn /nofirewallwarn
      /nowowwarn /timeout:2147483646 /LogDebuggeeOutputToStdOut"
    environment:
      NUGET_FALLBACK_PACKAGES: c:\.nuget\fallbackpackages
      NUGET_PACKAGES: C:\.nuget\packages
    image: dotnetcore3:dev
    labels:
      com.microsoft.visualstudio.debuggee.arguments: ' --additionalProbingPath c:\.nuget\packages
        --additionalProbingPath c:\.nuget\fallbackpackages "bin\Debug\netcoreapp3.0\dotnetCore3.dll"'
      com.microsoft.visualstudio.debuggee.killprogram: C:\remote_debugger\x64\utils\KillProcess.exe
        dotnet.exe
      com.microsoft.visualstudio.debuggee.program: '"C:\Program Files\dotnet\dotnet.exe"'
      com.microsoft.visualstudio.debuggee.workingdirectory: C:\app
    volumes:
      - C:\Users\*****\source\repos\dotnetCore3:C:\app:rw
      - C:\Users\*****\onecoremsvsmon\16.3.0040.0:C:\remote_debugger:ro
      - C:\Program Files\dotnet\sdk\NuGetFallbackFolder:c:\.nuget\fallbackpackages:ro
      - C:\Users\*****\.nuget\packages:c:\.nuget\packages:ro
version: '3.4'
docker ps --filter "status=running" --format {{.ID}};{{.Names}}
docker-compose -f "C:\Users\*****\source\repos\dotnetCore3\docker-compose.yml" -f "C:\Users\*****\source\repos\dotnetCore3\docker-compose.override.yml" -f "C:\Users\*****\source\repos\dotnetCore3\obj\Docker\docker-compose.vs.debug.g.yml" -p dockercompose15898560444299855188 --no-ansi build
Building dotnetcore3
Step 1/4 : FROM mcr.microsoft.com/dotnet/core/runtime:3.0-nanoserver-1903 AS base
---> 279077ab63e3
Step 2/4 : WORKDIR /app
---> Using cache
---> 6ce0262ac12a
Step 3/4 : LABEL com.microsoft.created-by=visual-studio
---> Using cache
---> 3756662eccd6
Step 4/4 : LABEL com.microsoft.visual-studio.project-name=dotnetCore3
---> Using cache
---> 71d353776b98
Successfully built 71d353776b98
Successfully tagged dotnetcore3:dev
docker-compose -f "C:\Users\*****\source\repos\dotnetCore3\docker-compose.yml" -f "C:\Users\*****\source\repos\dotnetCore3\docker-compose.override.yml" -f "C:\Users\*****\source\repos\dotnetCore3\obj\Docker\docker-compose.vs.debug.g.yml" -p dockercompose15898560444299855188 --no-ansi up -d --no-build --force-recreate --remove-orphans
Creating dockercompose15898560444299855188_dotnetcore3_1 ...
Creating dockercompose15898560444299855188_dotnetcore3_1 ... done
Done! Docker containers are ready.
========== Preparing Containers ==========
Getting Docker containers ready...
docker-compose -f "C:\Users\*****\source\repos\dotnetCore3\docker-compose.yml" -f "C:\Users\*****\source\repos\dotnetCore3\docker-compose.override.yml" -f "C:\Users\*****\source\repos\dotnetCore3\obj\Docker\docker-compose.vs.debug.g.yml" -p dockercompose15898560444299855188 --no-ansi config
networks:
  default:
    external:
      name: nat
services:
  dotnetcore3:
    build:
      context: C:\Users\*****\source\repos\dotnetCore3
      dockerfile: Dockerfile
      labels:
        com.microsoft.created-by: visual-studio
        com.microsoft.visual-studio.project-name: dotnetCore3
      target: base
    entrypoint: cmd /c "set DISABLE_PERFORMANCE_DEBUGGER=1 & C:\\remote_debugger\\x64\\msvsmon.exe
      /noauth /anyuser /silent /nostatus /noclrwarn /nosecuritywarn /nofirewallwarn
      /nowowwarn /timeout:2147483646 /LogDebuggeeOutputToStdOut"
    environment:
      NUGET_FALLBACK_PACKAGES: c:\.nuget\fallbackpackages
      NUGET_PACKAGES: C:\.nuget\packages
    image: *****/dockerhub/dotnetcore3:dev
    labels:
      com.microsoft.visualstudio.debuggee.arguments: ' --additionalProbingPath c:\.nuget\packages
        --additionalProbingPath c:\.nuget\fallbackpackages "bin\Debug\netcoreapp3.0\dotnetCore3.dll"'
      com.microsoft.visualstudio.debuggee.killprogram: C:\remote_debugger\x64\utils\KillProcess.exe
        dotnet.exe
      com.microsoft.visualstudio.debuggee.program: '"C:\Program Files\dotnet\dotnet.exe"'
      com.microsoft.visualstudio.debuggee.workingdirectory: C:\app
    volumes:
      - C:\Users\*****\source\repos\dotnetCore3:C:\app:rw
      - C:\Users\*****\onecoremsvsmon\16.3.0040.0:C:\remote_debugger:ro
      - C:\Program Files\dotnet\sdk\NuGetFallbackFolder:c:\.nuget\fallbackpackages:ro
      - C:\Users\*****\.nuget\packages:c:\.nuget\packages:ro
version: '3.4'
docker ps --filter "status=running" --format {{.ID}};{{.Names}}
365e9e5b6bb8;dockercompose15898560444299855188_dotnetcore3_1
docker exec -i 365e9e5b6bb8 C:\remote_debugger\x64\utils\KillProcess.exe dotnet.exe
docker-compose -f "C:\Users\*****\source\repos\dotnetCore3\docker-compose.yml" -f "C:\Users\*****\source\repos\dotnetCore3\docker-compose.override.yml" -f "C:\Users\*****\source\repos\dotnetCore3\obj\Docker\docker-compose.vs.debug.g.yml" -p dockercompose15898560444299855188 --no-ansi build
Building dotnetcore3
Step 1/4 : FROM mcr.microsoft.com/dotnet/core/runtime:3.0-nanoserver-1903 AS base
---> 279077ab63e3
Step 2/4 : WORKDIR /app
---> Using cache
---> 6ce0262ac12a
Step 3/4 : LABEL com.microsoft.created-by=visual-studio
---> Using cache
---> 3756662eccd6
Step 4/4 : LABEL com.microsoft.visual-studio.project-name=dotnetCore3
---> Using cache
---> 71d353776b98
Successfully built 71d353776b98
Successfully tagged *****/myregistry/dotnetcore3:dev
docker-compose -f "C:\Users\*****\source\repos\dotnetCore3\docker-compose.yml" -f "C:\Users\*****\source\repos\dotnetCore3\docker-compose.override.yml" -f "C:\Users\*****\source\repos\dotnetCore3\obj\Docker\docker-compose.vs.debug.g.yml" -p dockercompose15898560444299855188 --no-ansi up -d --no-build --force-recreate --remove-orphans
Recreating dockercompose15898560444299855188_dotnetcore3_1 ...
Recreating dockercompose15898560444299855188_dotnetcore3_1 ... done
Done! Docker containers are ready.
========== Debugging ==========
docker ps --filter "status=running" --filter "name=dockercompose15898560444299855188_dotnetcore3_" --format {{.ID}} -n 1
54b4bc125895
docker inspect --format="{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}" 54b4bc125895
172.30.67.88
========== Debugging ==========
docker ps --filter "status=running" --filter "name=dockercompose15898560444299855188_dotnetcore3_" --format {{.ID}} -n 1
54b4bc125895
docker inspect --format="{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}" 54b4bc125895
172.30.67.88
========== Debugging ==========
docker ps --filter "status=running" --filter "name=dockercompose15898560444299855188_dotnetcore3_" --format {{.ID}} -n 1
314dff0ffdf6
docker inspect --format="{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}" 314dff0ffdf6
172.30.75.30
========== Debugging ==========
docker ps --filter "status=running" --filter "name=dockercompose15898560444299855188_dotnetcore3_" --format {{.ID}} -n 1
314dff0ffdf6
docker inspect --format="{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}" 314dff0ffdf6
172.30.75.30
Update:
I tried to build it from the Docker CLI and found out that a proxy was blocking internet access from the container. I don't know whether that is why the build fails inside Visual Studio; the log shows no errors, but I also can't see any output from the dotnet commands.
The solution was to modify the Dockerfile so that the app is pre-compiled into the build, instead of being built inside the container.
Here is the final Dockerfile:
FROM mcr.microsoft.com/dotnet/core/runtime:3.0-nanoserver-1903 AS base
WORKDIR /app
COPY ".\bin\Release\netcoreapp3.0" "/app"
#just for debug purposes
RUN dir
FROM base AS final
ENTRYPOINT ["dotnet", "dotnetCore3.dll"]

Docker Unable to find file

I'm trying to build and run a Docker image with docker-compose up.
However, I get the error: can't open /config/config.template: no such file
My Dockerfile is as follows:
FROM quay.io/coreos/clair-git
COPY config.template /config/config.template
#Set Defaults
ENV USER=clair PASSWORD=johnnybegood INSTANCE_NAME=postgres PORT=5432
RUN apk add gettext
CMD envsubst < /config/config.template > /config/config.yaml && rm -f /config/config.template && exec /clair -config=/config/config.yaml
ENTRYPOINT []
When I add the line RUN ls -la /config/, the following is returned after running docker-compose up --build:
drwxr-xr-x 2 root root 4096 Sep 16 06:46 .
drwxr-xr-x 1 root root 4096 Sep 16 06:46 ..
-rw-rw-r-- 1 root root 306 Sep 6 05:55 config.template
Here is the error:
clair_1_9345a64befa1 | /bin/sh: can't open /config/config.template: no such file
I've tried changing line endings and checking the Docker version. It seems to work on a different machine running a different OS.
I'm using Ubuntu 18.04 with docker-compose version 1.23.1, build b02f1306.
My docker-compose.yml file:
version: '3.3'
services:
  clair:
    build:
      context: clair/
      dockerfile: Dockerfile
    environment:
      - PASSWORD=johnnybegood
      - USER=clair
      - PORT=5432
      - INSTANCE=postgres
    ports:
      - "6060:6060"
      - "6061:6061"
    depends_on:
      - postgres
  postgres:
    build:
      context: ../blah/postgres
      dockerfile: Dockerfile
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=johnnybegood
      - POSTGRES_USER=clair
      - POSTGRES_DB=clair
Docker's CMD is designed to run a single process, following the Docker philosophy of one process per container. Try using a start script to render your template and then launch clair.
FROM quay.io/coreos/clair-git
COPY config.template /config/config.template
COPY start.sh /start.sh
#Set Defaults
ENV USER=clair PASSWORD=johnnybegood INSTANCE_NAME=postgres PORT=5432
RUN apk add gettext
ENTRYPOINT ["/start.sh"]
and have a startscript (with executable permissions) copied into the container using your Dockerfile
#!/bin/sh
envsubst < /config/config.template > /config/config.yaml
# exec replaces the shell so clair runs as PID 1 and receives signals
exec /clair -config=/config/config.yaml
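One hedged usage note: COPY preserves the file mode from the build context, so mark the script as executable before building (assuming a Linux host):
chmod +x start.sh
docker-compose up --build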
Edit: changed the answer after a comment from David Maze.

Passing environment variable to docker image at build time with docker-compose

Everything I tried following the Dockerfile and docker-compose references to pass an environment variable to the Docker image did not work.
I want to make this env var available at build time when using docker-compose.
On the Docker host I have:
export BUILD_VERSION=1.0
app.js
console.log('BUILD_VERSION: ' + process.env.BUILD_VERSION);
Dockerfile:
FROM node
ADD app.js /
ARG BUILD_VERSION
ENV BUILD_VERSION=$BUILD_VERSION
RUN echo Build Time: $BUILD_VERSION
RUN node /app.js
CMD echo Run Time: $BUILD_VERSION
docker-compose.yml:
version: '3'
services:
  app:
    build:
      context: .
      args:
        - BUILD_VERSION
If I build the image directly, the env var is passed fine (with no value given, --build-arg BUILD_VERSION takes its value from the shell environment):
docker build -t test --no-cache --build-arg BUILD_VERSION .
and is also available at run-time:
$ docker run --rm test
Run Time: 1.0
$ docker run --rm test node /app
BUILD_VERSION: 1.0
but not with docker compose.
docker-compose up --build
...
Step 5/7 : RUN echo Build Time: $BUILD_VERSION
---> Running in 6115161f33bf
Build Time:
---> c691c619018a
Removing intermediate container 6115161f33bf
Step 6/7 : RUN node /app.js
---> Running in f51831cc5e1e
BUILD_VERSION:
It's only available at run-time:
$ docker run --rm test
Run Time: 1.0
$ docker run --rm test node /app
BUILD_VERSION: 1.0
I also tried using environment in docker-compose.yml as below, which again makes the variable available only at run-time, not at build-time:
version: '3'
services:
  app:
    build:
      context: .
    environment:
      - BUILD_VERSION
Please advise, how can I make it work in the least convoluted way?
Your example is working for me.
Have you tried deleting the images and building again? Docker won't rebuild your image when environment variables change if the image is already in the cache.
You can delete them with:
docker-compose down --rmi all
Edit: here is how it works for me at build time:
$ cat Dockerfile
FROM alpine
ARG BUILD_VERSION
ENV BUILD_VERSION=$BUILD_VERSION
RUN echo Build Time: $BUILD_VERSION
$ cat docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
      args:
        - BUILD_VERSION
Build:
$ export BUILD_VERSION=122221
$ docker-compose up --build
Creating network "a_default" with the default driver
Building app
Step 1/4 : FROM alpine
latest: Pulling from library/alpine
8e3ba11ec2a2: Pull complete
Digest: sha256:7043076348bf5040220df6ad703798fd8593a0918d06d3ce30c6c93be117e430
Status: Downloaded newer image for alpine:latest
---> 11cd0b38bc3c
Step 2/4 : ARG BUILD_VERSION
---> Running in b0a1a79967a0
Removing intermediate container b0a1a79967a0
---> 9fa331d63f6d
Step 3/4 : ENV BUILD_VERSION=$BUILD_VERSION
---> Running in a602c27689a5
Removing intermediate container a602c27689a5
---> bf2181423c93
Step 4/4 : RUN echo Build Time: $BUILD_VERSION <<<<<< (*)
---> Running in 9d828cefcfab
Build Time: 122221
Removing intermediate container 9d828cefcfab
---> 2b3afa3d348c
Successfully built 2b3afa3d348c
Successfully tagged a_app:latest
Creating a_app_1 ... done
Attaching to a_app_1
a_app_1 exited with code 0
As the other answer mentioned, you can use docker-compose build --no-cache, and you can omit the service name ("app") if you have multiple services, so docker-compose will build all of them. To handle different build versions in the same docker-compose build, you can use different env vars, like:
$ cat docker-compose.yml
version: '3'
services:
  app1:
    build:
      context: .
      args:
        - BUILD_VERSION=$APP1_BUILD_VERSION
  app2:
    build:
      context: .
      args:
        - BUILD_VERSION=$APP2_BUILD_VERSION
Export:
$ export APP1_BUILD_VERSION=1.1.1
$ export APP2_BUILD_VERSION=2.2.2
Build:
$ docker-compose build
Building app1
Step 1/4 : FROM alpine
latest: Pulling from library/alpine
8e3ba11ec2a2: Pull complete
Digest: sha256:7043076348bf5040220df6ad703798fd8593a0918d06d3ce30c6c93be117e430
Status: Downloaded newer image for alpine:latest
---> 11cd0b38bc3c
Step 2/4 : ARG BUILD_VERSION
---> Running in 0b66093bc2ef
Removing intermediate container 0b66093bc2ef
---> 906130ee5da8
Step 3/4 : ENV BUILD_VERSION=$BUILD_VERSION
---> Running in 9d89b48c875d
Removing intermediate container 9d89b48c875d
---> ca2480695149
Step 4/4 : RUN echo Build Time: $BUILD_VERSION
---> Running in 52dec27874ec
Build Time: 1.1.1
Removing intermediate container 52dec27874ec
---> 1b3654924297
Successfully built 1b3654924297
Successfully tagged a_app1:latest
Building app2
Step 1/4 : FROM alpine
---> 11cd0b38bc3c
Step 2/4 : ARG BUILD_VERSION
---> Using cache
---> 906130ee5da8
Step 3/4 : ENV BUILD_VERSION=$BUILD_VERSION
---> Running in d29442339459
Removing intermediate container d29442339459
---> 8b26def5ef3a
Step 4/4 : RUN echo Build Time: $BUILD_VERSION
---> Running in 4b3de2d223e5
Build Time: 2.2.2
Removing intermediate container 4b3de2d223e5
---> 89033b10b61e
Successfully built 89033b10b61e
Successfully tagged a_app2:latest
You need to set the argument in docker-compose.yml as shown, which will then be overridden by the passed env variable:
version: '3'
services:
  app:
    build:
      context: .
      args:
        - BUILD_VERSION
Next, export the environment variable you need to pass.
$ export BUILD_VERSION=1.0
Now build the image using command
$ docker-compose build --no-cache --build-arg BUILD_VERSION=$BUILD_VERSION app
You can pass args to build from the docker-compose file through to the docker build. It is surprising that the env vars aren't used for both run and build.
# docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
      args:
        - BUILD_VERSION=${BUILD_VERSION}
    environment:
      - BUILD_VERSION
    volumes:
      ...
# Dockerfile
FROM node
ADD app.js /
ARG BUILD_VERSION
ENV BUILD_VERSION=$BUILD_VERSION
RUN echo Build Time: $BUILD_VERSION
RUN node /app.js
CMD echo Run Time: $BUILD_VERSION
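To tie the pieces together, a brief hedged usage sketch against the compose file above (nothing here beyond standard docker-compose variable substitution):
export BUILD_VERSION=1.0
# ${BUILD_VERSION} in the args: entry is substituted from the shell at build time
docker-compose build --no-cache
docker-compose up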

Using volume directory in build process

I'm trying to use a mounted volume directory in the build process, but it's either not mounted at build time or mounted incorrectly.
docker-compose.yml
version: '2'
services:
  assoi:
    restart: on-failure
    build:
      context: ./assoi
    expose:
      - "4129"
    links:
      - assoi-redis
      - assoi-postgres
      - assoi-mongo
      - assoi-rabbit
    volumes:
      - ./ugmk:/www
    command: pm2 start /www/ugmk.json
...
Dockerfile
...
WORKDIR /www
RUN ls -la
RUN npm i
RUN node install.js
...
sudo docker-compose build output:
...
Step 12 : WORKDIR /www
---> Using cache
---> 73504ed64194
Step 13 : RUN ls -al
---> Running in 37bb9f70d4ac
total 8
drwxr-xr-x 2 root root 4096 Aug 22 13:31 .
drwxr-xr-x 65 root root 4096 Aug 22 14:05 ..
---> be1ac6edce56
...
During build you do not mount, or more specifically, you cannot mount, any volume.
What you do is COPY, so in your case:
COPY ./ugmk /www
WORKDIR /www
RUN ls -la
RUN npm i
RUN node install.js
Volumes are for containers, not for images: volumes should store persistent, user-generated data, which by definition can only happen at runtime, thus for containers.
Nevertheless, the COPY above is the standard practice for what you want to achieve: building an image with the application pre-deployed and its assets compiled.
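A closing hedged sketch of how the compose file might look after that change. Two assumptions: the build context is widened so ./ugmk is reachable (COPY cannot read outside the context), and the ./ugmk:/www bind mount is dropped so it no longer shadows what COPY baked into /www at runtime:
version: '2'
services:
  assoi:
    restart: on-failure
    build:
      context: .               # widened so ./ugmk is inside the build context
      dockerfile: assoi/Dockerfile
    expose:
      - "4129"
    command: pm2 start /www/ugmk.json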
