I want to copy new static files from a Docker container, via a named volume, into an nginx container that still holds the old static files.
Prerequisites:
Host machine directory tree:
.
├── data
│   ├── bar.2.css
│   └── foo.2.js
├── docker-compose.yml
└── Dockerfile
Dockerfile:
FROM busybox:latest
COPY data /data
docker-compose.yml:
version: '3'
services:
  static:
    image: 'myimage'
    volumes:
      - 'myvolume:/data'
  nginx:
    image: 'nginx'
    volumes:
      - 'myvolume:/data'
volumes:
  myvolume:
Directory tree of named volume myvolume with old static:
.
├── bar.1.css
└── foo.1.js
Sequence of steps:
Build myimage with Dockerfile: docker build -t myimage .
Check new static files in myimage: docker run myimage ls /data
bar.2.css
foo.2.js
Run: docker-compose up -d --build static
In my mind this should rebuild the static service and overwrite the old static files. But it didn't. Why, and how do I fix it? Also, what is a better approach?
I think that you are just copying the new files alongside the old files with docker build -t myimage .
Maybe you can delete the previous data before you insert the new data, for example by running a one-time container:
docker exec -it static sh -c 'rm -rf /data/*'
and then copy the new data in, or rebuild the image:
docker cp ./data/. static:/data/
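An alternative sketch of the one-time-container idea that does not rely on the static container still running; here myproject_myvolume is an assumed volume name following Compose's default <project>_<volume> naming (check the real name with docker volume ls):
# Clear the old files via a throwaway container that mounts the named volume
docker run --rm -v myproject_myvolume:/volume busybox sh -c 'rm -rf /volume/*'
# Copy the new static files out of the freshly built image into the volume;
# the volume is mounted at /volume so it does not shadow the image's /data
docker run --rm -v myproject_myvolume:/volume myimage sh -c 'cp -r /data/. /volume/'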
You can also implement the build step inside the docker-compose file:
version: '3'
services:
  static:
    build: .
    image: 'myimage'
    volumes:
      - 'myvolume:/data'
  nginx:
    image: 'nginx'
    volumes:
      - 'myvolume:/data'
volumes:
  myvolume:
Why -- I believe you are mounting the pre-existing volume myvolume on top of the /data folder of the static container, because myvolume already exists. If myvolume did not exist, the contents of /data would be copied into the volume.
See: Docker-Volume-Docs -- "If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory’s contents will be copied into the volume."
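Since that copy only happens when the volume is first created, one way to fix it (a sketch, assuming you are fine with discarding the volume's current contents) is to remove the stale volume so the next start repopulates it from the rebuilt image:
docker-compose down -v        # -v also removes the named volumes declared in the compose file
docker-compose up -d --build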
Sample Solution
Give this a shot. With the structure and content below do a:
docker-compose up --build
This is additive, so if you update/add content to the newdata folder and re-run your compose, then the new content will be present in the shared volume.
You can mount and inspect the shared volume, like this:
docker run -it --rm --mount type=volume,src={docker-volume-name},target=/shared busybox sh
Environment
Folder structure:
.
├── dockerfile
├── docker-compose.yml
└── newdata/
    ├── apple.txt
    └── banana.txt
dockerfile
FROM busybox:latest
# From host machine to image
COPY newdata/* /newdata/
# Runtime: from image to where a shared volume could be mounted.
ENTRYPOINT [ "cp", "-r", "/newdata/", "/shared" ]
docker-compose.yml
version: '3.2'
services:
  data-provider:
    image: data-provider
    build: .
    volumes:
      - type: volume
        source: so
        target: /shared
  destination:
    image: busybox:latest
    volumes:
      - type: volume
        source: so
        target: /shared-data
    depends_on:
      - data-provider
    command: ls -la /shared-data/newdata
volumes:
  so:
Sample Output:
$ docker-compose up --build
Creating volume "sodockervol_so" with default driver
Building data-provider
Step 1/3 : FROM busybox:latest
---> c75bebcdd211
Step 2/3 : COPY newdata/* /newdata/
---> bc85fc19ed7b
Removing intermediate container 2a39f4be8dd2
Step 3/3 : ENTRYPOINT cp -r /newdata/ /shared
---> Running in e755c3179b4f
---> 6e79a32bf668
Removing intermediate container e755c3179b4f
Successfully built 6e79a32bf668
Successfully tagged data-provider:latest
Creating sodockervol_data-provider_1 ...
Creating sodockervol_data-provider_1 ... done
Creating sodockervol_destination_1 ...
Creating sodockervol_destination_1 ... done
Attaching to sodockervol_data-provider_1, sodockervol_destination_1
destination_1 | total 16
destination_1 | drwxr-xr-x 2 root root 4096 Oct 9 17:50 .
destination_1 | drwxr-xr-x 3 root root 4096 Oct 9 17:50 ..
destination_1 | -rwxr-xr-x 1 root root 25 Oct 9 17:50 apple.txt
destination_1 | -rwxr-xr-x 1 root root 28 Oct 9 17:50 banana.txt
sodockervol_data-provider_1 exited with code 0
sodockervol_destination_1 exited with code 0
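To verify the result of this run, the inspection command from above can be pointed at the volume Compose created (sodockervol_so in the output):
# List the files that landed in the shared volume
docker run --rm --mount type=volume,src=sodockervol_so,target=/shared busybox ls -la /shared/newdata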
Related
I'm probably just being stupid here, but I thought this shouldn't work, yet it does and I don't get why. I'm copying test files to /var/www in my Docker image during build and subsequently mounting a named volume on /var/www, but I still see the files.
~/test$ tree
.
├── docker
│   ├── data
│   │   └── Dockerfile
│   └── docker-compose.yml
└── src
    ├── testfile1
    └── testfile2
3 directories, 4 files
./docker/docker-compose.yml
version: '3'
services:
  test-data:
    container_name: test-data
    build:
      context: ..
      dockerfile: ./docker/data/Dockerfile
    volumes:
      - test-data:/var/www
volumes:
  test-data:
    name: test-data
./docker/data/Dockerfile
FROM alpine
COPY src/ /var/www/
CMD sleep infinity
From what I thought I understood, the volume isn't available at build time, and it should overlay/hide the files when it's mounted on /var/www as the container starts, but it doesn't?
~/test$ docker inspect -f '{{ .Mounts }}' test-data
[{volume test-data /var/lib/docker/volumes/test-data/_data /var/www local rw true }]
~/test$ docker exec test-data ls -l /var/www
-rw-r--r-- 1 root root 0 Oct 21 09:01 testfile1
-rw-r--r-- 1 root root 0 Oct 21 09:01 testfile2
Running Docker Desktop 3.6.0 on Windows + WSL2 Ubuntu 20.04
The very first time (only) a Docker named volume (only) is attached to a container, Docker copies files from the underlying image into the volume. The volume contents never get updated after this initial copy. This copy also doesn't happen for host-directory bind-mounts, or on Kubernetes or other not-actually-Docker environments.
You'd see the behavior you expect in two ways. First, if you change the volumes: to a bind mount
volumes:
  - ./local-empty-directory:/var/www
you'll see it replace the image content the way you expect. The other thing you can try is to run your existing setup once, change the contents of the image, and run it again:
docker-compose build
docker-compose run --rm test-data ls -l /var/www
touch src/testfile3
docker-compose build
docker-compose run --rm test-data ls -l /var/www
# testfile3 isn't in the volume and won't be in this listing
With its limitations, I tend to not recommend actually relying on the "Docker copies files into volumes" behavior. There are a couple of common patterns that use it, but then are surprised when the volume never gets updated or the setup doesn't run in a different container runtime.
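If you do want the volume to pick up new image content despite that, one workaround consistent with the explanation above (a sketch) is to delete the volume so the initial copy happens again on the next start:
docker-compose down -v   # -v also removes the named volumes declared in the compose file
docker-compose build
docker-compose up -d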
Docker volumes exist independently of your image/container. If you run docker volume ls you will see your volumes, which are where the data lives; the volume gets mounted into the container at run time.
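For example:
docker volume ls                 # list existing named volumes
docker volume inspect test-data  # shows the Mountpoint where the data actually lives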
I have a central Dockerfile. It's located in ~/base/Dockerfile.
Let's say it only builds a simple debian image.
FROM debian
COPY test.js .
I also have a central docker-compose.yml file that uses this Dockerfile.
It is located in ~/base/docker-compose.yml.
version: "3.9"
services:
test:
build: ~/base/Dockerfile
ports:
- "5000:5000"
I also have a bash file that calls this docker-compose.yml from another directory.
For example:
mkdir temp
cd temp
setup
setup is a bash file that is registered in the /etc/bash.bashrc as a global alias.
It contains these lines:
docker-compose -f ~/base/docker-compose.yml build
docker-compose up -d
docker-compose logs -f test
I can run setup from inside any folder, and it should build a container based on that debian image. And it does.
However, it shows me that the name of the container is base_test_1, which is the default convention of docker.
This shows that it uses ~/base/. as the context.
How can I pass my current directory as the context?
I created a docker-compose.yml in the same location and added a context whose value is taken from an environment variable.
~/base$ cat docker-compose.yml
version: "2.2"
services:
test:
build:
context: ${contextdir}
dockerfile: /home/myname/base/Dockerfile
ports:
- "5000:5000"
~/base$ cat Dockerfile
FROM python:3.6-alpine
COPY testfile.js .
Before triggering the docker-compose.yml build command, export the current working directory.
~/somehere$ ls
testfile.js
~/somehere$ export contextdir=$(pwd)
~/somehere$ docker-compose -f ~/base/docker-compose.yml build
Building test
Step 1/2 : FROM python:3.6-alpine
---> 815c1103df84
Step 2/2 : COPY testfile.js .
---> Using cache
---> d0cc03f02bdf
Successfully built d0cc03f02bdf
Successfully tagged base_test:latest
My compose file and Dockerfile are located in ~/base/, while testfile.js is located in ~/somehere/ (which I am assuming is the current working directory).
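Putting it together with the setup script from the question, the export just has to happen in the same shell (or inside the script) before the build; the -f flag is repeated so compose finds the file from any directory:
export contextdir=$(pwd)   # the current directory becomes the build context
docker-compose -f ~/base/docker-compose.yml build
docker-compose -f ~/base/docker-compose.yml up -d
docker-compose -f ~/base/docker-compose.yml logs -f test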
I need to copy the files of the src folder to the container, chowning them to the www-data user and group, so in my Dockerfile I did:
COPY --chown=www-data:www-data src ./
When I access the container I can see all the copied files, but if I edit a file on the host I'm not able to see the changes, so I have to rebuild the project using docker-compose up --build -d.
This is my docker-compose:
version: '3.9'
services:
  php-fpm:
    container_name: php_app
    restart: always
    build:
      context: .
      dockerfile: ./docker/php-fpm/Dockerfile
    #volumes:
    #  - ./src:/var/www/html
If I enable the volumes section (uncomment it), I can work in the host directory and see the changes, but then I lose the www-data ownership.
How can I manage such situation? Essentially I want:
chown all files as www-data
update files in real time
There's no special feature to apply chown to mounted files. Leaving that and manual use of chown aside, you can make the php-fpm workers run with your uid. Here's how for the php:8.0.2-fpm-alpine image (in other images the path to the config file can be different):
# Copy pool config out of a running container
docker cp php_app:/usr/local/etc/php-fpm.d/www.conf .
# Change user in config
sed "s/user = www-data/user = $(id -u)/" www.conf -i
# and/or change group
sed "s/group = www-data/group = $(id -g)/" www.conf -i
Now mount the edited config into the container using volumes in docker-compose.yml:
services:
  php-fpm:
    volumes:
      - ./src:/var/www/html                          # code
      - ./www.conf:/usr/local/etc/php-fpm.d/www.conf # pool config
And restart the container.
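For example, a minimal way to recreate just the php-fpm service from the compose file above:
docker-compose up -d --force-recreate php-fpm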
I'm trying to build and run a docker image with docker-compose up
However, I get the error can't open /config/config.template: no such file
My Dockerfile is as follows:
FROM quay.io/coreos/clair-git
COPY config.template /config/config.template
#Set Defaults
ENV USER=clair PASSWORD=johnnybegood INSTANCE_NAME=postgres PORT=5432
RUN apk add gettext
CMD envsubst < /config/config.template > /config/config.yaml && rm -f /config/config.template && exec /clair -config=/config/config.yaml
ENTRYPOINT []
When I add the line RUN ls -la /config/, the following is returned after running docker-compose up --build:
drwxr-xr-x 2 root root 4096 Sep 16 06:46 .
drwxr-xr-x 1 root root 4096 Sep 16 06:46 ..
-rw-rw-r-- 1 root root 306 Sep 6 05:55 config.template
Here is the error:
clair_1_9345a64befa1 | /bin/sh: can't open /config/config.template: no such file
I've tried changing line endings and checking the docker version. It seems to work on a different machine running a different OS.
I'm using Ubuntu 18.04 with docker-compose version 1.23.1, build b02f1306.
My docker-compose.yml file:
version: '3.3'
services:
  clair:
    build:
      context: clair/
      dockerfile: Dockerfile
    environment:
      - PASSWORD=johnnybegood
      - USER=clair
      - PORT=5432
      - INSTANCE=postgres
    ports:
      - "6060:6060"
      - "6061:6061"
    depends_on:
      - postgres
  postgres:
    build:
      context: ../blah/postgres
      dockerfile: Dockerfile
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=johnnybegood
      - POSTGRES_USER=clair
      - POSTGRES_DB=clair
Docker CMD is only designed to run a single process, following the docker philosophy of one process per container. Try using a start script to modify your template and then launch clair.
FROM quay.io/coreos/clair-git
COPY config.template /config/config.template
COPY start.sh /start.sh
#Set Defaults
ENV USER=clair PASSWORD=johnnybegood INSTANCE_NAME=postgres PORT=5432
RUN apk add gettext
ENTRYPOINT ["/start.sh"]
and have a startscript (with executable permissions) copied into the container using your Dockerfile
#!/bin/sh
envsubst </config/config.template > /config/config.yaml
/clair -config=/config/config.yaml
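Assuming start.sh sits next to the Dockerfile, it needs the executable bit before the image is built, for example:
chmod +x start.sh
docker-compose up --build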
Edit: changed the answer after a comment from David Maze.
I'm trying to use a mounted volume directory in the build process, but it's either not being mounted at that moment or being mounted incorrectly.
docker-compose.yml
version: '2'
services:
  assoi:
    restart: on-failure
    build:
      context: ./assoi
    expose:
      - "4129"
    links:
      - assoi-redis
      - assoi-postgres
      - assoi-mongo
      - assoi-rabbit
    volumes:
      - ./ugmk:/www
    command: pm2 start /www/ugmk.json
...
Dockerfile
...
WORKDIR /www
RUN ls -la
RUN npm i
RUN node install.js
...
Output of sudo docker-compose build:
...
Step 12 : WORKDIR /www
---> Using cache
---> 73504ed64194
Step 13 : RUN ls -al
---> Running in 37bb9f70d4ac
total 8
drwxr-xr-x 2 root root 4096 Aug 22 13:31 .
drwxr-xr-x 65 root root 4096 Aug 22 14:05 ..
---> be1ac6edce56
...
During the build you do not mount, or more specifically you cannot mount, any volume.
What you do is COPY, so in your case:
COPY ./ugmk /www
WORKDIR /www
RUN ls -la
RUN npm i
RUN node install.js
Volumes are for containers, not for images: volumes should store persistent, user-generated data. By definition, this can only happen at runtime, hence for containers.
Nevertheless, the COPY above is the standard practice for what you want to achieve: building an image with the application pre-deployed and its assets compiled.
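As a quick check (assuming the ugmk folder ends up inside the build context used by the assoi service), rebuilding should make the existing RUN ls -la step list the copied files instead of an empty directory:
sudo docker-compose build assoi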