Data is visible in Docker container + named volume although it shouldn't - docker

I'm probably just being stupid here, but I thought this shouldn't work, yet it does and I don't get why. I'm copying test files to /var/www in my Docker image during build and subsequently mounting a named volume on /var/www, but I can still see the files.
~/test$ tree
.
├── docker
│   ├── data
│   │   └── Dockerfile
│   └── docker-compose.yml
└── src
    ├── testfile1
    └── testfile2
3 directories, 4 files
./docker/docker-compose.yml
version: '3'
services:
  test-data:
    container_name: test-data
    build:
      context: ..
      dockerfile: ./docker/data/Dockerfile
    volumes:
      - test-data:/var/www
volumes:
  test-data:
    name: test-data
./docker/data/Dockerfile
FROM alpine
COPY src/ /var/www/
CMD sleep infinity
From what I thought I understood, the volume isn't available at build time and should overlay/hide the files when it's mounted on /var/www as the container starts, but it doesn't?
~/test$ docker inspect -f '{{ .Mounts }}' test-data
[{volume test-data /var/lib/docker/volumes/test-data/_data /var/www local rw true }]
~/test$ docker exec test-data ls -l /var/www
-rw-r--r-- 1 root root 0 Oct 21 09:01 testfile1
-rw-r--r-- 1 root root 0 Oct 21 09:01 testfile2
Running Docker Desktop 3.6.0 on Windows + WSL2 Ubuntu 20.04

The very first time (only) a Docker named volume (only) is attached to a container, Docker copies files from the underlying image into the volume. The volume contents never get updated after this initial copy. This copy also doesn't happen for host-directory bind-mounts, or on Kubernetes or other not-actually-Docker environments.
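A quick way to see this copy-on-first-use behavior in isolation (a minimal sketch; volume-demo and demo-volume are made-up names for illustration):

# Build a throwaway image that bakes a file into /var/www
docker build -t volume-demo - <<'EOF'
FROM alpine
RUN mkdir -p /var/www && echo hello > /var/www/baked-in.txt
EOF

# First attach: Docker seeds the empty named volume from the image
docker run --rm -v demo-volume:/var/www volume-demo ls /var/www

# The volume now exists on its own; rebuilding the image won't change it
docker volume inspect demo-volume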
You'd see the behavior you expect in two ways. First, if you change the volumes: to a bind mount

volumes:
  - ./local-empty-directory:/var/www

you'll see it replace the image content the way you expect. The other thing you can try is to run your existing setup once, change the contents of the image, and run it again:
docker-compose build
docker-compose run --rm test-data ls -l /var/www
touch src/testfile3
docker-compose build
docker-compose run --rm test-data ls -l /var/www
# testfile3 isn't in the volume and won't be in this listing
Given these limitations, I tend not to recommend relying on the "Docker copies files into volumes" behavior. There are a couple of common patterns that use it, but they then run into surprises when the volume never gets updated or the setup doesn't run on a different container runtime.

Docker volumes exist independently of your image/container. If you run docker volume ls you will see your volumes; that is where the data lives, and it gets mounted into the container at run time.
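For example (standard docker volume commands; the mountpoint below matches the docker inspect output from the question):

$ docker volume ls
DRIVER    VOLUME NAME
local     test-data
$ docker volume inspect -f '{{ .Mountpoint }}' test-data
/var/lib/docker/volumes/test-data/_data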

Related

How to mount host directory after copying the files to the container?

I need to copy the files of the src folder to the container, chowning them to the www-data user and group, so in my Dockerfile I did:
COPY --chown=www-data:www-data src ./
When I access the container I can see all the copied files, but if I edit a file on the host I can't see the changes, so I have to rebuild the project using docker-compose up --build -d.
This is my docker-compose:
version: '3.9'
services:
  php-fpm:
    container_name: php_app
    restart: always
    build:
      context: .
      dockerfile: ./docker/php-fpm/Dockerfile
    #volumes:
    #  - ./src:/var/www/html
If I comment out volumes I can work in the host directory and see the changes, but that way I lose the www-data chown.
How can I manage this situation? Essentially I want to:
chown all files as www-data
update files in real time
There's no special feature to apply chown to mounted files. Leaving that and manual use of chown aside, you can make the php-fpm workers run with your uid. Here's how for the php:8.0.2-fpm-alpine image (in other images the path to the config file may differ):
# Copy pool config out of a running container
docker cp php_app:/usr/local/etc/php-fpm.d/www.conf .
# Change user in config
sed "s/user = www-data/user = $(id -u)/" www.conf -i
# and/or change group
sed "s/group = www-data/group = $(id -g)/" www.conf -i
Now mount the edited config into the container using volumes in docker-compose.yml:
services:
  php-fpm:
    volumes:
      - ./src:/var/www/html                          # code
      - ./www.conf:/usr/local/etc/php-fpm.d/www.conf # pool config
And restart the container.
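To pick up the edited config, recreating the container is enough, e.g. (standard Compose commands):

docker-compose up -d --force-recreate php-fpm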

Deploying Golang app on Docker exits process

I'm very new to Docker. I have a Golang app that has the following structure:
.
├── 404.html
├── Dockerfile
├── index.html
├── scripts
├── server.go
├── static
│   ├── jquery.min.js
│   ├── main.css
│   └── main.js
└── styles
I got the Dockerfile from DockerHub. It's too large to post here, but the full version is here. The last few lines of the Dockerfile, which I think might be relevant, are:
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
RUN mkdir -p "$GOPATH/src" "$GOPATH/bin" && chmod -R 777 "$GOPATH"
WORKDIR $GOPATH
When I go into my directory and run docker build -t my-app ., it builds successfully. When I run docker run -d -p 80:80 url-shortener, it prints a string which I assume is the container ID.
But when I do docker ps, it doesn't show the process running.
If I do docker ps -a, it shows the process but it says,
CONTAINER ID   IMAGE    COMMAND     CREATED         STATUS                     PORTS   NAMES
6adc34244350   my-app   "/bin/sh"   6 minutes ago   Exited (0) 6 minutes ago
I apologize if this is a very dumb question. I'm a complete Docker noob and could use some guidance.
From your docker ps output, your image is configured to only run a shell. Since you ran it as a background process without any input, it processed all of its input and exited successfully (status code 0). docker logs 6adc34244350 will show you what output (if any) it produced.
Docker has an excellent tutorial on writing and running custom images that's worth reading. In particular, you shouldn't copy the official golang Dockerfile; your own Dockerfile can start with FROM golang:1.10 and it will inherit everything in that image. You also almost certainly want a CMD instruction that runs your application (by default) when the container starts up.
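A minimal sketch of what that might look like, assuming the file layout from your tree (the output binary name and build command here are illustrative, not taken from your repo):

FROM golang:1.10
WORKDIR /go/src/app
# Copy the source (server.go, static assets, etc.) into the image
COPY . .
RUN go build -o /usr/local/bin/server server.go
EXPOSE 80
# Run the application by default when the container starts
CMD ["server"]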

Golang compilation cache from Docker

I'm using the official golang alpine image to compile my source code (my host machine is a Mac), and I've noticed that even when mounting the whole $GOPATH inside the container it doesn't use cached data from previous builds. I checked that it creates the cache in the $GOPATH/pkg directory, but that does not affect the speed of subsequent builds.
However, if I reuse the same container for several compilations, it does make use of some kind of cache; you can see the results in this experiment I did:
Using different containers, the time remains around 28-30s for each build:
$ rm -r $GOPATH/pkg/linux_amd64
$ time docker run -v$GOPATH:/go -e CGO_ENABLED=0 golang:1.9-alpine3.6 go build -i github.com/myrepo/mypackage
...
0.02s user 0.08s system 0% cpu 30.914 total
$ time docker run -v$GOPATH:/go -e CGO_ENABLED=0 golang:1.9-alpine3.6 go build -i github.com/myrepo/mypackage
...
0.02s user 0.07s system 0% cpu 28.128 total
Reusing the same container, subsequent builds are much faster:
$ rm -r $GOPATH/pkg/linux_amd64
$ docker run -d -v$GOPATH:/go -e CGO_ENABLED=0 golang:1.9-alpine3.6 tail -f /dev/null
bb4c08867bf2a28ad87facf00fa9dcf2800ad480fe1e66eb4d8a4947a6efec1d
$ time docker exec bb4c08867bf2 go build -i github.com/myrepo/mypackage
...
0.02s user 0.05s system 0% cpu 27.028 total
$ time docker exec bb4c08867bf2 go build -i github.com/myrepo/mypackage
0.02s user 0.06s system 0% cpu 7.409 total
Is Go using any kind of cache in some place outside of $GOPATH?
To anyone who landed here from a Google search: I found a working answer in a Reddit post.
It basically says to map /root/.cache/go-build to your host's Go build cache folder.
In my case I'm on Windows, with a project that requires cross-compilation with gcc; I had to spin up a Linux container to build a binary to be deployed to an Alpine container, and I mapped the cache to a data volume instead:
some-volume-name:/root/.cache/go-build
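In docker run form that might look like the following (a sketch: go-build-cache is an arbitrary volume name, this needs a Go version that has the build cache, i.e. 1.10+, and /root/.cache/go-build is the default GOCACHE when running as root):

docker run --rm \
  -v "$GOPATH":/go \
  -v go-build-cache:/root/.cache/go-build \
  golang:1.12-alpine \
  go build github.com/myrepo/mypackage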
When you are building inside the golang container, it is using the directory $GOPATH/pkg inside this container. If you then start another golang container, it has an empty $GOPATH/pkg. However if you continue to use the same container (with exec), the $GOPATH/pkg is re-used.
rm -r $GOPATH/pkg/linux_amd64 will only remove this directory on your local machine. So this has no effect.
A possible alternative to re-using the same container could be:
to commit the container after the first build, or
to mount $GOPATH/pkg as a volume from your host machine or a data volume (see the sketch below).
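For the second option, a minimal sketch (pkg-cache is an arbitrary named volume; the rest mirrors the commands from the question):

docker run --rm \
  -e CGO_ENABLED=0 \
  -v "$GOPATH/src":/go/src \
  -v pkg-cache:/go/pkg \
  golang:1.9-alpine \
  go build -i github.com/myrepo/mypackage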
Use the -v flag to print which packages are getting compiled. This might be a better indicator than time spent.
I was able to produce the desired result by mounting the GOPATH as a volume (as you have done, so it should work...). Please see the snippet below. The first time it compiles both packages, the second time just the main package.
Side note: one issue I've had with this approach is that the volume dir will "overwrite" (i.e. shadow) anything already in the image at that path, which is fine if you are using just the base golang alpine image, since /go should be empty.
pkm$ tree
.
└── src
    └── github.com
        ├── org1
        │   └── mine
        │       └── main.go
        └── org2
            └── somelib
                └── lib.go
6 directories, 2 files
pkm$ docker run --rm -v $GOPATH:/go golang:1.9-alpine go build -i -v github.com/org1/mine
github.com/org2/somelib
github.com/org1/mine
pkm$ tree
.
├── mine
├── pkg
│   └── linux_amd64
│       └── github.com
│           └── org2
│               └── somelib.a
└── src
    └── github.com
        ├── org1
        │   └── mine
        │       └── main.go
        └── org2
            └── somelib
                └── lib.go
10 directories, 4 files
pkm$ docker run --rm -v $GOPATH:/go golang:1.9-alpine go build -i -v github.com/org1/mine
github.com/org1/mine
pkm$

Copy static files from Docker container to non empty named volume

I want to copy new static files from a Docker container, via a named volume, to an nginx container that has the old static files.
Prerequisites:
Host machine directory tree:
.
├── data
│   ├── bar.2.css
│   └── foo.2.js
├── docker-compose.yml
└── Dockerfile
Dockerfile:
FROM busybox:latest
COPY data /data
docker-compose.yml:
version: '3'
services:
  static:
    image: 'myimage'
    volumes:
      - 'myvolume:/data'
  nginx:
    image: 'nginx'
    volumes:
      - 'myvolume:/data'
volumes:
  myvolume:
Directory tree of named volume myvolume with old static:
.
├── bar.1.css
└── foo.1.js
Sequence of steps:
Build myimage with Dockerfile: docker build -t myimage .
Check new static files in myimage: docker run myimage ls /data
bar.2.css
foo.2.js
Run: docker-compose up -d --build static
In my mind this should rebuild the static service and overwrite the old static files. But it didn't. Why, and how can I fix it? Also, what would be a better approach?
I think that you are just copying the new files alongside the old files with docker build -t myimage .
Maybe you can delete the previous data before you insert the new, by running a one-off command in the running container:
docker exec -it static rm -rf /data/*
and then copy the new data in, or build the new image:
docker cp ./data/. static:/data
You can also, implement the build step inside the docker-compose file:
version: '3'
services:
  static:
    build: .
    image: 'myimage'
    volumes:
      - 'myvolume:/data'
  nginx:
    image: 'nginx'
    volumes:
      - 'myvolume:/data'
volumes:
  myvolume:
Why: I believe that you are mounting the pre-existing volume myvolume on top of the /data folder of the static container. This happens because myvolume already exists. If myvolume did not exist, the contents of /data would be copied into the volume.
See: Docker-Volume-Docs -- "If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory’s contents will be copied into the volume."
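So one fix is to remove the stale volume and let Docker re-seed it from the rebuilt image, e.g. (note this discards whatever is currently in the volume):

# stop the stack and delete its named volumes
docker-compose down -v
# rebuild and start; the fresh volume is seeded from the new image
docker-compose up -d --build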
Sample Solution
Give this a shot. With the structure and content below do a:
docker-compose up --build
This is additive, so if you update/add content to the newdata folder and re-run your compose, then the new content will be present in the shared volume.
You can mount and inspect the shared volume, like this:
docker run -it --rm --mount type=volume,src={docker-volume-name},target=/shared busybox sh
Environment
Folder structure:
.
├── dockerfile
├── docker-compose.yml
└── newdata/
    ├── apple.txt
    └── banana.txt
dockerfile
FROM busybox:latest
# From host machine to image
COPY newdata/* /newdata/
# At runtime, copy from the image to where a shared volume could be mounted
ENTRYPOINT [ "cp", "-r", "/newdata/", "/shared" ]
docker-compose.yml
version: '3.2'
services:
  data-provider:
    image: data-provider
    build: .
    volumes:
      - type: volume
        source: so
        target: /shared
  destination:
    image: busybox:latest
    volumes:
      - type: volume
        source: so
        target: /shared-data
    depends_on:
      - data-provider
    command: ls -la /shared-data/newdata
volumes:
  so:
Sample Output:
$ docker-compose up --build
Creating volume "sodockervol_so" with default driver
Building data-provider
Step 1/3 : FROM busybox:latest
---> c75bebcdd211
Step 2/3 : COPY newdata/* /newdata/
---> bc85fc19ed7b
Removing intermediate container 2a39f4be8dd2
Step 3/3 : ENTRYPOINT cp -r /newdata/ /shared
---> Running in e755c3179b4f
---> 6e79a32bf668
Removing intermediate container e755c3179b4f
Successfully built 6e79a32bf668
Successfully tagged data-provider:latest
Creating sodockervol_data-provider_1 ...
Creating sodockervol_data-provider_1 ... done
Creating sodockervol_destination_1 ...
Creating sodockervol_destination_1 ... done
Attaching to sodockervol_data-provider_1, sodockervol_destination_1
destination_1 | total 16
destination_1 | drwxr-xr-x 2 root root 4096 Oct 9 17:50 .
destination_1 | drwxr-xr-x 3 root root 4096 Oct 9 17:50 ..
destination_1 | -rwxr-xr-x 1 root root 25 Oct 9 17:50 apple.txt
destination_1 | -rwxr-xr-x 1 root root 28 Oct 9 17:50 banana.txt
sodockervol_data-provider_1 exited with code 0
sodockervol_destination_1 exited with code 0

docker-compose with docker-swarm under docker-machine not mounting volumes

I have a working docker-compose example running a static html with nginx.
The directory tree:
├── docker-compose.yml
├── nginx
│   ├── app
│   │   └── index.html
│   └── Dockerfile
The docker-compose.yml:
nginx:
  build: ./nginx
  volumes:
    - ./nginx/app:/usr/share/nginx/html
  ports:
    - 8080:80
The nginx directory has the Dockerfile:
FROM nginx
Everything is working right.
The problem is that I built a Docker Swarm infrastructure with docker-machine following the Docker documentation: the local machine, the swarm-master and two nodes.
Everything seems to work fine too.
$ eval $(docker-machine env --swarm swarm-master)
$[swarm-master]$ docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS         PORTS                                  NAMES
d8afafeb9f05   few_nginx   "nginx -g 'daemon off"   20 minutes ago   Up 3 seconds   443/tcp, 192.168.99.102:8080->80/tcp   swarm-host-00/few_nginx_1
But nginx returns a 403 Forbidden:
$[swarm-master]$ curl http://192.168.99.102:8080/
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.9.10</center>
</body>
</html>
I SSH into the VirtualBox machine and then into the container:
$[swarm-master]$ docker-machine ssh swarm-host-00
docker@swarm-host-00:~$ docker exec -ti d8afafeb9f05 bash
and nothing inside the nginx/html directory:
root@d8afafeb9f05:/# ls /usr/share/nginx/html/
Is it necessary to do something different in Compose/Swarm for the volumes? Am I doing something wrong?
A bind mount like ./nginx/app refers to a path on the machine where the container actually runs, and with docker-machine the swarm nodes don't have your local directory. So it's necessary to either copy the files to the hosts, or build them into the image with a COPY instruction in the Dockerfile.
Reference: https://github.com/docker/compose/issues/2799#issuecomment-177998133
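For the COPY approach, a minimal sketch of the Dockerfile in the nginx directory, using the paths from the question (and dropping the volumes: entry from docker-compose.yml):

FROM nginx
# bake the static site into the image so it exists on every swarm node
COPY app /usr/share/nginx/html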
