I realize other people have had similar questions but this uses v2 compose file format and I didn't find anything for that.
I want to make a very simple test app to play around with MemSQL but I can't get volumes to not get deleted after docker-compose down. If I've understood Docker Docs right, volumes shouldn't be deleted without explicitly telling it to. Everything seems to work with docker-compose up but after going down and then up again all data gets deleted from the database.
Following recommended good practice, I'm using a separate memsqldata service as a data layer.
Here's my docker-compose.yml:
version: '2'
services:
  app:
    build: .
    links:
      - memsql
  memsql:
    image: memsql/quickstart
    volumes_from:
      - memsqldata
    ports:
      - "3306:3306"
      - "9000:9000"
  memsqldata:
    image: memsql/quickstart
    command: /bin/true
    volumes:
      - memsqldatavolume:/data
volumes:
  memsqldatavolume:
    driver: local
I realize this is an old and solved thread where the OP was pointing to a directory in the container rather than the volume they had mounted, but wanted to clear up some of the misinformation I'm seeing.
docker-compose down does not remove volumes, you need to run docker-compose down -v if you also want to delete volumes. Here's the help text straight from docker-compose (note the "by default" list):
$ docker-compose down --help
Stops containers and removes containers, networks, volumes, and images
created by `up`.
By default, the only things removed are:
- Containers for services defined in the Compose file
- Networks defined in the `networks` section of the Compose file
- The default network, if one is used
Networks and volumes defined as `external` are never removed.
Usage: down [options]

Options:
    ...
    -v, --volumes    Remove named volumes declared in the `volumes` section
                     of the Compose file and anonymous volumes
                     attached to containers.
    ...
$ docker-compose --version
docker-compose version 1.12.0, build b31ff33
Here's a sample yml with a named volume to test and a dummy command:
$ cat docker-compose.vol-named.yml
version: '2'

volumes:
  data:

services:
  test:
    image: busybox
    command: tail -f /dev/null
    volumes:
      - data:/data
$ docker-compose -f docker-compose.vol-named.yml up -d
Creating volume "test_data" with default driver
Creating test_test_1
After starting the container, the volume is initialized empty since the image is empty at that location. I created a quick hello world in that location:
$ docker exec -it test_test_1 /bin/sh
/ # ls -al /data
total 8
drwxr-xr-x 2 root root 4096 May 23 01:24 .
drwxr-xr-x 1 root root 4096 May 23 01:24 ..
/ # echo "hello volume" >/data/hello.txt
/ # ls -al /data
total 12
drwxr-xr-x 2 root root 4096 May 23 01:24 .
drwxr-xr-x 1 root root 4096 May 23 01:24 ..
-rw-r--r-- 1 root root 13 May 23 01:24 hello.txt
/ # cat /data/hello.txt
hello volume
/ # exit
The volume is visible outside of docker and is still there after a docker-compose down:
$ docker volume ls | grep test_
local test_data
$ docker-compose -f docker-compose.vol-named.yml down
Stopping test_test_1 ... done
Removing test_test_1 ... done
Removing network test_default
$ docker volume ls | grep test_
local test_data
Recreating the container uses the old volume with the file still visible inside:
$ docker-compose -f docker-compose.vol-named.yml up -d
Creating network "test_default" with the default driver
Creating test_test_1
$ docker exec -it test_test_1 /bin/sh
/ # cat /data/hello.txt
hello volume
/ # exit
And running a docker-compose down -v finally removes both the container and the volume:
$ docker-compose -f docker-compose.vol-named.yml down -v
Stopping test_test_1 ... done
Removing test_test_1 ... done
Removing network test_default
Removing volume test_data
$ docker volume ls | grep test_
$
If you find your data is only being persisted if you use a stop/start rather than a down/up, then your data is being stored in the container (or possibly an anonymous volume) rather than your named volume, and the container is not persistent. Make sure the location for your data inside the container is correct to avoid this.
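One quick way to confirm where your named volume is actually mounted is to inspect the running container's mounts; a sketch using the test container from above:

$ docker inspect -f '{{ range .Mounts }}{{ .Name }} -> {{ .Destination }}{{ "\n" }}{{ end }}' test_test_1
test_data -> /data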
To debug where data is being stored in your container, I'd recommend using docker diff on a container. That will show all of the files created, modified, or deleted inside that container which will be lost when the container is deleted. E.g.:
$ docker run --name test-diff busybox \
/bin/sh -c "echo hello docker >/etc/hello.conf"
$ docker diff test-diff
C /etc
A /etc/hello.conf
You are using docker-compose down, and if you look at the docs:
Stop containers and remove containers, networks, volumes, and images
created by up. Only containers and networks are removed by default.
You are right, it should not remove volumes (by default). It may be a bug, or you may have changed the default configuration. But I think the right command for you is docker-compose stop. I will try to run some tests with simpler cases for the down command.
This was traced back to incorrect documentation from MemSQL. The MemSQL data path in the memsql/quickstart container is /memsql, not /var/lib/memsql as in a stand-alone installation (and in the MemSQL docs), and definitely not /data like somebody told me.
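For reference, pointing the volume at that path fixes the original compose file; the data service would then look something like this:

memsqldata:
  image: memsql/quickstart
  command: /bin/true
  volumes:
    - memsqldatavolume:/memsql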
Not sure if this helps or not. When you use docker-compose up -d, the images are pulled and the containers are created and started. To stop the containers without removing them, use docker-compose stop; they can then be restarted with docker-compose start.
I was using the up/down commands and kept losing my data until I tried stop/start, and now my data persists.
The simplest solution is to use docker-compose stop instead of docker-compose down. And then docker-compose start to restart.
According to the docs, down "stops containers and removes containers, networks, volumes, and images created by up."
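For example:

$ docker-compose stop    # containers are stopped but kept, volumes untouched
$ docker-compose start   # the same containers resume with their data intact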
Related
I'm having trouble demonstrating that data I generate on a shared volume is persistent, and I can't figure out why. I have a very simple docker-compose file:
version: "3.9"
# Define network
networks:
sorcernet:
name: sorcer_net
# Define services
services:
preclean:
container_name: cleaner
build:
context: .
dockerfile: DEESfile
image: dees
networks:
- sorcernet
volumes:
- pgdata:/usr/share/appdata
#command: python run dees.py
process:
container_name: processor
build:
context: .
dockerfile: OASISfile
image: oasis
networks:
- sorcernet
volumes:
- pgdata:/usr/share/appdata
volumes:
pgdata:
name: pgdata
Running the docker-compose file to keep the containers running in the background:
vscode ➜ /com.docker.devenvironments.code $ docker compose up -d
[+] Running 4/4
⠿ Network sorcer_net Created
⠿ Volume "pgdata" Created
⠿ Container processor Started
⠿ Container cleaner Started
Both images are running:
vscode ➜ /com.docker.devenvironments.code $ docker image ls
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
oasis        latest    e2399b9954c8   9 seconds ago    1.09GB
dees         latest    af09040befd5   31 seconds ago   1.08GB
and the volume shows up as expected:
vscode ➜ /com.docker.devenvironments.code $ docker volume ls
DRIVER    VOLUME NAME
local     pgdata
Running the docker container, I navigate to the volume folder. There's nothing in the folder -- this is expected.
vscode ➜ /com.docker.devenvironments.code $ docker run -it oasis
[root@049dac037802 opt]# cd /usr/share/appdata/
[root@049dac037802 appdata]# ls
[root@049dac037802 appdata]#
Since there's nothing in the folder, I create a file in called "dog.txt" and recheck the folder contents. The file is there. I exit the container.
[root@049dac037802 appdata]# touch dog.txt
[root@049dac037802 appdata]# ls
dog.txt
[root@049dac037802 appdata]# exit
exit
To check the persistence of the data, I re-run the container, but nothing is written to the volume.
vscode ➜ /com.docker.devenvironments.code $ docker run -it oasis
[root@1787d76a54b9 opt]# cd /usr/share/appdata/
[root@1787d76a54b9 appdata]# ls
[root@1787d76a54b9 appdata]#
What gives? I've tried defining the volume as persistent, and I know each of the images have a folder location at /usr/share/appdata.
If you want to check the persistence of the data in the containers defined in your docker compose, the --volumes-from flag is the way to go
When you run
docker run -it oasis
This newly created container shares the same image, but it doesn't know anything about the volumes defined.
In order to link the volume to the new container run this
docker run -it --volumes-from $CONTAINER_NAME_CREATED_FROM_COMPOSE oasis
Now this container shares the volume pgdata.
You can go ahead and create files at /usr/share/appdata and validate their persistence.
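A quick check might look like this (the container IDs here are made up; processor is the container name from the compose file above):

$ docker run -it --volumes-from processor oasis
[root@aaa111 opt]# touch /usr/share/appdata/dog.txt
[root@aaa111 opt]# exit
$ docker run -it --volumes-from processor oasis
[root@bbb222 opt]# ls /usr/share/appdata
dog.txt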
I want to share a volume between 2 docker-compose files. There are 2 interconnected apps, and I need to create a symlink between them.
I tried using named volumes and the external feature.
On the first container, I can see the contents of the /var/www/s1 folder, but the /var/www/s2 folder is empty, while on the second container I can see the contents of the /var/www/s2 folder, but the /var/www/s1 folder seems empty. Since I can't see the contents of the folder created by the other app in /var/www, I can't do a symlink.
I made some dummy docker-compose files to try to expose the problem in an easier way.
In /var/www/s1 there should be a "magazine.txt" file, while in /var/www/s2 there should be a "paper.txt" file.
The first docker-compose file looks like this:
services:
  nginx:
    image: nginx
    container_name: nginx
    volumes:
      - ../:/var/www/s1
      - shared-s:/var/www

volumes:
  shared-s:
    name: shared-s
The second docker-compose file looks like this:
version: '3.8'

services:
  php:
    image: php
    container_name: php
    command: tail -F /var/www/s2/paper.txt
    volumes:
      - ../:/var/www/s2
      - shared-s:/var/www

volumes:
  shared-s:
    external:
      name: shared-s
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
80b83a60a0e5 php "docker-php-entrypoi…" 2 seconds ago Up 1 second php
05addf1fc24e nginx "/docker-entrypoint.…" 9 seconds ago Up 8 seconds 80/tcp nginx
8c596d21cf7b portainer/portainer "/portainer" 2 hours ago Up About a minute 9000/tcp, 0.0.0.0:9001->9001/tcp portainer
$ docker exec -it 05addf1fc24e sh
# cd /var/www
# ls
s1 s2
# cd s1
# ls
docker magazine.txt
# cd ..
# cd s2
# ls
# exit
$ docker exec -it 80b83a60a0e5 sh
# cd /var/www
# ls
s1 s2
# cd s1
# ls
# cd ..
# cd s2
# ls
docker paper.txt
# exit
At a mechanical level, volumes and bind mounts don't "nest" in the way you're suggesting. The named volume shared-s will wind up containing only empty directories s1 and s2, but none of the content from either host directory.
What happens is something like this:
Docker starts (say) the nginx container first. It sorts the volumes: mounts on that container from shortest to longest.
Since the shared-s volume is empty, the content from the nginx base image in /var/www is copied to the volume; then the volume is mounted on /var/www in the container.
Docker creates the mount point /var/www/s1 (in the volume), then bind-mounts the host directory there (without modifying the volume at all).
Docker starts the php container and sorts its volumes: mounts the same way.
Since the shared-s volume is not empty, Docker just mounts it into the container, hiding any content that might have been in /var/www in the image.
Docker creates the mount point /var/www/s2 (in the volume), then bind-mounts the host directory there (without modifying the volume at all).
You'll notice a couple of problems with this sequence. Other mounted volumes' content never gets copied into the "shared" volume, which breaks the file sharing you're attempting here. Whichever container starts up first copies content from its image into the shared volume, and anything in that image in the other container gets lost. For that matter, if there is an update in the base image, Docker will ignore that in favor of the (old) content that's in the shared volume.
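You can see this for yourself by listing the volume's contents from a throwaway container; a sketch using the shared-s volume from above:

$ docker run --rm -v shared-s:/mnt busybox ls /mnt/s1 /mnt/s2
/mnt/s1:
/mnt/s2:

Both mount-point directories exist in the volume, but neither contains the host files.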
I'd suggest avoiding volumes here entirely. Build a separate image for each container, COPYing your application code into it. If you can use a static file server in the backend application, that will be much easier than trying to copy files from one container to the other. If that's not avoidable, you can use the COPY --from=image syntax that's normally used with multi-stage builds to also copy content from one built image to another.
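That cross-image copy might look something like this (a sketch; the my-php-app image name is made up):

# Dockerfile for the nginx-side image
FROM nginx
COPY ./s1 /var/www/s1
# pull the other app's files out of its already-built image
COPY --from=my-php-app:latest /var/www/s2 /var/www/s2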
Could you advise how to get freeradius running using docker-compose?
My compose file is below; the container stops automatically after a second.
version: '3'

services:
  freeradius:
    image: freeradius/freeradius-server
    restart: always
    volumes:
      - ./freeradius:/etc/freeradius
    ports:
      - "1812-1813:1812-1813/udp"

volumes:
  freeradius:
But when I run it with docker directly, it works:
docker run --name my-radius -i -t freeradius/freeradius-server /bin/bash
Inside, it displays the configuration files:
root@945f7bcb3520:/# ls /etc/freeradius
README.rst    clients.conf     experimental.conf  huntgroups      mods-config   panic.gdb
proxy.conf    sites-available  templates.conf     users
certs         dictionary       hints              mods-available  mods-enabled  policy.d
radiusd.conf  sites-enabled    trigger.conf
but the volume folder, ./freeradius, doesn't contain any conf files.
So, how can I make this work properly?
I have gotten a similar setup up and running with my config being loaded. All my configuration has been done according to the Docker Hub documentation. Here are my docker-compose.yml and Dockerfile for reference.
(I am aware that I could probably avoid the Dockerfile completely, but the advantage of this is that the Dockerfile is basically 1:1 to the official documentation..)
Run docker-compose up -d to start it. Both files should be in the parent directory of raddb.
Dockerfile
FROM freeradius/freeradius-server:latest
COPY raddb/ /etc/raddb/
EXPOSE 1812 1813
docker-compose.yml
version: '2.2'

services:
  freeradius:
    build:
      context: .
    container_name: freeradius
    ports:
      - "1812-1813:1812-1813/udp"
    restart: always
You don't show your Dockerfile here, but I can guess that you are running a command in the Dockerfile that doesn't persist. It works from the command line because /bin/bash persists until you exit.
I have had this problem a couple times recently.
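To illustrate the difference, a generic Dockerfile sketch (not the actual entrypoint of the official image; -f runs the server as a foreground process):

FROM freeradius/freeradius-server
# CMD ["echo", "started"]   <- exits immediately, so the container stops
CMD ["freeradius", "-f"]    # stays in the foreground, so the container keeps running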
Your command to run the container directly
docker run --name my-radius -i -t freeradius/freeradius-server /bin/bash
is not equivalent to your docker-compose setup.
You are not mounting the config directory (you are also not publishing the container ports to the host, which will prevent you from accessing freeradius from outside the container).
I assume if you run your docker container mounting the volume
docker run --name my-radius -v ./freeradius:/etc/freeradius -i -t freeradius/freeradius-server /bin/bash
it will not work either.
For me, it didn't work when I tried to replace the whole config directory with a volume mount.
I had to mount components of the configuration individually. E.g.
-v ./freeradius/clients.conf:/etc/freeradius/clients.conf
Apparently, when you replace the whole directory something fails when starting freeradius. Excerpt from radius.log when mounting the whole config directory:
Fri Jan 13 10:49:22 2023 : Info: Debug state unknown (cap_sys_ptrace capability not set)
Fri Jan 13 10:49:22 2023 : Info: systemd watchdog is disabled
Fri Jan 13 10:49:22 2023 : Error: rlm_preprocess: Error reading /etc/freeradius/mods-config/preprocess/huntgroups
Fri Jan 13 10:49:22 2023 : Error: /etc/freeradius/mods-enabled/preprocess[13]: Instantiation failed for module "preprocess"
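In compose form, the per-file mounts would look something like this sketch, based on the question's compose file:

version: '3'
services:
  freeradius:
    image: freeradius/freeradius-server
    restart: always
    volumes:
      - ./freeradius/clients.conf:/etc/freeradius/clients.conf
    ports:
      - "1812-1813:1812-1813/udp"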
I am trying to run Bamboo on a server using docker containers. When I run it on my local machine it works normally and the volume saves data successfully. But when I run the same docker-compose file on the server, the volume doesn't save my data.
docker-compose.yml
version: '3.2'

services:
  bamboo:
    container_name: bamboo-server_test
    image: atlassian/bamboo-server
    volumes:
      - ./volumes/bamboo_test_vol:/var/atlassian/application-data/bamboo
    ports:
      - 8085:8085

volumes:
  bamboo_test_vol:
Running this compose file on the local machine:
$ docker-compose up -d
Creating network "test_default" with the default driver
Creating volume "test_bamboo_test_vol" with default driver
Creating bamboo-server_test ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
916c98ca1a9d atlassian/bamboo-server "/entrypoint.sh" 24 minutes ago Up 24 minutes 0.0.0.0:8085->8085/tcp, 54663/tcp bamboo-server_test
$ ls
docker-compose.yml volumes
$ cd volumes/bamboo_test_vol/
$ ls
bamboo.cfg.xml logs
localhost:8085
Running the same compose file on the server:
$ ssh <name>@<ip_address>
password for <name>:
$ docker-compose up -d
Creating network "test_default" with the default driver
Creating volume "test_bamboo_test_vol" with default driver
Creating bamboo-server_test ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
38b77e1b736f atlassian/bamboo-server "/entrypoint.sh" 12 seconds ago Up 11 seconds 0.0.0.0:8085->8085/tcp, 54663/tcp bamboo-server_test
$ ls
docker-compose.yml volumes
$ cd volumes/
$ cd bamboo_test_vol/
$ ls
$ # VOLUME PATH IS EMPTY
server_ip:8085
I didn't have this problem when I tried the same process for jira-software. Why doesn't it work for the Bamboo server even though I use the exact same compose file?
I had the same problem when I wanted to upgrade my Bamboo server instance with my mounted host volume for the bamboo-home directory.
The following was in my docker-compose file:
version: '2.2'

services:
  bamboo-server:
    image: atlassian/bamboo-server:${BAMBOO_VERSION}
    container_name: bamboo-server
    environment:
      TZ: 'Europe/Berlin'
    restart: always
    init: true
    volumes:
      - ./bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo
    ports:
      - "8085:8085"
      - "54663:54663"
When I started it with docker-compose up -d bamboo-server, the container never took the files from the host system. So I tried it without docker-compose first, following Atlassian's Bamboo instructions with the following command:
docker run -v ./bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo --name="bamboo-server" --init -d -p 54663:54663 -p 8085:8085 atlassian/bamboo-server:${BAMBOO_VERSION}
The following error message was displayed:
docker: Error response from daemon: create ./bamboo/bamboo-server/data: "./bamboo/bamboo-server/data" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
So I followed the error message and used the absolute path:
docker run -v /var/project/bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo --name="bamboo-server" --init -d -p 54663:54663 -p 8085:8085 atlassian/bamboo-server:${BAMBOO_VERSION}
After the successful start, I switched to the docker container via SSH and all files were as usual in the docker directory.
I transferred the whole thing to the docker-compose file and took the absolute path in the volumes section. Subsequently it also worked with the docker-compose file.
My docker-compose file then looked like this:
[...]
    init: true
    volumes:
      - /var/project/bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo
    ports:
[...]
Setting up a containerized Bamboo Server is not supported, for these reasons:
Repository-stored Specs (RSS) are no longer processed in Docker by default. Running RSS in Docker was not possible because:
- there is no Docker capability added on the Bamboo server by default, and
- the setup would require running Docker in Docker.
I'm unsure if something obvious escapes me or if it's just not possible but I'm trying to compose an entire application stack with images from docker hub.
One of them is mysql and it supports adding custom configuration files through volumes and to run .sql-files from a mounted directory.
But, I have these files on the machine where I'm running docker-compose, not on the host. Is there no way to specify files from the local machine to copy into the container before it runs it entrypoint/cmd? Do I really have to create local images of everything just for this case?
Option A: Include the files inside your image. This is less than ideal since you are mixing configuration files with your image (that should really only contain your binaries, not your config), but satisfies the requirement to use only docker-compose to send the files.
This option is achieved by using docker-compose to build your image, and that build will send over any files from the build directory to the remote docker engine. Your docker-compose.yml would look like:
version: '2'
services:
  my-db-app:
    build: db/.
    image: custom-db
And db/Dockerfile would look like:
FROM mysql:latest
COPY ./sql /sql
The entrypoint/cmd would remain unchanged. You would need to run docker-compose up --build if the image already exists and you need to change the sql files.
Option B: Use a volume to store your data. This cannot be done directly inside of docker-compose. However it's the preferred way to include files from outside of the image into the container. You can populate the volume across the network by using the docker CLI and input redirection along with a command like tar to pack and unpack those files being sent over stdin:
tar -cC sql . | docker run --rm -it -v sql-files:/sql \
busybox /bin/sh -c "tar -xC /sql"
Run that via a script and then have that same script bounce the db container to reload that config.
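That script could be as simple as this sketch (the my-db-app service name is borrowed from the example above):

#!/bin/sh
# repack ./sql into the named volume, then restart the service to pick it up
tar -cC sql . | docker run --rm -i -v sql-files:/sql busybox tar -xC /sql
docker-compose restart my-db-app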
Option C: Use some kind of network attached filesystem. If you can configure NFS on the host where you are running your docker CLI, you can connect to those NFS shares from the remote docker node using one of the below options:
# create a reusable volume
$ docker volume create --driver local \
--opt type=nfs \
--opt o=addr=192.168.1.1,rw \
--opt device=:/path/to/dir \
foo
# or from the docker run command
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
foo
# or to create a service
$ docker service create \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
foo
Option D: With swarm mode, you can include files as configs in your image. This allows configuration files, that would normally need to be pushed to any node in the swarm, to be sent on demand to the node where your service is running. This uses a docker-compose.yml file to define it, but swarm mode isn't using docker-compose itself, so this may not fit your specific requirements. You can run a single node swarm mode cluster, so this option is available even if you only have a single node. This option does require that each of your sql files are added as a separate config. The docker-compose.yml would look like:
version: '3.4'

configs:
  sql_file_1:
    file: ./file_1.sql

services:
  my-db-app:
    image: my-db-app:latest
    configs:
      - source: sql_file_1
        target: /sql/file_1.sql
        mode: 0444
Then instead of a docker-compose up, you'd run a docker stack deploy -c docker-compose.yml my-db-stack.
If you cannot use volumes (e.g. you want a stateless docker-compose.yml and are deploying to a remote machine), you can have the config file written by the command.
Example for nginx config in official image:
version: "3.7"
services:
nginx:
image: nginx:alpine
ports:
- 80:80
environment:
NGINX_CONFIG: |
server {
server_name "~^www\.(.*)$$" ;
return 301 $$scheme://$$1$$request_uri ;
}
server {
server_name example.com
...
}
command:
/bin/sh -c "echo \"$$NGINX_CONFIG\" > /etc/nginx/conf.d/redir.conf; nginx -g \"daemon off;\""
The environment variable could also be saved in an .env file; you can use Compose's extend feature or load it from the shell environment (wherever you fetched it from):
https://docs.docker.com/compose/compose-file/#env_file
https://docs.docker.com/compose/compose-file/#variable-substitution
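For example, a single value could come from a .env file next to the compose file via variable substitution (the variable name here is made up):

# .env contains: NGINX_SERVER_NAME=example.com
version: "3.7"
services:
  nginx:
    image: nginx:alpine
    environment:
      # substituted by Compose from the shell environment or .env
      NGINX_SERVER_NAME: ${NGINX_SERVER_NAME}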
To get the original command (CMD) of a container:
docker container inspect [container] | jq --raw-output .[0].Config.Cmd
To investigate which file to modify this usually will work:
docker exec --interactive --tty [container] sh
This is how I'm doing it with volumes:
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - ./shell_scripts:/shell_scripts
I think you have to do this in a compose file:
volumes:
  - ./src/file:/dest/path
As a more recent update to this question: with a docker swarm hosted on Amazon, for example, you can define a volume that can be shared by services and is available across all nodes of the swarm (using the cloudstor driver, which in turn has AWS EFS underlying for persistence).
version: '3.3'
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - shell_scripts:/shell_scripts
volumes:
  shell_scripts:
    driver: "cloudstor:aws"
With Compose V2 you can simply do (as in the documentation):
docker compose cp src [service:]dest
Before v2, you can use the workaround with docker cp explained in the associated issue:
docker cp /path/to/my-local-file.sql "$(docker-compose ps -q mycontainer)":/file-on-container.sql