I am trying to set up a Gitea container, and while checking the official docs I found the following defined in the volumes section:
volumes:
  - ./gitea:/data
  - /etc/timezone:/etc/timezone:ro
  - /etc/localtime:/etc/localtime:ro
I know that the volumes section is used in docker-compose to configure things like the DB, but I could not find why this specific configuration is used here. Can someone explain what we achieve with the lines added here in the volumes section?
To be more specific, what do ./gitea:/data, /etc/timezone:/etc/timezone:ro and /etc/localtime:/etc/localtime:ro achieve, and why are they needed?
Thanks.
The volumes section is a way to share files and directories between the host system and the container. With :ro the shared files are made read-only inside the container.
One must understand that a container is just an instance of an image (e.g. one pulled from Docker Hub) plus a thin writable layer. Whenever you delete the container, all data written to that layer is deleted too.
So volumes are also used to provide a place for data that should persist and not be affected by the removal of the container.
So what happens here:
With /etc/timezone:/etc/timezone:ro, the file /etc/timezone on the host system (where the Docker daemon is running) is made available under /etc/timezone inside the container (:ro means read-only). The same goes for /etc/localtime.
Those files define the timezone used on the host. By sharing them with the container, processes inside the container can pick up the host's timezone.
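As a quick illustration (not from the Gitea docs), you can see the effect with a throwaway container; the alpine image and the date command are just examples:

# without the mounts the container typically reports UTC
docker run --rm alpine date
# with the host's timezone files mounted read-only, it reports the host's local time
docker run --rm \
  -v /etc/localtime:/etc/localtime:ro \
  -v /etc/timezone:/etc/timezone:ro \
  alpine date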
Now about the line ./gitea:/data.
The same way you can share files, you can also share directories. In your case it is expected that, in whatever directory you are currently in, there is a folder gitea (./ means "here"). When you run the compose command, the folder ./gitea on the host is mapped to /data inside the container.
So when you start the container, the apps inside it will write their data to /data, and you will also be able to access that data on the host under ./gitea.
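To make the persistence point concrete, a small sketch (the exact subdirectories Gitea creates under /data are an assumption):

docker-compose up -d
ls ./gitea          # Gitea's /data now appears here on the host (e.g. git/, gitea/, ssh/)
docker-compose down # removes the container...
ls ./gitea          # ...but the data under ./gitea is still there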
Related
I have a task that I already solved, but where I'm not satisfied with the solution. Basically, I have a webserver container (Nginx) and a FastCGI container (PHP-FPM). The webserver container is built on an off-the-shelf image; the FCGI container is based on a custom image and contains the application files. Now, since not everything is source code processed by the FCGI container, I need to make the application files available inside the webserver container as well.
Here's the docker-compose.yml that does the job:
version: '3.3'

services:
  nginx:
    image: nginx:1-alpine
    volumes:
      - # customize just the Nginx configuration file
        type: bind
        source: ./nginx.conf
        target: /etc/nginx/nginx.conf
      - # mount application files from PHP-FPM container
        type: volume
        source: www-data
        target: /var/www/my-service
        read_only: true
        volume:
          nocopy: true
    ports:
      - "80:80"
    depends_on:
      - php-fpm

  php-fpm:
    image: my-service:latest
    command: ["/usr/sbin/php-fpm7.3", "--nodaemonize", "--force-stderr"]
    volumes:
      - # create volume from application files
        # This one populates the content of the volume.
        type: volume
        source: www-data
        target: /var/www/my-service

volumes:
  # volume with application files shared between nginx and php-fpm
  www-data:
What I don't like here is mostly reflected by the comments concerning the volumes. Who creates and stores data should be obvious from the code and not from comments. Also, what I really dislike is that docker actually creates a place where it stores data for this volume. Not only does this use up disk space and increase startup time, it also requires me to never forget to use docker-compose down --volumes in order to refresh the content on next start. Imagine my anger when I found out that down didn't tear down what up created and that I was hunting ghosts from previous runs.
My questions concerning this:
Can I express in code that one container contains data that should be made available to other containers more clearly? The above code works, but it fails utterly to express the intent.
Can I avoid anything persistent being created, to avoid the above-mentioned downsides?
I would have liked to investigate things like tmpfs volumes or other volume options. My problem is that I can't find documentation for the available volume drivers or even a way to explore which volume drivers exist. Maybe I have missed some CLI for that; I'd really appreciate a nudge in the right direction here.
You can use the local driver with option type=tmpfs, for example:
volumes:
  www-data:
    driver: local
    driver_opts:
      type: tmpfs
      device: tmpfs
This fulfils your requirements:
- Data will be shared between containers at runtime
- Data will not be persisted, i.e. the volume is emptied when containers are stopped, restarted or destroyed
This is the CLI equivalent of:
docker volume create --driver local --opt type=tmpfs --opt device=tmpfs www-data
Important note: this is NOT a Docker tmpfs mount, but a Docker volume using a tmpfs option. As stated in the local volume driver documentation, it uses the Linux mount options, in our case -t/--types, to specify a tmpfs filesystem. Contrary to a simple tmpfs mount, it allows you to share the volume between containers while retaining the classic behavior of a temporary filesystem.
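If you want to double-check what was created, you can inspect the volume after docker-compose has created it (Compose prefixes the volume name with the project name; myproject here is an assumption):

docker volume inspect myproject_www-data
# the "Options" section of the output should show the tmpfs settings, roughly:
#   "Options": { "device": "tmpfs", "type": "tmpfs" }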
I can't find documentation for available volume drivers or even explore which volume drivers exist
Volume doc, including tmpfs, Bind mount and Volumes
local driver options are found in the docker volume create doc
Volume driver plugins - some are still updated regularly or seem maintained, but most of them have not been updated for a long time or are deprecated. The list does not seem exhaustive though, for instance vieux/sshfs is not mentioned.
Can I express in code that one container contains data that should be made available to other containers more clearly? The above code works, but it fails utterly to express the intent.
I don't think so; your code is already quite clear as to the intent of this volume:
That's what volumes are for: sharing data between containers. As stated in the docs, "Volumes can be more safely shared among multiple containers"; furthermore, only containers are supposed to write into volumes.
nocopy and read_only clearly express that nginx relies on data written by another container as it will only be able to read from this volume
Given your volume is not external, it is safe to assume only another container from the same stack can use it
A bit of logic and experience with Docker lets you come quickly to the previous point, but even for less experienced Docker users your comments give clear indications, and comments are part of the code ;)
You can also build a custom nginx image that copies the static files from your PHP image.
Here is a Dockerfile for nginx:
FROM my-service:latest AS src-files
FROM nginx
COPY --from=src-files /path-to-static-in-my-service-image /path-to-static-in-nginx
This allows you to avoid using volumes for the source code at all.
You can also take the TAG from a build argument (e.g. populated from an environment variable) in the Dockerfile:
FROM my-service:${TAG} AS src-files
...
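Note that for ${TAG} to resolve in a FROM line, a build argument has to be declared before the first FROM. A minimal sketch, reusing the image names from above (the copied paths are assumptions):

ARG TAG=latest
FROM my-service:${TAG} AS src-files
FROM nginx:1-alpine
# copy the static files baked into the application image into the nginx image
COPY --from=src-files /var/www/my-service /var/www/my-service

The value can then be supplied at build time, e.g. docker build --build-arg TAG=1.2.3 -t my-nginx ., or via the args section of a compose build block.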
It depends on the use case of your setup: whether it's only for local dev, or whether you want the same thing in production. In dev, having a volume populated manually or by one container for others can be OK.
But if you want something that runs the same way in production, you may need something else. For example, in production I don't want my code in a volume; I want it baked into my image in an immutable way so that I just need to redeploy it.
For me a volume is not for storing application code but for storing data like cache, user uploaded content, etc. Something we want to keep between deployments.
So, to have the same files in both containers without a volume, I build two images containing the application code and static content: one for PHP, one for nginx.
But the deployment of the two images is usually not synchronous. We solve this by deploying the PHP application first and the nginx application after it. On nginx, we add a config that tries to serve static content locally first and, if the file doesn't exist, falls back to PHP (see the sketch below).
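A minimal sketch of such a fallback as an nginx config fragment (the document root, location blocks, and the php-fpm service name and port are assumptions, not taken from the answer):

location / {
    # serve static files baked into the nginx image first,
    # otherwise hand the request to the PHP application
    try_files $uri $uri/ /index.php$is_args$args;
}

location ~ \.php$ {
    fastcgi_pass php-fpm:9000;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}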
For the dev environment, we reuse the same image but use a volume to mount the current code inside the container (a host bind mount).
But in some cases bind mounts can have issues:
- On Mac, file sharing is slow, but it should be better with the latest version of Docker Desktop in the Edge channel (2.3.1.0)
- On Windows, file sharing is slow too
- On Linux, we need to be careful about file permissions and the user used inside the container
If you hit one or more of these issues with the bind mount approach, there are workarounds: e.g. on Mac I would try the Edge release of Docker first; on Windows, if possible, I would use WSL2 with Docker set to use the WSL2 backend.
In a container, an anonymous volume can be created either with the syntax VOLUME /build in a Dockerfile, or with the syntax below, where volumes has a /build entry:
cache:
  build: ../../
  dockerfile: docker/dev/Dockerfile
  volumes:
    - /tmp/cache:/cache
    - /build
  entrypoint: "true"
My understanding is that both approaches (above) make the volume /build available even after the container goes into the Exited state.
The volume is anonymous because /build points to some random new location (in the /var/lib/docker/volumes directory) on the Docker host.
It seems to me that anonymous volumes are safer than host-path mounts like /tmp/cache:/cache, because the /tmp/cache location is more likely to be used by more than one Docker container.
1) Why is anonymous volume usage discouraged?
2) Is
VOLUME /build
in a Dockerfile not the same as
volumes:
  - /build
in a docker-compose.yml file? Is there a scenario where we need to mention both?
You're missing a key third option, named volumes. If you declare:
version: '3'
volumes:
  build: {}
services:
  cache:
    image: ...
    volumes:
      - build:/build
Docker Compose will create a named volume for you; you can see it with docker volume ls, for example. You can explicitly manage named volumes' lifetime, and set several additional options on them which are occasionally useful. The Docker documentation has a page describing named volumes in some detail.
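For illustration, that explicit lifecycle management could look like this on the CLI (Compose prefixes the volume name with the compose project name; myproject is an assumption):

docker volume ls                      # the compose-created volume appears as myproject_build
docker volume inspect myproject_build # shows driver, mountpoint, labels, options
docker volume rm myproject_build      # remove it explicitly once nothing uses it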
I'd suggest that named volumes are strictly superior to anonymous volumes, for being able to explicitly see when they are created and destroyed, and for being able to set additional options on them. You can also mount the same named volume into several containers. (In this sequence of questions you've been asking, I'd generally encourage you to use a named volume and mount it into several containers and replace volumes_from:.)
Named volumes and bind mounts have advantages and disadvantages in both directions. Bind mounts are easy to back up and manage, and for content like log files that you need to examine directly they are much more convenient; on macOS systems, however, they are extremely slow. Named volumes can run independently of any host-system directory layout and translate well to clustered environments like Kubernetes, but it's much harder to examine them or back them up.
You almost never need a VOLUME directive. You can mount a volume or host directory into a container regardless of whether it's declared as a volume. Its technical effect is to mount a new anonymous volume at that location if nothing else is mounted there; its practical effect is that it prevents future Dockerfile steps from modifying that directory. If you have a VOLUME line you can almost always delete it without affecting anything.
Actually, the use of anonymous volumes (/build) is encouraged over the use of bind mounts (/tmp/cache:/cache):
Volumes have several advantages over bind mounts:
- Volumes are easier to back up or migrate than bind mounts.
- You can manage volumes using Docker CLI commands or the Docker API.
- Volumes work on both Linux and Windows containers.
- Volumes can be more safely shared among multiple containers.
- Volume drivers let you store volumes on remote hosts or cloud providers, to encrypt the contents of volumes, or to add other functionality.
- New volumes can have their content pre-populated by a container.
Regarding your second question: yes, they are equivalent. You can create anonymous volumes in the docker-compose file or in the Dockerfile; there is no need to specify them in both places.
What's the difference between declaring in the docker-compose.yml file a volume section and just using the volumes keyword under a service?
For example, I map a volume this way for a container:
services:
  mysqldb:
    volumes:
      - ./data:/var/lib/mysql
This will map to the folder called data from my working directory.
But I could also map a volume by declaring a volume section and use its alias for the container:
services:
  mysqldb:
    volumes:
      - data_volume:/var/lib/mysql

volumes:
  data_volume:
    driver: local
In this method, the actual location of where the mapped files are stored appears to be somewhat managed by docker compose.
What are the differences between these 2 methods or are they the same? Which one should I really use?
Are there any benefits of using one method over the other?
The difference between the methods you've described is that the first is a bind mount and the second is a volume. These are Docker features rather than Docker Compose features, and there are several benefits volumes provide over mounting a path from your host's filesystem. As described in the documentation, they:
- are easier to back up or migrate
- can be managed with docker volume commands or the API (as opposed to the raw filesystem)
- work on both Linux and Windows containers
- can be safely shared among multiple containers
- can have content pre-populated by a container (with bind mounts sometimes you have to copy data out, then restart the container)
Another massive benefit of using volumes is volume drivers, which you'd specify in place of local. They allow you to store volumes remotely (i.e. in the cloud, etc.) or add other features like encryption. This is core to the concept of containers: if the running container is stateless and uses remote volumes, then you can move the container across hosts and run it without reconfiguration.
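As a hedged sketch of what a driver-backed volume can look like, here is a named volume stored on an NFS server using the local driver's options (the server address and export path are placeholders):

volumes:
  data_volume:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,rw"     # placeholder NFS server address
      device: ":/exports/mysql"     # placeholder export path on that server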
Therefore, the recommendation is to use Docker volumes. Another good example is the following:
services:
  webserver_a:
    volumes:
      - ./serving/prod:/var/www
  webserver_b:
    volumes:
      - ./serving/prod:/var/www
  cache_server:
    volumes:
      - ./serving/prod:/cache_root
If you move the ./serving directory somewhere else, the bind mount breaks because it's a relative path. As you noted, volumes have aliases and have their path managed by Docker, so:
- you wouldn't need to find and replace the path 3 times
- the volume using local stores data somewhere else on your system and would continue mounting just fine
TL;DR: try and use volumes. They're portable, and encourage practices that reduce dependencies on your host machine.
I have a docker-compose.yml file with the following:
volumes:
  - .:/usr/app/
  - /usr/app/node_modules
The first option maps the current host directory to /usr/app, but what does the second option do?
[Refreshing this answer since it seems others have similar questions]
There are three kinds of volumes in docker:
Host volumes: these map a path from the host into the container with a bind mount. They have the short syntax /path/on/host:/path/in/container. Whatever exists on the host is what will be visible in the container, there's no merging of files or initialization from the image, and uid/gid's do not get any special mapping so you need to take care to allow the container uid/gid read and write access to this location (an exception is Docker for Mac with OSXFS). If the path on the host does not exist, docker will create an empty directory as root, and if it is a file, you can mount a single file into the container this way.
Named volumes: these have a name instead of a host path as the source. They have the short syntax name:/path/in/container, and in a compose file you also need to define the named volume used in containers at the top level. By default, these are also a bind mount, but to a docker-specific directory under /var/lib/docker/volumes that should be considered internal. However, these defaults can be changed to allow things like NFS mounts, mounting disks, or even your own bind mounts to other locations. Named volumes also have a feature in docker: when they are new or empty and first used, docker copies the contents from the image into the named volume before mounting it. This includes files, directories, uid/gid owners, and permissions. After that, they behave identically to a host volume: whatever is inside the volume overlays the image location.
Anonymous volumes: these only have a path inside the container. They are in the form /path/in/container and docker will create a default named volume with a guid as the name. They share the behaviors of named volumes, storing files under /var/lib/docker/volumes, initializing with the contents of the image, except they have a randomly generated guid that gives you no indication of how or even if they are being used. You can mount the volume in another container and inspect the contents, or you can find the container using the volume by inspecting each container to find the guid. If you create a container with the --rm flag, anonymous volumes will also be deleted automatically.
tmpfs: Wait, I said 3, and this is 4? That's because tmpfs isn't considered a volume, the syntax to mount it is different. The result is a pointer to an empty in memory filesystem. This is useful if you have temporary files you don't wish to save, they are relatively small, and you either need speed or want to be sure they aren't saved to disk.
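For completeness, a tmpfs mount in a compose file uses its own key rather than the volumes short syntax; a minimal sketch (service name, image and path are assumptions):

services:
  app:
    image: alpine
    tmpfs:
      - /scratch        # in-memory only; contents are gone when the container stops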
In the OP's case:
/usr/app is mounted from the host, commonly used for development
/usr/app/node_modules is an anonymous volume initialized from the image
Why do this? Likely because you do not want to modify the node_modules directory on the host, particularly if there's platform specific data and you're running on Docker desktop where it's Mac/Win on the host and Linux in the container. It's also possible there's data in the image you want to get access to within the directory structure of the other volume mount.
Are there downsides to anonymous volumes? Two that I can think of:
If there's anything in /usr/app/node_modules that you want to reuse in a future container, you're unlikely to find the old volume. I tend to consider any data written to these as likely lost.
You'll often find the volumes on the host full of guids over time, and it's unclear which are in use and which can be deleted. Unused anonymous volumes are one of several causes of excessive disk use in docker.
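If you want to see or clean up those leftover volumes, something like this works (note that recent Docker versions limit prune to anonymous volumes unless --all is given):

docker volume ls -qf dangling=true   # list volumes no container currently uses
docker volume prune                  # remove unused (anonymous) volumes after confirmation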
For more details on docker volumes, see: https://docs.docker.com/storage/
Original answer:
The second one creates an anonymous volume. It will be listed in docker volume ls with a long unique id rather than a name. Docker-compose will be able to reuse this if you update your image, but it's easy to lose track of which volume belongs to what with those names, so I recommend always giving your volume a name.
Just to complement the accepted answer, according to Docker's Knowledge Base there are three types of volumes: host, anonymous, and named:
A host volume lives on the Docker host's filesystem and can be accessed from within the container. Example volume path: /path/on/host:/path/in/container
An anonymous volume is useful for when you would rather have Docker handle where the files are stored. It can be difficult, however, to refer to the same volume over time when it is an anonymous volume. Example volume path: /path/in/container
A named volume is similar to an anonymous volume. Docker manages where on disk the volume is created, but you give it a volume name. Example volume path: name:/path/in/container
The path used in your example is an anonymous volume.
I had the same question while I was going through this tutorial, and the answer to what those lines could actually be doing is this:
Without the anonymous volume ('/usr/src/app/node_modules'), the node_modules directory would essentially be hidden by the mounting of the host directory at runtime:
Build - The node_modules directory is created inside the image.
Run - The current host directory is mounted into the container, masking the node_modules that were just installed when the image was built.
The docker-compose.yml file for this:
version: '3.5'

services:
  something-clever:
    container_name: something-clever
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - '4200:4200'
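For context, the Dockerfile behind such a setup typically installs dependencies at build time; a hypothetical sketch (base image, port, and commands are assumptions):

FROM node:12-alpine
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm install            # creates /usr/src/app/node_modules inside the image
COPY . .
EXPOSE 4200
CMD ["npm", "start"]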
I've seen Docker volume definitions in docker-compose.yml files like so:
-v /path/on/host/modules:/var/www/html/modules
I noticed that Drupal's official image, their docker-compose.yml file is using anonymous volumes.
Notice the comments:
volumes:
  - /var/www/html/modules
  - /var/www/html/profiles
  - /var/www/html/themes
  # this takes advantage of the feature in Docker that a new anonymous
  # volume (which is what we're creating here) will be initialized with the
  # existing content of the image at the same location
  - /var/www/html/sites
Is there a way to associate an anonymous volume with a path on the host machine after the container is running? If not, what is the point of having anonymous volumes?
Full docker-compose.yml example:
version: '3.1'

services:
  drupal:
    image: drupal:8.2-apache
    ports:
      - 8080:80
    volumes:
      - /var/www/html/modules
      - /var/www/html/profiles
      - /var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - /var/www/html/sites
    restart: always

  postgres:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example
    restart: always
Adding a bit more info in response to a follow-up question/comment from #JeffRSon asking how anonymous volumes add flexibility, and also to answer this question from the OP:
Is there a way to associate an anonymous volume with a path on the host machine after the container is running? If not, what is the point of having anonymous volumes?
TL;DR: You can associate a specific anonymous volume with a running container via a 'data container', but that provides flexibility to cover a use case that is now much better served by the use of named volumes.
Anonymous volumes were helpful before the addition of volume management in Docker 1.9. Prior to that, you didn't have the option of naming a volume. With the 1.9 release, volumes became discrete, manageable objects with their own lifecycle.
Before 1.9, without the ability to name a volume, you had to reference it by first creating a data container
docker create -v /data --name datacontainer mysql
and then mounting the data container's anonymous volume into the container that needed access to the volume
docker run -d --volumes-from datacontainer --name dbinstance mysql
These days, it's better to use named volumes since they are much easier to manage and much more explicit.
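For comparison, the same sharing pattern with a named volume might look like this (volume and container names are placeholders; /var/lib/mysql is where the mysql image stores its data, and the password is just an example):

docker volume create dbdata
docker run -d -e MYSQL_ROOT_PASSWORD=example -v dbdata:/var/lib/mysql --name dbinstance mysql
# any other container can mount the same volume by name:
docker run --rm -v dbdata:/data alpine ls /data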
Anonymous volumes are equivalent to having these directories defined as VOLUMEs in the image's Dockerfile. In fact, directories defined with VOLUME in a Dockerfile become anonymous volumes if they are not explicitly mapped to the host.
The point of having them is added flexibility.
P.S.:
Anonymous volumes already reside on the host, somewhere under /var/lib/docker (or whatever directory you configured). To see where they are:
docker inspect --type container -f '{{range $i, $v := .Mounts }}{{printf "%v\n" $v}}{{end}}' $CONTAINER
Note: Substitute $CONTAINER with the container's name.
One possible use case for anonymous volumes these days is in combination with bind mounts: when you want to bind-mount some folder but exclude specific subfolders. Those subfolders can then be set up as named or anonymous volumes. This guarantees that the subfolders are present in the container folder that is bound from outside, while you don't have to have them in the bound folder on the host machine at all.
For example, you can have your frontend Node.js project built in a container where the node_modules folder is needed, but you don't need this folder for your coding at all. You can then map your project folder into the container and set the node_modules folder as an anonymous volume (see the sketch below). The node_modules folder will be present in the container all the time, even if you do not have it in your working folder on the host machine.
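A minimal compose sketch of that pattern (service name and paths are assumptions):

services:
  frontend:
    build: .
    volumes:
      - ./frontend:/app       # bind mount the project for live editing on the host
      - /app/node_modules     # anonymous volume keeps the image's node_modules in place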
Not sure why the Drupal developers suggest such settings. Anyway, I can think of two differences:
- With named volumes you have a name that suggests which project the volume belongs to.
- After docker-compose down && docker-compose up -d, a new anonymous volume (freshly initialized from the image, without your previous changes) gets attached to the container. (The old one doesn't disappear; docker doesn't delete volumes unless you tell it to.) With named volumes you'll get the volume that was attached to the container before docker-compose down.
As such, you probably don't want to put data you don't want to lose into an anonymous volume (like a DB or something). Again, they won't disappear by themselves. But after docker-compose down && docker-compose up -d && docker volume prune, a named volume will survive.
For something less critical (like node_modules) I don't have strong argument for or against named volumes.
Is there a way to associate an anonymous volume with a path on the host machine after the container is running?
For that you need to change the settings, e.g. /var/www/html/modules -> ./modules:/var/www/html/modules, and run docker-compose up -d. But that will turn the anonymous volume into a bind mount, and you will need to copy the data from the old volume to ./modules yourself (see the sketch below). Similarly, you can turn an anonymous volume into a named volume.
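One hedged way to copy the old data out before switching (the volume name is the generated id you can find via docker volume ls or by inspecting the old container):

docker run --rm \
  -v <anonymous-volume-id>:/from \
  -v "$PWD/modules":/to \
  alpine sh -c 'cp -a /from/. /to/'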