How is the Docker mount point decided?

I pull an image from Docker Hub (say) Ghost CMS and after reading the documentation, I see that the default mount point is /var/lib/ghost/content
Now, when I make my own application with Ghost as the base image, I map some folder (say) CMS-Content and mount it on /var/lib/ghost/content written like this -
volumes:
  - CMS-Content:/var/lib/ghost/content
The path /var/lib/ghost/content is a system-level path inside the container. CMS-Content, however, is a folder I created to hold my files (persistent data).
Finally, I decide to publish my application as an image in Docker Hub, so what will be the mount point now?

If you want to make persistent data for the container:
Using command-line :
docker run -it --name <WHATEVER> -p <LOCAL_PORT>:<CONTAINER_PORT> -v <LOCAL_PATH>:<CONTAINER_PATH> -d <IMAGE>:<TAG>
Using docker-compose.yaml :
version: '2'
services:
  cms:
    image: <IMAGE>:<TAG>
    ports:
      - <LOCAL_PORT>:<CONTAINER_PORT>
    volumes:
      - <LOCAL_PATH>:<CONTAINER_PATH>
Assume :
IMAGE: ghost-cms
TAG: latest
LOCAL_PORT: 8080
CONTAINER_PORT: 8080
LOCAL_PATH: /persistent-volume
CONTAINER_PATH: /var/lib/ghost/content
Examples:
First create /persistent-volume:
$ mkdir -p /persistent-volume
Then, with this docker-compose.yaml:
version: '2'
services:
  cms:
    image: ghost-cms:latest
    ports:
      - 8080:8080
    volumes:
      - /persistent-volume:/var/lib/ghost/content
start it with:
$ docker-compose -f docker-compose.yaml up -d

Each container has its own isolated filesystem. Whoever writes an image's Dockerfile gets to decide what filesystem paths it uses, but since it's isolated from other containers and the host it's very normal to use "system" paths. As another example, the standard Docker Hub database images use /var/lib/mysql and /var/lib/postgresql/data. For a custom application you might choose to use /app/data or even just /data, if that makes sense for you.
If you're creating an image FROM a pre-existing image like this, you'll usually inherit its filesystem layout, so the mount point for your custom image would be the same as in the base image.
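A minimal sketch of that inheritance (the config file name and path here are illustrative, not taken from the official image docs):

```dockerfile
# Derived image: the VOLUME /var/lib/ghost/content declared by the
# official ghost base image is inherited automatically, so the mount
# point stays the same for anyone running this image.
FROM ghost:latest
# Files copied OUTSIDE the volume path are baked into the image as usual.
COPY my-config.json /var/lib/ghost/config.production.json
```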
Flipping through the Ghost tutorials, it looks like most things you could want to do involve either using the admin UI or making manual changes in the content directory. The only files that change are in the CMS-Content named volume in your example (and even if you didn't name a volume, the Docker Hub ghost image specifies an anonymous volume there). That means you can't create a derived image with a standard theme or other similar setup: you can't change the content directory in a derived image, and if you experiment with docker commit (not recommended) the resulting image won't include the content from the volume.

Related

Exclude sub-folder when mounting host to volume docker

Suppose I have a Docker container and a folder on my host, /hostFolder. If I want to add this folder to the Docker container as a volume, I can do this either by using ADD in the Dockerfile or by mounting it as a volume.
So far, so good.
Now /hostFolder contains a sub-folder, /hostFolder/subFolder.
I want to mount /hostFolder into the Docker container (read-write or read-only does not matter; both work for me), but I do NOT want /hostFolder/subFolder included. I want to exclude it, and I also want the Docker container to be able to make changes to this sub-folder without those changes also appearing on the host.
Is this possible? If so, how?
Using docker-compose I'm able to use node_modules locally, but ignore it in the docker container using the following syntax in the docker-compose.yml
volumes:
  - './angularApp:/opt/app'
  - /opt/app/node_modules/
So everything in ./angularApp is mapped to /opt/app, and then I create another mount volume at /opt/app/node_modules/, which is now an empty directory, even if ./angularApp/node_modules on my local machine is not empty.
If you want to have subdirectories ignored by docker-compose but persistent, you can do the following in docker-compose.yml:
volumes:
  node_modules:
services:
  server:
    volumes:
      - .:/app
      - node_modules:/app/node_modules
This will mount your current directory as a shared volume, but mount a persistent docker volume in place of your local node_modules directory. This is similar to the answer by #kernix, but this will allow node_modules to persist between docker-compose up runs, which is likely the desired behavior.
For those trying to get a nice workflow going where node_modules isn't overridden by the local copy, this might help.
Change your docker-compose file to mount an anonymous persistent volume at node_modules to prevent your local copy from overriding it. This has been outlined in this thread a few times.
services:
  server:
    build: .
    volumes:
      - .:/app
      - /app/node_modules
This is the important bit we were missing: when spinning up your stack, use docker-compose up -V. Without this, if you added a new package and rebuilt your image, it would still be using the node_modules from your initial docker-compose launch.
-V, --renew-anon-volumes   Recreate anonymous volumes instead of retrieving
                           data from the previous containers.
To exclude a file, use the following
volumes:
  - /hostFolder:/folder
  - /dev/null:/folder/fileToBeExcluded
With the docker command line:
docker run \
  --mount type=bind,src=/hostFolder,dst=/containerFolder \
  --mount type=volume,dst=/containerFolder/subFolder \
  ...other-args...
The -v option may also be used (credit to Bogdan Mart), but --mount is clearer and recommended.
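For reference, a rough -v equivalent of the two --mount flags above might look like the sketch below: with -v, giving only a container path (no host path) creates an anonymous volume there, just as --mount type=volume with no source does.

```shell
docker run \
  -v /hostFolder:/containerFolder \
  -v /containerFolder/subFolder \
  ...other-args...
```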
First, using the ADD instruction in a Dockerfile is very different from using a volume (either via the -v argument to docker run or the VOLUME instruction in a Dockerfile). The ADD and COPY commands just take a copy of the files at the time docker build is run. These files are not updated until a fresh image is created with the docker build command. By contrast, using a volume is essentially saying "this directory should not be stored in the container image; instead use a directory on the host"; whenever a file inside a volume is changed, both the host and container will see it immediately.
I don't believe you can achieve what you want using volumes, you'll have to rethink your directory structure if you want to do this.
However, it's quite simple to achieve using COPY (which should be preferred to ADD). You can either use a .dockerignore file to exclude the subdirectory, or you can COPY all the files and then RUN rm to remove the subdirectory.
Remember that any files you add to the image with COPY or ADD must be inside the build context, i.e. in or below the directory you run docker build from.
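A minimal sketch of the .dockerignore route, assuming the build runs from the directory that contains subFolder:

```
# .dockerignore, placed in the root of the build context
subFolder/
```

With this in place, a plain COPY . /folder in the Dockerfile never sees subFolder, because excluded paths are not sent to the build context at all.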
For people who also had the issue that the node_modules folder was still being overwritten from the local system (and vice versa):
volumes:
  node_modules:
services:
  server:
    volumes:
      - .:/app
      - node_modules:/app/node_modules/
This is the solution, with the trailing / after node_modules being the fix.
Looks like the old solution doesn't work anymore (at least for me). Creating an empty folder and mapping the target folder to it helped, though.
volumes:
  - ./angularApp:/opt/app
  - .empty:/opt/app/node_modules/
I found this link which saved me: Working with docker bind mounts and node_modules.
This working solution will create an "exclude" named volume in the Docker volume manager. The volume name "exclude" is arbitrary, so you can use a custom name instead.
services:
  node:
    command: nodemon index.js
    volumes:
      - ./:/usr/local/app/
      # the named volume below prevents the host's node_modules from being mounted
      - exclude:/usr/local/app/node_modules/
volumes:
  exclude:
You can see more info about volumes in the official docs - Use a volume with docker compose
To exclude a mounted file contained in a volume from your machine, you have to overwrite it by allocating a volume to that same file.
In your config file:
services:
  server:
    build: .
    volumes:
      - .:/app
An example in your Dockerfile:
# Image Location
FROM node:13.12.0-buster
VOLUME /app/you_overwrite_file

use volume defined in Dockerfile from docker-compose

I have, for example, this service and volume defined in my docker-compose file:
postgres:
  image: postgres:9.4
  volumes:
    - db_data:/var/lib/postgresql/data
volumes:
  blue_prod_db:
    driver: rancher-nfs
Then, if you define a volume inside a Dockerfile like this:
RUN mkdir /stuff
COPY ./stuff/* /stuff/
VOLUME /stuff
How can you later access it through the docker-compose configuration and add it to a container?
When configured in the Dockerfile, a volume will result in any container started from that image (including the temporary containers created later in the build process by RUN commands) having a volume defined at the specified location, e.g. /stuff. If you do not define a source for that volume at run time, you will get an anonymous volume created for you by Docker at that location. However, you can always define a volume with a source at run time (even without the VOLUME declaration in the image) by specifying the location in your compose file:
version: "3"
services:
  app:
    image: your_image
    volumes:
      - data:/stuff
volumes:
  data:
Note that there are two volumes sections, one for a specific service that specifies where the volume is mounted inside the container, and another at the top level where you can specify the source of the volume. Without specifying a source, you'll get a local volume driver with a directory under /var/lib/docker bind mounted into the container.
I do not recommend specifying volumes inside the Dockerfile in general: it breaks the ability to extend the image in later steps for child images, and it clutters the filesystem with anonymous volumes that are not easy to trace back to their origin. It's best to define them at runtime with something like a compose file.

Moving a volume between containers with docker-compose

I have image A (some_laravel_project) and B (laravel_module). Image A is a Laravel project that looks like this.
app
modules
  core        (volume from image B here)
config
As the list above suggests I want to share a volume from Image B in Image A using docker-compose. I want to access the files in container B.
This is the docker-compose file I tried; I didn't receive any errors creating those images in GitLab CI. I checked, and the volume and its files are stored in the module_user:latest container.
I think I made a mistake mounting the volume to some_laravel_project.
version: '3'
services:
  laravel:
    image: some_laravel_project
    working_dir: /var/www
    volumes:
      - /var/www/storage
      - userdata:/var/www/Modules
  user:
    image: laravel_module
    volumes:
      - userdata:/user
volumes:
  userdata:
  webroot:
The method you used to share volumes across containers in docker-compose is the correct one. You can find this documented under docker-compose volumes:
"if you want to reuse a volume across multiple services, then define a named volume in the top-level volumes key. Use named volumes with services."
In your case, the directory /var/www/Modules in the laravel service will have the same content as /user inside the user service. You can verify that by going into the containers and checking each directory, running:
docker exec -it <container-name> bash

application configuration as a docker image

I'm trying to separate my application configuration file out from the application itself so that I can have a single docker image for application code and multiple images for storing different configurations.
I understand that you can use --volumes-from to attach a data-only container to the application container. But what I'd like to achieve is to have an image with all the configuration files, and have the application container attach to a volume container created from that image. I'm not sure if this is possible.
As far as I can see, there seems to be no way of running a data-only container from an image. Data-only containers are normally created with docker create -v. Hence this question.
You can run a data-only container from a custom image that contains your config files, e.g.:
docker create -v /volumeexposedinyourimage --name datacontainer yourimagename /bin/true
Here yourimagename is the name of your custom image (which has your config files), and /volumeexposedinyourimage is the volume exposing them.
docker-compose example
version: '2'
services:
  datacontainer:
    image: yourimagename
    volumes:
      - /volumeexposedinyourimage
    command: ["/bin/true"]
  appcontainer:
    image: appimage
    volumes_from:
      - datacontainer
Or you could mount a host directory containing the config files as a volume, e.g.:
docker run -d -P --name appcontainer -v /host/config:/path appimage
Place your config files in /host/config on the host, and in your appcontainer you can refer to them under /path.

how to create data volume from image for use in nginx container

I have an image (not container) which has the data baked in at /store. The image does not do anything else, it is only a vessel for holding the data in. I want to create a docker-compose file so that my nginx container has access to everything in /store at /usr/share/nginx/html
Do I need to make an intermediary container first? I am not sure how the docker-compose file would look. thanks
This is a quick two-step process:
docker run -v nginx-store:/store --rm store true
docker run -v nginx-store:/usr/share/nginx/html -d --name nginx nginx
The first run creates a named volume nginx-store from the contents of your store image (this happens any time you mount an empty volume in a container), and immediately exits and deletes the container.
The second run uses that named volume with the future nginx containers. To modify the nginx-store volume in the future, you can run any side container that mounts it with a similar -v nginx-store:/target flag.
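For example, such a side container for updating the volume later might look like this (the alpine image and the paths are illustrative):

```shell
# copy the current directory's files into the nginx-store named volume
docker run --rm \
  -v nginx-store:/target \
  -v "$(pwd)":/src \
  alpine cp -a /src/. /target/
```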
The best way to manage that would probably be to use a Docker volume to store your /store data.
You can do it once by creating a container from that image, mounting an empty Docker volume in it, and then copying the content of /store into the external Docker volume.
If you still need to use /store from an existing image, you will need to instantiate a container from it and have your nginx container retrieve the exposed volume (using volumes_from). In this case both containers would need to be on the same host.
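A compose v2 sketch of that volumes_from approach (the service and image names are illustrative, and it only works if the data image declares VOLUME /store):

```yaml
version: '2'
services:
  store:
    image: my-store-image   # bakes data in at /store and declares VOLUME /store
    command: ["true"]
  web:
    image: nginx
    volumes_from:
      - store               # nginx sees the data at /store
```

Note that volumes_from mounts the volume at its original path (/store), not at /usr/share/nginx/html, so nginx would need to be configured to serve from there.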
You could try using the local-persist volume plugin as follows:
version: '2'
services:
  web:
    image: nginx
    volumes:
      - data:/usr/share/nginx/html
volumes:
  data:
    driver: local-persist
    driver_opts:
      mountpoint: /data/local-persist/data
Obviously other volume plugin types might offer more flexibility.
https://docs.docker.com/engine/extend/plugins/
I'd suggest you consider baking /store into your nginx image. It will reduce the number of mounted volumes and thus simplify the overall structure, and it may improve performance.
You could do it in several ways:
Use your data image as the base for your nginx image. In this case you will need to write a Dockerfile for nginx, but it's not very difficult.
You can extract /store from your data image by creating a container with a true entrypoint and using docker cp to pull out the desired data, then COPY it into your nginx image.
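Those steps might be sketched roughly like this (image names and paths are illustrative):

```shell
# 1. create a stopped container from the data image and copy the data out
docker create --name tmp-store my-store-image true
docker cp tmp-store:/store ./store
docker rm tmp-store

# 2. then bake it into the nginx image, e.g. in its Dockerfile:
#    COPY ./store /usr/share/nginx/html
```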
