Docker volume mount issue to a mounted folder

We have mounted a folder on a Linux machine into our Docker container application using docker-compose:

volumes:
  - /mnt/share:/mnt/share

/mnt/share is a mounted folder on the machine (not a real folder on the machine; it's our file server). If for some reason that mount is lost and then remounted, the application running in the Docker container has no access to the mounted folder until the container is restarted.

You might want to use a Volume Driver instead of bind-mounting a local filesystem.
See "Share data among machines" in the Docker documentation.
Without knowing more about your environment it is impossible to give a more detailed answer. It would be helpful to know whether your container runs in an AWS data center and whether you use NFSv3, NFSv4, or CIFS for mounting.
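As a sketch of that approach, assuming the file server speaks NFSv4 (the hostname fileserver.example.com, the export path /export/share, and the image name my-app are placeholders), a named volume backed by the local driver's NFS support could look like this in docker-compose:

volumes:
  share:
    driver: local
    driver_opts:
      type: nfs
      o: addr=fileserver.example.com,rw,nfsvers=4
      device: ":/export/share"

services:
  app:
    image: my-app
    volumes:
      - share:/mnt/share

The intent is that Docker manages the NFS mount itself, so the container no longer depends on a host bind mount that can go stale when the share is remounted.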

The following solution helped me to continue.
I wrote a script to check whether the folder exists.
The script is then called as a command in the docker-compose file.
version:"3"
services:
flowable-task-handler:
build: flowable-task-handler
ports:
- "8085:8085"
command: bash -c "/wait_for_file_mount.sh /mnt/share/fileshares/ && java -jar /app.jar"
wait_for_file_mount.sh
#!/bin/sh
# Used to check whether the mount folder is ready for Flowable to use
mountedfolder="$1"
until [ -d "$mountedfolder" ]; do
  echo "Error: mounted folder not found: $mountedfolder" >&2
  sleep 2
done
It's a Spring Boot application. I have removed the ENTRYPOINT from the Dockerfile, and the application is started using the command in docker-compose (java -jar /app.jar).
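For reference, a minimal sketch of such a Dockerfile (the base image and file locations are assumptions, not from the original answer; the point is only that there is no ENTRYPOINT or CMD):

FROM openjdk:11-jre
COPY app.jar /app.jar
COPY wait_for_file_mount.sh /wait_for_file_mount.sh
RUN chmod +x /wait_for_file_mount.sh
# No ENTRYPOINT or CMD here: the start command is supplied by docker-compose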

Defining the mount propagation as ":shared" should fix this:
-v /autofs:/autofs:shared \
I'm not sure about docker-compose (I don't really use it), but you can define a Docker volume with mount propagation and put it into your Compose file.
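A sketch of how that might look in docker-compose (long volume syntax, which needs Compose file format 3.2+; the paths mirror the question):

services:
  app:
    volumes:
      - type: bind
        source: /mnt/share
        target: /mnt/share
        bind:
          propagation: shared

With shared propagation, a remount of /mnt/share on the host should become visible inside the running container without a restart.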

Related

Docker-Compose mount volume overwrites host files

I am mounting a directory from a CMS with content files inside a Docker container.
The mounting itself works fine.
The CMS ships some basic files, which are copied into the mounted folder in the container during build. That folder is then mounted to a directory on the host, so the files from the container also appear on the host. I can change them and they are kept in sync.
But if I restart my container (docker-compose stop && docker-compose up -d), the files on the host are overwritten by the default ones from the container build.
Is there a possibility to force the local state of the files to overwrite the files in the container?
Kind regards
Maybe try configuring it as read-only:
docker run -v volume-name:/path/in/container:ro my/image
You can set it as read-only, as in gCoh's answer. For docker-compose, use the long volume syntax:

volumes:
  - type: bind
    source: ./host-source-folder
    target: /container-folder
    read_only: true

Wanted persistent data with Docker volumes but have empty directories instead

I'm writing a Dockerfile for a Java application, but I'm struggling with volumes: the mounted volumes are empty.
I've read the Dockerfile reference guide and the best practices for writing Dockerfiles but, for a start, my example is quite complicated.
What I want is to be able to have the following items on the host (in a mounted volume):
configuration folder,
log folder,
data folder,
properties files
Let me summarize:
When the application is installed (extracted from the tar.gz with the RUN command), it writes a bunch of files and directories (including log and conf).
When the application is started (with CMD or ENTRYPOINT), it creates a data folder if it doesn't exist and puts data files in it.
I'm only interested in:
/rootapplicationfolder/conf_folder
/rootapplicationfolder/log_folder
/rootapplicationfolder/data_folder
/rootapplicationfolder/properties_files
I'm not interested in /rootapplicationfolder/binary_files
There is something that I don't see. I've read and applied the information found in the two following links, without success.
Questions:
Should I 'mkdir' only the top-level dir on the host to be mapped with /rootapplicationfolder? What about the files?
Is the order of 'VOLUME' in my Dockerfile important?
Does it need to be placed before or after the deflating (RUN tar zxvf compressed_application)?
https://groups.google.com/forum/#!topic/docker-user/T84nlzw_vpI
Docker on Linux - Empty mounted volumes
Try using docker-compose and its volumes property to set which path you want to mount between your machine and your container.
Version 2 example:

version: '2'
services:
  web:
    image: my-web-app
    build: .
    command: bash -c "npm start"
    ports:
      - "8888:8888"
    volumes:
      - .:/home/app/code             # mounts your current path at /home/app/code
      - /home/app/code/node_modules/ # anonymous volume that shields this sub-directory from the host mount
    environment:
      NODE_ENV: development
You can look at this repository too.
https://github.com/williamcabrera4/docker-flask/blob/master/docker-compose.yml
Well, I've managed to get what I want.
First, I don't have any VOLUME directive in my Dockerfile; all the shared directories are created with the -v option of the docker run command.
After that, I had issues extracting the archive: the extraction cannot overwrite an existing directory mounted with -v, because that's simply not possible.
So I deflate the archive somewhere the -v mounted volumes don't exist, and AFTER this step I mv the contents of deflated/somedirectory into the -v mounted directory.
I still had issues with Docker on CentOS: the mv would copy the files to the destination but was unable to delete them at the source after the move. I got tired of it and simply used a Debian distribution instead.
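A minimal sketch of that extract-then-move pattern as an entrypoint script (the archive name, staging path, and use of exec "$@" are illustrative assumptions, not from the original answer):

#!/bin/sh
# Extract outside the mounted volumes, then move the contents into them
mkdir -p /opt/staging
tar zxf /tmp/compressed_application.tar.gz -C /opt/staging
mv /opt/staging/conf_folder/* /rootapplicationfolder/conf_folder/
mv /opt/staging/log_folder/*  /rootapplicationfolder/log_folder/
mv /opt/staging/data_folder/* /rootapplicationfolder/data_folder/
# Hand over to the container's command
exec "$@"

Because the mounted directories already exist when the container starts, mv only has to move files into them and never has to replace the directories themselves.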

Docker volumes mounting on Windows 8 is not working

Context
I want to run a Docker Compose application on Windows 8. I built it under Ubuntu 16.04, where it works perfectly.
This Docker Compose setup runs:
nginx
php-fpm
The two containers use volumes.
Files
My .env file:
COMPOSE_CONVERT_WINDOWS_PATHS=1
APPLICATION_PATH=//C/Users/my_user/Documents/Development/my_application
My docker-compose.yml file:
version: '2'
services:
  web:
    build: ../application-web/
    ports:
      - "80:80"
    tty: true
    # Add a volume to link php code on the host and inside the container
    volumes:
      - ${APPLICATION_PATH}:/usr/share/nginx/html/application
      - ${APPLICATION_PATH}/docker_files/docker-assistant:/usr/share/nginx/html/assistant
    # Add hostnames to allow devs to call special urls to open sites
    extra_hosts:
      - "localhost:127.0.0.1"
      - "assistant.docker:127.0.0.1"
      - "application.dev:127.0.0.1"
    depends_on:
      - custom-php
    links:
      - custom-php:custom-php
  custom-php:
    build: ../application-php/
    ports:
      - "50:50"
    volumes:
      - ${APPLICATION_PATH}:/usr/share/nginx/html/application
      - ${APPLICATION_PATH}/docker_files/docker-assistant:/usr/share/nginx/html/assistant
Problem
When I run docker-compose up, everything goes well; the containers start.
But when I try to reach http://192.168.99.100 in my web browser, I get a 403 error.
My investigation shows that no volumes are mounted in the nginx and php containers:

docker exec -it compose_web_1 bash
ls -la /usr/share/nginx/html/assistant/

shows

drwxr-xr-x 2 root root   80 May 18 15:30 .
drwxr-xr-x 2 root root 4096 May 18 16:10 ..

It seems that Docker cannot mount volumes. Why?
Other information
I am using the Docker Toolbox: https://www.docker.com/products/docker-toolbox
I know it's the right IP address because when I try to reach it in my web browser, I see my nginx container displaying logs.
Setting the environment variable APPLICATION_PATH to //C:/Users/my_user/Documents/Development/my_application cannot work, because Docker uses the ":" character as the separator in volume declarations:
ERROR: Volume //C:/Users/my_user/Documents/Development/my_application://C:/Users/my_user/Documents/Development/my_application has incorrect format, should be external:internal[:mode]
It's not an nginx problem, because when I create an index.phtml file in the folder, I am able to run it:
<?php
echo 'Hello world!';
OK, I finally did it!
TL;DR
Follow these instructions to be able to access C:\ inside your containers.
1. Install the Docker Toolbox
Go get it here: https://www.docker.com/products/docker-toolbox
Install it.
2. Run a Hello world
Open a Docker Quickstart Terminal.
Run in it:
docker run hello-world
3. Share C:\ with Docker
Open VirtualBox.
Open the configuration of the default virtual machine and go to Shared Folders.
Modify or create a new shared folder by clicking the buttons on the right. Set the options to:
Folder path: C:\
Folder name: C
Auto-mount: enabled
Make permanent: enabled
Then validate.
4. Activate sharing
Shut down the default virtual machine, then restart it.
5. Set your paths
E.g., if you have a .env file:
COMPOSE_CONVERT_WINDOWS_PATHS=1
APPLICATION_PATH=//C/path_from_C_to_the_folder_you_want_to_share_on_the_volume
/!\ You need to set COMPOSE_CONVERT_WINDOWS_PATHS to 1!
6. Start your Compose
In the Docker Quickstart Terminal, go to your Docker Compose folder, then start it:
cd /path_to_your_compose_folder
docker-compose up
Why do I have to do all that? It's so complicated!
Docker relies on Linux namespaces; without Linux, it can't work. To allow Docker to run on Windows, the Docker Toolbox installs a Linux virtual machine, named default, in which all the containers run.
The default virtual machine is created and runs within VirtualBox; that's why you have to share your folders using VirtualBox.
After sharing, the default virtual machine will have a mounted folder in it with a custom name (in the above example it's C, but it could be elephant or whatever).
Finally, Docker mounts volumes from the default virtual machine into the container, so you have to use the name of that shared folder in your volume declaration (in the above example it's C, but it could be elephant or whatever).
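To illustrate that last point (the share name elephant is hypothetical): if the VirtualBox shared folder were named elephant instead of C, the path in the .env file would start with that name instead:

APPLICATION_PATH=//elephant/Users/my_user/Documents/Development/my_application

In other words, the leading //C in the earlier example refers to the shared-folder name, not directly to the Windows drive letter.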

How to create data volume in docker?

I have some Git code (around 10 GB) kept in a folder "src" in my home directory. I have read somewhere that I can mount this code as a data volume in Docker.
I am a newbie to Docker. I only know of the "docker volume create" command, but I am totally unsure how to use it.
Could someone help me achieve this?
Bishal's answer has instructions on how to use mapping with Docker Compose. When using plain Docker, use the command:

docker run -v <absolute path to src folder on host>:<absolute path in container> some-image
# Real example:
docker run -v ~/src:/src some-image
Docker allows for easy volume mapping, which can be configured in your docker-compose.yaml file. Volume mapping lets you share a directory on your host machine with your Docker container.

version: '2'
services:
  web:
    build: .
    volumes:
      - .:/code

In the above snippet, the files in the current directory of the host machine are mapped to /code in the Docker container.
This article has a detailed explanation.

Docker Data/Named Volumes

I am trying to wrap my mind around Docker volumes, but I must be missing something.
Let's say I have a Python app that requires some initialisation depending on env variables. What I'm trying to achieve is a "code-only image" from which I can start containers whose volume is attached to other containers at execution. The entrypoint script of the main container will then read and generate some files from/on the code-only container.
I tried to create an image to have a copy of the code:

FROM ubuntu
COPY ./app /usr/local/code/app

Then: docker create --name code_volume
And with docker-compose:

app:
  image: python/app
  hostname: app
  ports:
    - "2443:443"
  environment:
    - ENV=stuff
  volumes_from:
    - code_volume
I get an error from the app container saying it can't find a file in /usr/local/code/app/src, but when I run the code_volume container with bash and ls into that folder, the file is sitting there...
I tried to change access rights and to add /bin/true (having seen it in some examples), but I just can't get what I want working. I checked the docker volume create feature, but it seems to be for storing/sharing data afterwards.
What am I missing? Is the entrypoint script executed before volumes are mounted? Are there any best practices for cases like this that don't involve mounting folders and keeping one copy for every container? Should I be rethinking my containers?
You did not declare the volume on the code_volume container upon creation. Add the -v option (docker create also needs an image argument; <code-image> below stands for the image built from the Dockerfile above):

docker create -v /usr/local/code/app --name code_volume <code-image> /bin/true
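Putting it together, a minimal sketch of the data-volume-container workflow (the tag code-image is hypothetical, chosen here just to tie the steps together):

# Build the code-only image from the Dockerfile above
docker build -t code-image .
# Create (not run) a stopped container that exposes the code as a volume
docker create -v /usr/local/code/app --name code_volume code-image /bin/true
# Start the app; volumes_from in the compose file attaches the volume
docker-compose up

Any container started with volumes_from: code_volume then sees the code at /usr/local/code/app.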
