How to mount the folder that the docker image build creates as a volume? - docker

The docker-compose file is as follows:
version: "3"
services:
  backend:
    build:
      context: .
      dockerfile: dockerfile_backend
    image: backend:dev1.0.0
    entrypoint: ["sh", "-c"]
    command: python manage.py runserver
    ports:
      - "4000:4000"
The docker build creates a folder, let's say /docker_container/configs, which contains files like config.json and db.sqlite3. Mounting this folder as a volume is necessary because its contents get modified or updated at runtime, and these changes should not be lost.
I have tried adding a volumes entry as follows:
volumes:
  - /host_path/configs:/docker_container/configs
The problem here is that the mount point on the host (/host_path/configs) is initially empty, so the folder in the container (/docker_container/configs) ends up empty as well.
How can this problem be solved?

You are using a bind mount, which will "hide" the content already existing in your image, as you describe: /host_path/configs being empty initially, /docker_container/configs will appear empty as well.
You can use a named volume instead, which will automatically be populated with the content already existing in the image and will allow you to perform updates as you described:
services:
  backend:
    # ...
    #
    # content of /docker_container/configs from the image
    # will be copied into backend-volume
    # and accessible at runtime
    volumes:
      - backend-volume:/docker_container/configs
volumes:
  backend-volume:
As stated in the Volume doc:
If you start a container which creates a new volume [...] and the container has files or directories in the directory to be mounted [...] the directory’s contents are copied into the volume

You can pre-populate the host directory once by copying the content from the image into that directory:
docker run --rm backend:dev1.0.0 tar -cC /docker_container/configs . | tar -xC /host_path/configs
Then start your compose project as it is and the host path already has the original content from the image.
Another approach is an entrypoint script that copies content into the mounted volume.
You can mount the host path at a different path (say /docker_container/config_from_host) and have an entrypoint script that copies the content of /docker_container/configs into /docker_container/config_from_host if that directory is empty.
Sample code:
$ cat Dockerfile
# ...
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
CMD ["/entrypoint.sh"]
$ cat entrypoint.sh
#!/bin/bash
# seed the mounted directory from the image's copy on first run
if [ -z "$(ls -A /docker_container/config_from_host)" ]; then
    cp -r /docker_container/configs/* /docker_container/config_from_host/
fi
exec python manage.py runserver
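The seed-if-empty logic itself is plain shell and can be checked outside any container; a minimal sketch (the function name and the temp-dir demo are illustrative, not part of the original setup):

```shell
#!/bin/sh
# seed_if_empty SRC DST: copy SRC's contents into DST only when DST
# is empty - the same check the entrypoint performs on the mount
seed_if_empty() {
    src="$1"; dst="$2"
    mkdir -p "$dst"
    if [ -z "$(ls -A "$dst")" ]; then
        cp -r "$src/." "$dst/"
    fi
}

# demo with temporary directories standing in for the two paths
src=$(mktemp -d); dst=$(mktemp -d)
echo '{}' > "$src/config.json"
seed_if_empty "$src" "$dst"      # dst empty: config.json is copied in
echo 'local change' > "$dst/extra.txt"
seed_if_empty "$src" "$dst"      # dst non-empty: left untouched
ls "$dst"
```

The second call is a no-op, which is what makes runtime changes to the mounted directory survive container restarts.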

Related

docker-compose subfolders don't appear in volume folder

Dockerfile:
FROM golang:latest
RUN mkdir /app/
RUN mkdir /app/subfolder1
RUN mkdir /app/subfolder2
VOLUME /app/
docker-compose.yml:
version: '3.3'
services:
  my_test:
    build: .
    volumes:
      - ./app:/app
I looked at how the mysql image shares its database files (in the mysql Dockerfile) and decided to do the same. I expected that on the first docker-compose up, the two subfolders would be created inside the /app folder on the host. But when running docker-compose up, only the /app folder is created, without the subfolders inside. What am I doing wrong?
Please tell me how I can achieve the same behavior as with the MySQL container, where on first start my external folder is populated with files and folders, and afterwards it is simply reused:
version: '3'
services:
  mysql:
    image: mysql:5.7
    volumes:
      - ./data/db:/var/lib/mysql
The example above works, but my first example doesn't.
The mysql image has an involved entrypoint script that does the first-time setup. That specifically checks to see whether the data directory exists or not:
if [ -d "$DATADIR/mysql" ]; then
    DATABASE_ALREADY_EXISTS='true'
fi
if [ -z "$DATABASE_ALREADY_EXISTS" ]; then
    docker_init_database_dir "$@"
    ...
fi
Note that this does not rely on any built-in Docker functionality, and does not copy any content out of the original image; it runs a fairly involved sequence of steps to populate the initial database setup, configure users, and run the contents in the /docker-entrypoint-initdb.d directory.
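For the mysql image specifically, the usual way to hook into that first-time setup is not to copy files yourself but to mount scripts into /docker-entrypoint-initdb.d, which the entrypoint runs only when the data directory is initialized for the first time; a sketch (the ./initdb path and the password value are placeholders):

```yaml
version: '3'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - ./data/db:/var/lib/mysql
      # *.sql and *.sh files in here are executed once,
      # only when /var/lib/mysql is empty on first start
      - ./initdb:/docker-entrypoint-initdb.d
```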
If you want to copy some sort of seed data into a mounted volume, your container generally needs to handle this itself. You could write an entrypoint script like:
#!/bin/sh
# If the data directory doesn't have content, copy it
if ! [ -d /data/content ]; then
    cp -a /app/data/content /data
fi
# Run whatever the container's main command is
exec "$@"
(There is a case where Docker will populate named volumes from image content. This has some severe limitations: it only works on named volumes and not bind-mounted host directories; it doesn't work on Kubernetes, if that's in your future; if the image content is updated, the volume will not be changed. Writing out the setup code explicitly at startup will give you more predictable behavior.)

when is docker volume available using docker compose?

I'm relatively new to Docker. I have a docker-compose.yml file that creates a volume. In one of my Dockerfiles I check to see the volume is created by listing the volume's contents. I get an error saying the volume doesn't exist. When does a volume actually become available when using docker compose?
Here's my docker-compose.yml:
version: "3.7"
services:
  app-api:
    image: api-dev
    container_name: api
    build:
      context: .
      dockerfile: ./app-api/Dockerfile.dev
    ports:
      - "5000:5000"
    volumes:
      - ../library:/app/library
    environment:
      ASPNETCORE_ENVIRONMENT: Development
I also need to have the volume available when creating my container because I use it in my dotnet restore command.
Here my Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS api-env
#list volume contents
RUN ls -al /app/library
WORKDIR /app/app-api
COPY ./app-api/*.csproj .
#need to have volume created before this command
RUN dotnet restore --source https://api.nuget.org/v3/index.json --source /app/library
#copies all files into current directory
COPY ./app-api/. .
RUN dotnet run Api.csproj
EXPOSE 5000
RUN echo "'dotnet running'"
I thought that by adding volumes: ... to docker-compose.yml, the volume would be created automatically. Do I still need to add a create-volume command in my Dockerfile?
TL;DR:
The commands you give in RUN are executed before mounting volumes.
The CMD will be executed after mounting the volumes.
Longer answer
The Dockerfile is used when building an image of the container. The image will then be used in a docker-compose.yml file to start up a container, to which a volume will be connected. The RUN command you are executing is executed when the image is built, so it will not have access to the volume.
You would normally issue a set of RUN commands, which would prepare the container image. Finally, you would define a CMD command, which would tell what program should be executed when a container starts, based on this image.
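Applied to the Dockerfile in the question, any step that needs the mounted /app/library has to move out of RUN and into the startup command; a minimal sketch, not a drop-in replacement (build steps trimmed):

```dockerfile
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS api-env
WORKDIR /app/app-api
COPY ./app-api/ .
EXPOSE 5000
# this runs when the container starts, after ../library has been
# mounted at /app/library, so the restore can see the mounted feed
CMD dotnet restore --source https://api.nuget.org/v3/index.json --source /app/library \
    && dotnet run Api.csproj
```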

Operation of the mkdir command with dockerfile

I cannot create a directory with the mkdir command in a container built from a Dockerfile.
My Dockerfile is simply:
FROM php:fpm
WORKDIR /var/www/html
VOLUME ./code:/var/www/html
RUN mkdir -p /var/www/html/foo
In this way I created a simple php:fpm container and wrote a RUN mkdir instruction to create a directory called foo.
I built it with:
docker build -t phpx .
My docker-compose file is as follows:
version: '3'
services:
  web:
    container_name: phpx
    build: .
    ports:
      - "80:80"
    volumes:
      - ./code:/var/www/html
Later, I ran the following command and entered the container shell:
docker exec -it phpx /bin/bash
But there is no directory called foo in /var/www/html.
I wonder where I'm going wrong.
Can you help me?
The reason is that you are mounting a volume from your host to /var/www/html.
Step by step:
RUN mkdir -p /var/www/html/foo creates the foo directory inside the filesystem of your container.
The volume ./code:/var/www/html in docker-compose.yml "hides" the content of /var/www/html in the container filesystem behind the contents of ./code on the host filesystem.
So actually, when you exec into your container you see the contents of the ./code directory on the host when you look at /var/www/html.
Fix: Either you remove the volume from your docker-compose.yml or you create the foo-directory on the host before starting the container.
Additional Remark: In your Dockerfile you declare a volume as VOLUME ./code:/var/www/html. This does not work and you should probably remove it. In a Dockerfile you cannot specify a path on your host.
Quoting from docker:
The host directory is declared at container run-time: The host directory (the mountpoint) is, by its nature, host-dependent. This is to preserve image portability. since a given host directory can’t be guaranteed to be available on all hosts. For this reason, you can’t mount a host directory from within the Dockerfile. The VOLUME instruction does not support specifying a host-dir parameter. You must specify the mountpoint when you create or run the container.
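The second fix (creating the directory on the host before starting) can be done once on the host; a minimal sketch, assuming the compose file and the ./code folder live in the current directory:

```shell
#!/bin/sh
# create the directory on the host side of the bind mount first,
# so /var/www/html/foo already exists once the container starts
mkdir -p ./code/foo
ls ./code
# then bring the stack up as usual:
# docker-compose up -d
```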
I am able to create a directory inside the 'workdir' for docker as follows:
Dockerfile content
COPY src/ /app
COPY logging.conf /app
COPY start.sh /app/
COPY Pipfile /app/
COPY Pipfile.lock /app/
COPY .env /app/
RUN mkdir -p /app/logs
COPY logs/some_log.log /app/logs/
WORKDIR /app
I have not mentioned the volume parameter in my 'docker-compose.yaml' file
So here is what I suggest: remove the VOLUME instruction from the Dockerfile, as correctly pointed out by Fabian Braun.
FROM php:fpm
RUN mkdir -p /var/www/html/foo
WORKDIR /var/www/html
And remove the volumes parameter from the docker-compose file. It will work. Additionally, I would like to know how you tested whether there is a directory named 'foo'.
Docker-compose file content:
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile # the name of your docker file
    container_name: phpx
    ports:
      - "80:80"
You can use the SHELL instruction of Dockerfile.
ENV HOME /usr/local
SHELL ["/bin/sh", "-c"]
RUN mkdir $HOME/logs

Mapping volume related with WORKDIR

How can I map a volume using the Image WORKDIR in docker-compose?
I'm trying to use
services:
  my-app:
    image: <image>
    volumes:
      - ./scripts:./scripts
But when I try to execute docker-compose up -d, I get the error below:
Cannot create container for service my-app: invalid volume spec "scripts": invalid volume specification: 'scripts': invalid mount config for type "volume": invalid mount path: 'scripts' mount path must be absolute
Is there any way to map my scripts folder in the WORKDIR of a image without knowing where is this folder?
No, there is no way to do that by default. But you can use a workaround if you would like:
services:
  my-app:
    image: <image>
    volumes:
      - ./scripts:/scripts
    command: bash -c "ln -s /scripts scripts && original command"
But this requires you to know the command beforehand. So you need to know either the command or the WORKDIR in advance.
You can also change the working directory to any other directory you want. If you don't want to override the command, then another possible option is below:
docker-compose up -d
docker-compose exec my-app ln -s /scripts scripts
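A related workaround is to resolve the image's WORKDIR on the host first and then mount straight into it; a sketch, where <image> stands for your image name as above:

```shell
# read the working directory configured in the image
workdir=$(docker inspect -f '{{.Config.WorkingDir}}' <image>)
# mount the host folder inside that directory directly
docker run -d -v "$(pwd)/scripts:$workdir/scripts" <image>
```

This avoids overriding the command, at the cost of one extra host-side step.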

docker-compose file removes the files extracted by dockerfile in container directory

I want to build Drupal from a Dockerfile and use that Dockerfile to install a module into the container directory /var/www/html/sites/all/modules.
When I build the Dockerfile with docker-compose build, it extracts correctly,
but as soon as I perform docker-compose up, the files are gone, even though the volume is mapped.
Please look at both the docker-compose file and the Dockerfile.
Dockerfile
FROM drupal:7
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
ENV DRUPAL_VERSION 7.36
ENV DRUPAL_MD5 98e1f62c11a5dc5f9481935eefc814c5
ADD . /var/www/html/sites/all/modules
WORKDIR /var/www/html
RUN chown -R www-data:www-data sites
WORKDIR /var/www/html/sites/all/modules
# Install drupal-chat
ADD http://ftp.drupal.org/files/projects/{drupal-module}.tar.gz {drupal-module}.tar.gz
RUN tar xzvf {drupal-module}.tar.gz \
    && rm {drupal-module}.tar.gz
docker-compose file
# PHP Web Server
version: '2'
services:
  drupal_box:
    build: .
    ports:
      - "3500:80"
    external_links:
      - docker_mysqldb_1
    volumes:
      - ~/Desktop/mydockerbuild/drupal/modules:/var/www/html/sites/all/modules
      - ~/Desktop/mydockerbuild:/var/log/apache2
    networks:
      - default
      - docker_default
    environment:
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_DATABASE: drupal
    restart: always
    #entrypoint: ./Dockerfile
networks:
  docker_default:
    external: true
executing:
sudo docker-compose build
sudo docker-compose up
On executing both of the commands above, the directory in the container does not contain the {drupal-module} folder, even though I can see it being extracted successfully in the console (due to xzvf in the tar command in the Dockerfile).
The mapping does work in the sense that files added or deleted can be seen both in the container and locally.
But as soon as I remove the first mapping in volumes (i.e. ~/Desktop...), the module is extracted into the directory, but the mapping is not done.
My main aim is to extract the {drupal-module} folder into /var/www/html/sites/all/modules and map that same folder to the host directory.
Please help!
So yes.
The answer is that you cannot have the extracted contents of the container folder show up in a host folder specified in the volumes mapping in docker-compose, i.e. ./modules:/var/www/html/sites/all/modules will not work for the drupal image.
I did it with named volumes, which can achieve this.
E.g. - modules:/var/www/html/sites/all/modules
This will create a volume under /var/lib/docker/volumes/... (you can get the exact path with docker volume inspect), and the volume will have the same data as extracted in your container.
Note - the difference lies in the missing ./ !!!
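Put together, a compose file using a named volume for the modules directory might look like this (a sketch trimmed to the relevant parts of the question's file):

```yaml
version: '2'
services:
  drupal_box:
    build: .
    ports:
      - "3500:80"
    volumes:
      # named volume (no ./): Docker seeds it with the image's
      # /var/www/html/sites/all/modules content on first use
      - modules:/var/www/html/sites/all/modules
volumes:
  modules:
```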
