Docker shared volume not updating container for new files - docker

I'm working with the node:18.4.0-alpine image and using Vue 3 with it. I mounted my app with the volumes option in my docker-compose.yml file. When I add a new file from Finder or my IDE, I have to rebuild the container before I can use it inside the container. But if I add the file directly inside the container from a terminal, it works. I also can't create the file from a terminal inside the container once it has already been created from Finder/my IDE.
For example, I just created a new folder named account with an index.vue file in my IDE, and it didn't show up in the app. Then I tried to create that folder again from a terminal inside the container, and it threw an error saying the folder already exists, even though it isn't listed in the ls output.
mkdir: can't create directory 'account': File exists
But when I run cd account, it works, and ls lists the new index.vue file.
Here's my docker-compose file
frontend:
  container_name: frontend
  stdin_open: true
  build:
    context: ./etc/node
    dockerfile: Dockerfile
  volumes:
    - ./frontend:/app
  depends_on:
    - backend
    - mysql
  networks:
    - main
  tty: true
  command: sh -c "yarn install && yarn dev"
And terminal results:
/app/src/pages # ls
[...all].vue auth index.vue misc
/app/src/pages # mkdir account
mkdir: can't create directory 'account': File exists
/app/src/pages # cd account
/app/src/pages/account # ls
index.vue
/app/src/pages/account # cd ..
/app/src/pages # ls
[...all].vue auth index.vue misc
/app/src/pages #
So, when I update existing files, the changes show up fine. But for new files, I have to rebuild the container or create the file from the terminal. Is there a way to get the volume to pick up newly created files?

Related

How to mount host directory after copying the files to the container?

I need to copy the files of the src folder to the container, chowning them to the www-data user and group, so in my Dockerfile I did:
COPY --chown=www-data:www-data src ./
When I access the container I can see all the copied files, but if I edit a file on the host I'm not able to see the changes, so I have to rebuild the project using docker-compose up --build -d.
This is my docker-compose:
version: '3.9'
services:
  php-fpm:
    container_name: php_app
    restart: always
    build:
      context: .
      dockerfile: ./docker/php-fpm/Dockerfile
    #volumes:
    #  - ./src:/var/www/html
If I uncomment the volumes section I can work on the host directory and see the changes, but in this way I lose the www-data chown.
How can I manage this situation? Essentially I want to:
chown all files as www-data
update files in real time
There's no special feature to apply chown to mounted files. Leaving that and manual use of chown aside, you can make the php-fpm workers run with your uid instead. Here's how for the php:8.0.2-fpm-alpine image (in other images the path to the config file may be different):
# Copy pool config out of a running container
docker cp php_app:/usr/local/etc/php-fpm.d/www.conf .
# Change user in config
sed "s/user = www-data/user = $(id -u)/" www.conf -i
# and/or change group
sed "s/group = www-data/group = $(id -g)/" www.conf -i
Now mount the edited config into the container using volumes in docker-compose.yml:
services:
  php-fpm:
    volumes:
      - ./src:/var/www/html                          # code
      - ./www.conf:/usr/local/etc/php-fpm.d/www.conf # pool config
And restart the container.
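For example, a minimal sketch of recreating just this service so that the edited config is picked up (assuming the service name php-fpm from the compose file above):
docker-compose up -d --force-recreate php-fpm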

How to mount the folder as volume that docker image build creates?

The docker-compose file is as follows:
version: "3"
services:
backend:
build:
context: .
dockerfile: dockerfile_backend
image: backend:dev1.0.0
entrypoint: ["sh", "-c"]
command: python manage.py runserver
ports:
- "4000:4000"
The docker build creates a folder, let's say /docker_container/configs, which contains files like config.json and db.sqlite3. Mounting this folder as a volume is necessary because the contents of the folder get modified or updated at runtime, and these changes should not be lost.
I have tried adding a volume as follows:
volumes:
  - /host_path/configs:/docker_container/configs
Here the problem is that the mount point on the host (/host_path/configs) is initially empty, so the folder in the container (/docker_container/configs) ends up empty as well.
How can this problem be solved?
You are using a bind mount, which "hides" the content already existing in your image, as you describe: /host_path/configs being empty, /docker_container/configs will be empty as well.
You can use a named volume instead, which will automatically populate the volume with the content already existing in the image and allow you to perform updates as you described:
services:
  backend:
    # ...
    #
    # content of /docker_container/configs from the image
    # will be copied into backend-volume
    # and accessible at runtime
    volumes:
      - backend-volume:/docker_container/configs

volumes:
  backend-volume:
As stated in the Volume doc:
If you start a container which creates a new volume [...] and the container has files or directories in the directory to be mounted [...] the directory’s contents are copied into the volume
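To verify that the volume was populated, one option (a sketch; the myproject_ prefix is an assumption, since compose prepends the project name to volume names) is:
docker volume ls
docker run --rm -v myproject_backend-volume:/data alpine ls /data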
You can pre-populate the host directory once by copying the content from the image into the directory:
docker run --rm backend:dev1.0.0 tar -cC /docker_container/configs . | tar -xC /host_path/configs
Then start your compose project as it is, and the host path will already contain the original content from the image.
Another approach is to have an entrypoint script that copies the content into the mounted volume.
You can mount the host path at a different path (say /docker_container/config_from_host) and have an entrypoint script that copies the content of /docker_container/configs into /docker_container/config_from_host if that directory is empty.
A rough sketch:
$ cat Dockerfile
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
CMD /entrypoint.sh
$ cat entrypoint.sh
#!/bin/bash
# populate the mounted directory once, if it is still empty
if [ -z "$(ls -A /docker_container/config_from_host)" ]; then
    cp -r /docker_container/configs/* /docker_container/config_from_host/
fi
python manage.py runserver
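The corresponding volume mapping in docker-compose.yml would then look roughly like this (a sketch based on the paths above):
services:
  backend:
    # ...
    volumes:
      - /host_path/configs:/docker_container/config_from_host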

Dockerfile file not found in keycloak docker image

I recently tried to clone our production code into a local setup; this same code is running in production.
The Dockerfile looks like this:
FROM jboss/keycloak
COPY km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
USER root
RUN chown jboss /opt/jboss/entrypoint.sh && chmod +x /opt/jboss/entrypoint.sh
USER 1000
ENTRYPOINT ["/opt/jboss/entrypoint.sh"]
CMD [""]
I am able to build the docker image successfully, but when I try to run it I get this error:
Caused by: java.io.FileNotFoundException: km.json (No such file or directory)
Repo structure
km/keycloak-images/km.json
km/keycloak-images/DockerFile
km/keycloak-images/entrypoint.sh
Docker compose file structure
/km/docker-compose.yml
/km/docker-compose.dev.yml
The docker-compose.dev.yml looks like
version: '3'
# The only service we expose in local dev is the keycloak server
# running an h2 database.
services:
  keycloak:
    build: keycloak-image
    image: dt-keycloak
    environment:
      DB_VENDOR: h2
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: password
      KEYCLOAK_HOSTNAME: localhost
    ports:
      - 8080:8080
I run the command from /km
docker-compose -f docker-compose.dev.yml up --build
Basically I am not able to find the file inside the docker container to check:
$ docker run --rm -it <containerName> /bin/bash   # run the image and get a shell inside the container
cd /opt/jboss                                     # check whether km.json is there or not
Edited: basically, the source path in COPY km.json is incorrect. Try using an absolute path, or make it relative to the build context.
FROM jboss/keycloak
# changed this line
COPY ./km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
USER root
RUN chown jboss /opt/jboss/entrypoint.sh && chmod +x /opt/jboss/entrypoint.sh
USER 1000
ENTRYPOINT ["/opt/jboss/entrypoint.sh"]
CMD [""]
Your COPY operation is wrong. If you run from
/km
you probably need to change the COPY to
COPY keycloak-images/km.json /opt/jboss
If you are on a Mac, try using ADD instead of COPY, since macOS has had issues with COPY.
Try with this compose file:
version: '3'
services:
  keycloak:
    build:
      context: ./keycloak-images
    image: dt-keycloak
    environment:
      DB_VENDOR: h2
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: password
      KEYCLOAK_HOSTNAME: localhost
    ports:
      - 8080:8080
You have to specify the docker build context so that the files you need to copy are sent to the daemon.
Note that you need to adapt this context path when you do not execute docker-compose from the km directory. This is because in your Dockerfile you have specified:
COPY km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
which means the build context sent to the docker daemon must be a directory containing these files.
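One way to check that the file actually ended up in the image is to list the target directory, overriding the entrypoint just for the check (a sketch, assuming the dt-keycloak tag from the compose file):
docker-compose -f docker-compose.dev.yml build
docker run --rm --entrypoint ls dt-keycloak /opt/jboss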

Can't access a volume during building a docker image

I am trying to create a docker container with a volume called example, which has some files inside it, but I can't access it during the build. Here are the files I am using:
# docker-compose.yml
version: "3"
services:
example:
build: .
volumes:
- "./example:/var/example"
stdin_open: true
tty: true
And:
# Dockerfile
FROM ubuntu
RUN ls /var/example
CMD ["/bin/bash"]
When I run:
sudo docker-compose up
It gives me an error:
ls: cannot access /var/example: No such file or directory
But when I delete the RUN command from the Dockerfile and run sudo docker-compose up again, and then run:
docker exec -it c949eef14fcd /bin/bash # c949eef14fcd is the id of the created container
ls /var/example
... in another terminal window, there is no error, and I can see all the files of the example directory. Why?
Sorry, I have just found out that volumes are not accessible during the build. They are only available while the container is running, as stated here in point 9. But when I changed my Dockerfile to this:
# Dockerfile
FROM ubuntu:14.04
CMD ["ls", "/var/example"]
... it worked perfectly well and printed out all the files inside the example folder.
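If the files are really needed at build time rather than at run time, an alternative (a sketch, not part of the original answer, assuming the example directory sits next to the Dockerfile) is to COPY them into the image instead of relying on the bind mount:
# Dockerfile
FROM ubuntu
COPY example /var/example
RUN ls /var/example
CMD ["/bin/bash"]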

docker-compose file removes the files extracted by dockerfile in container directory

I want to build Drupal from a Dockerfile and install a module into the container directory /var/www/html/sites/all/modules using that Dockerfile.
When I build the Dockerfile with docker-compose build, it extracts correctly, but as soon as I run docker-compose up, the files are gone, although the volume is mapped.
Please look at both the docker-compose file and the Dockerfile.
Dockerfile
FROM drupal:7
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
ENV DRUPAL_VERSION 7.36
ENV DRUPAL_MD5 98e1f62c11a5dc5f9481935eefc814c5
ADD . /var/www/html/sites/all/modules
WORKDIR /var/www/html
RUN chown -R www-data:www-data sites
WORKDIR /var/www/html/sites/all/modules
# Install drupal-chat
ADD http://ftp.drupal.org/files/projects/{drupal-module}.tar.gz {drupal-module}.tar.gz
RUN tar xzvf {drupal-module}.tar.gz \
    && rm {drupal-module}.tar.gz
docker-compose file
# PHP Web Server
version: '2'
services:
  drupal_box:
    build: .
    ports:
      - "3500:80"
    external_links:
      - docker_mysqldb_1
    volumes:
      - ~/Desktop/mydockerbuild/drupal/modules:/var/www/html/sites/all/modules
      - ~/Desktop/mydockerbuild:/var/log/apache2
    networks:
      - default
      - docker_default
    environment:
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_DATABASE: drupal
    restart: always
    #entrypoint: ./Dockerfile
networks:
  docker_default:
    external: true
Executing:
sudo docker-compose build
sudo docker-compose up
After running both of the commands above, the directory in the container does not contain the {drupal-module} folder, even though I can see it being extracted successfully in the console (thanks to the xzvf flags in the tar command in the Dockerfile).
This setup does map the host directory and the container directory, and files that are added or deleted show up on both sides.
But as soon as I remove the first mapping in volumes (i.e. the ~/Desktop... one), the module is extracted into the directory, but the mapping is gone.
My main aim is to extract the {drupal-module} folder into /var/www/html/sites/all/modules and map that same folder to the host directory.
Please help!
So yes.
The answer is that you cannot get the extracted contents of a container folder into a host folder given in the volumes mapping in docker-compose, i.e. ./modules:/var/www/html/sites/all/modules is not going to work for the Drupal image.
I did it with named volumes, which do achieve this.
E.g. modules:/var/www/html/sites/all/modules
This creates a volume under /var/lib/docker/volumes/... (you can find the exact path with docker volume inspect), and the volume contains the same data that was extracted in your container.
Note: the difference lies in the leading ./ !
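As a rough sketch of what that looks like in docker-compose (based on the volume name modules used above):
services:
  drupal_box:
    build: .
    volumes:
      - modules:/var/www/html/sites/all/modules
volumes:
  modules: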
