I cannot create a directory with the mkdir command in a container built from a Dockerfile.
My Dockerfile is simply:
FROM php:fpm
WORKDIR /var/www/html
VOLUME ./code:/var/www/html
RUN mkdir -p /var/www/html/foo
With this I create a simple php:fpm container, and the RUN instruction should create a directory called foo.
I built the image with:
docker build -t phpx .
My docker-compose file looks like this:
version: '3'
services:
  web:
    container_name: phpx
    build: .
    ports:
      - "80:80"
    volumes:
      - ./code:/var/www/html
Later I ran the following command and entered the container's shell:
docker exec -it phpx /bin/bash
But there is no directory called foo in /var/www/html.
I wonder what I'm doing wrong. Can you help me?
The reason is that you are mounting a volume from your host to /var/www/html.
Step by step:
RUN mkdir -p /var/www/html/foo creates the foo directory inside the filesystem of your container.
docker-compose.yml ./code:/var/www/html "hides" the content of /var/www/html in the container filesystem behind the contents of ./code on the host filesystem.
So actually, when you exec into your container you see the contents of the ./code directory on the host when you look at /var/www/html.
Fix: Either you remove the volume from your docker-compose.yml or you create the foo-directory on the host before starting the container.
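For example, as a minimal sketch assuming the ./code path from your compose file, creating the directory on the host first makes it visible inside the container:
mkdir -p ./code/foo
docker-compose up -d
docker exec -it phpx ls /var/www/html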
Additional Remark: In your Dockerfile you declare a volume as VOLUME ./code:/var/www/html. This does not work and you should probably remove it. In a Dockerfile you cannot specify a path on your host.
Quoting from the Docker documentation:
The host directory is declared at container run-time: The host directory (the mountpoint) is, by its nature, host-dependent. This is to preserve image portability, since a given host directory can’t be guaranteed to be available on all hosts. For this reason, you can’t mount a host directory from within the Dockerfile. The VOLUME instruction does not support specifying a host-dir parameter. You must specify the mountpoint when you create or run the container.
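As the quote says, the host path is supplied when you create or run the container (or via compose), not in the Dockerfile. A rough run-time equivalent of your compose volume, as a sketch:
docker run -d --name phpx -v "$(pwd)/code:/var/www/html" phpx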
I am able to create a directory inside the working directory (WORKDIR) as follows:
Dockerfile content
COPY src/ /app
COPY logging.conf /app
COPY start.sh /app/
COPY Pipfile /app/
COPY Pipfile.lock /app/
COPY .env /app/
RUN mkdir -p /app/logs
COPY logs/some_log.log /app/logs/
WORKDIR /app
I have not mentioned the volume parameter in my 'docker-compose.yaml' file
So here is what I suggest: remove the VOLUME instruction from the Dockerfile, as correctly pointed out by Fabian Braun.
FROM php:fpm
RUN mkdir -p /var/www/html/foo
WORKDIR /var/www/html
And remove the volumes entry from the docker-compose file; it will work. Additionally, I would like to know how you tested whether there is a directory named 'foo'.
Docker-compose file content
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile # the name of your Dockerfile
    container_name: phpx
    ports:
      - "80:80"
You can use the SHELL instruction of Dockerfile.
ENV HOME /usr/local
SHELL ["/bin/sh", "-c"]
RUN mkdir $HOME/logs
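For context, a minimal self-contained sketch (hypothetical alpine base image, just to show the SHELL instruction in place):
FROM alpine
ENV HOME /usr/local
SHELL ["/bin/sh", "-c"]
RUN mkdir -p $HOME/logs && ls -ld $HOME/logs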
The docker-compose file is as follows:
version: "3"
services:
backend:
build:
context: .
dockerfile: dockerfile_backend
image: backend:dev1.0.0
entrypoint: ["sh", "-c"]
command: python manage.py runserver
ports:
- "4000:4000"
The docker build creates a folder, let's say /docker_container/configs, which has files like config.json and db.sqlite3. Mounting this folder as a volume is necessary because the contents of the folder get modified or updated at runtime, and these changes should not be lost.
I have tried adding a volume as follows:
volumes:
  - /host_path/configs:/docker_container/configs
The problem is that the mount point on the host (/host_path/configs) is initially empty, so the folder in the container (/docker_container/configs) ends up empty as well.
How can this problem be solved?
You are using a Bind Mount which will "hide" the content already existing in your image as you describe - /host_path/configs being empty, /docker_container/configs will be empty as well.
You can use named Volumes instead which will automatically populate the volume with content already existing in the image and allow you to perform updates as you described:
services:
  backend:
    # ...
    #
    # content of /docker_container/configs from the image
    # will be copied into backend-volume
    # and accessible at runtime
    volumes:
      - backend-volume:/docker_container/configs

volumes:
  backend-volume:
As stated in the Volume doc:
If you start a container which creates a new volume [...] and the container has files or directories in the directory to be mounted [...] the directory’s contents are copied into the volume
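To check that the named volume was indeed pre-populated, something along these lines should work (the exact volume name is prefixed with your compose project name):
docker-compose up -d
docker-compose exec backend ls /docker_container/configs
docker volume ls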
You can pre-populate the host directory once by copying the content from the image to that directory:
docker run --rm backend:dev1.0.0 tar -cC /docker_container/configs . | tar -xC /host_path/configs
Then start your compose project as it is and the host path already has the original content from the image.
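As a quick sanity check after that one-off copy, the host path should already contain the image's defaults before you bring the stack up:
ls /host_path/configs   # should now list config.json, db.sqlite3, ...
docker-compose up -d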
Another approach is to have an entrypoint script that copies content to the mounted volume.
You can mount the host path to a different path (say /docker_container/config_from_host) and have an entrypoint script which copies content from /docker_container/configs into /docker_container/config_from_host if that directory is empty.
A sketch of what that could look like:
$ cat Dockerfile
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
CMD ["/entrypoint.sh"]

$ cat entrypoint.sh
#!/bin/bash
# seed the bind-mounted directory from the image's defaults if it is empty
if [ -z "$(ls -A /docker_container/config_from_host)" ]; then
  cp -r /docker_container/configs/* /docker_container/config_from_host/
fi
python manage.py runserver
I have a very simple project:
Dockerfile:
from node:lts
VOLUME /scripts
WORKDIR /scripts
RUN bash -c 'ls /'
RUN bash -c 'ls /scripts'
RUN script.sh
docker-compose.yml:
version: '3.7'
services:
service:
build: .
volumes:
- .:/scripts
Then I run docker-compose build but it fails with /bin/sh: 1: script.sh: not found
From the ls /scripts I can see that Docker isn't binding my script to the container. I have Docker 19.03.8. Do you know what I am doing wrong?
When you run a Docker Compose file, the build: block is run first, and it ignores all of the options outside that block. A Dockerfile never has mounted volumes, it can never make network calls to other Compose containers, and it won't see environment: variables that are set elsewhere.
That means you must explicitly COPY code into your image before you can RUN it.
FROM node:lts
WORKDIR /scripts
COPY script.sh .
RUN ./script.sh
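One assumption in this sketch is that script.sh already has its execute bit set in the build context; if it does not, either run it through a shell or add a chmod step, for example:
COPY script.sh .
RUN chmod +x script.sh && ./script.sh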
I recently tried to clone our production setup locally (this code is already running in production).
The Dockerfile looks like:
FROM jboss/keycloak
COPY km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
USER root
RUN chown jboss /opt/jboss/entrypoint.sh && chmod +x /opt/jboss/entrypoint.sh
USER 1000
ENTRYPOINT ["/opt/jboss/entrypoint.sh"]
CMD [""]
I am able to build the Docker image successfully, but when I try to run it I get this error:
Caused by: java.io.FileNotFoundException: km.json (No such file or directory)
Repo structure
km/keycloak-images/km.json
km/keycloak-images/DockerFile
km/keycloak-images/entrypoint.sh
Docker compose file structure
/km/docker-compose.yml
/km/docker-compose.dev.yml
The docker-compose.dev.yml looks like
version: '3'
# The only service we expose in local dev is the keycloak server
# running an h2 database.
services:
  keycloak:
    build: keycloak-image
    image: dt-keycloak
    environment:
      DB_VENDOR: h2
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: password
      KEYCLOAK_HOSTNAME: localhost
    ports:
      - 8080:8080
I run the command from /km
docker-compose -f docker-compose.dev.yml up --build
Basically, I am not able to find the file inside the Docker container when I check:
$ docker run --rm -it <imageName> /bin/bash   # run the image and get a shell inside the container
cd /opt/jboss                                 # check whether km.json is there or not
Edit: Basically, the source path in COPY (km.json) is incorrect; it is resolved relative to the build context. Try making the path explicitly relative:
FROM jboss/keycloak
# changed this line
COPY ./km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
USER root
RUN chown jboss /opt/jboss/entrypoint.sh && chmod +x /opt/jboss/entrypoint.sh
USER 1000
ENTRYPOINT ["/opt/jboss/entrypoint.sh"]
CMD [""]
Your COPY operation is wrong.
If you run the build from
/km
you probably need to change the COPY to:
COPY keycloak-images/km.json /opt/jboss
If you are on a Mac, you can try ADD instead of COPY, since macOS has been reported to have issues with COPY.
Try with this compose file:
version: '3'
services:
  keycloak:
    build:
      context: ./keycloak-images
    image: dt-keycloak
    environment:
      DB_VENDOR: h2
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: password
      KEYCLOAK_HOSTNAME: localhost
    ports:
      - 8080:8080
You have to specify the Docker build context so that the files you need to copy are sent to the daemon.
Note that you need to adapt this context path if you do not execute docker-compose from the km directory. This is because your Dockerfile specifies:
COPY km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
meaning that the build context sent to the Docker daemon must be a directory containing these files.
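For reference, a roughly equivalent manual build (run from /km, with -f pointing at the Dockerfile as it is named in your repo structure) would be:
docker build -t dt-keycloak -f keycloak-images/DockerFile keycloak-images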
Can somebody explain to me why some Dockerfiles have steps to copy files rather than just mounting a volume with the files in it?
I have been looking at the setup for a Django project with Docker, and the Dockerfile has steps with COPY commands in it:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
In other Dockerfiles I have used (homeassistant) I have just mounted a directory as a volume and it's worked. What's going on here?
Can't I just keep the code and requirements in the same folder and mount them?
Just can't get my head around it
Edit:
For reference I'm looking at the Docker site tutorial for Django and it mounts the root dir as /code
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Why is that volume mounted to /code if we copy the files there anyway? Maybe that is what is throwing me off.
Volumes are used to manage files stored by the Docker container. They allow the container to write to that specific location on the host file system. If the only thing you want is to execute a piece of code, it is better to just copy it into the Docker image so that the container does not have write access to the host's file system.
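A quick illustration of that difference, with a hypothetical image name myimage: writes into a copied directory stay inside the container, while writes under a bind mount land on the host.
# /code was COPY'd into the image: the new file exists only inside the (now removed) container
docker run --rm myimage sh -c 'touch /code/made-in-container'
# the same path bind-mounted from the host: the new file appears in the current directory
docker run --rm -v "$(pwd):/code" myimage sh -c 'touch /code/made-in-container'
ls made-in-container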
Edit:
I do not actually know why they specify the volume in the docker compose setup. The build: . specifies it should use the Dockerfile in the current directory, which already includes the copy statement. It seems a bit pointless. Might be a mistake in the tutorial.
I have a docker setup that does not have the Dockerfile or docker-compose at the root because there are many services.
build/
  client.Dockerfile
deployments/
  docker-compose.yml
web/
  core/
    scripts/
      run.sh
docker-compose
version: "3.1"
services:
client:
build:
context: ..
dockerfile: ./build/client.Dockerfile
volumes:
- ./web/core:/app
ports:
- 3000:3000
- 35729:35729
And then the dockerfile:
FROM node:10.11
ADD web/core/yarn.lock /yarn.lock
ADD web/core/package.json /package.json
ENV NODE_PATH=/node_modules
ENV PATH=$PATH:/node_modules/.bin
RUN yarn
WORKDIR /app
ADD web/core /app
EXPOSE 3000
EXPOSE 35729
RUN cat /app/scripts/run.sh
ENTRYPOINT ["/bin/bash", "/app/scripts/run.sh"]
CMD ["start"]
Now the RUN command displays the contents of the file, so it is there. However, when running docker-compose up I get:
client_1 | /bin/bash: /app/scripts/run.sh: No such file or directory
I'm guessing it has something to do with the docker-compose context, because when the Dockerfile was at the root it seemed to work fine. I'm getting the feeling that Docker is designed essentially to work only at the root.
Context:
I want a live reloading create-react-app server like this: https://www.peterbe.com/plog/how-to-create-react-app-with-docker.
I would like to set up my project this way: https://github.com/golang-standards/project-layout
Your volume is mounted incorrectly; this should fix the issue. I created a similar folder structure and, from the project root, ran docker-compose -f ./deployments/docker-compose.yml up. It works normally; the only thing I changed is the volume path:
volumes:
  - ../web/core:/app
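This works because Compose resolves relative host paths in volumes against the directory containing the compose file (deployments/ here), not against the directory you run the command from. So, from the repository root:
docker-compose -f ./deployments/docker-compose.yml up --build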