This is an error message I get when building a Docker image:
Step 18 : RUN mkdir /var/www/app && chown luqo33:www-data /var/www/app
---> Running in 7b5854406120
mkdir: cannot create directory '/var/www/app': No such file or directory
This is a fragment of Dockerfile that causes the error:
FROM ubuntu:14.04
RUN groupadd -r luqo33 && useradd -r -g luqo33 luqo33
<installing nginx, fpm, php and a couple of other things>
RUN mkdir /var/www/app && chown luqo33:www-data /var/www/app
VOLUME /var/www/app
WORKDIR /var/www/app
mkdir: cannot create directory '/var/www/app': No such file or directory sounds so nonsensical - of course there is no such directory; I want to create it. What is wrong here?
The problem is that /var/www doesn't exist either, and mkdir isn't recursive by default -- it expects the immediate parent directory to exist.
Use:
mkdir -p /var/www/app
...or install a package that creates /var/www prior to reaching this point in your Dockerfile.
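To see the difference outside of Docker, here's a minimal shell sketch (it works in a throwaway temporary directory, so the paths are purely illustrative):

```shell
# Work in a throwaway directory so nothing real is touched
tmp=$(mktemp -d)
cd "$tmp"

# Plain mkdir fails because the parent chain var/www does not exist yet
mkdir var/www/app 2>/dev/null || echo "plain mkdir failed"

# -p creates every missing parent along the way
mkdir -p var/www/app
test -d var/www/app && echo "var/www/app now exists"
```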
When creating subdirectories under a parent directory (or directories) that doesn't exist yet, you must pass the -p flag to mkdir. Please update your Dockerfile with
RUN mkdir -p ...
I tested this and it's correct.
You can also simply use
WORKDIR /var/www/app
It will automatically create the folders if they don't exist.
Then switch back to the directory you need to be in.
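For example (a minimal sketch based on the base image from the question; the final WORKDIR is just whatever directory you actually want the image to end up in):

```dockerfile
FROM ubuntu:14.04
# WORKDIR creates /var/www/app, including any missing parents, if it does not exist
WORKDIR /var/www/app
# ... do whatever needs the directory here ...
# then switch back to the directory you need to be in
WORKDIR /
```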
Apart from the previous use cases, you can also use Docker Compose to create directories in case you want to make new dummy folders on docker-compose up:
volumes:
- .:/ftp/
- /ftp/node_modules
- /ftp/files
Also, if you want to create multiple directories and then change their owner, you can use:
RUN set -ex && bash -c 'mkdir -p /var/www/{app1,app2}' && bash -c 'chown luqo33:www-data /var/www/{app1,app2}'
Just this!
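Note the bash -c wrapper: brace expansion like /var/www/{app1,app2} is a bash feature, and the default shell for RUN is /bin/sh, which may not support it. A quick sketch outside Docker (using a temporary directory instead of /var/www):

```shell
# Throwaway base directory standing in for /var/www
base=$(mktemp -d)

# bash expands {app1,app2} into "$base/www/app1 $base/www/app2"
bash -c "mkdir -p $base/www/{app1,app2}"

test -d "$base/www/app1" && test -d "$base/www/app2" && echo "both created"
```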
Related
RUN adduser -D appUser
RUN mkdir /usr/share/app
RUN mkdir /logs
ADD Function/target/foo.jar /usr/share/app
WORKDIR /usr/share/app
RUN chown -R appUser /usr/share/app
RUN chown -R appUser /logs
USER appUser
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "foo.jar"]
I've got this weird issue I can't seem to work my head around.
My root folder contains two directories (both with subdirectories): 'Demo/Dockerfile' and 'Function/target/foo.jar'.
I have a copy command in my Dockerfile that reads
COPY Function/target/foo.jar /usr/share/app
but when I run docker build -f Demo/Dockerfile from the root folder, I get an error
stat /var/lib/docker/tmp/docker-builder238823934/Function/target/foo.jar: no such file or directory
I find this a bit strange, because when I edit the copy command to read COPY /target/foo.jar /usr/share/app and then cd into the Function directory and run
docker build -f ../Demo/Dockerfile
it builds successfully, or if I edit the Dockerfile to read COPY foo.jar /usr/share/app and then cd into the target directory and run docker build -f ../../Demo/Dockerfile, this also works.
Is there an explanation for this sort of behavior?
This is what my .dockerignore file looks like:
!**/Dockerfile
!DockerServiceDescription/**
!Function/target/*.war
!server.xml
!tomcat-users.xml
Docker uses the build context directory and its children only, and does not allow using any files outside it for security reasons.
You should pass the context directory to docker build, e.g. '.' for the current directory:
cd myproject
docker build -f Demo/Dockerfile .
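With that command, the paths inside COPY resolve relative to the context directory, not relative to the Dockerfile's location. A sketch assuming the layout described in the question:

```
myproject/
├── Demo/Dockerfile          <- contains: COPY Function/target/foo.jar /usr/share/app
└── Function/target/foo.jar

cd myproject
docker build -f Demo/Dockerfile -t myimage .    # "." makes myproject the context
```

That is why building from inside Function with COPY /target/foo.jar also worked: the context had shifted, so the COPY path resolved relative to Function instead.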
I'm trying to customize a Dockerfile; I just want to create a folder and assign the user (PUID and PGID) on the new folder.
Here is my full Dockerfile:
FROM linuxserver/nextcloud
COPY script /home/
RUN /home/script
The content of the script file :
#!/bin/sh
mkdir -p /data/local_data
chown -R abc:abc /data/local_data
I made it executable with chmod +x script.
At this moment it doesn't create the folder, and I see no error in the logs.
Command to run the container:
docker run -d \
--name=nextcloud \
-e PUID=1000 \
-e PGID=1000 \
-e TZ=Europe/Paris \
-p 443:443 \
-p 8080:80 \
-v /home/foouser/nextcloud:/config \
-v /home/foouser/data:/data \
--restart unless-stopped \
nextcloud_custom
Logs from build:
Step 1/3 : FROM linuxserver/nextcloud
---> d1af592649f2
Step 2/3 : COPY script /home/
---> 0b005872bd3b
Step 3/3 : RUN /home/script
---> Running in 9fbd3f9654df
Removing intermediate container 9fbd3f9654df
---> 91cc65981944
Successfully built 91cc65981944
Successfully tagged nextcloud_custom:latest
You can try running the commands directly:
RUN mkdir -p /data/local_data && chown -R abc:abc /data/local_data
You may also try changing your shebang to:
#!/bin/bash
For debugging, you may also try adding set -x in your script.
EDIT:
I noticed this Removing intermediate container in your logs; a workaround would be to use a volume with your docker run command:
-v /path/your/new/folder/HOST:/path/your/new/folder/container
You are trying to modify a folder which is specified as a VOLUME in your base image, but as per Docker documentation on Volumes:
Changing the volume from within the Dockerfile: If any build steps
change the data within the volume after it has been declared, those
changes will be discarded.
linuxserver/nextcloud does declare a volume /data which you are trying to change afterward, it's like doing:
VOLUME /data
...
RUN mkdir -p /data/local_data
The directory created will be discarded. You can, however, create your directory on container startup by modifying the image's entrypoint, so that the directory is created when the container starts. Currently linuxserver/nextcloud uses /init as its entrypoint, so you can do:
Your script content which you then define as entrypoint:
#!/bin/sh
mkdir -p /data/local_data
chown -R abc:abc /data/local_data
# Call the base image entrypoint with parameters
/init "$@"
Dockerfile:
FROM linuxserver/nextcloud
# Copy the script and call it at entrypoint instead
COPY script /home/
ENTRYPOINT ["/home/script"]
I am trying to fix some tests we're running on Jenkins with Docker, but the script that the ENTRYPOINT in my Dockerfile points to keeps running as root, even though I set the USER in the Dockerfile. This works fine on my local machine but not when running on our Jenkins box.
I've tried running su within my entrypoint script to make sure that the rest of the script runs as the correct user, but it still runs as root.
So my Dockerfile looks like this:
FROM python:3.6
RUN apt-get update && apt-get install -y gettext libgettextpo-dev
ARG DOCKER_UID # set to 2000 in docker-compose file
ARG ENV=prod
ENV ENV=${ENV}
ARG WORKERS=2
ENV WORKERS=${WORKERS}
RUN useradd -u ${DOCKER_UID} -ms /bin/bash app
RUN chmod -R 777 /home/app
ENV PYTHONUNBUFFERED 1
ADD . /code
WORKDIR /code
RUN chown -R app:app /code
RUN mkdir /platform
RUN chown -R app:app /platform
RUN pip install --upgrade pip
RUN whoami # outputs `root`
USER app
RUN whoami # outputs `app`
RUN .docker/deploy/install_requirements.sh $ENV # runs as `app`
EXPOSE 8000
ENTRYPOINT [".docker/deploy/start.sh", "$ENV"]
and my start.sh looks like:
#!/bin/bash
ENV=$1
echo "USER"
echo `whoami`
echo Running migrations...
python manage.py migrate
mkdir -p static
chmod -R 0755 static
cd /code/
if [ "$ENV" == "performance-dev" ];
then
/home/app/.local/bin/uwsgi --ini .docker/deploy/uwsgi.ini -p 4 --uid app
else
/home/app/.local/bin/uwsgi --ini .docker/deploy/uwsgi.ini --uid app
fi
but the
echo "USER"
echo `whoami`
outputs:
USER
root
which causes commands later in the script to fail, as they're run as the wrong user.
I'd expect the output to be
USER
app
and my understanding is that this issue is typically resolved by setting the USER command in the Dockerfile, but I do that, and the user does appear to switch while the Dockerfile itself is being run.
Edit
The issue was with my docker-compose configuration. My docker-compose config looks like:
version: '3'
services:
service:
user: "${DOCKER_UID}:${DOCKER_UID}"
build:
context: .
dockerfile: .docker/Dockerfile
args:
- ENV=prod
- DOCKER_UID=2000
DOCKER_UID is a variable set on my local machine but not on the Jenkins box, so I set it to 2000 in the override file
The issue I was having, as David Maze pointed out in the comments, was that I was setting the user when actually building the container, via my docker-compose file. I had set the user param to ${DOCKER_UID}, which was never actually set anywhere, so it was defaulting to an empty string. Setting it to 2000 fixed my issue.
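For reference, a minimal docker-compose.override.yml along the lines of that fix (the 2000 UID/GID values come from the question's setup) could look like:

```yaml
version: '3'
services:
  service:
    user: "2000:2000"
```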
I am trying to add a directory to my Docker image. I tried the methods below. During the build I don't see any errors, but once I run the container (I am using docker-compose) and get into it with docker exec -it 410e434a7304 /bin/sh, I don't see the directory in the path I am copying it into, nor do I see it as a volume when I do docker inspect.
Approach 1 : Classic mkdir
# Define working directory
WORKDIR /root
RUN cd /var \
mdkir www \\ no www directory created
COPY <fileDirectory> /var/www/<fileDirectory>
Approach 2 : Volume
FROM openjdk:8u171 as build
# Define working directory
WORKDIR /root
VOLUME["/var/www"]
COPY <fileDirectory> /var/www/<fileDirectory>
Your first approach is correct in principle, only that your RUN statement is faulty. Try:
RUN cd /var && mkdir www
Also, please note the fundamental difference between RUN mkdir and VOLUME: the former simply creates a directory inside your image, while the latter declares a mount point, chiefly so that a directory from the host (or a named volume) can be mounted into your container at that path.
This is how I made it work:
# Define working directory
WORKDIR /root
COPY <fileDirectory> /root/<fileDirectory>
RUN cd /var && mkdir www && cp -R /root/<fileDirectory> /var/www
RUN rm -rf /root/email-media
I had to copy the directory from my host machine to the Docker image's working directory /root, and from /root to the desired destination. Later I removed the directory from /root.
Not sure if that's the cleanest way; if I followed approach 1 with the right syntax suggested by @Fritz, it could never find the path created and threw an error.
After running the RUN layer it would remove the intermediate container (as below), and in the COPY line it would no longer have a reference to the path created in the RUN line.
Step 16/22 : RUN cd /var && mkdir www && cp -R /root/<fileDirectory> /var/www
---> Running in a9c7df27116e
Removing intermediate container a9c7df27116e
I have been trying to make the Search Guard setup script init_sg.sh run automatically after elasticsearch. I don't want to do it manually with docker exec. Here's what I have tried.
entrypoint.sh:
#! /bin/sh
elasticsearch
source init_sg.sh
Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.0
COPY config/ config/
COPY bin/ bin/
# Search Guard plugin
# https://github.com/floragunncom/search-guard/wiki
RUN elasticsearch-plugin install --batch com.floragunn:search-guard-6:6.1.0-20.1 \
&& chmod +x \
plugins/search-guard-6/tools/hash.sh \
plugins/search-guard-6/tools/sgadmin.sh \
&& chown -R elasticsearch config/sg/ \
&& chmod -R go= config/sg/
# This custom entrypoint script is used instead of
# the original's /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["bash","-c","entrypoint.sh"]
However, it'd throw a cannot run elasticsearch as root error:
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
So I guess I cannot run elasticsearch directly in entrypoint.sh, which is confusing because there's no problem when the Dockerfile is like this:
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.0
COPY config/ config/
COPY bin/ bin/
....
CMD ["elasticsearch"]
This thread's accepted answer doesn't work. There's no "/run/entrypoint.sh" in the container.
Solution:
Finally, I've managed to get it done. Here's my custom entrypoint script that will run the Search Guard setup script automatically:
source init_sg.sh
while [ $? -ne 0 ]; do
sleep 10
source init_sg.sh
done &
/bin/bash -c "source /usr/local/bin/docker-entrypoint.sh;"
If you have any alternative solutions, please feel free to answer.