I have a Dockerfile as follows:
FROM jenkins/jenkins:2.119
USER jenkins
ENV HOME /var/jenkins_home
COPY --chown=jenkins:jenkins ssh ${HOME}/.ssh/
RUN chmod 700 ${HOME}/.ssh && \
chmod 600 ${HOME}/.ssh/*
The ssh directory has 755/644 permissions on the directory/files on the build machine. However, when I build with
docker build -t my/temp .
and start the image with an ls command
docker run -it --rm my/temp ls -la /var/jenkins_home/.ssh
neither of the chmod commands is applied to the image:
drwxr-xr-x 2 jenkins jenkins 4096 May 3 12:46 .
drwxr-xr-x 4 jenkins jenkins 4096 May 3 12:46 ..
-rw-r--r-- 1 jenkins jenkins 391 May 3 11:42 known_hosts
During the build I see
Step 4/6 : COPY --chown=jenkins:jenkins ssh ${HOME}/.ssh/
---> 58e0d8242fac
Step 5/6 : RUN chmod 700 ${HOME}/.ssh && chmod 600 ${HOME}/.ssh/*
---> Running in 0c805d4d4252
Removing intermediate container 0c805d4d4252
---> bbfc828ace79
It looks like the chmod is discarded. How can I stop this happening?
I'm using the latest Docker (Edge) on macOS:
Version 18.05.0-ce-rc1-mac63 (24246); edge 3b5a9a44cd
EDIT
Building with --rm=false didn't work either (after deleting the image and rebuilding), though the "Removing intermediate container" message no longer appeared:
docker build -t my/temp --rm=false .
docker run -it --rm my/temp ls -la /var/jenkins_home/.ssh
drwxr-xr-x 2 jenkins jenkins 4096 May 3 15:42 .
drwxr-xr-x 4 jenkins jenkins 4096 May 3 15:42 ..
-rw-r--r-- 1 jenkins jenkins 391 May 3 11:42 known_hosts
EDIT 2
So basically a bug in Docker: when the base image declares a VOLUME, chmod on the volume path has no effect in the built image; similarly, RUN mkdir on the volume failed, while COPY did work but left the directory with the wrong permissions. Thanks to bkconrad.
EDIT 3
Created a fork with a fix here: https://github.com/systematicmethods/jenkins-docker
build.sh will build an image locally
This has to do with how Docker handles VOLUMEs for images.
From docker inspect my/temp:
"Volumes": {
"/var/jenkins_home": {}
},
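For reference, you can pull just the volume list out of any image with an inspect format string:
docker inspect -f '{{ json .Config.Volumes }}' my/temp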
There's a helpful ticket about this from the moby project:
https://github.com/moby/moby/issues/12779
Basically you'll need to do your chmod at run time.
Setting your HOME environment variable to a non-volume path like /tmp shows the expected behavior:
$ docker run -it --rm my/temp ls -la /tmp/.ssh
total 8
drwx------ 2 jenkins jenkins 4096 May 3 17:31 .
drwxrwxrwt 6 root root 4096 May 3 17:31 ..
-rw------- 1 jenkins jenkins 0 May 3 17:24 dummy
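If the files have to live under the volume path, one way to do that run-time chmod is a small wrapper entrypoint. Here is a minimal sketch, assuming the stock image launches Jenkins via /usr/local/bin/jenkins.sh; fix-perms.sh is a hypothetical script added next to the Dockerfile:
#!/bin/sh
# fix-perms.sh (hypothetical wrapper): tighten the permissions at container
# start, after /var/jenkins_home has been set up as a volume
set -e
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME"/.ssh/*
# hand off to the image's normal Jenkins launcher
exec /usr/local/bin/jenkins.sh "$@"
And in the Dockerfile:
COPY --chown=jenkins:jenkins fix-perms.sh /usr/local/bin/fix-perms.sh
RUN chmod +x /usr/local/bin/fix-perms.sh
ENTRYPOINT ["/usr/local/bin/fix-perms.sh"]
Note that this overrides the base image's declared ENTRYPOINT, so double-check how your base image normally starts Jenkins before adopting it.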
Step 5/6 : RUN chmod 700 ${HOME}/.ssh && chmod 600 ${HOME}/.ssh/*
---> Running in 0c805d4d4252
Removing intermediate container 0c805d4d4252
As you can see, the intermediate container is being removed, which is normal Docker behavior; if you want to keep those containers, use the command below.
docker build -t my/temp --rm=false .
It's also been explained in another post:
Why docker build image from docker file will create container when build exit incorrectly?
I want to use a volume mounted on my container but it throws the next error when trying to run:
docker: Error response from daemon: error while creating mount source
path '/var/skeeter/templates': mkdir /var/skeeter: read-only file
system.
This is my Dockerfile:
FROM maven:3-jdk-13-alpine
RUN mkdir -p /var/container/skeeter/templates
WORKDIR /project
ADD ./target/skeeter-0.0.1-SNAPSHOT.jar skeeter-0.0.1-SNAPSHOT.jar
EXPOSE 8080
CMD java -jar skeeter-0.0.1-SNAPSHOT.jar
And this is the run cmd:
docker run -t -p 8080:8080 -v
/var/skeeter/templates:/var/container/skeeter/templates --name
skeeter-docker-container skeeter-docker-image:latest
This is the command output when I'm checking the directories' permissions:
ls -l /var/skeeter/
total 4 drwxrwxrwx 2 root root 4096 ago 11 16:45 templates
ls -ld /var/skeeter/
drwxrwxrwx 3 root root 4096 ago 11 16:45 /var/skeeter/
Update:
I created a new named volume and used its name in the -v parameter, and it ran, but the Java app cannot find files inside the directory.
It was just a permissions issue.
I moved the source directory to /home/myuser/directory/ and it worked.
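For illustration, the same run command pointed at a bind-mount source under the home directory (the exact host path is whatever you moved the templates to):
docker run -t -p 8080:8080 \
  -v /home/myuser/skeeter/templates:/var/container/skeeter/templates \
  --name skeeter-docker-container skeeter-docker-image:latest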
So here is my Dockerfile, simplified. The original file is https://github.com/gremo/docker-folder-mirror/blob/master/Dockerfile:
FROM alpine:latest
COPY ./docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
When I run my image locally (assuming the tag is folder-mirror:latest) everything works fine:
docker run --rm --env-file ./.env -v "${PWD}/data:/data" -v "${PWD}/logs:/var/log" folder-mirror:latest
However, if I try to run the image directly (from Docker Hub):
docker run --rm --env-file ./.env -v "${PWD}/data:/data" -v "${PWD}/logs:/var/log" gremo1982/folder-mirror
... it gives me the following error:
C:\Program Files\Docker\Docker\resources\bin\docker.exe: Error
response from daemon: OCI runtime create failed:
container_linux.go:346: starting container process caused "exec:
\"docker-entrypoint.sh\": executable file not found in $PATH":
unknown.
This is my first Docker image, so I'm pretty sure... I'm missing something. In fact, if I check the remote image I get:
/ # ls -la /usr/local/bin/
total 12
drwxr-xr-x 1 root root 4096 Feb 18 2020 .
drwxr-xr-x 1 root root 4096 Jan 16 22:52 ..
-rw-r--r-- 1 root root 890 Feb 18 2020 docker-entrypoint.sh
That is, my entrypoint is not executable. Why? And why does it work locally?
Why it works locally I cannot answer, but there are two things you should try. First, as the other commenters have suggested, do a chmod on the script to make it executable.
Second, according to the Docker reference, the ENTRYPOINT instruction takes two forms: the exec form and the shell form. You are using the exec form, and it is possible that your environment variables are not being loaded correctly.
Try changing to the shell form.
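As a sketch, both suggestions applied to the Dockerfile above (the chmod runs at build time; the commented-out line shows what the shell form would look like):
FROM alpine:latest
COPY ./docker-entrypoint.sh /usr/local/bin/
# make sure the execute bit is set inside the image, regardless of the host file mode
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
# exec form, as in the original:
ENTRYPOINT ["docker-entrypoint.sh"]
# shell form alternative (runs the script via /bin/sh -c):
# ENTRYPOINT docker-entrypoint.sh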
In case anyone runs into this problem: I was having a similar issue when creating a new user:
RUN useradd --create-home appuser
WORKDIR /home/appuser
USER appuser
COPY . .
ENTRYPOINT [ "/home/appuser/docker-entrypoint.sh" ]
I dropped the new user and it started working (I added the chmod just for good measure).
Notice I had to use the full path "/app/docker-entrypoint.sh"; otherwise it wouldn't work.
WORKDIR /app
COPY . .
RUN chmod +x docker-entrypoint.sh
ENTRYPOINT [ "/app/docker-entrypoint.sh" ]
I build a docker image based on following Dockerfile on Ubuntu:
FROM openjdk:8-jre-alpine
USER root
RUN echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX && ls -ald /
RUN chmod 777 /
RUN echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX && ls -ald /
ENTRYPOINT [ "sh", "-c", "echo test" ]
I'm expecting the root path to end up with the permissions I set, but building the Docker image outputs the following (note the output of ls -ald /):
docker build . -f Dockerfile
Sending build context to Docker daemon 2.048kB
Step 1/6 : FROM openjdk:8-jre-alpine
---> b76bbdb2809f
Step 2/6 : USER root
---> Using cache
---> 18045a1e2d82
Step 3/6 : RUN echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX && ls -ald /
---> Running in 2309a8753729
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
drwxr-xr-x 1 root root 4096 Mar 19 13:50 /
Removing intermediate container 2309a8753729
---> 809221ec8f71
Step 4/6 : RUN chmod 777 /
---> Running in 81df09ec266c
Removing intermediate container 81df09ec266c
---> 9ea5e2282356
Step 5/6 : RUN echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX && ls -ald /
---> Running in ef91613577da
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
drwxr-xr-x 1 root root 4096 Mar 19 13:50 /
Removing intermediate container ef91613577da
---> cd7914160661
Step 6/6 : ENTRYPOINT [ "sh", "-c", "echo test" ]
---> Running in 3d724aca37fe
Removing intermediate container 3d724aca37fe
---> 143e46ec55a8
Successfully built 143e46ec55a8
How can I get these permissions applied to /?
UPDATE: I have specific reasons why I'm temporarily forced to set these permissions on the root folder: unfortunately, I'm running a specific application within the container as a user other than root, and this application writes something directly into /. Currently, this isn't configurable.
If I do it on another folder under root, it works as expected:
...
Step 6/8 : RUN mkdir -p /mytest && chmod 777 /mytest
---> Running in 7aa3c7b288fd
Removing intermediate container 7aa3c7b288fd
---> 1717229e5ac0
Step 7/8 : RUN echo ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ && ls -ald /mytest
---> Running in 2238987e1dd6
ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
drwxrwxrwx 2 root root 4096 Mar 19 14:42 /mytest
...
On execution of container:
drwxrwxrwx 2 root root 4096 Mar 19 14:42 mytest
To check the permissions of your root folder from a shell inside the running container (the Alpine-based image ships sh rather than bash), perform the following operations:
docker exec -it container_id sh
cd /
ls -ald .
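If the build-time chmod on / really won't stick, one workaround sketch, along the same lines as the run-time chmod in the volume question above, is to apply it when the container starts. Here start.sh, appuser and app.jar are placeholders, not parts of the original setup:
FROM openjdk:8-jre-alpine
USER root
COPY start.sh /start.sh
RUN chmod +x /start.sh
# the container must start as root so the chmod in start.sh can succeed
ENTRYPOINT ["/start.sh"]
with start.sh along these lines:
#!/bin/sh
# apply the permission change in the running container, where it does take effect
chmod 777 /
# then drop to the unprivileged user and start the real application
exec su -s /bin/sh -c 'exec java -jar /app.jar' appuser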
I have following folder structure:
nova-components
component1
dist
...
component2
dist
...
component3
dist
...
...
Is there any way to copy only the dist folders in Docker?
I am thinking about something like:
COPY --from=assets /nova-components/*/dist /var/www/nova-components/*/dist
The end goal is to include generated dist folders in the final image and keep the directory tree structure.
Currently, multi-stage Docker builds do not respect .dockerignore (see this discussion), so you have to handle it yourself. One way is to clean things up in the first stage, as follows:
Dockerfile:
FROM ubuntu:16.04 AS assets
RUN mkdir -p /nova-components/component1/dist && \
mkdir -p /nova-components/component1/others && \
mkdir -p /nova-components/component2/dist && \
mkdir -p /nova-components/component2/others
RUN find /nova-components/*/* ! -name "dist" -maxdepth 0 | xargs rm -fr
FROM ubuntu:16.04
COPY --from=assets /nova-components /var/www/nova-components/
RUN ls -alh /var/www/nova-components
RUN ls -alh /var/www/nova-components/*
test:
# docker build --no-cache -t try:1 .
Sending build context to Docker daemon 2.048kB
Step 1/7 : FROM ubuntu:16.04 AS assets
---> b9e15a5d1e1a
Step 2/7 : RUN mkdir -p /nova-components/component1/dist && mkdir -p /nova-components/component1/others && mkdir -p /nova-components/component2/dist && mkdir -p /nova-components/component2/others
---> Running in d4c9c422d53a
Removing intermediate container d4c9c422d53a
---> d316032dd59d
Step 3/7 : RUN find /nova-components/*/* ! -name "dist" -maxdepth 0 | xargs rm -fr
---> Running in b6168b027f4c
Removing intermediate container b6168b027f4c
---> 9deb57cb5153
Step 4/7 : FROM ubuntu:16.04
---> b9e15a5d1e1a
Step 5/7 : COPY --from=assets /nova-components /var/www/nova-components/
---> 49301f701db2
Step 6/7 : RUN ls -alh /var/www/nova-components
---> Running in 9ed0cafff2fb
total 16K
drwxr-xr-x 4 root root 4.0K Nov 6 02:13 .
drwxr-xr-x 3 root root 4.0K Nov 6 02:13 ..
drwxr-xr-x 3 root root 4.0K Nov 6 02:13 component1
drwxr-xr-x 3 root root 4.0K Nov 6 02:13 component2
Removing intermediate container 9ed0cafff2fb
---> f1ee82cff972
Step 7/7 : RUN ls -alh /var/www/nova-components/*
---> Running in 23a27e5ce853
/var/www/nova-components/component1:
total 12K
drwxr-xr-x 3 root root 4.0K Nov 6 02:13 .
drwxr-xr-x 4 root root 4.0K Nov 6 02:13 ..
drwxr-xr-x 2 root root 4.0K Nov 6 02:13 dist
/var/www/nova-components/component2:
total 12K
drwxr-xr-x 3 root root 4.0K Nov 6 02:13 .
drwxr-xr-x 4 root root 4.0K Nov 6 02:13 ..
drwxr-xr-x 2 root root 4.0K Nov 6 02:13 dist
Removing intermediate container 23a27e5ce853
---> b9d5ab8f5157
Successfully built b9d5ab8f5157
Successfully tagged try:1
With the cleanup in the first stage using RUN find /nova-components/*/* ! -name "dist" -maxdepth 0 | xargs rm -fr, you can make it work. Let's wait for possible official feature support.
Add a .dockerignore file next to your Dockerfile:
nova-components/*/*
!nova-components/*/dist
And copy like this:
COPY nova-components/ /var/www/nova-components
EDIT
So with a multi-stage build, this is currently not working. A new solution is to run, in the first (assets) stage, something like
rsync -am --include='*/' --include='dist/***' --exclude='*' nova-components/ nova-components-dist/
and then, in the final stage,
COPY --from=assets /nova-components-dist/ /var/www/nova-components
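Putting the EDIT together, a rough two-stage sketch (assuming rsync gets installed in the assets stage and the components end up under /nova-components there):
FROM ubuntu:16.04 AS assets
RUN apt-get update && apt-get install -y rsync
# ... build steps that populate /nova-components/<component>/dist ...
# keep only the dist/ subtrees, preserving the directory layout
RUN rsync -am --include='*/' --include='dist/***' --exclude='*' /nova-components/ /nova-components-dist/
FROM ubuntu:16.04
COPY --from=assets /nova-components-dist/ /var/www/nova-components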
I'm having trouble with Docker.
Here is my Dockerfile:
FROM alpine:3.3
WORKDIR /
COPY myinit.sh /myinit.sh
ENTRYPOINT ["/myinit.sh"]
myinit.sh
#!/bin/bash
set -e
echo 123
That's how I build my image:
docker build -t test --no-cache .
Build output:
Sending build context to Docker daemon 3.072 kB
Step 1 : FROM alpine:3.3
---> d7a513a663c1
Step 2 : WORKDIR /
---> Running in eda8d25fe880
---> 1dcad4f11062
Removing intermediate container eda8d25fe880
Step 3 : COPY myinit.sh /myinit.sh
---> 49234cc3903a
Removing intermediate container ffe6227c921f
Step 4 : ENTRYPOINT /myinit.sh
---> Running in 56d3d748b396
---> 060f6da19513
Removing intermediate container 56d3d748b396
Successfully built 060f6da19513
That's how I run the container:
docker run --rm --name test2 test
docker: Error response from daemon: Container command '/myinit.sh' not found or does not exist..
myinit.sh definitely exists. Here is ls -al:
ls -al
total 16
drwxr-xr-x 4 lorddaedra staff 136 10 май 19:43 .
drwxr-xr-x 25 lorddaedra staff 850 10 май 19:42 ..
-rw-r--r-- 1 lorddaedra staff 82 10 май 19:51 Dockerfile
-rwxr-xr-x 1 lorddaedra staff 29 10 май 19:51 myinit.sh
Why can't it see my entrypoint script? Any solutions?
Thanks
It's not the entrypoint script it can't find, but the shell the script references: alpine:3.3 doesn't ship with bash by default. Change your myinit.sh to:
#!/bin/sh
set -e
echo 123
i.e. referencing /bin/sh instead of /bin/bash.
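Alternatively, if the script genuinely needs bash features, a sketch that installs bash into the image instead (assumes network access during the build):
FROM alpine:3.3
# bash is not part of the Alpine base image; pull it in explicitly
RUN apk add --no-cache bash
WORKDIR /
COPY myinit.sh /myinit.sh
ENTRYPOINT ["/myinit.sh"]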