Docker container command not found or does not exist

I'm having trouble with Docker.
Here is my Dockerfile:
FROM alpine:3.3
WORKDIR /
COPY myinit.sh /myinit.sh
ENTRYPOINT ["/myinit.sh"]
myinit.sh:
#!/bin/bash
set -e
echo 123
This is how I build my image:
docker build -t test --no-cache .
Build log:
Sending build context to Docker daemon 3.072 kB
Step 1 : FROM alpine:3.3
---> d7a513a663c1
Step 2 : WORKDIR /
---> Running in eda8d25fe880
---> 1dcad4f11062
Removing intermediate container eda8d25fe880
Step 3 : COPY myinit.sh /myinit.sh
---> 49234cc3903a
Removing intermediate container ffe6227c921f
Step 4 : ENTRYPOINT /myinit.sh
---> Running in 56d3d748b396
---> 060f6da19513
Removing intermediate container 56d3d748b396
Successfully built 060f6da19513
This is how I run the container:
docker run --rm --name test2 test
docker: Error response from daemon: Container command '/myinit.sh' not found or does not exist..
myinit.sh definitely exists. Here is ls -al:
ls -al
total 16
drwxr-xr-x 4 lorddaedra staff 136 May 10 19:43 .
drwxr-xr-x 25 lorddaedra staff 850 May 10 19:42 ..
-rw-r--r-- 1 lorddaedra staff 82 May 10 19:51 Dockerfile
-rwxr-xr-x 1 lorddaedra staff 29 May 10 19:51 myinit.sh
Why can't it see my entrypoint script? Any solutions?
Thanks

It's not the entrypoint script that can't be found, but the shell it references: alpine:3.3 doesn't include bash by default. Change your myinit.sh to:
#!/bin/sh
set -e
echo 123
i.e. reference /bin/sh instead of /bin/bash.
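Alternatively, if the script genuinely needs bash, it can be installed into the image first. A minimal sketch, assuming the build has network access (apk is Alpine's package manager, and --no-cache is supported as of Alpine 3.3):
FROM alpine:3.3
# bash is not part of the Alpine base image, so install it explicitly
RUN apk add --no-cache bash
WORKDIR /
COPY myinit.sh /myinit.sh
ENTRYPOINT ["/myinit.sh"]
With bash present, the original #!/bin/bash shebang works unchanged.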

Related

How to run vsftpd as non root in an ubuntu container?

I am trying to rebuild bogem/ftp to make the container run as non-root.
I created my own repo where you can find all the files.
I build it locally:
docker build -t bram_ftp:v0.4 .
Sending build context to Docker daemon 8.704kB
Step 1/17 : FROM ubuntu:latest
---> f643c72bc252
Step 2/17 : RUN apt-get update && apt-get install -y --no-install-recommends vsftpd db-util sudo && apt-get clean
---> Using cache
---> 8ab5e8a0d3d7
Step 3/17 : RUN useradd -m ftpuser
---> Using cache
---> 179c738d8a8b
Step 4/17 : ENV FTP_USER admin
---> Using cache
---> 3f55c42bccda
Step 5/17 : ENV FTP_PASS admin
---> Using cache
---> a44874a4d54e
Step 6/17 : ENV PASV_ADDRESS=127.0.0.1
---> Using cache
---> 824c15835a7f
Step 7/17 : COPY vsftpd_virtual /etc/pam.d/
---> Using cache
---> 5045135bb1ca
Step 8/17 : COPY run-vsftpd.sh /usr/sbin/
---> Using cache
---> 30bd2be7d610
Step 9/17 : COPY config-vsftpd.sh /usr/sbin/
---> Using cache
---> 8347833c2f63
Step 10/17 : RUN /usr/sbin/config-vsftpd.sh
---> Using cache
---> 58237fe9a8be
Step 11/17 : COPY vsftpd.conf /etc/vsftpd/
---> Using cache
---> 92c9cbc75356
Step 12/17 : RUN chown -R ftpuser:ftpuser /etc/vsftpd/ && chown ftpuser:ftpuser /usr/sbin/*-vsftpd.sh && chmod +x /usr/sbin/*-vsftpd.sh && mkdir -p /var/run/vsftpd/empty
---> Running in 91f03e3198df
Removing intermediate container 91f03e3198df
---> 94cfaf7209a9
Step 13/17 : VOLUME /home/ftpuser/vsftpd
---> Running in cfdf44372c17
Removing intermediate container cfdf44372c17
---> 5d7416bd2844
Step 14/17 : VOLUME /var/log/vsftpd
---> Running in c2b5121adb49
Removing intermediate container c2b5121adb49
---> 620cc085a235
Step 15/17 : EXPOSE 20 21
---> Running in f12d22af36cc
Removing intermediate container f12d22af36cc
---> 1dd7698c18b3
Step 16/17 : USER ftpuser
---> Running in d7a2cdcc3aa1
Removing intermediate container d7a2cdcc3aa1
---> 3a88a4a89ac8
Step 17/17 : CMD ["/usr/sbin/run-vsftpd.sh"]
---> Running in 86f5dec18f71
Removing intermediate container 86f5dec18f71
---> 50fdae730864
Successfully built 50fdae730864
Successfully tagged bram_ftp:v0.4
When I run it locally as described in the README, the container just keeps restarting and I do not see any logs or errors.
When I run the container interactively (-it instead of -d) I get this error:
docker run -it -v /tmp/vsftpd:/home/ftpuser/vsftpd \
-p 20:20 -p 21:21 -p 47400-47470:47400-47470 \
-e FTP_USER=admin \
-e FTP_PASS=admin \
-e PASV_ADDRESS=127.0.0.1 \
--name ftp \
--restart=always \
bram_ftp:v0.4
500 OOPS: config file not owned by correct user, or not a file
But when I check which user the container runs as and what the vsftpd.conf permissions are, everything seems fine:
docker run bram_ftp:v0.4 id
uid=1000(ftpuser) gid=1000(ftpuser) groups=1000(ftpuser)
docker run bram_ftp:v0.4 ls -la /etc/vsftpd
total 28
drwxr-xr-x 1 ftpuser ftpuser 4096 Dec 31 13:12 .
drwxr-xr-x 1 root root 4096 Dec 31 14:28 ..
-rw-r--r-- 1 ftpuser ftpuser 12288 Dec 31 13:12 virtual_users.db
-rw-r--r-- 1 ftpuser ftpuser 12 Dec 31 13:12 virtual_users.txt
-rw-r--r-- 1 ftpuser ftpuser 1734 Dec 31 13:09 vsftpd.conf
When I run the container like below, I can get into it without issues:
docker run -it bram_ftp:v0.4 bash
ftpuser@5358b2368c55:/$
I then start vsftpd manually:
docker run -it bram_ftp:v0.4 bash
ftpuser@5358b2368c55:/$ vsftpd /etc/vsftpd/vsftpd.conf
If I then check what processes are running in the container I see this:
docker exec 5358b2368c55 ps -ef
UID PID PPID C STIME TTY TIME CMD
ftpuser 1 0 0 14:31 pts/0 00:00:00 bash
ftpuser 10 1 0 14:32 pts/0 00:00:00 vsftpd /etc/vsftpd/vsftpd.conf
ftpuser 11 0 0 14:33 ? 00:00:00 ps -ef
I don't have any experience with vsftpd, so I have no clue what I'm doing wrong here. Hope someone can help me out.
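Not an answer to the ownership check itself, but a general way to see why a container started with --restart=always keeps dying silently (standard Docker CLI; the container name ftp comes from the run command above):
docker logs ftp
docker inspect -f '{{.State.ExitCode}} {{.State.Error}}' ftp
docker logs prints the output of the most recent start even after the container has exited, which usually surfaces the same 500 OOPS message seen in the interactive run.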

Docker copy command failed?

I'm trying to add a file to a Docker image. I've simplified the Dockerfile down to a very short one that shows the issue.
Simple Dockerfile - ddtest
FROM maven AS testdocker
COPY . /server
WORKDIR /server/admin
RUN mkdir target; echo "hello world" > ./target/test.txt
RUN pwd
RUN ls -la ./target/
COPY ./target/test.txt /test.txt
CMD ["/usr/bin/java", "-jar", "/server.jar"]
I build it with the command docker build . -f ddtest
Execution log:
docker build . -f ddtest
Sending build context to Docker daemon 245.8kB
Step 1/8 : FROM maven AS testdocker
---> e85864b4079a
Step 2/8 : COPY . /server
---> e6c9c6d55be1
Step 3/8 : WORKDIR /server/admin
---> Running in d30bab5d6b6b
Removing intermediate container d30bab5d6b6b
---> 7409cbc70fac
Step 4/8 : RUN mkdir target; echo "hello world" > ./target/test.txt
---> Running in ad507dfc604b
Removing intermediate container ad507dfc604b
---> 0d69df30d041
Step 5/8 : RUN pwd
---> Running in 72becb9ae3ba
/server/admin
Removing intermediate container 72becb9ae3ba
---> 7bde9ccae4c6
Step 6/8 : RUN ls -la ./target/
---> Running in ceb5c222f3c0
total 12
drwxr-xr-x 2 root root 4096 Aug 9 05:50 .
drwxr-xr-x 1 root root 4096 Aug 9 05:50 ..
-rw-r--r-- 1 root root 12 Aug 9 05:50 test.txt
Removing intermediate container ceb5c222f3c0
---> 3b4dbcb794ad
Step 7/8 : COPY ./target/test.txt /test.txt
COPY failed: stat /var/lib/docker/tmp/docker-builder015566471/target/test.txt: no such file or directory
Why did the COPY of test.txt to the image fail?
The ls command you invoked runs inside an intermediate container, not on your host. So, per your WORKDIR, you are listing the files inside /server/admin/target of your container.
As the ls output shows, this test.txt file is already inside your image.
The COPY src dst instruction copies a file or directory from your host (the build context) into your image, and it seems you don't have a file named test.txt inside the ./target directory on your host.
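If the goal is simply to end up with the generated file at /test.txt inside the image, a RUN cp does what COPY cannot, because it executes inside the image instead of reading from the build context. A minimal sketch of the same Dockerfile with that one change:
FROM maven AS testdocker
COPY . /server
WORKDIR /server/admin
RUN mkdir target; echo "hello world" > ./target/test.txt
# the file exists only inside the image, so copy it with RUN cp, not COPY
RUN cp ./target/test.txt /test.txt
CMD ["/usr/bin/java", "-jar", "/server.jar"]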

File not found, but appears in directory

I am trying to run a docker build command to set up a container. When I run it, a file I need can't be found. I receive this error
/bin/sh: 1: ./downloadScript: not found
The command '/bin/sh -c ./downloadScript' returned a non-zero code: 127
I run ls -la right before the script runs, and the file is there:
drwxr-xr-x 1 root root 4096 Apr 30 18:05 .
drwxr-xr-x 1 root root 4096 Apr 30 18:05 ..
-rwxr-xr-x 1 root root 714 Apr 29 18:20 downloadScript
drwxr-xr-x 4 root root 4096 Apr 30 18:05 gradleProject
I've tried a few things:
chmod 777 the file and directory
cp the file, then chmod the new one (this new file can't be found either)
reset Docker credentials
My current Dockerfile:
...
WORKDIR /dir1
RUN ls -la # For manual verification
RUN ./downloadScript
...
Docker output
Sending build context to Docker daemon 120.3kB
Step 1/17 : FROM openjdk:8-slim as builder
---> e2581abdea18
Step 2/17 : WORKDIR /dir1
---> Using cache
---> 4ef289b22b45
Step 3/17 : ADD WCE_Docker/res /dir1
---> Using cache
---> 3d757ec6caba
Step 4/17 : ADD /GradleProject/ /dir1/gradleProject/
---> Using cache
---> 7e46cc77290d
Step 5/17 : WORKDIR /dir1
---> Using cache
---> fe7570221d31
Step 6/17 : RUN ls -la
---> Using cache
---> 019ef7d640da
Step 7/17 : RUN ./downloadScript
---> Running in 9870fd1e3af3
/bin/sh: 1: ./downloadScript: not found
The command '/bin/sh -c ./downloadScript' returned a non-zero code: 127
It runs correctly on Mac and Ubuntu, but not on Windows 10.
Update
I manually set up the container and attempted to run the script, and received a slightly different error:
root@835516a24a7f:/dir# ./downloadScript
bash: ./downloadScript: /bin/bash^M: bad interpreter: No such file or directory
That indicates DOS line endings. I am going to fix that and then post the results.
Update Results
I ran dos2unix downloadScript and it worked correctly. The problem was DOS line endings.
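For anyone building on Windows, two common ways to keep this from recurring, sketched under the assumption that Git's CRLF conversion introduced the carriage returns:
# .gitattributes -- check scripts out with LF endings on every platform
downloadScript text eol=lf
*.sh text eol=lf
or strip the carriage returns during the build itself (sed ships with the Debian-based openjdk:8-slim image):
RUN sed -i 's/\r$//' ./downloadScript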

Dockerfile: chmod on root directory not working

I build a Docker image from the following Dockerfile on Ubuntu:
FROM openjdk:8-jre-alpine
USER root
RUN echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX && ls -ald /
RUN chmod 777 /
RUN echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX && ls -ald /
ENTRYPOINT [ "sh", "-c", "echo test" ]
I expect the root path to receive the permissions I set, but the build output shows the following (note the output of ls -ald /):
docker build . -f Dockerfile
Sending build context to Docker daemon 2.048kB
Step 1/6 : FROM openjdk:8-jre-alpine
---> b76bbdb2809f
Step 2/6 : USER root
---> Using cache
---> 18045a1e2d82
Step 3/6 : RUN echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX && ls -ald /
---> Running in 2309a8753729
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
drwxr-xr-x 1 root root 4096 Mar 19 13:50 /
Removing intermediate container 2309a8753729
---> 809221ec8f71
Step 4/6 : RUN chmod 777 /
---> Running in 81df09ec266c
Removing intermediate container 81df09ec266c
---> 9ea5e2282356
Step 5/6 : RUN echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX && ls -ald /
---> Running in ef91613577da
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
drwxr-xr-x 1 root root 4096 Mar 19 13:50 /
Removing intermediate container ef91613577da
---> cd7914160661
Step 6/6 : ENTRYPOINT [ "sh", "-c", "echo test" ]
---> Running in 3d724aca37fe
Removing intermediate container 3d724aca37fe
---> 143e46ec55a8
Successfully built 143e46ec55a8
How can I make these permissions stick?
UPDATE: I have specific reasons why I'm temporarily forced to set these permissions on the root folder: unfortunately, I'm running an application within the container as a user other than root, and this application writes directly into /. Currently, this isn't configurable.
If I do it on another folder under /, it works as expected:
...
Step 6/8 : RUN mkdir -p /mytest && chmod 777 /mytest
---> Running in 7aa3c7b288fd
Removing intermediate container 7aa3c7b288fd
---> 1717229e5ac0
Step 7/8 : RUN echo ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ && ls -ald /mytest
---> Running in 2238987e1dd6
ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
drwxrwxrwx 2 root root 4096 Mar 19 14:42 /mytest
...
On execution of the container:
drwxrwxrwx 2 root root 4096 Mar 19 14:42 mytest
To check the permissions of the root folder from a shell inside your container, perform the following operations (the alpine-based image has sh, not bash):
docker exec -it container_id sh
cd /
ls -ald
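If the chmod has to happen regardless, one workaround to try is moving it from build time to container start, since the root filesystem of a running container can be modified. This is only a sketch mirroring the Dockerfile above; whether the build-time change is lost can depend on the storage driver, so verify on your setup:
FROM openjdk:8-jre-alpine
USER root
# apply the permissions when the container starts, then run the real command
ENTRYPOINT ["sh", "-c", "chmod 777 / && ls -ald / && echo test"]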

Docker run command is not applied to the image

I have a Dockerfile as follows:
FROM jenkins/jenkins:2.119
USER jenkins
ENV HOME /var/jenkins_home
COPY --chown=jenkins:jenkins ssh ${HOME}/.ssh/
RUN chmod 700 ${HOME}/.ssh && \
chmod 600 ${HOME}/.ssh/*
The ssh directory and its files have 755/644 permissions on the build machine. However, when I build with
docker build -t my/temp .
and start the image with an ls command
docker run -it --rm my/temp ls -la /var/jenkins_home/.ssh
neither of the chmod commands is applied to the image:
drwxr-xr-x 2 jenkins jenkins 4096 May 3 12:46 .
drwxr-xr-x 4 jenkins jenkins 4096 May 3 12:46 ..
-rw-r--r-- 1 jenkins jenkins 391 May 3 11:42 known_hosts
During the build I see
Step 4/6 : COPY --chown=jenkins:jenkins ssh ${HOME}/.ssh/
---> 58e0d8242fac
Step 5/6 : RUN chmod 700 ${HOME}/.ssh && chmod 600 ${HOME}/.ssh/*
---> Running in 0c805d4d4252
Removing intermediate container 0c805d4d4252
---> bbfc828ace79
It looks like the chmod is discarded. How can I stop this from happening?
I'm using the latest Docker (Edge) on macOS:
Version 18.05.0-ce-rc1-mac63 (24246); edge 3b5a9a44cd
EDIT
Building with --rm=false didn't work either (after deleting the image and rebuilding), though the 'Removing intermediate container' messages no longer appear:
docker build -t my/temp --rm=false .
docker run -it --rm my/temp ls -la /var/jenkins_home/.ssh
drwxr-xr-x 2 jenkins jenkins 4096 May 3 15:42 .
drwxr-xr-x 4 jenkins jenkins 4096 May 3 15:42 ..
-rw-r--r-- 1 jenkins jenkins 391 May 3 11:42 known_hosts
EDIT 2
So basically it's a bug in Docker where a base image with a VOLUME causes chmod to fail; similarly, RUN mkdir on the volume failed, while COPY worked but left the directory with the wrong permissions. Thanks to bkconrad.
EDIT 3
I created a fork with a fix here: https://github.com/systematicmethods/jenkins-docker
build.sh will build an image locally.
This has to do with how Docker handles VOLUMEs for images.
From docker inspect my/temp:
"Volumes": {
"/var/jenkins_home": {}
},
There's a helpful ticket about this from the moby project:
https://github.com/moby/moby/issues/12779
Basically you'll need to do your chmod at run time.
Setting your HOME envvar to a non-volume path like /tmp shows the expected behavior:
$ docker run -it --rm my/temp ls -la /tmp/.ssh
total 8
drwx------ 2 jenkins jenkins 4096 May 3 17:31 .
drwxrwxrwt 6 root root 4096 May 3 17:31 ..
-rw------- 1 jenkins jenkins 0 May 3 17:24 dummy
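If the files must stay under /var/jenkins_home, the run-time chmod can be done with a small wrapper entrypoint. A sketch under two assumptions: fix-perms.sh is a hypothetical script added to the build context, and /usr/local/bin/jenkins.sh is the stock startup script of the jenkins/jenkins image (worth verifying with docker inspect):
#!/bin/sh
# fix-perms.sh (hypothetical) -- runs at container start, after the volume
# is mounted, so the permission change actually sticks
chmod 700 "$HOME/.ssh" && chmod 600 "$HOME/.ssh"/*
exec "$@"
and in the Dockerfile:
COPY --chown=jenkins:jenkins fix-perms.sh /usr/local/bin/fix-perms.sh
ENTRYPOINT ["/usr/local/bin/fix-perms.sh", "/usr/local/bin/jenkins.sh"]
COPY preserves the execute bit, so fix-perms.sh needs to be executable on the build machine.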
Step 5/6 : RUN chmod 700 ${HOME}/.ssh && chmod 600 ${HOME}/.ssh/*
---> Running in 0c805d4d4252
Removing intermediate container 0c805d4d4252
As you can see, the intermediate container is being removed, which is normal Docker behavior. If you want to keep intermediate containers, use the command below:
docker build -t my/temp --rm=false .
It's also explained in this post:
Why docker build image from docker file will create container when build exit incorrectly?
