How to run vsftpd as non-root in an Ubuntu container? (docker)

I am trying to rebuild bogem/ftp so that the container runs as a non-root user.
I created my own repo where you can find all the files.
I built the image locally:
docker build -t bram_ftp:v0.4 .
Sending build context to Docker daemon 8.704kB
Step 1/17 : FROM ubuntu:latest
---> f643c72bc252
Step 2/17 : RUN apt-get update && apt-get install -y --no-install-recommends vsftpd db-util sudo && apt-get clean
---> Using cache
---> 8ab5e8a0d3d7
Step 3/17 : RUN useradd -m ftpuser
---> Using cache
---> 179c738d8a8b
Step 4/17 : ENV FTP_USER admin
---> Using cache
---> 3f55c42bccda
Step 5/17 : ENV FTP_PASS admin
---> Using cache
---> a44874a4d54e
Step 6/17 : ENV PASV_ADDRESS=127.0.0.1
---> Using cache
---> 824c15835a7f
Step 7/17 : COPY vsftpd_virtual /etc/pam.d/
---> Using cache
---> 5045135bb1ca
Step 8/17 : COPY run-vsftpd.sh /usr/sbin/
---> Using cache
---> 30bd2be7d610
Step 9/17 : COPY config-vsftpd.sh /usr/sbin/
---> Using cache
---> 8347833c2f63
Step 10/17 : RUN /usr/sbin/config-vsftpd.sh
---> Using cache
---> 58237fe9a8be
Step 11/17 : COPY vsftpd.conf /etc/vsftpd/
---> Using cache
---> 92c9cbc75356
Step 12/17 : RUN chown -R ftpuser:ftpuser /etc/vsftpd/ && chown ftpuser:ftpuser /usr/sbin/*-vsftpd.sh && chmod +x /usr/sbin/*-vsftpd.sh && mkdir -p /var/run/vsftpd/empty
---> Running in 91f03e3198df
Removing intermediate container 91f03e3198df
---> 94cfaf7209a9
Step 13/17 : VOLUME /home/ftpuser/vsftpd
---> Running in cfdf44372c17
Removing intermediate container cfdf44372c17
---> 5d7416bd2844
Step 14/17 : VOLUME /var/log/vsftpd
---> Running in c2b5121adb49
Removing intermediate container c2b5121adb49
---> 620cc085a235
Step 15/17 : EXPOSE 20 21
---> Running in f12d22af36cc
Removing intermediate container f12d22af36cc
---> 1dd7698c18b3
Step 16/17 : USER ftpuser
---> Running in d7a2cdcc3aa1
Removing intermediate container d7a2cdcc3aa1
---> 3a88a4a89ac8
Step 17/17 : CMD ["/usr/sbin/run-vsftpd.sh"]
---> Running in 86f5dec18f71
Removing intermediate container 86f5dec18f71
---> 50fdae730864
Successfully built 50fdae730864
Successfully tagged bram_ftp:v0.4
When I run it locally as described in the README, the container just keeps restarting and I do not see any logs or errors.
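(For reference, a restart-looping container can usually still be inspected from the host; a small sketch, assuming the container is named ftp as in the run command below:
docker inspect -f '{{.State.ExitCode}} {{.State.Error}}' ftp
docker logs --tail 50 ftp
The first command prints the exit code and any daemon-side error, the second the last lines of output, even while the container keeps restarting.)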
When I run the container interactively (-it instead of -d) rather than as a daemon, I get this error:
docker run -it -v /tmp/vsftpd:/home/ftpuser/vsftpd \
-p 20:20 -p 21:21 -p 47400-47470:47400-47470 \
-e FTP_USER=admin \
-e FTP_PASS=admin \
-e PASV_ADDRESS=127.0.0.1 \
--name ftp \
--restart=always \
bram_ftp:v0.4
500 OOPS: config file not owned by correct user, or not a file
But when I check which user the container runs as and what the permissions on vsftpd.conf are, everything seems to be fine:
docker run bram_ftp:v0.4 id
uid=1000(ftpuser) gid=1000(ftpuser) groups=1000(ftpuser)
docker run bram_ftp:v0.4 ls -la /etc/vsftpd
total 28
drwxr-xr-x 1 ftpuser ftpuser 4096 Dec 31 13:12 .
drwxr-xr-x 1 root root 4096 Dec 31 14:28 ..
-rw-r--r-- 1 ftpuser ftpuser 12288 Dec 31 13:12 virtual_users.db
-rw-r--r-- 1 ftpuser ftpuser 12 Dec 31 13:12 virtual_users.txt
-rw-r--r-- 1 ftpuser ftpuser 1734 Dec 31 13:09 vsftpd.conf
When I run the container like below I can get into the container without issues:
docker run -it bram_ftp:v0.4 bash
ftpuser@5358b2368c55:/$
I then start vsftpd manually:
docker run -it bram_ftp:v0.4 bash
ftpuser@5358b2368c55:/$ vsftpd /etc/vsftpd/vsftpd.conf
If I then check what processes are running in the container I see this:
docker exec 5358b2368c55 ps -ef
UID PID PPID C STIME TTY TIME CMD
ftpuser 1 0 0 14:31 pts/0 00:00:00 bash
ftpuser 10 1 0 14:32 pts/0 00:00:00 vsftpd /etc/vsftpd/vsftpd.conf
ftpuser 11 0 0 14:33 ? 00:00:00 ps -ef
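(One way to narrow down the difference between the manual start and the failing CMD might be to run the start script itself by hand in an interactive container, so its full output stays visible; a sketch using the script path from the Dockerfile above:
docker run -it --rm bram_ftp:v0.4 bash
/usr/sbin/run-vsftpd.sh
This is the same script the CMD runs, so whatever it prints before exiting should point at the failing step.)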
I don't have any experience with vsftpd so I have no clue what I am doing wrong here. Hope someone can help me out.

Related

Docker copy command failed?

I am trying to add a COPY step to my Dockerfile; I have simplified it to a very short one that shows the issue.
Simple Dockerfile (ddtest):
FROM maven AS testdocker
COPY . /server
WORKDIR /server/admin
RUN mkdir target; echo "hello world" > ./target/test.txt
RUN pwd
RUN ls -la ./target/
COPY ./target/test.txt /test.txt
CMD ["/usr/bin/java", "-jar", "/server.jar"]
Building with the command docker build . -f ddtest
Execution log:
docker build . -f ddtest
Sending build context to Docker daemon 245.8kB
Step 1/8 : FROM maven AS testdocker
---> e85864b4079a
Step 2/8 : COPY . /server
---> e6c9c6d55be1
Step 3/8 : WORKDIR /server/admin
---> Running in d30bab5d6b6b
Removing intermediate container d30bab5d6b6b
---> 7409cbc70fac
Step 4/8 : RUN mkdir target; echo "hello world" > ./target/test.txt
---> Running in ad507dfc604b
Removing intermediate container ad507dfc604b
---> 0d69df30d041
Step 5/8 : RUN pwd
---> Running in 72becb9ae3ba
/server/admin
Removing intermediate container 72becb9ae3ba
---> 7bde9ccae4c6
Step 6/8 : RUN ls -la ./target/
---> Running in ceb5c222f3c0
total 12
drwxr-xr-x 2 root root 4096 Aug 9 05:50 .
drwxr-xr-x 1 root root 4096 Aug 9 05:50 ..
-rw-r--r-- 1 root root 12 Aug 9 05:50 test.txt
Removing intermediate container ceb5c222f3c0
---> 3b4dbcb794ad
Step 7/8 : COPY ./target/test.txt /test.txt
COPY failed: stat /var/lib/docker/tmp/docker-builder015566471/target/test.txt: no such file or directory
Why does the COPY of test.txt to its destination in the image fail?
The ls command you invoked runs inside an intermediate container, which is not your host. So, according to your WORKDIR, you are listing the files inside /server/admin/target of your image.
As the output of ls shows, this test.txt file is already inside your image.
The COPY src dst instruction copies a file/directory from your host (the build context) into your image, and it seems that you don't have any file named test.txt inside the ./target directory on your host.
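If the goal is simply to end up with the generated file at /test.txt inside the image, one option is to copy it with a shell command instead, since RUN operates on files already present in the image rather than on the build context. A sketch that keeps the rest of the Dockerfile unchanged:
FROM maven AS testdocker
COPY . /server
WORKDIR /server/admin
RUN mkdir target; echo "hello world" > ./target/test.txt
# cp sees the file created in the previous layer, unlike COPY, which only reads the build context
RUN cp ./target/test.txt /test.txt
CMD ["/usr/bin/java", "-jar", "/server.jar"]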

File not found, but appears in directory

I am trying to run a docker build command to set up a container. When I run it, a file I need can't be found. I receive this error:
/bin/sh: 1: ./downloadScript: not found
The command '/bin/sh -c ./downloadScript' returned a non-zero code: 127
I run ls -la right before the script should run, and the file appears:
drwxr-xr-x 1 root root 4096 Apr 30 18:05 .
drwxr-xr-x 1 root root 4096 Apr 30 18:05 ..
-rwxr-xr-x 1 root root 714 Apr 29 18:20 downloadScript
drwxr-xr-x 4 root root 4096 Apr 30 18:05 gradleProject
I've tried a few things:
chmod 777 the file and directory
cp the file, then chmod the new one (this new file can't be found either)
resetting the Docker credentials
My current script:
...
WORKDIR /dir1
RUN ls -la # For manual verification
RUN ./downloadScript
...
Docker output
Sending build context to Docker daemon 120.3kB
Step 1/17 : FROM openjdk:8-slim as builder
---> e2581abdea18
Step 2/17 : WORKDIR /dir1
---> Using cache
---> 4ef289b22b45
Step 3/17 : ADD WCE_Docker/res /dir1
---> Using cache
---> 3d757ec6caba
Step 4/17 : ADD /GradleProject/ /dir1/gradleProject/
---> Using cache
---> 7e46cc77290d
Step 5/17 : WORKDIR /dir1
---> Using cache
---> fe7570221d31
Step 6/17 : RUN ls -la
---> Using cache
---> 019ef7d640da
Step 7/17 : RUN ./downloadScript
---> Running in 9870fd1e3af3
/bin/sh: 1: ./downloadScript: not found
The command '/bin/sh -c ./downloadScript' returned a non-zero code: 127
It runs correctly on Mac and Ubuntu, but not on Windows 10.
Update
I manually set up the container and attempted to run the script, and received a slightly different error.
root@835516a24a7f:/dir# ./downloadScript
bash: ./downloadScript: /bin/bash^M: bad interpreter: No such file or directory
That indicates DOS-style (CRLF) line endings. I am going to fix that and then post the results.
Update Results
I ran dos2unix downloadScript and it worked correctly. The problem was DOS line endings.
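To keep CRLF endings from sneaking back in on Windows checkouts, the conversion could also be done during the build itself; a small sketch using sed, which should be present in the Debian-based openjdk:8-slim image:
WORKDIR /dir1
# strip carriage returns (CRLF -> LF) before executing the script
RUN sed -i 's/\r$//' downloadScript
RUN ./downloadScript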

Dockerfile: chmod on root directory not working

I build a Docker image on Ubuntu based on the following Dockerfile:
FROM openjdk:8-jre-alpine
USER root
RUN echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX && ls -ald /
RUN chmod 777 /
RUN echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX && ls -ald /
ENTRYPOINT [ "sh", "-c", "echo test" ]
I expect the root path to end up with the permissions I set, but building the Docker image outputs the following (note the output of ls -ald /):
docker build . -f Dockerfile
Sending build context to Docker daemon 2.048kB
Step 1/6 : FROM openjdk:8-jre-alpine
---> b76bbdb2809f
Step 2/6 : USER root
---> Using cache
---> 18045a1e2d82
Step 3/6 : RUN echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX && ls -ald /
---> Running in 2309a8753729
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
drwxr-xr-x 1 root root 4096 Mar 19 13:50 /
Removing intermediate container 2309a8753729
---> 809221ec8f71
Step 4/6 : RUN chmod 777 /
---> Running in 81df09ec266c
Removing intermediate container 81df09ec266c
---> 9ea5e2282356
Step 5/6 : RUN echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX && ls -ald /
---> Running in ef91613577da
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
drwxr-xr-x 1 root root 4096 Mar 19 13:50 /
Removing intermediate container ef91613577da
---> cd7914160661
Step 6/6 : ENTRYPOINT [ "sh", "-c", "echo test" ]
---> Running in 3d724aca37fe
Removing intermediate container 3d724aca37fe
---> 143e46ec55a8
Successfully built 143e46ec55a8
How can I get these permissions applied to /?
UPDATE: I have specific reasons why I'm temporarily forced to set these permissions on the root folder: unfortunately, I'm running a specific application within the container as a user other than root, and this application writes directly into /. Currently, this isn't configurable.
If I do it on another folder under root, it works as expected:
...
Step 6/8 : RUN mkdir -p /mytest && chmod 777 /mytest
---> Running in 7aa3c7b288fd
Removing intermediate container 7aa3c7b288fd
---> 1717229e5ac0
Step 7/8 : RUN echo ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ && ls -ald /mytest
---> Running in 2238987e1dd6
ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
drwxrwxrwx 2 root root 4096 Mar 19 14:42 /mytest
...
On execution of the container:
drwxrwxrwx 2 root root 4096 Mar 19 14:42 mytest
To check the permissions of your root folder from a shell inside your container, perform the following operations (the alpine base image ships sh rather than bash):
docker exec -it container_id sh
cd /
ls -ald
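Since the chmod on / does not survive into the committed image layers, one possible workaround is to apply it at container start, while still root, and only then drop to the unprivileged user. An untested sketch; appuser and the su-exec package are assumptions, not part of the original setup:
FROM openjdk:8-jre-alpine
RUN apk add --no-cache su-exec && adduser -D appuser
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["sh", "-c", "echo test"]
with entrypoint.sh:
#!/bin/sh
# applied in the container's writable layer at runtime, so it should take effect here
chmod 777 /
exec su-exec appuser "$@"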

Dockerfile COPY places file to an unexpected directory

I am on Ubuntu 18.04.1 LTS using Docker version 18.09.0, build 4d60db4.
Dockerfile
FROM registry.hub.docker.com/verdaccio/verdaccio
RUN echo "whoami" && whoami
RUN echo "echo \$HOME" && echo $HOME
RUN mkdir -p $HOME/my_dir
WORKDIR $HOME/my_dir
COPY ./my_verdaccio.conf.yaml $HOME/my_dir/my_verdaccio.conf.yaml
RUN echo "pwd && ls -la" && pwd && ls -la
RUN echo "echo \$HOME && ls -la \$HOME" && echo $HOME && ls -la $HOME
RUN echo "echo \$HOME/my_dir && ls -la \$HOME/my_dir" && echo $HOME/my_dir && ls -la $HOME/my_dir
CMD $APPDIR/bin/verdaccio --config $HOME/my_dir/my_verdaccio.conf.yaml --listen $PROTOCOL://0.0.0.0:${PORT}
Build output
$ docker build -t my_test -f Dockerfile.test my_dir/
Sending build context to Docker daemon 1.834MB
Step 1/10 : FROM registry.hub.docker.com/verdaccio/verdaccio
---> 721ec0ff4795
Step 2/10 : RUN echo "whoami" && whoami
---> Running in 259851ba7eaa
whoami
verdaccio
Removing intermediate container 259851ba7eaa
---> 822165aeff40
Step 3/10 : RUN echo "echo \$HOME" && echo $HOME
---> Running in f2199c917ed8
echo $HOME
/home/verdaccio
Removing intermediate container f2199c917ed8
---> 9fa74aa672ec
Step 4/10 : RUN mkdir -p $HOME/my_dir
---> Running in a9072d9bfabb
Removing intermediate container a9072d9bfabb
---> 297ba12349fc
Step 5/10 : WORKDIR $HOME/my_dir
---> Running in 1966dddaea2e
Removing intermediate container 1966dddaea2e
---> 955bb4903b14
Step 6/10 : COPY ./my_verdaccio.conf.yaml $HOME/my_dir/my_verdaccio.conf.yaml
---> abcddd168468
Step 7/10 : RUN echo "pwd && ls -la" && pwd && ls -la
---> Running in 08a2be791971
pwd && ls -la
/my_dir
total 12
drwxr-xr-x 1 root root 4096 Dec 3 17:47 .
drwxr-xr-x 1 root root 4096 Dec 3 17:47 ..
-rw-rw-r-- 1 root root 1927 Dec 3 16:16 my_verdaccio.conf.yaml
Removing intermediate container 08a2be791971
---> 10aa4c1774b2
Step 8/10 : RUN echo "echo \$HOME && ls -la \$HOME" && echo $HOME && ls -la $HOME
---> Running in 054c7c857d0d
echo $HOME && ls -la $HOME
/home/verdaccio
total 12
drwxr-sr-x 1 verdacci verdacci 4096 Dec 3 17:47 .
drwxr-xr-x 1 root root 4096 Nov 15 19:58 ..
drwxr-sr-x 2 verdacci verdacci 4096 Dec 3 17:47 my_dir
Removing intermediate container 054c7c857d0d
---> fe11c625ee85
Step 9/10 : RUN echo "echo \$HOME/my_dir && ls -la \$HOME/my_dir" && echo $HOME/my_dir && ls -la $HOME/my_dir
---> Running in 887cd2eaa002
echo $HOME/my_dir && ls -la $HOME/my_dir
/home/verdaccio/my_dir
total 8
drwxr-sr-x 2 verdacci verdacci 4096 Dec 3 17:47 .
drwxr-sr-x 1 verdacci verdacci 4096 Dec 3 17:47 ..
Removing intermediate container 887cd2eaa002
---> 6808c030fa5a
Step 10/10 : CMD $APPDIR/bin/verdaccio --config $HOME/my_dir/my_verdaccio.conf.yaml --listen $PROTOCOL://0.0.0.0:${PORT}
---> Running in e4e9a35b36cb
Removing intermediate container e4e9a35b36cb
---> 47e25b18af95
Successfully built 47e25b18af95
Successfully tagged my_test:latest
Why is my_verdaccio.conf.yaml copied to /my_dir and not to /home/verdaccio/my_dir?
Why does pwd point to /my_dir, and why is /my_dir even created?
verdaccio's Dockerfile: https://github.com/verdaccio/verdaccio/blob/master/Dockerfile.
Try setting
ENV HOME /root/ <Or your home directory>
in the Dockerfile, before the RUN mkdir -p $HOME/my_dir line.
docker build does not inherit environment variables from the shell it was launched from, and Dockerfile instructions such as WORKDIR and COPY only expand variables declared with ENV or ARG, so $HOME expands to an empty string there and $HOME/my_dir becomes /my_dir. Only inside RUN steps does the shell set HOME for the current user, which is why those steps print /home/verdaccio.
For $HOME to be accessible in the docker build, you could use the --build-arg parameter combined with ENV and ARG in your Dockerfile. For example, add the following to your Dockerfile:
ARG home_arg
ENV HOME=$home_arg
RUN echo $HOME # test
Then, add --build-arg home_arg=$HOME to the docker build command line.
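Putting the two suggestions together, a sketch of the relevant part of the Dockerfile, assuming the verdaccio user's home really is /home/verdaccio as the RUN steps above print; once HOME is declared with ENV, WORKDIR and COPY can expand it as well:
FROM registry.hub.docker.com/verdaccio/verdaccio
# default can be overridden with --build-arg home_arg=... at build time
ARG home_arg=/home/verdaccio
ENV HOME=$home_arg
RUN mkdir -p $HOME/my_dir
WORKDIR $HOME/my_dir
COPY ./my_verdaccio.conf.yaml $HOME/my_dir/my_verdaccio.conf.yaml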

Docker container command not found or does not exist

I'm having trouble with Docker.
Here is my Dockerfile:
FROM alpine:3.3
WORKDIR /
COPY myinit.sh /myinit.sh
ENTRYPOINT ["/myinit.sh"]
myinit.sh
#!/bin/bash
set -e
echo 123
That's how I build my image:
docker build -t test --no-cache .
Run logs
Sending build context to Docker daemon 3.072 kB
Step 1 : FROM alpine:3.3
---> d7a513a663c1
Step 2 : WORKDIR /
---> Running in eda8d25fe880
---> 1dcad4f11062
Removing intermediate container eda8d25fe880
Step 3 : COPY myinit.sh /myinit.sh
---> 49234cc3903a
Removing intermediate container ffe6227c921f
Step 4 : ENTRYPOINT /myinit.sh
---> Running in 56d3d748b396
---> 060f6da19513
Removing intermediate container 56d3d748b396
Successfully built 060f6da19513
That's how I run the container:
docker run --rm --name test2 test
docker: Error response from daemon: Container command '/myinit.sh' not found or does not exist..
myinit.sh definitely exists. Here is ls -al:
ls -al
total 16
drwxr-xr-x 4 lorddaedra staff 136 10 May 19:43 .
drwxr-xr-x 25 lorddaedra staff 850 10 May 19:42 ..
-rw-r--r-- 1 lorddaedra staff 82 10 May 19:51 Dockerfile
-rwxr-xr-x 1 lorddaedra staff 29 10 May 19:51 myinit.sh
Why can't it see my entrypoint script? Any solutions?
Thanks
It's not the entrypoint script that can't be found, but the shell it references: alpine:3.3 does not include bash by default. Change your myinit.sh to:
#!/bin/sh
set -e
echo 123
i.e. reference /bin/sh instead of /bin/bash.
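Alternatively, if the script really needs bash, it could be installed into the image; a sketch (apk add --no-cache should work on alpine:3.3):
FROM alpine:3.3
# install bash so the #!/bin/bash shebang can be resolved
RUN apk add --no-cache bash
WORKDIR /
COPY myinit.sh /myinit.sh
ENTRYPOINT ["/myinit.sh"]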
