docker-compose cannot recognize sudoers container file

I have a PHP Dockerfile:
...
USER root
echo "${SYSTEM_USERNAME} ALL=NOPASSWD: /usr/sbin/php-fpm${PHP_VERSION}" >> /etc/sudoers.d/${SYSTEM_USERNAME}
...
USER ${SYSTEM_USERNAME}
CMD ["/usr/bin/env", "sh", "-c", "sudo php-fpm${PHP_VERSION} --nodaemonize"]
...
It works via docker:
$ docker run -dit php7.4-fpm
$ docker exec -it 2e9331162630 ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
php-7-4 1 0.0 0.0 2384 764 pts/0 Ss+ 15:48 0:00 sh -c sudo php-
root 6 0.0 0.0 6592 3224 pts/0 S+ 15:48 0:00 sudo php-fpm7.4
root 7 0.0 0.3 635904 33796 ? Ss 15:48 0:00 php-fpm: master
www-data 8 0.0 0.0 635904 7968 ? S 15:48 0:00 php-fpm: pool w
But it does not work via docker-compose:
$ docker-compose up
php_1 |
php_1 | We trust you have received the usual lecture from the local System
php_1 | Administrator. It usually boils down to these three things:
php_1 |
php_1 | #1) Respect the privacy of others.
php_1 | #2) Think before you type.
php_1 | #3) With great power comes great responsibility.
php_1 |
php_1 | sudo: no tty present and no askpass program specified
docker_php_1 exited with code 1
How can I avoid the sudo password prompt in docker-compose?

Generally you don't use sudo in Docker at all: it's all but impossible to safely set a user password, and whenever you run a container, you can directly specify the user ID it uses (with the docker run -u option). Containers only run one process and usually don't have multiple users.
In the particular example you have here, you're in theory running the container as a non-root user, but the main container process is a sudo invocation that immediately switches back to the root user. You can eliminate the intermediate step here and just specify
USER root
CMD php-fpm${PHP_VERSION} --nodaemonize
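If you do need the container to run as a specific non-root user under docker-compose, the analogue of docker run -u is the user: key. A minimal sketch, assuming a service named php and a UID of 1000:
services:
  php:
    image: php7.4-fpm
    user: "1000"   # same effect as `docker run -u 1000`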

Note that when you run it via plain docker you allocate a pseudo-TTY with -t (docker run -dit).
Do the same in docker-compose with tty: true.
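For example, a minimal compose sketch with a TTY allocated (service and image names are assumptions):
services:
  php:
    image: php7.4-fpm
    tty: true   # equivalent of `docker run -t`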

Not sure if the explanation is correct, but this works:
The environment variable DEBIAN_FRONTEND=noninteractive is the culprit, and it needs to be passed through to sudo.
PHP Dockerfile:
ENV ...
# Avoid 'debconf: unable to initialize frontend: Dialog'
DEBIAN_FRONTEND=noninteractive
...
USER root
echo "${SYSTEM_USERNAME} ALL=NOPASSWD:SETENV: /usr/sbin/php-fpm${PHP_VERSION}" >> /etc/sudoers.d/${SYSTEM_USERNAME}
...
USER ${SYSTEM_USERNAME}
CMD ["/usr/bin/env", "sh", "-c", "sudo --preserve-env=DEBIAN_FRONTEND php-fpm${PHP_VERSION} --nodaemonize"]
Thanks to https://superuser.com/a/1001684/192832

Related

Container not running

Could you help me?
I'm trying to run a container from a Dockerfile, but it shows this warning and my container does not start.
compose.parallel.parallel_execute_iter: Finished processing:
<Container: remote-Starting remote-host ... done
compose.parallel.feed_queue: Pending: set()
compose.parallel.parallel_execute_iter: Finished processing: <Service:
remote_host>
compose.parallel.feed_queue: Pending: set()
Attaching to jenkinks, remote-host
compose.cli.verbose_proxy.proxy_callable: docker logs <-
('f2e305942e57ce1fe90c2ca94d3d9bbc004155a136594157e41b7a916d1ca7de',
stdout=True, stderr=True, stream=True, follow=True)
remote-host | Unable to load host key: /etc/ssh/ssh_host_rsa_key
remote-host | Unable to load host key: /etc/ssh/ssh_host_ecdsa_key
remote-host | Unable to load host key:
/etc/ssh/ssh_host_ed25519_key remote-host | sshd: no hostkeys
available -- exiting.
compose.cli.verbose_proxy.proxy_callable: docker events <-
(filters={'label': ['com.docker.compose.project=jenkins',
'com.docker.compose.oneoff=False']}, decode=True)
My Dockerfile is this:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user && \
    echo "1234" | passwd remote_user --stdin && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh
COPY remote_user.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user && \
    chmod 400 /home/remote_user/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
Start with an empty directory and put the following in that directory as a file called Dockerfile:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user
RUN echo "1234" | passwd remote_user --stdin
RUN mkdir /home/remote_user/.ssh
RUN chmod 700 /home/remote_user/.ssh
COPY remote_user.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user
RUN chmod 400 /home/remote_user/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
# CMD ["/bin/bash"]
# ... save this file as Dockerfile then in same dir issue following
#
# docker build --tag stens_centos . # creates image stens_centos
#
# docker run -d stens_centos sleep infinity # launches container and just sleeps; its only purpose is to keep the container running
#
# docker ps # show running containers
#
#
# ... find CONTAINER ID from above and put into something like this
#
# docker exec -ti $( docker ps | grep stens_centos | cut -d' ' -f1 ) bash # login to running container
#
Then, in that same directory, put your SSH key files as shown here:
eve@milan ~/Dropbox/Documents/code/docker/centos $ ls -la
total 28
drwxrwxr-x 2 eve eve 4096 Nov 2 15:20 .
drwx------ 77 eve eve 12288 Nov 2 15:14 ..
-rw-rw-r-- 1 eve eve 875 Nov 2 15:20 Dockerfile
-rwx------ 1 eve eve 3243 Nov 2 15:18 remote_user
-rwx------ 1 eve eve 743 Nov 2 15:18 remote_user.pub
Then cat the Dockerfile and copy and paste the commands explained at the bottom of the file; for me they all just worked OK.
After I copied and pasted those commands listed at the bottom of the Dockerfile, the container got built and executed:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0a06ebd2752a stens_centos "sleep infinity" 7 minutes ago Up 7 minutes pedantic_brahmagupta
Keep in mind that the final CMD (or similar) in your Dockerfile must be exactly what you want executed when the container runs. Typically this is a server, which by definition runs forever. Alternatively, the CMD can simply be something that runs and then finishes, like a batch job, in which case the container exits when that job finishes. With this knowledge, I suggest you confirm whether sshd -D stays in the foreground like a server or terminates immediately upon launch of the container.
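As a minimal illustration of that last point (a sketch, not part of the original answer), compare a CMD that keeps the container alive with one that exits as soon as its job finishes:
# stays up: sshd runs in the foreground (-D) as PID 1
CMD ["/usr/sbin/sshd", "-D"]
# exits immediately after the command finishes, and the container stops with it
# CMD ["echo", "one-shot batch job done"]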
I've just replied to this GitHub issue, but here's what I experienced and how I fixed it
I just had this issue with my Jekyll blog site, which I normally bring up using docker-compose with a mapped volume so it rebuilds when I create a new post. It was hanging, so I ran docker-compose up with the --verbose switch and saw the same compose.parallel.feed_queue: Pending: set().
I tried it on my Macbook and it was working fine
I didn't have any experimental features turned on, but I needed to go into Settings -> Resources -> File Sharing (on Windows) and add the folder I was mapping in my docker-compose file (the root of my blog site).
I re-ran docker-compose and it's now up and running.

How to redirect command output from docker container

Just another topic on this matter, but what's the best way of sending a Docker container command's STDOUT/STDERR to a file, other than running the command like this:
bash -c "node cluster.js >> /var/log/cluster/console.log 2>&1"
What I don't like about the above is that it results in one additional process, so I end up with two processes instead of one, and my master cluster process is not the one with PID 1.
If I try
exec node cluster.js >> /var/log/cluster/console.log 2>&1
I get this error:
Error response from daemon: Cannot start container node:
exec: "node cluster.js >> /var/log/cluster/console.log 2>&1": executable file not found in $PATH
I am starting my container via docker-compose:
version: '3'
services:
  node:
    image: custom
    build:
      context: .
      args:
        ENVIRONMENT: production
    restart: always
    volumes:
      - ./logs:/var/log/cluster
    command: bash -c "node cluster.js >> /var/log/cluster/console.log 2>&1"
    ports:
      - "443:443"
      - "80:80"
When I run docker-compose exec node ps -fax | grep -v grep | grep node, I get one extra process:
1 ? Ss 0:00 bash -c node cluster.js >> /srv/app/cluster/cluster.js
5 ? Sl 0:00 node cluster.js
15 ? Sl 0:01 \_ /usr/local/bin/node /srv/app/cluster/cluster.js
20 ? Sl 0:01 \_ /usr/local/bin/node /srv/app/cluster/cluster.js
As you can see, bash -c starts one process, which in turn forks the main node process. In a Docker container the process started by the command always has PID 1, and that's what I want the node process to be. But here it ends up as 5, 6, etc.
Thanks for the reply. I managed to solve the issue by creating a bash file that starts my node cluster with exec:
# start-cluster.sh
exec node cluster.js >> /var/log/cluster/console.log 2>&1
And in docker-compose file:
# docker-compose.yml
command: bash -c "./start-cluster.sh"
Starting the cluster with exec replaces the shell with the node process, so it always has PID 1 and my logs are written to the file.
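For completeness, a sketch of what the whole script might look like; the shebang and the chmod step are assumptions, since the answer only shows the exec line:
#!/usr/bin/env bash
# start-cluster.sh -- exec replaces this shell, so node ends up as PID 1
exec node cluster.js >> /var/log/cluster/console.log 2>&1
Remember to make it executable (chmod +x start-cluster.sh) so that bash -c "./start-cluster.sh" can actually run it.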

cron task in docker container not being executed

I have this Dockerfile (where I am using miniconda just because I would like to schedule some python scripts, but it's a debian:jessie docker image):
FROM continuumio/miniconda:4.2.12
RUN mkdir -p /workspace
WORKDIR /workspace
ADD volume .
RUN apt-get update
RUN apt-get install -y cron
ENTRYPOINT ["/bin/sh", "/workspace/conf/entrypoint.sh"]
The script entrypoint.sh that keeps the container alive is this one:
#!/usr/bin/env bash
echo ">>> Configuring cron"
service cron start
touch /var/log/cron.log
mv /workspace/conf/root /var/spool/cron/crontabs/root
chmod +x /var/spool/cron/crontabs/root
crontab /var/spool/cron/crontabs/root
echo ">>> Done!"
tail -f /var/log/cron.log
From the Docker documentation about supervisor (https://docs.docker.com/engine/admin/using_supervisord/), it looks like that could be an option, as could the bash script approach (as in my example); that's why I decided to go with the bash script and ignore supervisor.
And the content of the cron file /workspace/conf/root is this:
* * * * * root echo "Hello world: $(date +%H:%M:%S)" >> /var/log/cron.log 2>&1
(with an empty line \n at the bottom)
I never see that Hello world: $(date +%H:%M:%S) appended to /var/log/cron.log each minute, yet to me all the cron/crontab settings look correct.
When I check the logs of the container I can see:
>>> Configuring cron
[ ok ] Starting periodic command scheduler: cron.
>>> Done!
Also, when logging into the running container I can see the cron daemon running:
root@2330ced4daa9:/workspace# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 4336 1580 ? Ss+ 13:06 0:00 /bin/sh /workspace/conf/entrypoint.sh
root 14 0.0 0.0 27592 2096 ? Ss 13:06 0:00 /usr/sbin/cron
root 36 0.0 0.0 5956 740 ? S+ 13:06 0:00 tail -f /var/log/cron.log
root 108 0.5 0.1 21948 3692 ? Ss 13:14 0:00 bash
root 114 0.0 0.1 19188 2416 ? R+ 13:14 0:00 ps aux
What am I doing wrong?
Are you sure the crontab file has the right permissions? It is a data file and should not be executable:
chmod 0644 /var/spool/cron/crontabs/root
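Applied to the entrypoint above, that would mean replacing the chmod +x line, roughly like this (a sketch of just the relevant lines):
mv /workspace/conf/root /var/spool/cron/crontabs/root
chmod 0644 /var/spool/cron/crontabs/root   # crontab files should not be executable
crontab /var/spool/cron/crontabs/root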

Add a new entrypoint to a docker image

Recently, we decided to move one of our services into a Docker container. The service is a product of another company, and they have provided us with the Docker image. However, we need to do some extra configuration steps in the container entrypoint.
The first thing I tried, was to create a DockerFile from the base image and then add commands to do the extra steps, like this:
FROM baseimage:tag
RUN chmod a+w /path/to/entrypoint_creates_this_file
But, it failed, because these extra steps must be run after running the base container entrypoint.
Is there any way to extend the entrypoint of a base image? If not, what is the correct way to do this?
Thanks
I finally ended up calling the original entrypoint bash script in my new entrypoint bash script, before doing other extra configuration steps.
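For illustration, such a wrapper might look roughly like this; the file names and the extra step are assumptions, since the answer does not show its actual script:
#!/usr/bin/env bash
# my-entrypoint.sh (sketch) -- run the vendor's original entrypoint first,
# then perform the extra configuration steps it makes possible
/path/to/original-entrypoint.sh "$@"
chmod a+w /path/to/entrypoint_creates_this_file   # example extra step from the question
The new Dockerfile then COPYs this wrapper and sets it as the ENTRYPOINT, as in the next answer.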
You do not even need to create a new Dockerfile. To override the entrypoint, you can just run the image with a command such as the one below:
docker run --entrypoint new-entry-point-cmd baseimage:tag <optional-args-to-entrypoint>
Alternatively: create your custom entrypoint file, add it to the image, and specify it as your entrypoint:
FROM image:base
COPY /path/to/my-entry-point.sh /my-entry-point.sh
# do sth here
ENTRYPOINT ["/my-entry-point.sh"]
Let me take an example with certbot. Using the excellent answer from Anoop, we can get an interactive shell (-ti) into a temporary container (--rm) with this image like so:
$ docker run --rm -ti --entrypoint /bin/sh certbot/certbot:latest
But what if we want to run a command after the original entry point, like the OP requested? We could run a shell and join the commands as in the following example:
$ docker run --rm --entrypoint /bin/sh certbot/certbot:latest \
-c "certbot --version && touch i-can-do-nice-things-here && ls -lah"
certbot 1.30.0
total 28K
drwxr-xr-x 1 root root 4.0K Oct 5 15:10 .
drwxr-xr-x 1 root root 4.0K Sep 7 18:10 ..
-rw-r--r-- 1 root root 0 Oct 5 15:10 i-can-do-nice-things-here
drwxr-xr-x 1 root root 4.0K Sep 7 18:10 src
drwxr-xr-x 1 root root 4.0K Sep 7 18:10 tools
Background
If I run it with the original entrypoint I will get this:
$ docker run --rm certbot/certbot:latest
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Certbot doesn't know how to automatically configure the web server on this
system. However, it can still get a certificate for you. Please run
"certbot certonly" to do so. You'll need to manually configure your
web server to use the resulting certificate.
Or:
$ docker run --rm certbot/certbot:latest --version
certbot 1.30.0
I can see the entrypoint with docker inspect:
$ docker inspect certbot/certbot:latest | grep -i entry -C 2
},
"WorkingDir": "/opt/certbot",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
--
},
"WorkingDir": "/opt/certbot",
"Entrypoint": [
"certbot"
],
If /bin/sh doesn't work in your container, try /bin/bash.

Access named volume from container when not running as root?

I'm running Celery under Docker Compose. I'd like to make Celery's Flower persistent. So I do:
version: '2'
volumes:
  [...]
  flower_data: {}
[...]
  flower:
    image: [base code image]
    ports:
      - "5555:5555"
    volumes:
      - flower_data:/flower
    command:
      celery -A proj flower --port=5555 --persistent=True --db=/flower/flower
However, then I get:
IOError: [Errno 13] Permission denied: 'flower.dat'
I ran the following to elucidate why:
bash -c "ls -al /flower; whoami; celery -A proj flower --persistent=True --db=/flower/flower"
This made it clear why:
flower_1 | drwxr-xr-x 3 root root 4096 Mar 10 23:05 .
flower_1 | drwxr-xr-x 7 root root 4096 Mar 10 23:05 ..
Namely, the directory is mounted as owned by root, yet in [base code image] I ensure the running user is not root, as per Celery's docs, which say never to run as root:
FROM python:2.7
...
RUN groupadd user && useradd --create-home --home-dir /usr/src/app -g user user
USER user
What would be the best way for Celery Flower to continue to run not as root, yet be able to use this named volume?
The following works: In the Dockerfile, install sudo and add user to the sudo group, requiring a password:
RUN apt-get update
RUN apt-get -y install sudo
RUN echo "user:SECRET" | chpasswd && adduser user sudo
Then, in the Docker Compose config, the command will be:
bash -c "echo SECRET | sudo -S chown user:user /flower; celery -A proj flower --power=5555 --persistent --db=/flower/flower"
I'm not sure if this is the best way, though, or what the security implications of this are.

Resources