Xvfb command in docker supervisor conf not working

I have a Docker image based on Ubuntu that runs a supervisor script as the CMD at the end of the Dockerfile. This successfully runs uwsgi and nginx in the container on start up. However, the following appended at the end of the supervisor-app.conf does not work:
[program:Xvfb]
command=/usr/bin/Xvfb :1 -screen 0 1024x768x16 &> xvfb.log &
When I open a shell into a running docker instance there is no X instance running:
root@9221694363ea:/# ps aux | grep X
root 39 0.0 0.0 8868 784 ? S+ 15:32 0:00 grep --color=auto X
However, running exactly the same command as in the supervisor-app.conf works
root@9221694363ea:/# /usr/bin/Xvfb :1 -screen 0 1024x768x16 &> xvfb.log &
[1] 40
root@9221694363ea:/# ps aux | grep X
root 40 1.2 0.1 170128 21604 ? Sl 15:33 0:00 /usr/bin/Xvfb :1 -screen 0 1024x768x16
root 48 0.0 0.0 8868 792 ? S+ 15:33 0:00 grep --color=auto X
so what's wrong with the line in the supervisor-app.conf?

Supervisor does not handle bash-specific operators such as the run-in-the-background '&' or redirections like '>', which is why my original config line failed.
I solved it by using bash -c thus:
[program:Xvfb]
command=bash -c "/usr/bin/Xvfb :1 -screen 0 1024x768x16 &> xvfb.log"
Now when I open a bash shell in the running container, the Xvfb instance is there waiting for me to use it elsewhere in the code.
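As an aside, supervisord can also capture a program's output itself, which avoids the shell wrapper entirely. A minimal sketch, assuming default supervisord settings (the log path is illustrative):
[program:Xvfb]
command=/usr/bin/Xvfb :1 -screen 0 1024x768x16
; let supervisord capture output instead of relying on shell redirection
stdout_logfile=/var/log/xvfb.log
redirect_stderr=true
; restart Xvfb automatically if it exits
autorestart=true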

Related

docker-compose cannot recognize sudoers container file

I have PHP Dockerfile:
...
USER root
echo "${SYSTEM_USERNAME} ALL=NOPASSWD: /usr/sbin/php-fpm${PHP_VERSION}" >> /etc/sudoers.d/${SYSTEM_USERNAME}
...
USER ${SYSTEM_USERNAME}
CMD ["/usr/bin/env", "sh", "-c", "sudo php-fpm${PHP_VERSION} --nodaemonize"]
...
It works via docker:
$ docker run -dit php7.4-fpm
$ docker exec -it 2e9331162630 ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
php-7-4 1 0.0 0.0 2384 764 pts/0 Ss+ 15:48 0:00 sh -c sudo php-
root 6 0.0 0.0 6592 3224 pts/0 S+ 15:48 0:00 sudo php-fpm7.4
root 7 0.0 0.3 635904 33796 ? Ss 15:48 0:00 php-fpm: master
www-data 8 0.0 0.0 635904 7968 ? S 15:48 0:00 php-fpm: pool w
But it does not work via docker-compose:
$ docker-compose up
php_1 |
php_1 | We trust you have received the usual lecture from the local System
php_1 | Administrator. It usually boils down to these three things:
php_1 |
php_1 | #1) Respect the privacy of others.
php_1 | #2) Think before you type.
php_1 | #3) With great power comes great responsibility.
php_1 |
php_1 | sudo: no tty present and no askpass program specified
docker_php_1 exited with code 1
How can I avoid the sudo password prompt in docker-compose?
Generally you don't use sudo in Docker at all: it's all but impossible to safely set a user password, and whenever you run a container, you can directly specify the user ID it uses (with the docker run -u option). Containers only run one process and usually don't have multiple users.
In the particular example you have here, you're in theory running the container as a non-root user, but the main container process is a sudo invocation that immediately switches back to the root user. You can eliminate the intermediate step here and just specify
USER root
CMD php-fpm${PHP_VERSION} --nodaemonize
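Alternatively, if you would rather not edit the Dockerfile, you can override the user at run time instead. A minimal docker-compose sketch (the service name php and the PHP version are my assumptions):
services:
  php:
    build: .
    # run the main process as root directly, no sudo needed
    user: root
    command: php-fpm7.4 --nodaemonize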
Note that with docker run you allocate a pseudo-TTY with -t, which is why sudo works there. Do the same in docker-compose with tty: true.
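For example, a minimal compose snippet (service name assumed):
services:
  php:
    build: .
    # allocate a pseudo-TTY, the equivalent of docker run -t
    tty: true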
I'm not sure the explanation is correct, but this works: the environment variable DEBIAN_FRONTEND=noninteractive is the culprit, and it needs to be passed through to sudo.
PHP Dockerfile:
ENV ...
# Avoid 'debconf: unable to initialize frontend: Dialog'
DEBIAN_FRONTEND=noninteractive
...
USER root
echo "${SYSTEM_USERNAME} ALL=NOPASSWD:SETENV: /usr/sbin/php-fpm${PHP_VERSION}" >> /etc/sudoers.d/${SYSTEM_USERNAME}
...
USER ${SYSTEM_USERNAME}
CMD ["/usr/bin/env", "sh", "-c", "sudo --preserve-env=DEBIAN_FRONTEND php-fpm${PHP_VERSION} --nodaemonize"]
Thanks to https://superuser.com/a/1001684/192832

STDOUT logs not working when using symlink for log file to /proc/1/fd/1 on Kubernetes

I have a cronjob that runs every minute and redirects its output to a log file, /var/log/cronjob/cron.log. Since this is running in Kubernetes, I want to redirect the log to STDOUT.
The approach I took was to create a symlink with RUN ln -sf /proc/1/fd/1 /var/log/cronjob/cron.log:
# ls -la /var/log/cronjob/cron.log
lrwxrwxrwx 1 root root 12 Jan 21 19:23 /var/log/cronjob/cron.log -> /proc/1/fd/1
When I run kubectl logs there is no output.
If I delete the symlink (from within the container) and recreate it as a normal file, my output appears in the /var/log/cronjob/cron.log file as expected.
# tail -f /var/log/cronjob/cron.log
Running scheduled command: '/usr/bin/php7.3' 'artisan' sync:health_check > '/dev/null' 2>&1
Running scheduled command: ('/usr/bin/php7.3' 'artisan' compute:user_preferences > '/dev/null' 2>&1 ; '/usr/bin/php7.3' 'artisan' schedule:finish "framework/schedule-9019c9dc22ad7439efd038277fe8f370f56958e7") > '/dev/null' 2>&1 &
How can I get my log to reach STDOUT via the symlink?
I have tried other things such as:
Use /dev/stdout for the symlink
Tail the /var/log/cronjob/cron.log file within the entrypoint
Edit: More information about files/scripts:
crontab:
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
* * * * * /usr/local/bin/schedule-run.sh
# An empty line is required at the end of this file for a valid cron file
/usr/local/bin/schedule-run.sh:
#!/bin/bash
# Source container environment variables
source /tmp/export
# Run Laravel scheduler
php /var/www/api/artisan schedule:run >> /var/log/cronjob/cron.log 2>&1
Edit #2:
Currently my CMD looks like this, which spawns multiple child processes:
CMD export >> /tmp/export && crontab /etc/cron.d/realty-cron && cron && tail -f /var/log/cronjob/cron.log
root@workspace-dev-condos-ca-765dc6686-h8vdl:/var/www/api# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 21:55 ? 00:00:00 /bin/sh -c export >> /tmp/export && crontab /etc/cron.d/realty-cron && cron && tail -f /var/log/cronjob/cron.log
root 8 1 0 21:55 ? 00:00:00 cron
root 9 1 0 21:55 ? 00:00:00 tail -f /var/log/cronjob/cron.log
root 170 1 0 21:59 ? 00:00:00 ssh-agent -s
root 233 0 0 22:00 pts/0 00:00:00 bash
root 249 1 0 22:00 ? 00:00:00 ssh-agent -s
root 1277 233 0 22:26 pts/0 00:00:00 ps -ef
I'm not sure if this is relevant, but through trial-and-error testing I noticed that sometimes echo "test1" >> /proc/1/fd/1 or echo "test2" >> /proc/1/fd/2 will show up in stdout (kubectl logs), but never both at the same time. I suspect the child processes are involved, but I don't know why.
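One workaround worth trying (my suggestion, not from the original post): skip the symlink and have the cron script write straight to PID 1's stdout, since that file descriptor is the stream kubectl logs reads. A sketch of schedule-run.sh under that assumption:
#!/bin/bash
# Source container environment variables
source /tmp/export
# Run Laravel scheduler, writing directly to the container's log stream
php /var/www/api/artisan schedule:run >> /proc/1/fd/1 2>&1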

getting "GDK_BACKEND does not match available displays" even though Xvfb display is present

I'm running selenium tests on headless Firefox inside Docker. For that I have installed Firefox and Xvfb in the image, and I export the display and start Xvfb when I run the container.
The issue is that when I run the container locally, selenium is able to find the display, but when I run it on Jenkins it fails with "GDK_BACKEND does not match available displays".
For Xvfb I do
sh "export DISPLAY=:1"
sh "Xvfb :1 -screen 0 1440x900x24 &"
I checked available displays just before starting my test case using "ps aux | grep X". Below is the output
root 31 0.0 0.4 172336 18644 ? Sl 16:36 0:00 Xvfb :1 -screen 0 1440x900x24
root 147 0.0 0.0 12812 980 ? S 16:36 0:00 grep X
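One thing worth checking (my observation, not from the original post): in a Jenkins pipeline every sh step runs in its own shell, so a DISPLAY exported in one sh step is gone by the next. Keeping the export, Xvfb, and the tests in a single step avoids that; a sketch:
sh '''
    export DISPLAY=:1
    Xvfb :1 -screen 0 1440x900x24 &
    # run the selenium tests here, in the same shell, so DISPLAY is still set
'''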

cron task in docker container not being executed

I have this Dockerfile (where I am using miniconda just because I would like to schedule some python scripts, but it's a debian:jessie docker image):
FROM continuumio/miniconda:4.2.12
RUN mkdir -p /workspace
WORKDIR /workspace
ADD volume .
RUN apt-get update
RUN apt-get install -y cron
ENTRYPOINT ["/bin/sh", "/workspace/conf/entrypoint.sh"]
The script entrypoint.sh that keeps the container alive is this one:
#!/usr/bin/env bash
echo ">>> Configuring cron"
service cron start
touch /var/log/cron.log
mv /workspace/conf/root /var/spool/cron/crontabs/root
chmod +x /var/spool/cron/crontabs/root
crontab /var/spool/cron/crontabs/root
echo ">>> Done!"
tail -f /var/log/cron.log
From the Docker documentation about supervisor (https://docs.docker.com/engine/admin/using_supervisord/) it looks like supervisor could be an option, as could a bash script (as in my example); that's why I decided to go with the bash script and ignore supervisor.
And the content of the cron details /workspace/conf/root is this:
* * * * * root echo "Hello world: $(date +%H:%M:%S)" >> /var/log/cron.log 2>&1
(with an empty line \n at the bottom)
I cannot see Hello world: $(date +%H:%M:%S) being appended to /var/log/cron.log every minute, yet to me all the cron/crontab settings look correct.
When I check the logs of the container I can see:
>>> Configuring cron
[ ok ] Starting periodic command scheduler: cron.
>>> Done!
Also, when logging into the running container I can see the cron daemon running:
root@2330ced4daa9:/workspace# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 4336 1580 ? Ss+ 13:06 0:00 /bin/sh /workspace/conf/entrypoint.sh
root 14 0.0 0.0 27592 2096 ? Ss 13:06 0:00 /usr/sbin/cron
root 36 0.0 0.0 5956 740 ? S+ 13:06 0:00 tail -f /var/log/cron.log
root 108 0.5 0.1 21948 3692 ? Ss 13:14 0:00 bash
root 114 0.0 0.1 19188 2416 ? R+ 13:14 0:00 ps aux
What am I doing wrong?
Are you sure the crontab file has the right permissions? Crontab files should be mode 0644, not executable:
chmod 0644 /var/spool/cron/crontabs/root
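Two further details worth checking (my notes, not part of the original answer): a file installed with crontab(1) must not contain the user column, and % is a special character in crontab lines that has to be escaped. The schedule line under those corrections:
* * * * * echo "Hello world: $(date +\%H:\%M:\%S)" >> /var/log/cron.log 2>&1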

Duplicate process in cron job using Whenever gem for Rails

Using Rails 3.2.21 and the whenever gem. This is my crontab listing:
# Begin Whenever generated tasks for: abc
0 * * * * /bin/bash -l -c 'cd /home/deployer/abc/releases/20141201171336 && RAILS_ENV=production bundle exec rake backup:perform --silent'
Here's the output when the scheduled job is run:
deployer#localhost:~$ ps aux | grep rake
deployer 25593 0.0 0.0 4448 764 ? Ss 12:00 0:00 /bin/sh -c /bin/bash -l -c
'cd /home/deployer/abc/releases/20141201171336 && RAILS_ENV=production bundle exec rake
backup:perform --silent'
deployer 25594 0.0 0.1 12436 3040 ? S 12:00 0:00 /bin/bash -l -c cd
/home/deployer/abc/releases/20141201171336 && RAILS_ENV=production bundle exec rake
backup:perform --silent
deployer 25631 69.2 4.4 409680 90072 ? Sl 12:00 0:06 ruby /home/deployer/abc/
shared/bundle/ruby/1.9.1/bin/rake backup:perform --silent
deployer 25704 0.0 0.0 11720 2012 pts/0 S+ 12:00 0:00 grep --color=auto rake
Notice that the top two processes are actually similar. Are they running the same job twice concurrently? How do I prevent that?
deployer 25593 0.0 0.0 4448 764 ? Ss 12:00 0:00 /bin/sh -c /bin/bash …
deployer 25594 0.0 0.1 12436 3040 ? S 12:00 0:00 /bin/bash …
Notice that the top two processes are actually similar. Are they running the same job twice concurrently?
No, they aren't. cron always runs a job through /bin/sh -c, so the first process is the /bin/sh that started the second, your crontab command /bin/bash …. Most probably /bin/sh is simply waiting for /bin/bash to terminate rather than running the job again before /bin/bash … has finished; you can verify this with e.g. strace -p 25593.
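If you do want to guarantee that only one instance of the job runs at a time (useful when a run can outlast the schedule interval), a common approach is to wrap the command in flock; a sketch, with the lock file path as my assumption:
0 * * * * /usr/bin/flock -n /tmp/backup_perform.lock /bin/bash -l -c 'cd /home/deployer/abc/releases/20141201171336 && RAILS_ENV=production bundle exec rake backup:perform --silent'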
Check your schedule.rb for a duplicate entry; if you find one, remove it and redeploy.
If there is no duplicate entry in schedule.rb, you need to remove or comment out the duplicate directly in the crontab.
To delete or comment out cron jobs, take a look at https://help.1and1.com/hosting-c37630/scripts-and-programming-languages-c85099/cron-jobs-c37727/delete-a-cron-job-a757264.html or http://www.esrl.noaa.gov/gmd/dv/hats/cats/stations/qnxman/crontab.html
