I am trying to create a Docker container with a custom D-Bus bus running inside.
I configured my Dockerfile as follows:
FROM ubuntu:16.04
COPY myCustomDbus.conf /etc/dbus-1/
RUN apt-get update && apt-get install -y dbus
RUN dbus-daemon --config-file=/etc/dbus-1/myCustomDbus.conf
After building, the socket file is created, but it is a regular file rather than a socket, and I cannot use it as a bus:
-rwxrwxrwx 1 root root 0 Mar 20 07:25 myCustomDbus.sock
If I remove this file and run the dbus-daemon command again in a terminal, the socket is successfully created:
srwxrwxrwx 1 root root 0 Mar 20 07:35 myCustomDbus.sock
I am not sure if it is a D-Bus problem or a Docker one.
Instead of using the "RUN" command, you should use the "ENTRYPOINT" one to run a startup script.
The Dockerfile should look like that :
FROM ubuntu:16.04
COPY myCustomDbus.conf /etc/dbus-1/
COPY run.sh /etc/init/
RUN apt-get update && apt-get install -y dbus
ENTRYPOINT ["/etc/init/run.sh"]
And run.sh :
#!/bin/bash
# --nofork keeps dbus-daemon in the foreground so the container keeps running
exec dbus-daemon --config-file=/etc/dbus-1/myCustomDbus.conf --print-address --nofork
You should use a startup script. A RUN instruction is executed only while the image is being built, in a temporary container that is then stopped, so a daemon started there does not survive into the running container.
My run.sh:
if ! pgrep -x "dbus-daemon" > /dev/null
then
# export DBUS_SESSION_BUS_ADDRESS=$(dbus-daemon --config-file=/usr/share/dbus-1/system.conf --print-address | cut -d, -f1)
# or:
dbus-daemon --config-file=/usr/share/dbus-1/system.conf
# and put in Dockerfile:
# ENV DBUS_SESSION_BUS_ADDRESS="unix:path=/var/run/dbus/system_bus_socket"
else
echo "dbus-daemon already running"
fi
if ! pgrep -x "/usr/lib/upower/upowerd" > /dev/null
then
/usr/lib/upower/upowerd &
else
echo "upowerd already running"
fi
Then Chrome runs with --use-gl=swiftshader without errors.
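For reference, a minimal Dockerfile wiring a script like this in as the entrypoint could look like the sketch below; the upower package name and the socket path are assumptions, and since both daemons background themselves the script would also need a final blocking command (e.g. sleep infinity) to keep the container alive:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y dbus upower
COPY run.sh /run.sh
RUN chmod +x /run.sh
# address D-Bus clients inside the container will use (assumed socket path)
ENV DBUS_SESSION_BUS_ADDRESS="unix:path=/var/run/dbus/system_bus_socket"
ENTRYPOINT ["/bin/bash", "/run.sh"]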
I have set up a cron job inside a docker php:8.1.0-fpm-buster container. It's running as expected, but no logs show up in Docker Desktop; the log view is just a black screen.
Here is the Dockerfile:
FROM php:8.1.0-fpm-buster
ARG ENV
RUN apt-get update && apt-get -y install cron
RUN touch /var/log/cron.log
RUN chmod 777 /var/log/cron.log
COPY ./crontab /etc/cron.d/crontab
RUN chmod 0644 /etc/cron.d/crontab
RUN /usr/bin/crontab /etc/cron.d/crontab
CMD [ "cron", "-f", "-L", "2" ]
What I was expecting in the logs was something similar to the Linux cron logs, like this example:
Jan 20 09:32:01 ns555555 CRON[21051]: (root) CMD (echo 'my command')
I tried different things:
I added bash -c before the cron command
I removed the -L 2 flag
I have also found other Stack Overflow posts, but each time it's not the same cron:
See cron output via docker logs, without using an extra file
Docker - Using PHP Cli base image for Cron container
What am I doing wrong? Did I install the wrong cron?
I found the solution in this post:
How to run a cron job inside a docker container?
I added > /proc/1/fd/1 2>/proc/1/fd/2 after the command, and now the command output shows up in the logs:
* * * * * root echo hello > /proc/1/fd/1 2>/proc/1/fd/2
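For context, docker logs only shows what PID 1 of the container writes to its stdout/stderr, and cron jobs run in their own sessions, so redirecting to /proc/1/fd/1 and /proc/1/fd/2 hands their output to PID 1's descriptors. Applied to a real job it might look like this (the PHP script path is a made-up example):
# /etc/cron.d/crontab -- redirect each job to PID 1's stdout/stderr so the
# output shows up in `docker logs` and Docker Desktop
* * * * * root php /var/www/html/my_task.php > /proc/1/fd/1 2>/proc/1/fd/2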
Could you help me? I'm trying to run a container from a Dockerfile, but it shows this warning and my container does not start.
compose.parallel.parallel_execute_iter: Finished processing: <Container: remote-host>
Starting remote-host ... done
compose.parallel.feed_queue: Pending: set()
compose.parallel.parallel_execute_iter: Finished processing: <Service: remote_host>
compose.parallel.feed_queue: Pending: set()
Attaching to jenkinks, remote-host
compose.cli.verbose_proxy.proxy_callable: docker logs <- ('f2e305942e57ce1fe90c2ca94d3d9bbc004155a136594157e41b7a916d1ca7de', stdout=True, stderr=True, stream=True, follow=True)
remote-host | Unable to load host key: /etc/ssh/ssh_host_rsa_key
remote-host | Unable to load host key: /etc/ssh/ssh_host_ecdsa_key
remote-host | Unable to load host key: /etc/ssh/ssh_host_ed25519_key
remote-host | sshd: no hostkeys available -- exiting.
compose.cli.verbose_proxy.proxy_callable: docker events <- (filters={'label': ['com.docker.compose.project=jenkins', 'com.docker.compose.oneoff=False']}, decode=True)
My Dockerfile is this:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user && \
    echo "1234" | passwd remote_user --stdin && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh
COPY remote_user.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user && \
    chmod 400 /home/remote_user/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
Start with an empty directory and put the following in that directory as a file called Dockerfile:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user
RUN echo "1234" | passwd remote_user --stdin
RUN mkdir /home/remote_user/.ssh
RUN chmod 700 /home/remote_user/.ssh
COPY remote_user.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user
RUN chmod 400 /home/remote_user/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
# CMD ["/bin/bash"]
# ... save this file as Dockerfile then in same dir issue following
#
# docker build --tag stens_centos . # creates image stens_centos
#
# docker run -d stens_centos sleep infinity # launches container and just sleeps; its only purpose here is to keep the container running
#
# docker ps # show running containers
#
#
# ... find CONTAINER ID from above and put into something like this
#
# docker exec -ti $( docker ps | grep stens_centos | cut -d' ' -f1 ) bash # login to running container
#
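A side note on the original error: "sshd: no hostkeys available -- exiting" means sshd found no host keys to load. They are normally generated when openssh-server is installed, but if your image lacks them you can generate them explicitly before sshd starts:
# generate any missing default host keys (rsa, ecdsa, ed25519) at build time
RUN ssh-keygen -A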
Then in that same directory put your ssh key files, as per:
eve@milan ~/Dropbox/Documents/code/docker/centos $ ls -la
total 28
drwxrwxr-x 2 eve eve 4096 Nov 2 15:20 .
drwx------ 77 eve eve 12288 Nov 2 15:14 ..
-rw-rw-r-- 1 eve eve 875 Nov 2 15:20 Dockerfile
-rwx------ 1 eve eve 3243 Nov 2 15:18 remote_user
-rwx------ 1 eve eve 743 Nov 2 15:18 remote_user.pub
Then cat out the Dockerfile and copy and paste the commands it explains at the bottom of the file ... for me all of them just worked OK. After I copied and pasted those commands, the container got built and executed:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0a06ebd2752a stens_centos "sleep infinity" 7 minutes ago Up 7 minutes pedantic_brahmagupta
Keep in mind you must define at the bottom of your Dockerfile a CMD (or similar) that is exactly what you want executed as the container runs. Typically this is a server, which by definition runs forever; alternatively it can be something that runs and then finishes, like a batch job, in which case the container exits when that job finishes. With this knowledge, I suggest you confirm whether sshd -D holds that command open as a server or terminates immediately upon launch of the container.
I've just replied to this GitHub issue, but here's what I experienced and how I fixed it.
I just had this issue with my Jekyll blog site, which I normally bring up using docker-compose with a mapped volume so it rebuilds when I create a new post. It was hanging, so I ran docker-compose up with the --verbose switch and saw the same compose.parallel.feed_queue: Pending: set().
I tried it on my MacBook and it was working fine.
I didn't have any experimental features turned on, but I needed to go (on Windows) into Settings -> Resources -> File Sharing and add the folder I was mapping in my docker-compose file (the root of my blog site).
I re-ran docker-compose and it's now up and running.
Version Info:
I'm trying to install nvm within a Dockerfile. It seems like it installs OK, but the nvm command is not working.
Dockerfile:
# Install nvm
RUN git clone http://github.com/creationix/nvm.git /root/.nvm;
RUN chmod -R 777 /root/.nvm/;
RUN sh /root/.nvm/install.sh;
RUN export NVM_DIR="$HOME/.nvm";
RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
RUN nvm ls-remote;
Build output:
Step 23/39 : RUN git clone http://github.com/creationix/nvm.git /root/.nvm;
---> Running in ca485a68b9aa
Cloning into '/root/.nvm'...
---> a6f61d486443
Removing intermediate container ca485a68b9aa
Step 24/39 : RUN chmod -R 777 /root/.nvm/
---> Running in 6d4432926745
---> 30e7efc5bd41
Removing intermediate container 6d4432926745
Step 25/39 : RUN sh /root/.nvm/install.sh;
---> Running in 79b517430285
=> Downloading nvm from git to '$HOME/.nvm'
=> Cloning into '$HOME/.nvm'...
* (HEAD detached at v0.33.0)
master
=> Compressing and cleaning up git repository
=> Appending nvm source string to /root/.profile
=> bash_completion source string already in /root/.profile
npm info it worked if it ends with ok
npm info using npm@3.10.10
npm info using node@v6.9.5
npm info ok
=> Installing Node.js version 6.9.5
Downloading and installing node v6.9.5...
Downloading https://nodejs.org/dist/v6.9.5/node-v6.9.5-linux-x64.tar.xz...
######################################################################## 100.0%
Computing checksum with sha256sum
Checksums matched!
Now using node v6.9.5 (npm v3.10.10)
Creating default alias: default -> 6.9.5 (-> v6.9.5 *)
/root/.nvm/install.sh: 136: [: v6.9.5: unexpected operator
Failed to install Node.js 6.9.5
=> Close and reopen your terminal to start using nvm or run the following to use it now:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
---> 9f6f3e74cd19
Removing intermediate container 79b517430285
Step 26/39 : RUN export NVM_DIR="$HOME/.nvm";
---> Running in 1d768138e3d5
---> 8039dfb4311c
Removing intermediate container 1d768138e3d5
Step 27/39 : RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
---> Running in d91126b7de62
---> 52313e09866e
Removing intermediate container d91126b7de62
Step 28/39 : RUN nvm ls-remote;
---> Running in f13c1ed42b3a
/bin/sh: 1: nvm: not found
The command '/bin/sh -c nvm ls-remote;' returned a non-zero code: 127
The error:
Step 28/39 : RUN nvm ls-remote;
---> Running in f13c1ed42b3a
/bin/sh: 1: nvm: not found
The command '/bin/sh -c nvm ls-remote;' returned a non-zero code: 127
The end of my /root/.bashrc file looks like:
[[ -s /root/.nvm/nvm.sh ]] && . /root/.nvm/nvm.sh
Everything else in the Dockerfile works. Adding the nvm stuff is what broke it. Here is the full file.
I made the following changes to your Dockerfile to make it work:
First, replace...
RUN sh /root/.nvm/install.sh;
...with:
RUN bash /root/.nvm/install.sh;
Why? On Red Hat-based systems, /bin/sh is a symlink to /bin/bash. But on Ubuntu, /bin/sh is a symlink to /bin/dash. And this is what happens with dash:
root@52d54205a137:/# bash -c '[ 1 == 1 ] && echo yes!'
yes!
root@52d54205a137:/# dash -c '[ 1 == 1 ] && echo yes!'
dash: 1: [: 1: unexpected operator
Second, replace...
RUN nvm ls-remote;
...with:
RUN bash -i -c 'nvm ls-remote';
Why? Because the default .bashrc for a user in Ubuntu contains (almost at the top):
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
And the sourcing of nvm's script takes place at the bottom. So we need to make sure that bash is invoked interactively by passing it the -i argument.
Third, you could skip the following lines in your Dockerfile:
RUN export NVM_DIR="$HOME/.nvm";
RUN echo "[[ -s $HOME/.nvm/nvm.sh ]] && . $HOME/.nvm/nvm.sh" >> $HOME/.bashrc;
Why? Because bash /root/.nvm/install.sh; will automatically do it for you:
[fedora@myhost ~]$ sudo docker run --rm -it 2a283d6e2173 tail -2 /root/.bashrc
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
Installation of nvm on Ubuntu in a Dockerfile
In the case of Ubuntu 20.04 you can use just these commands and everything will work:
FROM ubuntu:20.04
RUN apt update -y && apt upgrade -y && apt install wget bash -y
RUN wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
RUN bash -i -c 'nvm ls-remote'
Hopefully it will work for you too.
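Note that any later RUN steps that need node or npm hit the same non-interactive-shell problem, so they too need either bash -i -c or an explicit source, for example:
RUN bash -i -c 'node --version && npm --version'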
I use supervisor to run cron and nginx. The problem is that when I COPY or VOLUME-mount my cron files, cron does not run the files in /etc/cron.d.
But when I exec -it <container_id> bash into the container and create the exact same cron file from inside, it is immediately recognized and runs as it should.
Dockerfile:
FROM phusion/baseimage:latest
ENV TERM xterm
ENV HOME /root
RUN apt-get update && apt-get install -y \
nginx \
supervisor \
curl \
nano \
net-tools
RUN rm -rf /etc/nginx/*
COPY nginx_conf /etc/nginx
COPY supervisor_conf /etc/supervisor/
RUN mkdir -p /var/log/supervisor
COPY crontabs /etc/cron.d/
RUN chmod -R 644 /etc/cron.d/
CMD /usr/bin/supervisord
The cron file itself:
* * * * * root curl --silent http://127.0.0.1/cronjob/cron_test_docker.php >> /var/www/html/log/docker_test.log 2>&1
Cron and nginx run through supervisor:
[supervisord]
nodaemon = true
[program:nginx]
command = /usr/sbin/nginx -g "daemon off;"
autostart = true
[program:cron]
command = /usr/sbin/cron -f
autostart = true
The logs inside /var/log/supervisor/ relating to cron for stdout and stderr are empty.
I also tried stripping out supervisor and running cron on its own through phusion with CMD cron -f, but got the same issue: it does not work when the file comes from outside (COPY or VOLUME) and magically works when created inside the container.
I initially believed it to be a permissions issue and tried chmod 644 (as this was the permission a file created in the container had) on all the files that came in via COPY:
RUN chmod 644 /etc/cron.d/
After which I tried every possible combination of rwx permissions, to no avail.
Also, I tried appending the cron job line to /etc/crontab, but it is not recognized by crontab -l:
COPY crontab /tmp/crontab
RUN cat crontab >> /etc/crontab
It would be really handy if it worked when created through COPY or VOLUME, as it is a hassle to create it manually in the container every time.
Any help would be greatly appreciated!
Edit 1 :
Some additional information about the file permissions after COPY or VOLUME.
When I perform
COPY crontabs /etc/cron.d/
RUN chmod -R 644 /etc/cron.d/
Running ls -l inside /etc/cron.d/ in the container shows:
-rw-r--r-- 1 root root 118 Jul 20 11:03 wwwcron-cron-docker_test
When I mount the folder as a volume through docker-compose:
volumes:
- ./server/crontabs:/etc/cron.d
ls -l shows
-rwxrwxrwx 1 1000 staff 118 Jul 20 11:03 wwwcron-cron-docker_test
In addition, if I manually create the cron file inside the container, it looks like this, and it works:
-rw-r--r-- 1 root root 118 Jul 22 15:50 wwwcron-cron-docker_test_inside_docker
Clearly the permissions and ownership differ between COPY and VOLUME. But even a COPY with the exact same permissions does not work, while the identical file created inside the container does.
Thanks to @BMitch I was able to find the issue, which was related to line endings: since my host machine is Windows and the cron files originated there, they had CRLF line endings, and cron did not pick them up automatically.
I added this line to my Dockerfile and it works like a charm:
RUN find /etc/cron.d/ -type f -print0 | xargs -0 dos2unix
As a check, the file is indeed 1 byte smaller after dos2unix runs (one line ending converted), so you can verify that the conversion actually occurred:
-rw-r--r-- 1 root root 117 Jul 25 08:33 wwwcron-cron-docker_test
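An alternative to converting at build time is to keep CRLF endings out of the build context altogether, for example by forcing LF on the cron files in git (assuming the project is under git and the path matches the compose file above):
# .gitattributes -- keep cron files LF-terminated on every platform
server/crontabs/* text eol=lf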
Have you tried installing the crontab as a separate command in the Dockerfile?
i.e.
...
COPY crontabs /path/to/crontab.txt
RUN crontab -u myUser /path/to/crontab.txt
...
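One format difference to keep in mind with this approach: files in /etc/cron.d carry a sixth field naming the user, while files installed with crontab must not have it. Illustrated with the job from the question:
# /etc/cron.d style (sixth field = user):
* * * * * root curl --silent http://127.0.0.1/cronjob/cron_test_docker.php
# crontab -u myUser style (no user field):
* * * * * curl --silent http://127.0.0.1/cronjob/cron_test_docker.php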
I have an Ubuntu 14.04 Docker image in which I want to schedule a Python script to execute every minute. My Dockerfile contains CMD ["cron","-f"] to start the cron daemon. The crontab entry looks like this:
0,1 * * * * root python /opt/com.org.project/main.py >> /opt/com.org.project/var/log/cron.log
/opt/com.org.project/main.py is completely accessible, owned by root, and has permissions 744, so it can be executed.
Nothing is showing up in my /opt/com.org.project/var/log/cron.log file, nor /var/log/cron.log. Yet ps aux | grep cron shows cron -f running at PID 1.
What am I missing? Why is my cron job not running within the container?
Here are my Dockerfile contents as requested:
FROM ubuntu
# Update the os and install the dependencies needed for the container
RUN apt-get update \
&& apt-get install -y \
nano \
python \
python-setuptools \
python-dev \
xvfb \
firefox
# Install PIP for python package management
RUN easy_install pip
CMD ["cron", "-f"]
Why use cron? Just write a shell script like this:
#!/bin/bash
while true; do
python /opt/com.org.project/main.py >> /opt/com.org.project/var/log/cron.log
sleep 60
done
Then just set it as entrypoint.
ENTRYPOINT ["/bin/bash", "/loop_main.sh" ]
Where did you use crontab -e? On the host running docker or in the container itself?
I can't see you adding a crontab entry in the Dockerfile you provided. I recommend adding an external crontab file like this:
ADD crontabfile /app/crontab
RUN crontab /app/crontab
CMD ["cron", "-f"]
The file crontabfile has to be located next to the Dockerfile:
image_folder
|
|- Dockerfile
|- crontabfile
Example content of crontabfile:
# m h dom mon dow command
30 4 * * * /app/myscript.py
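A small caveat: this entry assumes /app/myscript.py has a shebang line and the executable bit set. Otherwise invoke the interpreter explicitly, and consider redirecting output somewhere you can inspect it:
# m h dom mon dow command
30 4 * * * python /app/myscript.py >> /var/log/cron.log 2>&1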