I'm trying to get Jenkins to run a Docker container that runs systemd.
So far I've been able to run systemd inside Docker locally, without Jenkins. Here are the steps:
# pull unop/fedora-systemd and create and run the container for it
sudo docker run --cap-add=SYS_ADMIN -e container=docker --tmpfs /run --tmpfs /tmp -v /sys/fs/cgroup:/sys/fs/cgroup:ro -t -i unop/fedora-systemd
# on a different terminal window, I can:
# get the container id of the "unop/fedora-systemd" image
sudo docker ps
# then exec bash on it
sudo docker container exec -t -i a98aa2bcd19e bash # where a98aa2bcd19e is the container id found above
# once inside the container, I can run systemd without any problems. examples:
systemctl status
systemctl start dbus.service
systemctl status dbus.service
The above works locally and I am able to run systemd inside the docker container.
The problem I get is when I try the same thing, but inside Jenkins.
I've tried tweaking the Jenkinsfile several times, but none of my attempts worked. Under Jenkins I always get an error similar to this:
+ systemctl status
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
This is the latest Jenkinsfile I've tried:
pipeline {
    agent {
        docker {
            image 'unop/fedora-systemd'
            args '--cap-add=SYS_ADMIN -e container=docker --tmpfs /run --tmpfs /tmp -v /sys/fs/cgroup:/sys/fs/cgroup:ro -t -i'
        }
    }
    stages {
        stage('test') {
            steps {
                sh "echo hello world"
                sh "systemctl status"
                sh "systemctl start dbus.service"
                sh "systemctl status dbus.service"
            }
        }
    }
}
In previous iterations of the Jenkinsfile, I tried replacing --cap-add=SYS_ADMIN -e container=docker with --privileged, but that didn't help; I still got the same errors.
Does anyone have an idea of how I can get this to work? Why does the above work locally but not on Jenkins? What am I missing here?
Note: the Jenkins version is 2.150.2, and this is the Dockerfile used by unop/fedora-systemd:
FROM fedora:rawhide
MAINTAINER http://fedoraproject.org/wiki/Cloud
ENV container docker
RUN dnf -y update && dnf clean all
RUN dnf -y install systemd && dnf clean all && \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup", "/tmp", "/run" ]
CMD ["/usr/sbin/init"]
PS: I've seen a related question, but what they were asking is different
I did not know about the related question. Let me point out again that you do not need to run a systemd daemon inside the container if it is just about running multiple services in it. Simply overwrite /usr/bin/systemctl with the docker-systemctl-replacement script, then register it as the init process of the container with CMD ["/usr/bin/systemctl"].
That's it. Now you can systemctl-start any service of the operating system. It works to the extent that even provisioning with Ansible/Puppet scripts has no problem at all. Specifically, I am using that to provision Jenkins images based on the operating system the developers like to have as a base. No privileged mode required.
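For example, a minimal Dockerfile sketch of this approach (the base image and the exact download URL are assumptions; check the project's README for the current script location):
FROM fedora:rawhide
RUN dnf -y install python3 wget && dnf clean all
# fetch the docker-systemctl-replacement script; the URL/path is an assumption, pin a release in practice
RUN wget -O /usr/bin/systemctl https://raw.githubusercontent.com/gdraheim/docker-systemctl-replacement/master/files/docker/systemctl3.py \
 && chmod +x /usr/bin/systemctl
# register the replacement as the container's init process
CMD ["/usr/bin/systemctl"]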
You may try an image that has Fedora with systemd already active, with this command:
docker run -d --name systemd-fedora --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro jrei/systemd-fedora
Then you just need to run:
docker exec -it systemd-fedora /bin/bash
and there you can just install, start and restart any service you need.
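Once inside, a quick sketch of the kind of thing you can do (httpd is just an arbitrary example service, not part of the original answer):
dnf install -y httpd
systemctl start httpd
systemctl status httpd
systemctl restart httpd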
Related
Right now I am starting my Docker container with:
sudo docker run --name docker_verify --rm \
-t -d daoplays/rust_v1.63
so that it runs in detached mode in the background. I then copy a script to that container:
sudo docker cp verify_run_script.sh docker_verify:/.
and I want to be able to execute that script with what I expected to be:
sudo docker exec -d docker_verify bash \
-c "./verify_run_script.sh"
However, this doesn't seem to do anything. If from another terminal I run
sudo docker container logs -f docker_verify
nothing is shown. If I attach to the container I can run the script myself, but that rather defeats the point of running in detached mode.
I assume I am just not passing the right arguments here, but I am really not clear what I should be doing!
When you run a command in a container you need to also allocate a pseudo-TTY if you want to see the results.
Your command should be:
sudo docker exec -t docker_verify bash \
-c "./verify_run_script.sh"
(note the -t flag)
Steps to reproduce it:
# create a dummy script
cat > script.sh <<EOF
echo This is running!
EOF
# run a container to work with
docker run --rm --name docker_verify -d alpine:latest sleep 3000
# copy the script
docker cp script.sh docker_verify:/
# run the script
docker exec -t docker_verify sh -c "chmod a+x /script.sh && /script.sh"
# clean up
docker container rm -f docker_verify
You should see This is running! in the output.
I am going to leave the question here with an answer, to help anyone who has a similar problem configuring Docker and systemd together.
How can we start and stop a Docker container that runs systemd?
First we will create a Docker image:
FROM debian:stretch
RUN apt-get update \
&& apt-get install -y systemd \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN rm -f /lib/systemd/system/multi-user.target.wants/* \
/etc/systemd/system/*.wants/* \
/lib/systemd/system/local-fs.target.wants/* \
/lib/systemd/system/sockets.target.wants/*udev* \
/lib/systemd/system/sockets.target.wants/*initctl* \
/lib/systemd/system/sysinit.target.wants/systemd-tmpfiles-setup* \
/lib/systemd/system/systemd-update-utmp*
# systemd should be started with PID 1
CMD [ "/lib/systemd/systemd" ]
Let's build our Docker image:
docker build -t test_image path_to_docker_file
Now we can create and start a new container:
docker run --name test_container -it -d --privileged --stop-signal RTMIN+3 test_image
Or you can start the existing container:
docker start test_container
Now you can attach to the running container to execute some bash commands, for example:
docker exec -it test_container bash
To stop the container:
docker stop test_container
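Because the container was started with --stop-signal RTMIN+3, docker stop asks systemd to shut down cleanly rather than killing it outright. As a rough sanity check (my addition, not part of the original steps), you can look at the container's exit code afterwards:
docker inspect --format '{{.State.ExitCode}}' test_container
# a clean systemd shutdown typically reports 0 here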
IMPORTANT: Do not override the ENTRYPOINT when you start the container; PID 1 should always be systemd. If you want to execute something, do it by attaching to the container:
docker exec -it test_container bash (or any other command)
You cannot start systemd during Docker image build.
The solution was tested with Docker 2.0.0.3 on macOS and with Docker version 17.11.0-ce, build 1caf76c.
I need to run Ansible during the Docker image build to build some Java projects and deploy them to WildFly, so that when I run the Docker image everything is already set up. However, Ansible needs ssh access to localhost. So far I have been unable to make it work. I've tried different Docker images and ended up with phusion (https://github.com/phusion/baseimage-docker#login_ssh). What I have at the moment:
FROM phusion/baseimage
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd -d &
RUN ssh -tt root@127.0.0.1
CMD ["/bin/bash"]
But I still get
Step 11/12 : RUN ssh -tt root@127.0.0.1
---> Running in cf83f9906e55
ssh: connect to host 127.0.0.1 port 22: Connection refused
The command '/bin/sh -c ssh -tt root@127.0.0.1' returned a non-zero code: 255
Any suggestions what could be wrong? Is it even possible to achieve that?
RUN /usr/sbin/sshd -d &
That will run a process in the background using a shell. As soon as the shell that started the process returns from running the background command, it exits with no more input, and the container used for that RUN command terminates. The only thing saved from a RUN is the change to the filesystem. You do not save running processes, environment variables, or shell state.
Something like this may work, but you may also need a sleep command to give sshd time to finish starting.
RUN /usr/sbin/sshd -d & \
ssh -tt root@127.0.0.1
I'd personally look for another way to do this without sshd during the build. This feels very kludgy and error prone.
There are multiple problems in that Dockerfile.
First of all, you can't start a background process in one RUN statement and expect it to still be running in another RUN. Each statement of a Dockerfile runs in a separate container, so processes don't persist between them.
Another issue is that 127.0.0.1 is not in known_hosts.
And finally, you must give sshd some time to start.
Here is a working Dockerfile:
FROM phusion/baseimage
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN printf "Host 127.0.0.1\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd & sleep 5 && ssh -tt root@127.0.0.1 'ls -al'
CMD ["/bin/bash"]
Anyway, I would rather find another solution than provisioning your image with Ansible in the Dockerfile. Check out ansible-container.
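If sshd exists only so Ansible can reach the build container, another option (a sketch; the playbook name and the install step are assumptions) is to run the playbook with Ansible's local connection, which needs no ssh at all:
FROM phusion/baseimage
# install Ansible (pip install ansible is an alternative if the apt package is unavailable)
RUN apt-get update && apt-get install -y ansible && rm -rf /var/lib/apt/lists/*
COPY playbook.yml /tmp/playbook.yml
# run the playbook against the container itself; -c local avoids ssh entirely
RUN ansible-playbook -i "localhost," -c local /tmp/playbook.yml
CMD ["/sbin/my_init"]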
I'm running a container based on ubuntu:14.04, and I need to be able to use avahi-browse inside it. However:
(.env)root@8faa2c44e53e:/opt/cluster-manager# avahi-browse -a
Failed to create client object: Daemon not running
(.env)root@8faa2c44e53e:/opt/cluster-manager# service avahi-daemon status
Avahi mDNS/DNS-SD Daemon is running
The actual problem I have is a pybonjour error, pybonjour.BonjourError: (-65537, 'unknown'), but I've read that it is linked to the problem with the avahi-daemon.
So: how do I connect to the avahi-daemon from the container?
P.S. I have to switch dbus off in the avahi-daemon.conf file to make it possible to start it; otherwise avahi-daemon won't start, with a dbus error like this:
(.env)root@8faa2c44e53e:/opt/cluster-manager# avahi-daemon
Found user 'avahi' (UID 103) and group 'avahi' (GID 107).
Successfully dropped root privileges.
avahi-daemon 0.6.31 starting up.
dbus_bus_get_private(): Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
WARNING: Failed to contact D-Bus daemon.
avahi-daemon 0.6.31 exiting.
As far as I can test, you can use the host's avahi-daemon through its Unix socket for mDNS resolution, and /var/run/dbus for avahi-browse to work.
E.g.:
docker run -v /var/run/dbus:/var/run/dbus -v /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket -ti debian:10-slim bash
To test inside container:
apt-get update && apt-get install avahi-utils iputils-ping -y
ping whatever.local
avahi-browse -a
Avahi requires D-BUS in order to communicate with clients. Sounds like your docker container isn't starting the system D-BUS. If you do that, then Avahi should work.
You need D-BUS for most of Avahi's functionality (including avahi-browse) so disabling it won't really help.
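For example, a minimal sketch of starting the system bus and Avahi inside the container before browsing (assuming the dbus and avahi-daemon packages are already installed):
mkdir -p /var/run/dbus
dbus-daemon --system
avahi-daemon --daemonize --no-chroot
avahi-browse -a -t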
There is a docker image supposedly supporting avahi from within the container. The trick seems to be to mount /var/run/dbus from the host into the container.
Note that I couldn't get this image to run on my 16.04 host.
I ran into the same problem getting avahi and dbus to operate correctly on Ubuntu 14.04 (specifically, I was trying to use ROS TurtleBot). I solved it by incorporating a modified version of the instructions in docker-systemd into my Dockerfile:
FROM ubuntu:14.04
RUN apt-get update &&\
apt-get install -y avahi-utils avahi-daemon libnss-mdns systemd
RUN cd /lib/systemd/system/sysinit.target.wants/; \
ls | grep -v systemd-tmpfiles-setup | xargs rm -f; \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*; \
rm -f /lib/systemd/system/plymouth*; \
rm -f /lib/systemd/system/systemd-update-utmp*
RUN mkdir -p /var/run/dbus
ENV init /lib/systemd/systemd
After modifying your Dockerfile to include these instructions, you should create a container using the following command:
docker run --rm --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -it <DOCKER_IMAGE> /bin/bash
Finally, once you're inside the container, you must execute the following commands before attempting to use avahi-browse (directly or indirectly):
$ dbus-daemon --system
$ /etc/init.d/avahi-daemon start
Another solution is to use mdns-repeater on the host to forward mDNS packets to the Docker network
mdns-repeater eth1 docker0
I needed to add two parameters to my docker run call for the avahi-browse -at command to work inside the container:
--privileged and -v /var/run/dbus:/var/run/dbus
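Putting those together, the run command looks something like this (the image name is just a placeholder):
docker run --privileged -v /var/run/dbus:/var/run/dbus -it <your-image> avahi-browse -at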
I have a workflow as follows for publishing webapps to my dev server. The server has a single docker host and I'm using docker-compose for managing containers.
1. Push changes in my app to a private GitLab (running in Docker). The app includes a Dockerfile and docker-compose.yml.
2. GitLab triggers a Jenkins build (Jenkins is also running in Docker), which does some normal build stuff (e.g. run tests).
3. Jenkins then needs to build a new Docker image and deploy it using docker-compose.
The problem I have is in step 3. The way I have it set up, the Jenkins container has access to the host's Docker daemon, so running any docker command in the build script is essentially the same as running it on the host. This is done using the following Dockerfile for Jenkins:
FROM jenkins
USER root
# Give jenkins access to docker
RUN groupadd -g 997 docker
RUN gpasswd -a jenkins docker
# Install docker-compose
RUN curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
USER jenkins
and mapping the following volumes to the jenkins container:
-v /var/run/docker.sock:/var/run/docker.sock
-v /usr/bin/docker:/usr/bin/docker
A typical build script in jenkins looks something like this:
docker-compose build
docker-compose up
This works OK, but there are two problems:
1. It really feels like a hack. The only other option I've found is to use the Docker plugin for Jenkins, publish to a registry, and then have some way of letting the host know it needs to restart. That involves a lot more moving parts, and the docker-jenkins plugin requires the Docker host to be on an open port, which I don't really want to expose.
2. The Jenkins Dockerfile includes groupadd -g 997 docker, which is needed to give the jenkins user access to Docker. However, 997 is the docker GID on the host machine, and is therefore not portable.
I'm not really sure what solution I'm looking for. I can't see any practical way to get around this approach, but it would be nice if there were a way to run docker commands inside the Jenkins container without having to hard-code the GID in the Dockerfile. Does anyone have any suggestions?
My previous answer was more generic, describing how you can modify the GID inside the container at runtime. Now, by coincidence, one of my close colleagues asked for a Jenkins instance that can do Docker development, so I created this:
FROM bdruemen/jenkins-uid-from-volume
RUN apt-get -yqq update && apt-get -yqq install docker.io && usermod -g docker jenkins
VOLUME /var/run/docker.sock
ENTRYPOINT groupmod -g $(stat -c "%g" /var/run/docker.sock) docker && usermod -u $(stat -c "%u" /var/jenkins_home) jenkins && gosu jenkins /bin/tini -- /usr/local/bin/jenkins.sh
(The parent Dockerfile is the same one I have described in my answer to: Changing the user's uid in a pre-build docker container (jenkins))
To use it, mount both jenkins_home and docker.sock:
docker run -d -v /home/jenkins:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock <IMAGE>
The jenkins process in the container will have the same UID as the mounted host directory. Assuming the docker socket is accessible to the docker group on the host, there is a group created in the container, also named docker, with the same GID.
I ran into the same issues. I ended up giving Jenkins passwordless sudo privileges because of the GID problem. I wrote more about this here: https://blog.container-solutions.com/running-docker-in-jenkins-in-docker
This doesn't really affect security as having docker privileges is effectively equivalent to sudo rights.
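For illustration, a minimal sketch of the relevant Dockerfile lines (not the exact Dockerfile from the blog post):
FROM jenkins
USER root
# install sudo and let the jenkins user run docker (and everything else) without a password
RUN apt-get update && apt-get install -y sudo \
 && echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
USER jenkins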
Please take a look at this Dockerfile I just posted:
https://github.com/bdruemen/jenkins-docker-uid-from-volume/blob/master/gid-from-volume/Dockerfile
Here the GID is extracted from a mounted volume (host directory) with
stat -c '%g' <VOLUME-PATH>
Then the GID of the group of the container user is changed to the same value with
groupmod -g <GID>
This has to be done as root, but then root privileges are dropped with
gosu <USERNAME> <COMMAND>
Everything is done in the ENTRYPOINT, so the real GID is unknown until you run
docker run -d -v <HOST-DIRECTORY>:<VOLUME-PATH> ...
Note that after changing the GID, there might be other files in the container that are no longer accessible to the process, so you might need a
chgrp -R <GROUPNAME> <SOME-PATH>
before the gosu command.
You can also change the UID, see my answer here Changing the user's uid in a pre-build docker container (jenkins)
and maybe you want to change both to increase security.
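Put together, the entrypoint might look roughly like this (a sketch; the user/group name jenkins and the volume path /var/jenkins_home are assumptions, not from the linked Dockerfile):
#!/bin/sh
# match the container's jenkins group to the GID of the mounted volume
GID=$(stat -c '%g' /var/jenkins_home)
groupmod -g "$GID" jenkins
# make files that the old group owned accessible again
chgrp -R jenkins /var/jenkins_home
# drop root privileges and start the real command
exec gosu jenkins "$@"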
I solved a similar problem in the following way.
Docker is installed on the host. Jenkins is deployed in a Docker container on the host. Jenkins must build and run containers with web applications on the host.
The Jenkins master connects to the Docker host using the REST API, so we need to enable the remote API for our Docker host.
Log in to the host and open the Docker service file /lib/systemd/system/docker.service. Search for ExecStart and replace that line with the following:
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock
Reload and restart the Docker service:
sudo systemctl daemon-reload
sudo service docker restart
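Alternatively (a sketch, not from the original answer), you can leave docker.service untouched and add a systemd drop-in override, then run the same daemon-reload and restart commands:
# /etc/systemd/system/docker.service.d/remote-api.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock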
Dockerfile for Jenkins:
FROM jenkins/jenkins:lts
USER root
# Install the latest Docker CE binaries and add user `jenkins` to the docker group
RUN apt-get update
RUN apt-get -y --no-install-recommends install apt-transport-https \
apt-utils ca-certificates curl gnupg2 software-properties-common && \
curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable"
RUN apt-get update && apt-get install -y docker-ce-cli docker-ce && \
apt-get clean && \
usermod -aG docker jenkins
USER jenkins
RUN jenkins-plugin-cli --plugins "blueocean:1.25.6 docker-workflow:1.29 ansicolor"
Build the Jenkins Docker image:
docker build -t you-jenkins-name .
Run Jenkins
docker run --name you-jenkins-name --restart=on-failure --detach \
--network jenkins \
--env DOCKER_HOST=tcp://172.17.0.1:4243 \
--publish 8080:8080 --publish 50000:50000 \
--volume jenkins-data:/var/jenkins_home \
--volume jenkins-docker-certs:/certs/client:ro \
you-jenkins-name
Your web application has a repository, at the root of which are a Jenkinsfile and a Dockerfile.
Jenkinsfile for web app:
pipeline {
    agent any
    environment {
        PRODUCT = 'web-app'
        HTTP_PORT = 8082
        DEVICE_CONF_HOST_PATH = '/var/web-app'
    }
    options {
        ansiColor('xterm')
        skipDefaultCheckout()
    }
    stages {
        stage('Checkout') {
            steps {
                script {
                    //BRANCH_NAME = env.CHANGE_BRANCH ? env.CHANGE_BRANCH : env.BRANCH_NAME
                    deleteDir()
                    //git url: "git@<host>:<org>/${env.PRODUCT}.git", branch: BRANCH_NAME
                }
                checkout scm
            }
        }
        stage('Stop and remove old') {
            steps {
                script {
                    try {
                        sh "docker stop ${env.PRODUCT}"
                    } catch (Exception e) {}
                    try {
                        sh "docker rm ${env.PRODUCT}"
                    } catch (Exception e) {}
                    try {
                        sh "docker image rm ${env.PRODUCT}"
                    } catch (Exception e) {}
                }
            }
        }
        stage('Build') {
            steps {
                sh "docker build . -t ${env.PRODUCT}"
            }
        }
        // Run the built docker image
        stage('Run new') {
            steps {
                script {
                    sh """docker run \
                        --detach \
                        --name ${env.PRODUCT} \
                        --publish ${env.HTTP_PORT}:8080 \
                        --volume ${env.DEVICE_CONF_HOST_PATH}:/var/web-app \
                        ${env.PRODUCT}"""
                }
            }
        }
    }
}